NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
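The component-level reduction step described above can be illustrated with a minimal Rayleigh-Ritz projection sketch (not the COMPARE implementation itself): a component's stiffness and mass matrices are projected onto a truncated set of normal modes, and the reduced matrices are what would later be assembled into a system model. The 10-DOF chain and the function name are illustrative assumptions.

```python
# Minimal sketch of a Rayleigh-Ritz component reduction in the spirit of classical CMS.
# The component, its size, and the number of retained modes are illustrative only.
import numpy as np
from scipy.linalg import eigh

def reduce_component(K, M, n_keep):
    """Project (K, M) onto the n_keep lowest-frequency normal modes."""
    eigvals, modes = eigh(K, M)      # generalized eigenproblem K*phi = w^2 * M*phi
    Phi = modes[:, :n_keep]          # retained Ritz basis (columns are mode shapes)
    K_r = Phi.T @ K @ Phi            # reduced stiffness
    M_r = Phi.T @ M @ Phi            # reduced mass (identity if modes are mass-normalized)
    return K_r, M_r, Phi

# Toy example: a 10-DOF spring-mass chain reduced to 3 modes
n = 10
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
K_r, M_r, Phi = reduce_component(K, M, n_keep=3)
print(K_r.shape)  # (3, 3)
```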
Modeling and Reduction With Applications to Semiconductor Processing
1999-01-01
NASA Technical Reports Server (NTRS)
Kumasaka, Henry A.; Martinez, Michael M.; Weir, Donald S.
1996-01-01
This report describes the methodology for assessing the impact of component noise reduction on total airplane system noise. The methodology is intended to be applied to the results of individual study elements of the NASA Advanced Subsonic Technology (AST) Noise Reduction Program, which will address the development of noise reduction concepts for specific components. Program progress will be assessed in terms of noise reduction achieved, relative to baseline levels representative of 1992 technology airplane/engine design and performance. In this report, the 1992 technology reference levels are defined for assessment models based on four airplane sizes - an average business jet and three commercial transports: a small twin, a medium-sized twin, and a large quad. Study results indicate that component changes defined as program final goals for nacelle treatment and engine/airframe source noise reduction would achieve a 6-7 EPNdB reduction of total airplane noise at FAR 36 Stage 3 noise certification conditions for all of the airplane noise assessment models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Nagarajan, Adarsh; Baggu, Murali
This paper evaluated the impact of smart inverter Volt-VAR function on voltage reduction energy saving and power quality in electric power distribution systems. A methodology to implement the voltage reduction optimization was developed by controlling the substation LTC and capacitor banks, and having smart inverters participate through their autonomous Volt-VAR control. In addition, a power quality scoring methodology was proposed and utilized to quantify the effect on power distribution system power quality. All of these methodologies were applied to a utility distribution system model to evaluate the voltage reduction energy saving and power quality under various PV penetrations and smart inverter densities.
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Verification and validation (V&V) is a highly challenging undertaking for SLS structural dynamics models due to the magnitude and complexity of SLS subassemblies. Responses to challenges associated with V&V of Space Launch System (SLS) structural dynamics models are presented in Volume I of this paper. Four methodologies addressing specific requirements for V&V are discussed: (1) Residual Mode Augmentation (RMA), (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), (3) Mode Consolidation (MC), and (4) Experimental Mode Verification (EMV). This document contains the appendices to Volume I.
1990-12-01
...process; (4) the degree of budgetary responsiveness in DOD/DON cutback budgeting to criteria developed from two theoretical models of fiscal reduction methodology. ...accompanying reshaping of U.S. forces include a continuation of the positive developments in Eastern Europe and the Soviet Union, completion of...
Braubach, Matthias; Tobollik, Myriam; Mudu, Pierpaolo; Hiscock, Rosemary; Chapizanis, Dimitris; Sarigiannis, Denis A.; Keuken, Menno; Perez, Laura; Martuzzi, Marco
2015-01-01
Well-being impact assessments of urban interventions are a difficult challenge, as there is no agreed methodology and scarce evidence on the relationship between environmental conditions and well-being. The European Union (EU) project “Urban Reduction of Greenhouse Gas Emissions in China and Europe” (URGENCHE) explored a methodological approach to assess traffic noise-related well-being impacts of transport interventions in three European cities (Basel, Rotterdam and Thessaloniki) linking modeled traffic noise reduction effects with survey data indicating noise-well-being associations. Local noise models showed a reduction of high traffic noise levels in all cities as a result of different urban interventions. Survey data indicated that perception of high noise levels was associated with lower probability of well-being. Connecting the local noise exposure profiles with the noise-well-being associations suggests that the urban transport interventions may have a marginal but positive effect on population well-being. This paper also provides insight into the methodological challenges of well-being assessments and highlights the range of limitations arising from the current lack of reliable evidence on environmental conditions and well-being. Due to these limitations, the results should be interpreted with caution. PMID:26016437
Braubach, Matthias; Tobollik, Myriam; Mudu, Pierpaolo; Hiscock, Rosemary; Chapizanis, Dimitris; Sarigiannis, Denis A; Keuken, Menno; Perez, Laura; Martuzzi, Marco
2015-05-26
Well-being impact assessments of urban interventions are a difficult challenge, as there is no agreed methodology and scarce evidence on the relationship between environmental conditions and well-being. The European Union (EU) project "Urban Reduction of Greenhouse Gas Emissions in China and Europe" (URGENCHE) explored a methodological approach to assess traffic noise-related well-being impacts of transport interventions in three European cities (Basel, Rotterdam and Thessaloniki) linking modeled traffic noise reduction effects with survey data indicating noise-well-being associations. Local noise models showed a reduction of high traffic noise levels in all cities as a result of different urban interventions. Survey data indicated that perception of high noise levels was associated with lower probability of well-being. Connecting the local noise exposure profiles with the noise-well-being associations suggests that the urban transport interventions may have a marginal but positive effect on population well-being. This paper also provides insight into the methodological challenges of well-being assessments and highlights the range of limitations arising from the current lack of reliable evidence on environmental conditions and well-being. Due to these limitations, the results should be interpreted with caution.
Discrete and continuous dynamics modeling of a mass moving on a flexible structure
NASA Technical Reports Server (NTRS)
Herman, Deborah Ann
1992-01-01
A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.
Environmental public health protection requires a good understanding of types and locations of pollutant emissions of health concern and their relationship to environmental public health indicators. Therefore, it is necessary to develop the methodologies, data sources, and tools...
Multi-criteria analysis for PM10 planning
NASA Astrophysics Data System (ADS)
Pisoni, Enrico; Carnevale, Claudio; Volta, Marialuisa
To implement sound air quality policies, Regulatory Agencies require tools to evaluate the outcomes and costs associated with different emission reduction strategies. These tools are even more useful when considering atmospheric PM10 concentrations, due to the complex nonlinear processes that affect the production and accumulation of the secondary fraction of this pollutant. The approaches presented in the literature (Integrated Assessment Modeling) are mainly cost-benefit and cost-effectiveness analyses. In this work, the formulation of a multi-objective problem to control particulate matter is proposed. The methodology defines: (a) the control objectives (the air quality indicator and the emission reduction cost functions); (b) the decision variables (precursor emission reductions); (c) the problem constraints (maximum feasible technology reductions). The cause-effect relations between the air quality indicators and the decision variables are identified by tuning nonlinear source-receptor models. The solution of the multi-objective problem provides the decision maker with a set of non-dominated scenarios representing the efficient trade-off between the air quality benefit and the internal costs (emission reduction technology costs). The methodology has been implemented for Northern Italy, an area often affected by high long-term exposure to PM10. The source-receptor models used in the multi-objective analysis are identified by processing long-term simulations of the GAMES multiphase modeling system, performed in the framework of the CAFE-Citydelta project.
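The multi-objective structure described above (air quality indicator versus emission-reduction cost, with bounds on feasible reductions) can be sketched with a simple weighted-sum scan of the trade-off; the surrogate source-receptor and cost functions below are hypothetical placeholders, not the identified GAMES-based models.

```python
# Illustrative two-objective sketch: air quality indicator vs. abatement cost.
# Functional forms and coefficients are made up for illustration only.
import numpy as np
from scipy.optimize import minimize

x_max = np.array([0.6, 0.5])          # maximum feasible reductions of two precursors (fractions)

def aqi(x):                           # surrogate source-receptor model (nonlinear, hypothetical)
    return 40.0 - 18.0 * x[0] - 10.0 * x[1] + 12.0 * x[0] * x[1]

def cost(x):                          # emission-reduction technology cost (hypothetical, convex)
    return 50.0 * x[0] ** 2 + 30.0 * x[1] ** 2

pareto = []
for w in np.linspace(0.05, 0.95, 10): # weighted-sum scan approximating the non-dominated set
    res = minimize(lambda x: w * aqi(x) + (1 - w) * cost(x),
                   x0=0.5 * x_max, bounds=[(0, x_max[0]), (0, x_max[1])])
    pareto.append((aqi(res.x), cost(res.x)))
print(pareto)                          # efficient trade-off points (AQI, cost)
```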
HIV RISK REDUCTION INTERVENTIONS AMONG SUBSTANCE-ABUSING REPRODUCTIVE-AGE WOMEN: A SYSTEMATIC REVIEW
Weissman, Jessica; Kanamori, Mariano; Dévieux, Jessy G.; Trepka, Mary Jo; De La Rosa, Mario
2017-01-01
HIV/AIDS is one of the leading causes of death among reproductive-age women throughout the world, and substance abuse plays a major role in HIV infection. We conducted a systematic review, in accordance with the 2015 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) tool, to assess HIV risk-reduction intervention studies among reproductive-age women who abuse substances. We initially identified 6,506 articles during our search and, after screening titles and abstracts, examining articles in greater detail, and finally excluding those rated methodologically weak, a total of 10 studies were included in this review. Studies that incorporated behavioral skills training into the intervention and were based on theoretical model(s) were, in general, the most effective at decreasing sex and drug risk behaviors. Additional HIV risk-reduction intervention research with improved methodological designs is warranted to determine the most efficacious HIV risk-reduction intervention for reproductive-age women who abuse substances. PMID:28467160
Modelling of aflatoxin G1 reduction by kefir grain using response surface methodology.
Ansari, Farzaneh; Khodaiyan, Faramarz; Rezaei, Karamatollah; Rahmani, Anosheh
2015-01-01
Aflatoxin G1 (AFG1) is one of the main toxic contaminants in pistachio nuts and poses potential health hazards. Hence, AFG1 reduction is one of the main concerns in food safety. Kefir grains contain a symbiotic association of microorganisms well known for their aflatoxin decontamination effects. In this study, a central composite design (CCD) using response surface methodology (RSM) was applied to develop a model to predict AFG1 reduction in pistachio nuts by kefir grain (pre-heated at 70 and 110°C). The independent variables were: toxin concentration (X1: 5, 10, 15, 20 and 25 ng/g), kefir-grain level (X2: 5, 10, 15, 20 and 25%), contact time (X3: 0, 2, 4, 6 and 8 h), and incubation temperature (X4: 20, 30, 40, 50 and 60°C). There was a significant reduction in AFG1 (p < 0.05) when pre-heat-treated kefir grain was used. The variables X1 and X3, as well as the interactions between X2-X4 and X3-X4, had significant effects on AFG1 reduction. The model provided a good prediction of AFG1 reduction under the assay conditions. Optimization was used to enhance the efficiency of kefir grain on AFG1 reduction. The optimum conditions for the highest AFG1 reduction (96.8%) were predicted by the model as follows: toxin concentration = 20 ng/g, kefir-grain level = 10%, contact time = 6 h, and incubation temperature = 30°C, which was validated practically in six replications.
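As a rough illustration of the RSM step used in studies like this one, the sketch below fits a second-order (quadratic plus interaction) response surface to central-composite-style data by least squares; the design points and responses are synthetic placeholders, not the paper's measurements.

```python
# Sketch of fitting a quadratic response surface (RSM) to coded CCD factors.
# Data below are synthetic; the coefficients recovered have no experimental meaning.
import numpy as np
from itertools import combinations

def quadratic_design(X):
    """Columns: intercept, linear, squared, and pairwise interaction terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, j] ** 2 for j in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))                                       # 4 coded factors (X1..X4)
y = 80 + 5*X[:, 0] - 3*X[:, 2] + 4*X[:, 1]*X[:, 3] + rng.normal(0, 1, 30)  # synthetic % reduction

beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)  # least-squares fit
y_hat = quadratic_design(X) @ beta                              # fitted response surface values
```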
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology and the performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with a much smaller number of fine-model simulations.
Optimization of palm fruit sterilization by microwave irradiation using response surface methodology
NASA Astrophysics Data System (ADS)
Sarah, M.; Madinah, I.; Salamah, S.
2018-02-01
This study reports the optimization of a palm fruit sterilization process by microwave irradiation. The results of fractional factorial experiments showed no significant external factors affecting the temperature of microwave sterilization (MS). Response surface methodology (RSM) was employed and a model equation for the MS of palm fruit was built. Response surface plots and their corresponding contour plots were analyzed, and the model equation was solved. The optimum process parameters for lipase reduction were obtained for MS of 1 kg of palm fruit at a microwave power of 486 W and a heating time of 14 minutes. The experimental results showed a reduction of lipase activity under the MS treatment. The adequacy of the model equation for predicting the optimum response value was verified by validation data (P > 0.15).
Singh, Kunwar P; Singh, Arun K; Gupta, Shikha; Rai, Premanjali
2012-07-01
The present study aims to investigate the individual and combined effects of temperature, pH, zero-valent bimetallic nanoparticles (ZVBMNPs) dose, and chloramphenicol (CP) concentration on the reductive degradation of CP using ZVBMNPs in aqueous medium. Iron-silver ZVBMNPs were synthesized. Batch experimental data were generated using a four-factor statistical experimental design. CP reduction by ZVBMNPs was optimized using the response surface modeling (RSM) and artificial neural network-genetic algorithm (ANN-GA) approaches. The RSM and ANN methodologies were also compared for their predictive and generalization abilities using the same training and validation data set. Reductive by-products of CP were identified using liquid chromatography-mass spectrometry technique. The optimized process variables (RSM and ANN-GA approaches) yielded CP reduction capacity of 57.37 and 57.10 mg g(-1), respectively, as compared to the experimental value of 54.0 mg g(-1) with un-optimized variables. The ANN-GA and RSM methodologies yielded comparable results and helped to achieve a higher reduction (>6%) of CP by the ZVBMNPs as compared to the experimental value. The root mean squared error, relative standard error of prediction and correlation coefficient between the measured and model-predicted values of response variable were 1.34, 3.79, and 0.964 for RSM and 0.03, 0.07, and 0.999 for ANN models for the training and 1.39, 3.47, and 0.996 for RSM and 1.25, 3.11, and 0.990 for ANN models for the validation set. Predictive and generalization abilities of both the RSM and ANN models were comparable. The synthesized ZVBMNPs may be used for an efficient reductive removal of CP from the water.
Ganesan, Balasubramanian; Martini, Silvana; Solorio, Jonathan; Walsh, Marie K
2015-01-01
This study investigated the effects of high intensity ultrasound (temperature, amplitude, and time) on the inactivation of indigenous bacteria in pasteurized milk, Bacillus atrophaeus spores inoculated into sterile milk, and Saccharomyces cerevisiae inoculated into sterile orange juice using response surface methodology. The variables investigated were sonication temperature (range from 0 to 84°C), amplitude (range from 0 to 216 μm), and time (range from 0.17 to 5 min) on the response, log microbe reduction. Data were analyzed by statistical analysis system software and three models were developed, each for bacteria, spore, and yeast reduction. Regression analysis identified sonication temperature and amplitude to be significant variables on microbe reduction. Optimization of the inactivation of microbes was found to be at 84.8°C, 216 μm amplitude, and 5.8 min. In addition, the predicted log reductions of microbes at common processing conditions (72°C for 20 sec) using 216 μm amplitude were computed. The experimental responses for bacteria, spore, and yeast reductions fell within the predicted levels, confirming the accuracy of the models.
Martini, Silvana; Solorio, Jonathan; Walsh, Marie K.
2015-01-01
This study investigated the effects of high intensity ultrasound (temperature, amplitude, and time) on the inactivation of indigenous bacteria in pasteurized milk, Bacillus atrophaeus spores inoculated into sterile milk, and Saccharomyces cerevisiae inoculated into sterile orange juice using response surface methodology. The variables investigated were sonication temperature (range from 0 to 84°C), amplitude (range from 0 to 216 μm), and time (range from 0.17 to 5 min) on the response, log microbe reduction. Data were analyzed by statistical analysis system software and three models were developed, each for bacteria, spore, and yeast reduction. Regression analysis identified sonication temperature and amplitude to be significant variables on microbe reduction. Optimization of the inactivation of microbes was found to be at 84.8°C, 216 μm amplitude, and 5.8 min. In addition, the predicted log reductions of microbes at common processing conditions (72°C for 20 sec) using 216 μm amplitude were computed. The experimental responses for bacteria, spore, and yeast reductions fell within the predicted levels, confirming the accuracy of the models. PMID:26904659
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov Chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include 1) diagnostic/detection methodology, 2) prognosis/lifing methodology, 3) diagnostic/prognosis linkage, 4) experimental validation and 5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multi-mechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
Bi, Jian
2010-01-01
As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, the reduction is not risk free in sensory and marketing terms. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD (BMDL) are illustrated. The article also discusses how to calculate the BMD and BMDL for overdispersed binary data in replicated testing based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article is used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both health reasons and sensory or marketing risk.
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
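A simplified sketch of this workflow is given below: two quantal dose-response models are fit by maximum likelihood, a benchmark dose at 10% extra risk is computed from each, and the estimates are combined with AIC weights as a crude stand-in for the paper's Bayesian model averaging; the bioassay data shown are synthetic, not the TCDD data.

```python
# Simplified BMD workflow sketch: MLE fits, per-model BMD at 10% extra risk, AIC-weighted average.
# Synthetic data; this approximates, but is not, the paper's MCMC/BMA procedure.
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.special import expit

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n    = np.array([50, 50, 50, 50, 50])
k    = np.array([2, 4, 9, 20, 38])            # responders (synthetic)
BMR  = 0.10                                   # benchmark response (extra risk)

def nll(p_fun, theta):                        # binomial negative log-likelihood
    p = np.clip(p_fun(dose, theta), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

logistic = lambda d, t: expit(t[0] + t[1] * d)
qlinear  = lambda d, t: t[0] + (1 - t[0]) * (1 - np.exp(-t[1] * d))

fits = {"logistic": minimize(lambda t: nll(logistic, t), x0=[-3.0, 0.1], method="Nelder-Mead"),
        "qlinear":  minimize(lambda t: nll(qlinear,  t), x0=[0.05, 0.05], method="Nelder-Mead")}

def bmd(p_fun, theta):                        # dose giving BMR extra risk over background
    p0 = p_fun(0.0, theta)
    extra = lambda d: (p_fun(d, theta) - p0) / (1 - p0) - BMR
    return brentq(extra, 1e-6, dose.max())

bmds = {m: bmd(f, fits[m].x) for m, f in [("logistic", logistic), ("qlinear", qlinear)]}
aic  = {m: 2 * len(fits[m].x) + 2 * fits[m].fun for m in fits}
w    = {m: np.exp(-0.5 * (aic[m] - min(aic.values()))) for m in aic}
bmd_avg = sum(w[m] * bmds[m] for m in bmds) / sum(w.values())   # model-averaged BMD
```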
Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering
NASA Astrophysics Data System (ADS)
van de Walle, A.; Naets, F.; Desmet, W.
2018-05-01
This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. The virtual sensor also allows the full sound field of the system to be recreated, which is very difficult or impossible to achieve through classical measurements.
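The estimation idea can be sketched with a standard discrete-time Kalman filter that fuses a measured excitation and a few microphone signals through a reduced-order state-space model and then evaluates "virtual microphone" outputs at unmeasured locations. The matrices below are random placeholders standing in for a reduced vibro-acoustic finite element model, not the authors' model.

```python
# Discrete-time Kalman filter on a stand-in reduced-order model; virtual outputs via C_virtual.
import numpy as np

rng = np.random.default_rng(1)
nx, nu, ny = 8, 1, 2                                       # reduced states, excitation inputs, real microphones
A = 0.95 * np.linalg.qr(rng.normal(size=(nx, nx)))[0]      # stable discrete dynamics (placeholder)
B = rng.normal(size=(nx, nu))
C = rng.normal(size=(ny, nx))                              # real microphone observation matrix
C_virtual = rng.normal(size=(5, nx))                       # virtual microphone output map (placeholder)
Q = 1e-3 * np.eye(nx)                                      # process noise covariance
R = 1e-2 * np.eye(ny)                                      # measurement noise covariance

x_true = np.zeros(nx); x_hat = np.zeros(nx); P = np.eye(nx)
for t in range(200):
    u = np.array([np.sin(0.1 * t)])                        # measured structural excitation
    x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(nx), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(ny), R)   # real microphone signals
    # predict
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q
    # update with the physical microphones
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.solve(S, np.eye(ny))
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(nx) - K @ C) @ P
    p_virtual = C_virtual @ x_hat                          # estimated pressure at unmeasured locations
```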
Vijayalakshmi, Subramanian; Nadanasabhapathi, Shanmugam; Kumar, Ranganathan; Sunny Kumar, S
2018-03-01
The presence of aflatoxin, a carcinogenic and toxigenic secondary metabolite produced by Aspergillus species, in food matrices has been a major worldwide problem for years. Food processing methods such as roasting, extrusion, etc. have been employed for effective destruction of aflatoxins, which are known for their thermo-stable nature. The high-temperature treatment, however, adversely affects the nutritive and other quality attributes of the food, leading to the need for non-thermal processing techniques such as ultrasonication, gamma irradiation, high pressure processing, pulsed electric field (PEF), etc. The present study focused on analysing the efficacy of the PEF process in reducing the toxin content, which was subsequently quantified using HPLC. The process parameters for model systems of different pH (potato dextrose agar) artificially spiked with an aflatoxin mix standard were optimized using response surface methodology. The responses, aflatoxin B1 reduction and total aflatoxin reduction (%), as functions of pH (4-10), pulse width (10-26 µs) and output voltage (20-65%), fitted a two-factor-interaction (2FI) model and a quadratic model, respectively. The response surface plots obtained for the processes were of the saddle-point type, with no minimum or maximum response at the centre point. The numerical optimization showed that the predicted and actual values were similar, confirming the adequacy of the fitted models and demonstrating the possible application of PEF in toxin reduction.
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which cost the industry tremendously in waste treatment and disposal and hinder the further development of the industry. It has therefore become urgent for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that, for the first time, the electroplating industry can (i) systematically use available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons
NASA Technical Reports Server (NTRS)
Arunkumar, Satyanarayana; Przekop, Adam
2010-01-01
Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented in a VUMAT subroutine within the ABAQUS/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.
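The "instantaneous complete stress reduction" idea can be sketched generically as follows: at a material point, each failure mode is checked against an allowable, and when the criterion is met the associated stiffness is dropped to a negligible residual and the stress is zeroed. This is a schematic stand-in with hypothetical numbers, not the authors' VUMAT.

```python
# Generic COSTR-style degradation rule at a material point (schematic, not the paper's code).
import numpy as np

def degrade(stress, strength, stiffness, residual=1e-6):
    """stress, strength, stiffness: arrays per failure mode (e.g., fiber, matrix, shear)."""
    failed = np.abs(stress) >= strength                   # max-stress style criterion per mode
    stiffness = np.where(failed, residual * stiffness, stiffness)  # near-complete stiffness loss
    stress = np.where(failed, 0.0, stress)                # complete stress reduction at failure
    return stress, stiffness, failed

stress    = np.array([1800.0, 45.0, 60.0])   # MPa (hypothetical ply stresses)
strength  = np.array([2000.0, 40.0, 70.0])   # MPa (hypothetical allowables)
stiffness = np.array([150e3, 9e3, 5e3])      # MPa (hypothetical moduli)
stress, stiffness, failed = degrade(stress, strength, stiffness)
print(failed)   # which modes have failed at this point
```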
Hybrid CMS methods with model reduction for assembly of structures
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1991-01-01
Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore, there must be a methodology which can predict the dynamic characteristics of the assembled structure based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement-compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems; (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes; and (3) it can answer specific questions such as 'how does the global fundamental frequency vary if the physical parameters of substructure k are changed by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.
NASA Astrophysics Data System (ADS)
Scholz-Reiter, B.; Wirth, F.; Dashkovskiy, S.; Makuschewitz, T.; Schönlein, M.; Kosmykov, M.
2011-12-01
We investigate the problem of model reduction with a view to large-scale logistics networks, specifically supply chains. Such networks are modeled by means of graphs, which describe the structure of material flow. An aim of the proposed model reduction procedure is to preserve important features within the network. As a new methodology we introduce the LogRank as a measure for the importance of locations, which is based on the structure of the flows within the network. We argue that these properties reflect relative importance of locations. Based on the LogRank we identify subgraphs of the network that can be neglected or aggregated. The effect of this is discussed for a few motifs. Using this approach we present a meta algorithm for structure-preserving model reduction that can be adapted to different mathematical modeling frameworks. The capabilities of the approach are demonstrated with a test case, where a logistics network is modeled as a Jackson network, i.e., a particular type of queueing network.
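A flow-based importance measure in the spirit of the LogRank can be sketched as a PageRank-style power iteration on the material-flow graph; the exact LogRank definition is not reproduced here, and the flow matrix is a made-up example.

```python
# PageRank-style importance on a material-flow graph (illustrative stand-in for a LogRank-type measure).
import numpy as np

# flows[i, j]: material flow from location i to location j (hypothetical values)
flows = np.array([[0, 10, 5, 0],
                  [0,  0, 8, 2],
                  [0,  0, 0, 9],
                  [4,  0, 0, 0]], dtype=float)

P = flows / np.maximum(flows.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic transition matrix
d, n = 0.85, flows.shape[0]
rank = np.full(n, 1.0 / n)
for _ in range(100):                                             # power iteration
    rank = (1 - d) / n + d * P.T @ rank
importance = rank / rank.sum()
print(np.argsort(importance)[::-1])  # locations ordered by importance; low-ranked ones are aggregation candidates
```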
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
A Decision Support Methodology for Space Technology Advocacy.
1984-12-01
determine their parameters. Program control is usually exercised by level of effort funding. 63xx is the designator for advanced development pro- grams... designing systems or models that successfully aid the decision-maker. One remedy for this deficiency in the techniques is to increase the...methodology for use by the Air Force Space Technology Advocate is designed to provide the following features [l11:146-1471: meaningful reduction of available
Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft
NASA Technical Reports Server (NTRS)
Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter
2016-01-01
This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together ROMs at the grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19), and hence it holds great promise for robust ASE controller synthesis and novel vehicle design.
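One building block of such a sequential reduction, balanced truncation of a stable local model, can be sketched as below using Gramians and Hankel singular values; the toy system is random, and the full framework's handling of unstable modes, residualization, and LPV interpolation is omitted.

```python
# Square-root balanced truncation of a stable LTI model (one grid point); toy random system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian: A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian:  A' Wo + Wo A = -C' C
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # s holds the Hankel singular values
    S_half = np.diag(1.0 / np.sqrt(s[:r]))
    T  = Lc @ Vt[:r].T @ S_half                     # right projection matrix
    Ti = S_half @ U[:, :r].T @ Lo.T                 # left projection matrix (Ti @ T = I)
    return Ti @ A @ T, Ti @ B, C @ T, s

# toy stable system with 10 states reduced to 3
rng = np.random.default_rng(2)
A = rng.normal(size=(10, 10))
A = A - (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(10)   # shift to ensure stability
B = rng.normal(size=(10, 2)); C = rng.normal(size=(3, 10))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
```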
Creep force modelling for rail traction vehicles based on the Fastsim algorithm
NASA Astrophysics Data System (ADS)
Spiryagin, Maksym; Polach, Oldrich; Cole, Colin
2013-11-01
The evaluation of creep forces is a complex task and their calculation is a time-consuming process for multibody simulation (MBS). A methodology for creep force modelling at large traction creepages has been proposed by Polach [Creep forces in simulations of traction vehicles running on adhesion limit. Wear. 2005;258:992-1000; Influence of locomotive tractive effort on the forces between wheel and rail. Veh Syst Dyn. 2001(Suppl);35:7-22], adapting his previously published algorithm [Polach O. A fast wheel-rail forces calculation computer code. Veh Syst Dyn. 1999(Suppl);33:728-739]. The most common method for creep force modelling used by software packages for MBS of running dynamics is the Fastsim algorithm by Kalker [A fast algorithm for the simplified theory of rolling contact. Veh Syst Dyn. 1982;11:1-13]. However, the Fastsim code has some limitations which do not allow modelling of the creep force - creep characteristic in agreement with measurements for locomotives and other high-power traction vehicles, mainly at large traction creep under low-adhesion conditions. This paper describes a newly developed methodology based on a variable contact flexibility increasing with the ratio of the slip area to the area of adhesion. This variable contact flexibility is introduced in a modification of Kalker's Fastsim code by replacing Kalker's constant reduction factor, widely used in MBS, with a variable reduction factor together with a slip-velocity-dependent friction coefficient decreasing with increasing global creepage. The proposed methodology is presented in this work and compared with measurements for different locomotives. The modification allows the well-recognised Fastsim code to be used for simulation of creep forces at large creepages in agreement with measurements, without modifying the proven modelling methodology at small creepages.
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, K. B.; Peter, A. M.
2017-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
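The general flavour of turning a (slotted, i.e., regularly resampled) light curve into Markov-style features can be sketched as below: magnitudes are symbolized by quantile bins and the symbol-transition frequencies form a feature vector for a classifier. This is a generic stand-in, not the authors' exact SSMM algorithm.

```python
# Symbolize a light curve and build a transition-frequency feature vector (generic illustration).
import numpy as np

def markov_features(mag, n_symbols=4):
    edges = np.quantile(mag, np.linspace(0, 1, n_symbols + 1)[1:-1])  # interior quantile bin edges
    symbols = np.digitize(mag, edges)                                 # symbols 0..n_symbols-1
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):                       # count symbol transitions
        counts[a, b] += 1
    trans = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1) # row-normalized transition matrix
    return trans.ravel()                                              # feature vector for a classifier

# toy periodic light curve with noise (hypothetical data)
t = np.linspace(0, 10, 500)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 1.7) + np.random.default_rng(3).normal(0, 0.02, t.size)
features = markov_features(mag)
```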
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, Kyle B.; Peter, Adrian M.
2016-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. Our research focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H
2013-08-01
Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics
Carvalho, Carlos M.; Chang, Jeffrey; Lucas, Joseph E.; Nevins, Joseph R.; Wang, Quanli; West, Mike
2010-01-01
We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived “factors” as representing biological “subpathway” structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple comparisons, as well as scalability of the methodology. The models include practically relevant non-Gaussian/nonparametric components for latent structure, underlying often quite complex non-Gaussianity in multivariate expression patterns. Model search and fitting are addressed through stochastic simulation and evolutionary stochastic search methods that are exemplified in the oncogenic pathway studies. Supplementary supporting material provides more details of the applications, as well as examples of the use of freely available software tools for implementing the methodology. PMID:21218139
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced order models (ROMs) for aeroservoelasticity (ASE) analysis and control synthesis across a broad flight parameter space. The novelty includes the utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict the system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the parameter space where the flight dynamics varies dramatically, to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale, physically based catchment models, the use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale, physically based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order of magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.
Modeling Business Processes in Public Administration
NASA Astrophysics Data System (ADS)
Repa, Vaclav
During more than 10 years of its existence, business process modeling has become a regular part of organization management practice. It is mostly regarded as a part of information system development or even as a way to implement some supporting technology (for instance a workflow system). Although I do not agree with such a reduction of the real meaning of a business process, it is necessary to admit that information technologies play an essential role in business processes (see [1] for more information). Consequently, an information system is inseparable from a business process itself because it is a cornerstone of the general basic infrastructure of a business. This fact impacts all dimensions of business process management. One of these dimensions is methodology, which postulates that information systems development provide business process management with exact methods and tools for modeling business processes. The methodology underlying the approach presented in this paper also has its roots in information systems development methodology.
NASA Astrophysics Data System (ADS)
Cui, Yiqian; Shi, Junyou; Wang, Zili
2017-11-01
Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms make it difficult for operators or beneficiaries to make correct judgments. Artificial neural networks (ANNs) have previously been used for false and missed alarm identification, offering features such as self-organization and self-learning. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold comparison outputs, and the historical temporal features are not fully considered. To improve this situation, this paper proposes a new integrated BIT design methodology by incorporating a novel type of dynamic neural network (DNN) model. The new DNN model is termed the Forward IIR & Recurrent FIR DNN (FIRF-DNN), and its component neurons, network structures, and input/output relationships are discussed. The condition monitoring false and missed alarm reduction implementation scheme based on the FIRF-DNN model is also illustrated, comprising three stages: model training, false and missed alarm detection, and false and missed alarm suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.
Common Methodology for Efficient Airspace Operations
NASA Technical Reports Server (NTRS)
Sridhar, Banavar
2012-01-01
Topics include: a) Developing a common methodology to model and avoid disturbances affecting airspace. b) Integrating contrail and emission models into a national-level airspace simulation. c) Developing the capability to visualize and evaluate technology and alternate operational concepts and to provide inputs for policy-analysis tools to reduce the impact of aviation on the environment. d) Collaborating with the Volpe Research Center, NOAA and DLR to leverage expertise and tools in aircraft emissions and weather/climate modeling. Airspace operation is a trade-off balancing safety, capacity, efficiency and environmental considerations. Ideal flight: an unimpeded wind-optimal route with optimal climb and descent. Operations are degraded by reductions in airport and airspace capacity caused by inefficient procedures and disturbances.
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced order models match various combinations (chosen by the designer) of four types of parameters of the full order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, offers extreme flexibility to embrace combinations of existing methods, and provides some new features.
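The projection idea can be sketched as a Petrov-Galerkin reduction in which the right basis spans blocks of a controllability-type matrix and the left basis spans blocks of an observability-type matrix, so that low-frequency moments of the transfer function are matched; the paper's specific parameter-matching combinations are not reproduced, and the toy system below is an assumption.

```python
# Oblique (Petrov-Galerkin) projection reduction using controllability/observability-type bases.
import numpy as np

def oblique_reduction(A, B, C, q):
    """Reduce (A, B, C) using q blocks of low-frequency controllability/observability matrices."""
    Ainv = np.linalg.inv(A)
    V = np.hstack([np.linalg.matrix_power(Ainv, k) @ Ainv @ B for k in range(q)])      # right basis
    W = np.hstack([np.linalg.matrix_power(Ainv.T, k) @ Ainv.T @ C.T for k in range(q)])  # left basis
    V, _ = np.linalg.qr(V)
    W, _ = np.linalg.qr(W)
    E = W.T @ V                                  # oblique projector: P = V (W^T V)^{-1} W^T
    Ar = np.linalg.solve(E, W.T @ A @ V)
    Br = np.linalg.solve(E, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr

# toy stable MIMO system, reduced to order 4 (q blocks x 2 inputs)
rng = np.random.default_rng(4)
A = rng.normal(size=(12, 12)) - 13 * np.eye(12)  # stable and invertible (placeholder)
B = rng.normal(size=(12, 2)); C = rng.normal(size=(2, 12))
Ar, Br, Cr = oblique_reduction(A, B, C, q=2)
```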
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
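The ANOVA decomposition underlying this approach expands the parametric solution into terms of increasing input dimension, each of which is then approximated with a low-dimensional (reduced-basis) collocation surrogate; truncation after the first- or second-order terms is what yields the dimension reduction:

```latex
u(x,\xi_1,\dots,\xi_N) \;\approx\; u_0(x)
 \;+\; \sum_{i} u_i(x,\xi_i)
 \;+\; \sum_{i<j} u_{ij}(x,\xi_i,\xi_j) \;+\; \cdots
```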
Nanoscale Fe/Ag particles activated persulfate: optimization using response surface methodology.
Silveira, Jefferson E; Barreto-Rodrigues, Marcio; Cardoso, Tais O; Pliego, Gema; Munoz, Macarena; Zazo, Juan A; Casas, José A
2017-05-01
This work studied the bimetallic nanoparticles Fe-Ag (nZVI-Ag) activated persulfate (PS) in aqueous solution using response surface methodology. The Box-Behnken design (BBD) was employed to optimize three parameters (nZVI-Ag dose, reaction temperature, and PS concentration) using 4-chlorophenol (4-CP) as the target pollutant. The synthesis of nZVI-Ag particles was carried out through a reduction of FeCl2 with NaBH4 followed by reductive deposition of Ag. The catalyst was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and Brunauer-Emmett-Teller (BET) surface area. The BBD was considered a satisfactory model to optimize the process. Confirmatory tests were carried out using predicted and experimental values under the optimal conditions (50 mg/L nZVI-Ag, 21 mM PS at 57 °C), and the complete removal of 4-CP achieved experimentally was successfully predicted by the model, whereas the mineralization degree predicted (90%) was slightly overestimated against the measured data (83%).
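As a minimal illustration of the response-surface step (with hypothetical responses standing in for the measured 4-CP removals), the quadratic model used with a three-factor Box-Behnken design can be fitted by ordinary least squares:

```python
import numpy as np
from itertools import product

# Box-Behnken design for three coded factors (edge midpoints plus 3 center runs).
edges = []
for pair in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([-1, 1], repeat=2):
        pt = [0.0, 0.0, 0.0]
        pt[pair[0]], pt[pair[1]] = a, b
        edges.append(pt)
X = np.array(edges + [[0.0, 0.0, 0.0]] * 3)     # 15 runs, coded factors x1, x2, x3

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Hypothetical responses (e.g., 4-CP removal in %), NOT the study's data.
rng = np.random.default_rng(1)
true_beta = np.array([95, 3, 2, 4, -6, -4, -5, 1, 0.5, -1])
y = design_matrix(X) @ true_beta + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta, 2))   # fitted coefficients; the optimum is found by maximizing the surface
```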
NASA Astrophysics Data System (ADS)
Ando, T.; Kawasaki, A.; Koike, T.
2017-12-01
IPCC AR5 (2014) reported that rainfall in the middle latitudes of the Northern Hemisphere has been increasing since 1901, and it is claimed that a warmer climate will increase the risk of floods. In contrast, world water demand is forecast to exceed sustainable supply by 40 percent by 2030. To avoid this expected water shortage, securing new water resources has become an utmost challenge. However, flood risk prevention and securing water resources are contradictory goals. To address this problem, existing hydroelectric dams can be used not only as energy resources but also for flood control. In Japan, however, hydroelectric dams bear no responsibility for flood control, and the benefits accrued by using them to control floods, namely through preliminary water release, have not been discussed. Our paper therefore proposes a methodology for assessing those benefits. The methodology has three stages, as shown in Fig. 1. First, the RRI model is used to model flood events, taking account of the probability of rainfall. Second, flood damage is calculated from the assets in inundation areas multiplied by the inundation depths generated by the RRI model. Third, the losses stemming from preliminary water release are calculated, and adding them to the flood damage yields the overall losses. The benefits can be evaluated by changing the volume of preliminary release. As shown in Fig. 2, the use of hydroelectric dams to control flooding creates 20 billion yen in benefits, given a three-day-ahead prediction of the assumed maximum rainfall in the Oi River, Shizuoka Prefecture, Japan. The third priority adopted in the Sendai Framework for Disaster Risk Reduction 2015-2030 is 'investing in disaster risk reduction for resilience - public and private investment in disaster risk prevention and reduction through structural and non-structural measures'. The accuracy of rainfall prediction is the key factor in maximizing the benefits. Therefore, if the 20 billion yen in benefits accrued by adopting this evaluation methodology are invested in improving rainfall prediction, the accuracy of the forecasts will increase and so will the benefits. This positive feedback loop will benefit society. The results of this study may stimulate further discussion on the role of hydroelectric dams in flood control.
Subha, Bakthavachallam; Song, Young Chae; Woo, Jung Hui
2015-09-15
The present study aims to optimize the slow release biostimulant ball (BSB) for bioremediation of contaminated coastal sediment using response surface methodology (RSM). Different bacterial communities were evaluated using a pyrosequencing-based approach in contaminated coastal sediments. The effects of BSB size (1-5 cm), distance (1-10 cm) and time (1-4 months) on changes in chemical oxygen demand (COD) and volatile solid (VS) reduction were determined. Maximum reductions of COD and VS, 89.7% and 78.8%, respectively, were observed at a 3 cm ball size, 5.5 cm distance and 4 months; these values are the optimum conditions for effective treatment of contaminated coastal sediment. Most of the variance in COD and VS (0.9291 and 0.9369, respectively) was explained in our chosen models. BSB is a promising method for COD and VS reduction and enhancement of SRB diversity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model-driven design process. We propose a novel methodology to create models for sensor nodes based on a few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in an SDL runtime framework of a model-driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
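A minimal sketch of the kind of state-based node energy model described above; the state powers and duty cycle below are illustrative placeholders, not the authors' measured MICAz values:

```python
# Hypothetical state powers (mW); a real model would use measured values per state.
STATE_POWER_MW = {"sleep": 0.03, "cpu_active": 24.0, "rx": 60.0, "tx": 52.0}

def energy_mj(schedule):
    """Energy (millijoules) of a schedule given as (state, duration_s) pairs."""
    return sum(STATE_POWER_MW[state] * dt for state, dt in schedule)

# One second of operation: mostly asleep, brief wake-up to sample, listen, and transmit.
schedule = [("sleep", 0.95), ("cpu_active", 0.03), ("rx", 0.01), ("tx", 0.01)]
print(round(energy_mj(schedule), 3), "mJ per cycle")
```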
Methodological and hermeneutic reduction - a study of Finnish multiple-birth families.
Heinonen, Kristiina
2015-07-01
To describe reduction as a method in methodological and hermeneutic reduction and the hermeneutic circle using van Manen's principles, with the empirical example of the lifeworlds of multiple-birth families in Finland. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. Open interviews with public health nurses, family care workers and parents of twins. The systematic literature and knowledge review shows there were no articles on multiple-birth families that used van Manen's method. This paper presents reduction as a method that uses the hermeneutic circle. The lifeworlds of multiple-birth families consist of three core themes: 'A state of constant vigilance'; 'Ensuring that they can continue to cope'; and 'Opportunities to share with other people'. Reduction allows us to perform deep phenomenological-hermeneutic research and understand people's lifeworlds. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they also need further tools and training to be able to empower parents of twins. The many variations in adapting reduction mean its use can be very complex and confusing. This paper adds to the discussion of phenomenology, hermeneutic study and reduction.
Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.
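For context, Classical Guyan Reduction condenses the stiffness and mass matrices onto a set of retained analysis DOFs "a" by statically eliminating the omitted DOFs "o"; MGR and HR augment this transformation to recover the inertial effects it neglects. A standard statement of the classical step (a sketch, not the paper's augmented forms) is:

```latex
K = \begin{bmatrix} K_{aa} & K_{ao} \\ K_{oa} & K_{oo} \end{bmatrix},
\qquad
T_{G} = \begin{bmatrix} I \\ -K_{oo}^{-1} K_{oa} \end{bmatrix},
\qquad
K_{r} = T_{G}^{T} K\, T_{G}, \quad M_{r} = T_{G}^{T} M\, T_{G}.
```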
Projecting effects of improvements in passive safety of the New Zealand light vehicle fleet.
Keall, Michael; Newstead, Stuart; Jones, Wayne
2007-09-01
In the year 2000, as part of the process for setting New Zealand road safety targets, a projection was made for a reduction in social cost of 15.5 percent associated with improvements in crashworthiness, which is a measure of the occupant protection of the light passenger vehicle fleet. Since that document was produced, new estimates of crashworthiness have become available, allowing for a more accurate projection. The objective of this paper is to describe a methodology for projecting changes in casualty rates associated with passive safety features and to apply this methodology to produce a new prediction. The shape of the age distribution of the New Zealand light passenger vehicle fleet was projected to 2010. Projected improvements in crashworthiness and associated reductions in social cost were also modeled based on historical trends. These projections of changes in the vehicle fleet age distribution and of improvements in crashworthiness together provided a basis for estimating the future performance of the fleet in terms of secondary safety. A large social cost reduction of about 22 percent for 2010 compared to the year 2000 was predicted due to the expected huge impact of improvements in passive vehicle features on road trauma in New Zealand. Countries experiencing improvements in their vehicle fleets can also expect significant reductions in road injury compared to a less crashworthy passenger fleet. Such road safety gains can be analyzed using some of the methodology described here.
Power fluctuation reduction methodology for the grid-connected renewable power systems
NASA Astrophysics Data System (ADS)
Aula, Fadhil T.; Lee, Samuel C.
2013-04-01
This paper presents a new methodology for eliminating the influence of power fluctuations from renewable power systems. Renewable energy, which is considered an uncertain and uncontrollable resource, can only provide irregular electrical power to the power grid. This irregularity creates fluctuations in the power generated by the renewable power systems. These fluctuations cause instability in the power system and influence the operation of conventional power plants. Overall, the power system is vulnerable to collapse if necessary actions are not taken to reduce the impact of these fluctuations. This methodology aims to reduce these fluctuations and make the generated power capable of covering the power consumption. This requires a prediction tool for estimating the generated power in advance to provide the range and the time of occurrence of the fluctuations. Since most renewable energy sources are weather based, a weather forecast technique is used for predicting the generated power. The reduction of the fluctuation also requires stabilizing facilities to maintain the output power at a desired level. In this study, a wind farm and a photovoltaic array as renewable power systems and a pumped-storage plant and batteries as stabilizing facilities are used, since they are best suited to compensating for the fluctuations of these types of power supplies. As an illustrative example, a model of wind and photovoltaic power systems with battery energy and pumped hydro storage facilities for power fluctuation reduction is included, and its power fluctuation reduction is verified through simulation.
Building Modelling Methodologies for Virtual District Heating and Cooling Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Choudhury, Anamitra R.; Chandan, Vikas
District heating and cooling systems (DHC) are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for development of energy efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivated the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as it has several components interacting with each other. In this paper we present two building modelling methodologies to model the consumer buildings. These models will be further integrated with the network model and the control system layer to create a virtual test bed for the entire DHC system. The model is validated using data collected from a real-life DHC system located at Lulea, a city on the coast of northern Sweden. The test bed will then be used for simulating various test cases such as peak energy reduction, overall demand reduction, etc.
NASA Astrophysics Data System (ADS)
Guzman, Diego; Mohor, Guilherme; Câmara, Clarissa; Mendiondo, Eduardo
2017-04-01
Researchers from around the world relate global environmental changes to the increase in vulnerability to extreme events, such as heavy or scarce precipitation - floods and droughts. Hydrological disasters have caused increasing losses in recent years. Thus, risk transfer mechanisms, such as insurance, are being implemented to mitigate impacts, finance the recovery of the affected population, and promote the reduction of hydrological risks. However, among the main problems in implementing these strategies are: first, the partial knowledge of natural and anthropogenic climate change in terms of intensity and frequency; second, that efficient risk reduction policies require accurate risk assessment, with careful consideration of costs; and third, the uncertainty associated with the numerical models and input data used. The objective of this document is to introduce and discuss the feasibility of applying Hydrological Risk Transfer Models (HRTMs) as a strategy of adaptation to global climate change. The article shows the development of a methodology for collective and multi-sectoral vulnerability management, facing long-term hydrological risk, under an insurance fund simulator. The methodology estimates the optimized premium as a function of willingness to pay (WTP) and the potential direct loss derived from hydrological risk. The proposed methodology structures the watershed insurance scheme in three analysis modules. First, the hazard module characterizes the hydrologic threat from recorded input series or series modelled under IPCC/RCM-generated scenarios. Second, the vulnerability module calculates the potential economic loss for each sector evaluated as a function of the return period "TR". Finally, the finance module determines the value of the optimal aggregate premium by evaluating equiprobable scenarios of water vulnerability, taking into account variables such as the maximum limit of coverage, deductible, reinsurance schemes, and incentives for risk reduction. The methodology, tested by members of the Integrated Nucleus of River Basins (NIBH) (University of Sao Paulo (USP), School of Engineering of São Carlos (EESC), Brazil), presents an alternative for the analysis and planning of insurance funds, aiming to mitigate the impacts of hydrological droughts and stream flash floods. The presented procedure is especially important when information relevant to studies and to the development and implementation of insurance funds is difficult to access and complex to evaluate. A sequence of academic applications has been made in Brazil under the South American context, where the hydrological insurance market has low penetration compared to developed economies and more established insurance markets such as the United States and Europe, producing relevant information and demonstrating the potential of the methodology under development.
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
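A minimal square-root balanced-truncation sketch for a stable LTI model is given below (hypothetical data; the paper additionally handles unstable systems and applies a congruence transformation for state consistency, neither of which is shown here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Return an r-state balanced-truncated model of a stable (A, B, C)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Wc, Wo = 0.5 * (Wc + Wc.T), 0.5 * (Wo + Wo.T)    # symmetrize for numerical safety
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)                      # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                            # balancing transformation (kept columns)
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, hsv

# Toy stable full-order model (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
n = 10
A = rng.standard_normal((n, n))
A = A - (abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)   # shift to make it stable
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print(np.round(hsv, 4))   # decay of the Hankel singular values guides the truncation order
```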
Sizing the science data processing requirements for EOS
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi
1991-01-01
The methodology used in the compilation and synthesis of baseline science requirements associated with the 30 + EOS (Earth Observing System) instruments and over 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.
Zang, Y T; Li, B M; Bing, Sh; Cao, W
2015-09-01
In order to reduce the risk of enteric pathogens transmission in animal farms, the disinfection effectiveness of slightly acidic electrolyzed water (SAEW, pH 5.85 to 6.53) for inactivating Salmonella Enteritidis on the surface of plastic poultry transport cages was evaluated. The coupled effects of the tap water cleaning time (5 to 15 s), SAEW treatment time (20 to 40 s), and available chlorine concentrations (ACCs) of 30 to 70 mg/l on the reductions of S. Enteritidis on chick cages were investigated using a central composite design of the response surface methodology (RSM). The established RS model had a goodness of fit quantified by the parameter R2 (0.971), as well as a lack of fit test (P>0.05). The maximum reduction of 3.12 log10 CFU/cm2 for S. Enteritidis was obtained for the cage treated with tap water cleaning for 15 s followed by SAEW treatment for 40 s at an ACC of 50 mg/l. Results indicate that the established RS model has shown the potential of SAEW in disinfection of bacteria on cages. © 2015 Poultry Science Association Inc.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially worthy of notice, as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. The proposed reduced electrical circuit, being a structural combination of resistive, inertial and elastic properties, can be seen as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamic behavior of the respiratory system in response to a quasi-step excitation by valve closure.
NASA Astrophysics Data System (ADS)
Chavez, E.
2015-12-01
Future climate projections indicate that a very serious consequence of post-industrial anthropogenic global warming is the likely increase in the frequency and intensity of extreme hydrometeorological events such as heat waves, droughts, storms, and floods. The design of national and international policies targeted at building more resilient and environmentally sustainable food systems needs to rely on access to robust and reliable data, which is largely absent. In this context, improving the modelling of current and future agricultural production losses using the unifying language of risk is paramount. In this study, we use a methodology that allows the integration of the current understanding of the various interacting systems of climate, agro-environment, crops, and the economy to determine short- to long-term risk estimates of crop production loss in different environmental, climate, and adaptation scenarios. This methodology is applied to Tanzania to assess optimum paths for risk reduction and maize production increase in different climate scenarios. The simulations carried out use inputs from three different crop models (DSSAT, APSIM, WRSI) run in different technological scenarios, thus allowing estimation of the crop-model-driven bias in risk exposure estimates. The results obtained also allow distinguishing different region-specific optimum climate risk reduction policies subject to historical as well as RCP2.5 and RCP8.5 climate scenarios. The region-specific risk profiles obtained provide a simple framework to determine cost-effective risk management policies for Tanzania and allow investments in risk reduction and risk transfer to be combined optimally.
Parametric assessment of climate change impacts of automotive material substitution.
Geyer, Roland
2008-09-15
Quantifying the net climate change impact of automotive material substitution is not a trivial task. It requires the assessment of the mass reduction potential of automotive materials, the greenhouse gas (GHG) emissions from their production and recycling, and their impact on GHG emissions from vehicle use. The model presented in this paper is based on life cycle assessment (LCA) and completely parameterized, i.e., its computational structure is separated from the required input data, which is not traditionally done in LCAs. The parameterization increases scientific rigor and transparency of the assessment methodology, facilitates sensitivity and uncertainty analysis of the results, and also makes it possible to compare different studies and explain their disparities. The state of the art of the modeling methodology is reviewed and advanced. Assessment of the GHG emission impacts of material recycling through consequential system expansion shows that our understanding of this issue is still incomplete. This is a critical knowledge gap since a case study shows that for materials such as aluminum, the GHG emission impacts of material production and recycling are both of the same size as the use phase savings from vehicle mass reduction.
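A minimal sketch of the kind of parameterized accounting the paper formalizes; every number and parameter name below is a hypothetical placeholder, not the study's data: the net life-cycle GHG change is the extra production and end-of-life burden of the substitute material minus the use-phase savings from mass reduction.

```python
# All parameter values below are illustrative placeholders only.
def net_ghg_kgco2e(m_replaced_kg, m_substitute_kg,
                   e_prod_replaced, e_prod_substitute,     # kg CO2e per kg produced
                   e_eol_replaced, e_eol_substitute,       # kg CO2e per kg at end of life (credits negative)
                   fuel_reduction_value,                   # litres saved per 100 km per 100 kg saved
                   lifetime_km, e_fuel_kgco2e_per_litre):
    delta_mass = m_replaced_kg - m_substitute_kg           # mass saved on the vehicle
    production = m_substitute_kg * e_prod_substitute - m_replaced_kg * e_prod_replaced
    end_of_life = m_substitute_kg * e_eol_substitute - m_replaced_kg * e_eol_replaced
    use_phase = -delta_mass * fuel_reduction_value / 100.0 / 100.0 * lifetime_km * e_fuel_kgco2e_per_litre
    return production + end_of_life + use_phase            # negative result = net GHG benefit

# Example: 100 kg of a baseline material replaced by 60 kg of a lighter one (illustrative values).
print(round(net_ghg_kgco2e(100, 60, 2.0, 11.0, -1.5, -8.0, 0.35, 200_000, 2.9), 1))
```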
NASA Astrophysics Data System (ADS)
Larrañeta, M.; Moreno-Tejera, S.; Lillo-Bravo, I.; Silva-Pérez, M. A.
2018-02-01
Many of the available solar radiation databases only provide global horizontal irradiance (GHI), while there is a growing need for extensive databases of direct normal irradiance (DNI), mainly for the development of concentrated solar power and concentrated photovoltaic technologies. In the present work, we propose a methodology for the generation of synthetic DNI hourly data from hourly average GHI values by dividing the irradiance into a deterministic and a stochastic component, intending to emulate the dynamics of the solar radiation. The deterministic component is modeled through a simple classical model. The stochastic component is fitted to measured data in order to maintain the consistency of the synthetic data with the state of the sky, generating statistically significant DNI data with a cumulative frequency distribution very similar to that of the measured data. The adaptation and application of the model to the location of Seville shows significant improvements in terms of frequency distribution over the classical models. The proposed methodology, applied to other locations with different climatological characteristics, yields better results than the classical models in terms of frequency distribution, reaching a reduction of 50% in the Finkelstein-Schafer (FS) and Kolmogorov-Smirnov test integral (KSI) statistics.
Yuan, Hongping; Chini, Abdol R; Lu, Yujie; Shen, Liyin
2012-03-01
During the past few decades, construction and demolition (C&D) waste has received increasing attention from construction practitioners and researchers worldwide. A plethora of research regarding C&D waste management has been published in various academic journals. However, it has been determined that existing studies with respect to C&D waste reduction are mainly carried out from a static perspective, without considering the dynamic and interdependent nature of the whole waste reduction system. This might lead to misunderstanding about the actual effect of implementing any waste reduction strategies. Therefore, this research proposes a model that can serve as a decision support tool for projecting C&D waste reduction in line with the waste management situation of a given construction project, and more importantly, as a platform for simulating effects of various management strategies on C&D waste reduction. The research is conducted using system dynamics methodology, which is a systematic approach that deals with the complexity - interrelationships and dynamics - of any social, economic and managerial system. The dynamic model integrates major variables that affect C&D waste reduction. In this paper, seven causal loop diagrams that can deepen understanding about the feedback relationships underlying C&D waste reduction system are firstly presented. Then a stock-flow diagram is formulated by using software for system dynamics modeling. Finally, a case study is used to illustrate the validation and application of the proposed model. Results of the case study not only built confidence in the model so that it can be used for quantitative analysis, but also assessed and compared the effect of three designed policy scenarios on C&D waste reduction. One major contribution of this study is the development of a dynamic model for evaluating C&D waste reduction strategies under various scenarios, so that best management strategies could be identified before being implemented in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
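A minimal stock-flow sketch in the spirit of the system dynamics model described above; the structure and every parameter value are hypothetical, not the paper's calibration. A stock of on-site C&D waste is fed by generation and drained by reduction (reuse/recycling) and disposal flows whose rates respond to a management-effort policy, so scenarios can be compared by varying that policy lever.

```python
# Hypothetical stock-flow simulation; Euler integration with a 1-week time step.
def simulate(weeks=52, dt=1.0, mgmt_effort=0.5):
    waste_stock = 0.0                     # tonnes of C&D waste on site
    total_reduced = total_disposed = 0.0
    for _ in range(int(weeks / dt)):
        generation = 40.0                              # tonnes/week from construction activity
        reduction_rate = 0.1 + 0.5 * mgmt_effort       # fraction of stock reused/recycled per week
        disposal_rate = 0.4 * (1.0 - mgmt_effort)      # fraction of stock sent to landfill per week
        reduced = reduction_rate * waste_stock
        disposed = disposal_rate * waste_stock
        waste_stock += (generation - reduced - disposed) * dt
        total_reduced += reduced * dt
        total_disposed += disposed * dt
    return total_reduced, total_disposed

for effort in (0.2, 0.5, 0.8):            # compare three policy scenarios
    r, d = simulate(mgmt_effort=effort)
    print(f"effort={effort}: recycled {r:.0f} t, landfilled {d:.0f} t")
```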
Air quality impacts of distributed energy resources implemented in the northeastern United States.
Carreras-Sospedra, Marc; Dabdub, Donald; Brouwer, Jacob; Knipping, Eladio; Kumar, Naresh; Darrow, Ken; Hampson, Anne; Hedman, Bruce
2008-07-01
Emissions from the potential installation of distributed energy resources (DER) in the place of current utility-scale power generators have been introduced into an emissions inventory of the northeastern United States. A methodology for predicting future market penetration of DER that considers economics and emission factors was used to estimate the most likely implementation of DER. The methodology results in spatially and temporally resolved emission profiles of criteria pollutants that are subsequently introduced into a detailed atmospheric chemistry and transport model of the region. The DER technology determined by the methodology includes 62% reciprocating engines, 34% gas turbines, and 4% fuel cells and other emerging technologies. The introduction of DER leads to retirement of 2625 MW of existing power plants for which emissions are removed from the inventory. The air quality model predicts maximum differences in air pollutant concentrations that are located downwind from the central power plants that were removed from the domain. Maximum decreases in hourly peak ozone concentrations due to DER use are 10 ppb and are located over the state of New Jersey. Maximum decreases in 24-hr average fine particulate matter (PM2.5) concentrations reach 3 μg/m3 and are located off the coast of New Jersey and New York. The main contribution to decreased PM2.5 is the reduction of sulfate levels due to significant reductions in direct emissions of sulfur oxides (SOx) from the DER compared with the central power plants removed. The scenario presented here represents an accelerated DER penetration case with aggressive emission reductions due to removal of highly emitting power plants. Such a scenario provides an upper bound for air quality benefits of DER implementation scenarios.
NASA Astrophysics Data System (ADS)
Yin, Shaohua; Lin, Guo; Li, Shiwei; Peng, Jinhui; Zhang, Libo
2016-09-01
Microwave heating has been applied in the field of drying rare earth carbonates to improve drying efficiency and reduce energy consumption. The effects of power density, material thickness and drying time on the weight reduction (WR) are studied using response surface methodology (RSM). The results show that RSM is feasible for describing the relationship between the independent variables and weight reduction. Based on the analysis of variance (ANOVA), the model is in accordance with the experimental data. The optimum experimental conditions are a power density of 6 W/g, a material thickness of 15 mm and a drying time of 15 min, resulting in an experimental weight reduction of 73%. Comparative experiments show that microwave drying has the advantages of rapid dehydration and energy conservation. Particle analysis shows that the size distribution of rare earth carbonates after microwave drying is more even than that from an oven. Based on these findings, microwave heating technology is important for energy saving and improved production efficiency in rare earth smelting enterprises and is a green heating process.
NASA Technical Reports Server (NTRS)
Chan, David T.; Milholen, William E., II; Jones, Gregory S.; Goodliff, Scott L.
2014-01-01
A second wind tunnel test of the FAST-MAC circulation control semi-span model was recently completed in the National Transonic Facility at the NASA Langley Research Center. The model allowed independent control of four circulation control plenums producing a high momentum jet from a blowing slot near the wing trailing edge that was directed over a 15% chord simple-hinged flap. The model was configured for transonic testing of the cruise configuration with 0-deg flap deflection to determine the potential for drag reduction with the circulation control blowing. Encouraging results from analysis of wing surface pressures suggested that the circulation control blowing was effective in reducing the transonic drag on the configuration, however this could not be quantified until the thrust generated by the blowing slot was correctly removed from the force and moment balance data. This paper will present the thrust removal methodology used for the FAST-MAC circulation control model and describe the experimental measurements and techniques used to develop the methodology. A discussion on the impact to the force and moment data as a result of removing the thrust from the blowing slot will also be presented for the cruise configuration, where at some Mach and Reynolds number conditions, the thrust-removed corrected data showed that a drag reduction was realized as a consequence of the blowing.
The SIMRAND methodology - Simulation of Research and Development Projects
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1984-01-01
In research and development projects, a commonly occurring management decision is concerned with the optimum allocation of resources to achieve the project goals. Because of resource constraints, management has to make a decision regarding the set of proposed systems or tasks which should be undertaken. SIMRAND (Simulation of Research and Development Projects) is a methodology which was developed for aiding management in this decision. Attention is given to a problem description, aspects of model formulation, the reduction phase of the model solution, the simulation phase, and the evaluation phase. The implementation of the considered approach is illustrated with the aid of an example which involves a simplified network of the type used to determine the price of silicon solar cells.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
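A toy sketch of the four-step data-worth loop follows; the two functions and all numbers are hypothetical stand-ins for the chance-constrained management model and the network-design model, intended only to show how value of information is traded against data cost across budgets.

```python
# Toy data-worth assessment: compare projected management-cost savings with sampling cost.
def management_cost(uncertainty):
    """Stand-in for the chance-constrained management model: more uncertainty -> larger safety margin."""
    return 1.0e6 * (1.0 + 2.0 * uncertainty)

def projected_uncertainty(budget):
    """Stand-in for the network-design model: sampling reduces uncertainty with diminishing returns."""
    return 0.5 / (1.0 + 0.02 * budget)

base_cost = management_cost(projected_uncertainty(0))
best = None
for budget in (0, 10_000, 25_000, 50_000, 100_000):
    cost_after = management_cost(projected_uncertainty(budget))
    net_benefit = (base_cost - cost_after) - budget     # value of sample information minus data cost
    print(f"budget ${budget:>7,}: net benefit ${net_benefit:,.0f}")
    if best is None or net_benefit > best[1]:
        best = (budget, net_benefit)
print("best monitoring budget:", best[0])
```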
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems; a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular signal-regulated kinase activation mediated via the epidermal growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds; a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling - providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
Near-wall turbulence alteration through thin streamwise riblets
NASA Technical Reports Server (NTRS)
Wilkinson, Stephen P.; Lazos, Barry S.
1987-01-01
The possibility of improving the level of drag reduction associated with near-wall riblets is considered. The methodology involves the use of a hot-wire anemometer to study various surface geometries on small, easily constructed models. These models consist of small, adjacent rectangular channels on the wall aligned in the streamwise direction. The VITA technique is modified and applied to thin-element-array and smooth flat-plate data and the results are indicated schematically.
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with less computational resources can be effectively used in system level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
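A minimal POD/Galerkin sketch for a linear thermal system of the form ẋ = A x + B u follows (hypothetical 1-D conduction-like data, not the authors' package model): snapshots of the transient response are collected, the dominant left singular vectors form the basis Φ, and the Galerkin projection gives the reduced operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 6

# Hypothetical full-order thermal model: a 1-D conduction-like tridiagonal A, heat input at one end.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[0, 0] = 1.0

# Collect snapshots of the transient response (explicit Euler, unit input).
x = np.zeros(n); dt = 0.05; snaps = []
for _ in range(400):
    x = x + dt * (A @ x + B[:, 0])
    snaps.append(x.copy())
X = np.array(snaps).T                         # n x n_snapshots snapshot matrix

Phi, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = Phi[:, :r]                              # POD basis (dominant modes)

Ar = Phi.T @ A @ Phi                          # Galerkin-projected reduced operators
Br = Phi.T @ B
print("energy captured:", (s[:r]**2).sum() / (s**2).sum())
```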
A methodology to assess the economic impact of power storage technologies.
El-Ghandour, Laila; Johnson, Timothy C
2017-08-13
We present a methodology for assessing the economic impact of power storage technologies. The methodology is founded on classical approaches to the optimal stopping of stochastic processes but involves an innovation that circumvents the need to, ex ante, identify the form of a driving process and works directly on observed data, avoiding model risks. Power storage is regarded as a complement to the intermittent output of renewable energy generators and is therefore important in contributing to the reduction of carbon-intensive power generation. Our aim is to present a methodology suitable for use by policy makers that is simple to maintain, adaptable to different technologies and easy to interpret. The methodology has benefits over current techniques and is able to value, by identifying a viable optimal operational strategy, a conceived storage facility based on compressed air technology operating in the UK. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2006-11-01
A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.
NASA Astrophysics Data System (ADS)
Keen, A. S.; Lynett, P. J.; Ayca, A.
2016-12-01
Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to small craft marinas in California has become an important concern. The talk will outline an assessment tool which can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand on and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. The Monte Carlo methodology is a probabilistic computational tool in which the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach assigns a distribution to each variable, samples each random variable within the described parameters to generate a single computation, and then repeats the process hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporated the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once calibrated, the model was able to hindcast the damage produced in Santa Cruz Harbor during the 2010 Chile and 2011 Japan events. Results of the Santa Cruz analysis will be presented and discussed.
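A minimal Monte Carlo sketch of the demand-versus-capacity comparison described above; the distributions and every parameter value are illustrative assumptions, not the inspection-programme data. Drag demand on a mooring component is sampled from an uncertain current speed, capacity from an age-degraded strength distribution, and the failure probability is the fraction of draws where demand exceeds capacity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Demand: drag force F = 0.5 * rho * Cd * A * U^2 with uncertain current speed U.
rho, Cd, area = 1025.0, 1.2, 3.0                            # seawater density, drag coeff., projected area (m^2)
U = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n)      # current speed (m/s), illustrative distribution
demand = 0.5 * rho * Cd * area * U**2                       # N

# Capacity: nominal component strength times an age-related reduction factor.
nominal = rng.normal(12_000.0, 1_500.0, size=n)             # N, illustrative
age_factor = rng.uniform(0.6, 1.0, size=n)                  # in-situ degradation with age
capacity = nominal * age_factor

print("P(failure) =", np.mean(demand > capacity))
```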
NASA's Quiet Aircraft Technology Project
NASA Technical Reports Server (NTRS)
Whitfield, Charlotte E.
2004-01-01
NASA's Quiet Aircraft Technology Project is developing physics-based understanding, models and concepts to discover and realize technology that will, when implemented, achieve the goals of a reduction of one-half in perceived community noise (relative to 1997) by 2007 and a further one-half in the far term. Noise sources generated by both the engine and the airframe are considered, and the effects of engine/airframe integration are accounted for through the propulsion airframe aeroacoustics element. Assessments of the contribution of individual source noise reductions to the reduction in community noise are developed to guide the work and the development of new tools for evaluation of unconventional aircraft is underway. Life in the real world is taken into account with the development of more accurate airport noise models and flight guidance methodology, and in addition, technology is being developed that will further reduce interior noise at current weight levels or enable the use of lighter-weight structures at current noise levels.
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2015-02-01
Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such a methodology is computationally expensive when real life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
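The dimension-reduction step described here (an isometric embedding built from a neighbourhood graph and geodesic distances) is closely related to Isomap. A minimal illustration on synthetic data, not the microstructure samples of the paper:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Synthetic samples lying on a low-dimensional manifold embedded in a higher-dimensional space.
X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Graph-based isometric embedding, analogous to the mapping F: M -> A with d = 2.
embedding = Isomap(n_neighbors=12, n_components=2)
A = embedding.fit_transform(X)
print(X.shape, "->", A.shape)    # (1500, 3) -> (1500, 2)
```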
ERIC Educational Resources Information Center
Misra, Anjali; Schloss, Patrick J.
1989-01-01
The critical analysis of 23 studies using respondent techniques for the reduction of excessive emotional reactions in school children focuses on research design, dependent variables, independent variables, component analysis, and demonstrations of generalization and maintenance. Results indicate widespread methodological flaws that limit the…
NASA Astrophysics Data System (ADS)
Dai, Xiaoyu; Haussener, Sophia
2018-02-01
A multi-scale methodology for the radiative transfer analysis of heterogeneous media composed of morphologically-complex components on two distinct scales is presented. The methodology incorporates the exact morphology at the various scales and utilizes volume-averaging approaches with the corresponding effective properties to couple the scales. At the continuum level, the volume-averaged coupled radiative transfer equations are solved utilizing (i) effective radiative transport properties obtained by direct Monte Carlo simulations at the pore level, and (ii) averaged bulk material properties obtained at particle level by Lorenz-Mie theory or discrete dipole approximation calculations. This model is applied to a soot-contaminated snow layer, and is experimentally validated with reflectance measurements of such layers. A quantitative and decoupled understanding of the morphological effect on the radiative transport is achieved, and a significant influence of the dual-scale morphology on the macroscopic optical behavior is observed. Our results show that with a small amount of soot particles, of the order of 1 ppb in volume fraction, the reduction in reflectance of a snow layer with large ice grains can reach up to 77% (at a wavelength of 0.3 μm). Soot impurities modeled as compact agglomerates yield 2-3% lower reduction of the reflectance in a thick snow layer compared to snow with soot impurities modeled as chain-like agglomerates. Soot impurities modeled as equivalent spherical particles underestimate the reflectance reduction by 2-8%. This study implies that the morphology of the heterogeneities in a medium significantly affects the macroscopic optical behavior and, specifically for the soot-contaminated snow, indicates the non-negligible role of soot in the absorption behavior of snow layers. It can be equally used in technical applications for the assessment and optimization of optical performance in multi-scale media.
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Nelson, Austin A; Prabakar, Kumaraguru
As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time simulators and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a ruin & reconstruct methodology that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the feeders could be analyzed.
Balcazar, H; Alvarado, M; Ortiz, G
2011-01-01
This article describes 6 Salud Para Su Corazon (SPSC) family of programs that have addressed cardiovascular disease risk reduction in Hispanic communities facilitated by community health workers (CHWs) or Promotores de Salud (PS). A synopsis of the programs illustrates the designs and methodological approaches that combine community-based participatory research for 2 types of settings: community and clinical. Examples are provided as to how CHWs can serve as agents of change in these settings. A description is presented of a sustainability framework for the SPSC family of programs. Finally, implications are summarized for utilizing the SPSC CHW/PS model to inform ambulatory care management and policy.
The public health benefits of insulation retrofits in existing housing in the United States
Levy, Jonathan I; Nishioka, Yurika; Spengler, John D
2003-01-01
Background: Methodological limitations make it difficult to quantify the public health benefits of energy efficiency programs. To address this issue, we developed a risk-based model to estimate the health benefits associated with marginal energy usage reductions and applied the model to a hypothetical case study of insulation retrofits in single-family homes in the United States. Methods: We modeled energy savings with a regression model that extrapolated findings from an energy simulation program. Reductions of fine particulate matter (PM2.5) emissions and particle precursors (SO2 and NOx) were quantified using fuel-specific emission factors and marginal electricity analyses. Estimates of population exposure per unit emissions, varying by location and source type, were extrapolated from past dispersion model runs. Concentration-response functions for morbidity and mortality from PM2.5 were derived from the epidemiological literature, and economic values were assigned to health outcomes based on willingness-to-pay studies. Results: In total, the insulation retrofits would save 800 TBTU (8 × 10^14 British Thermal Units) per year across 46 million homes, resulting in 3,100 fewer tons of PM2.5, 100,000 fewer tons of NOx, and 190,000 fewer tons of SO2 per year. These emission reductions are associated with outcomes including 240 fewer deaths, 6,500 fewer asthma attacks, and 110,000 fewer restricted activity days per year. At a state level, the health benefits per unit energy savings vary by an order of magnitude, illustrating that multiple factors (including population patterns and energy sources) influence health benefit estimates. The health benefits correspond to $1.3 billion per year in externalities averted, compared with $5.9 billion per year in economic savings. Conclusion: In spite of significant uncertainties related to the interpretation of PM2.5 health effects and other dimensions of the model, our analysis demonstrates that a risk-based methodology is viable for national-level energy efficiency programs. PMID:12740041
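To make the impact-pathway arithmetic in the abstract above concrete, the sketch below chains energy savings to emissions, deaths avoided, and monetized benefits. It is not the study's model: the per-ton mortality factor and dollar value are illustrative placeholders (chosen only so the totals roughly echo the figures quoted above), and the real analysis uses location-specific dispersion, exposure, and concentration-response steps.

```python
def averted_burden(energy_saved_tbtu, emis_factor_tons_per_tbtu,
                   deaths_per_ton, value_per_death_usd):
    """Schematic impact-pathway arithmetic: energy -> emissions -> deaths -> dollars."""
    tons = energy_saved_tbtu * emis_factor_tons_per_tbtu
    deaths = tons * deaths_per_ton          # folds dispersion, exposure and C-R slope into one factor
    return tons, deaths, deaths * value_per_death_usd

# Illustrative, order-of-magnitude inputs implied by the abstract's totals, not the study's data
print(averted_burden(energy_saved_tbtu=800,
                     emis_factor_tons_per_tbtu=3.9,   # ~3,100 tons PM2.5 per 800 TBTU
                     deaths_per_ton=0.077,            # ~240 deaths per 3,100 tons
                     value_per_death_usd=5.4e6))
```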
Methodological problems with gamma-ray burst hardness/intensity correlations
NASA Technical Reports Server (NTRS)
Schaefer, Bradley E.
1993-01-01
The hardness and intensity are easily measured quantities for all gamma-ray bursts (GRBs), and so many past and current studies have sought correlations between them. This Letter presents many serious methodological problems with the practical definitions of both hardness and intensity. These difficulties are such that significant correlations can easily be introduced as artifacts of the reduction procedure. In particular, cosmological models of GRBs cannot be tested with hardness/intensity correlations using current instrumentation, and the time evolution of the hardness in a given burst may be correlated with intensity for reasons that are unrelated to intrinsic changes in the spectral shape.
NASA Astrophysics Data System (ADS)
Jathar, S. H.; Miracolo, M. A.; Presto, A. A.; Donahue, N. M.; Adams, P. J.; Robinson, A. L.
2012-10-01
We present a methodology to model secondary organic aerosol (SOA) formation from the photo-oxidation of unspeciated low-volatility organics (semi-volatile and intermediate-volatility organic compounds) emitted by combustion systems. It is formulated using the volatility basis-set approach. Unspeciated low-volatility organics are classified by volatility and then allowed to react with the hydroxyl radical. The new methodology allows for larger reductions in volatility with each oxidation step than previous volatility basis-set models, which is more consistent with the addition of common functional groups and similar to the reductions used by traditional SOA models. The methodology is illustrated using data collected during two field campaigns that characterized the atmospheric evolution of dilute gas-turbine engine emissions using a smog chamber. In those experiments, photo-oxidation formed a significant amount of SOA, much of which could not be explained based on the emissions of traditional speciated precursors; we refer to the unexplained SOA as non-traditional SOA (NT-SOA). The NT-SOA can be explained by emissions of unspeciated low-volatility organics measured using sorbents. We show that the parameterization proposed by Robinson et al. (2007) is unable to explain the timing of the NT-SOA formation in the aircraft experiments because it assumes a very modest reduction in volatility of the precursors with every oxidation reaction. In contrast, the new method better reproduces the NT-SOA formation. The NT-SOA yields estimated for the unspeciated low-volatility organic emissions in aircraft exhaust are similar to literature data for large n-alkanes and other low-volatility organics. The estimated yields vary with fuel composition (Jet Propellant-8 versus Fischer-Tropsch) and engine load (ground idle versus non-ground idle). The framework developed here is suitable for modeling SOA formation from emissions from other combustion systems.
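A minimal sketch of the volatility-basis-set bookkeeping described above is given below, assuming a simple scheme in which OH-reacted mass drops by a fixed number of volatility decades per oxidation step and then repartitions by absorptive equilibrium. The bin spacing, rate constant, mass-gain factor, and fixed two-decade shift are illustrative assumptions, not the fitted parameterization from the field campaigns, and a full model would solve the partitioning self-consistently with the organic aerosol loading.

```python
import numpy as np

# Schematic volatility-basis-set aging step (illustrative parameters only).
# Bins are saturation concentrations C* in ug/m3, log-spaced by decade.
Cstar = np.array([1e-2, 1e-1, 1, 10, 100, 1e3, 1e4, 1e5, 1e6])  # ug/m3

def age_one_step(mass, k_oh, oh_conc, dt, shift_decades=2, mass_gain=1.075):
    """Move OH-reacted mass down in volatility by `shift_decades` bins."""
    reacted = mass * (1.0 - np.exp(-k_oh * oh_conc * dt))
    aged = mass - reacted
    shifted = np.zeros_like(mass)
    for i in range(len(mass)):
        j = max(i - shift_decades, 0)   # larger shift than a 1-decade-per-step scheme
        shifted[j] += reacted[i] * mass_gain
    return aged + shifted

def partition(mass, c_oa):
    """Equilibrium absorptive partitioning: particle fraction = 1/(1 + C*/C_OA)."""
    return mass / (1.0 + Cstar / c_oa)

m = np.ones_like(Cstar)                  # 1 ug/m3 in each bin, purely illustrative
m = age_one_step(m, k_oh=2e-11, oh_conc=3e6, dt=3600.0)
print(partition(m, c_oa=10.0).sum(), "ug/m3 in the particle phase")
```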
Uher, Jana
2015-12-01
Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain for systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system onto a global reduced-order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results from the literature; in the second, the reduced-order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
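The sketch below illustrates the general reduced-basis idea the abstract describes: collect frequency-domain solution snapshots, build a global POD basis, project the affinely frequency-dependent operators once, and then sweep frequency on the small system. The toy matrices and the simple Z(ω) = K + iωC − ω²M form are stand-ins; the actual Biot-Allard u-p formulation has more affine terms and frequency-dependent material operators.

```python
import numpy as np

# Minimal frequency-domain reduced-basis projection with a POD basis (toy stand-in system).
rng = np.random.default_rng(0)
n = 200
M = np.eye(n)
K = np.diag(np.linspace(1.0, 50.0, n))
C = 0.02 * K                               # proportional damping, illustrative
f = rng.standard_normal(n)

def solve_full(w):
    return np.linalg.solve(K + 1j * w * C - w**2 * M, f)

# 1) snapshots at a coarse set of training frequencies
train_w = np.linspace(0.5, 7.0, 15)
S = np.column_stack([solve_full(w) for w in train_w])

# 2) POD: left singular vectors of the snapshot matrix form the global basis
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :8]                                # keep 8 modes (energy-based choice in practice)

# 3) project the affine operators once, then sweep frequency cheaply
Kr, Cr, Mr, fr = (V.conj().T @ K @ V, V.conj().T @ C @ V,
                  V.conj().T @ M @ V, V.conj().T @ f)
w = 3.3
x_rom = V @ np.linalg.solve(Kr + 1j * w * Cr - w**2 * Mr, fr)
print(np.linalg.norm(x_rom - solve_full(w)) / np.linalg.norm(solve_full(w)))
```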
Low-fidelity bench models for basic surgical skills training during undergraduate medical education.
Denadai, Rafael; Saad-Hossne, Rogério; Todelo, Andréia Padilha; Kirylko, Larissa; Souto, Luís Ricardo Martinhão
2014-01-01
The reduction in the number of medical students choosing general surgery as a career is remarkable. In this context, new possibilities in the field of surgical education should be developed to combat this lack of interest. In this study, a surgical training program based on learning with low-fidelity bench models is designed as a complementary alternative to the various methodologies for teaching basic surgical skills during medical education, and to develop personal interest in surgery as a career choice.
Model and controller reduction of large-scale structures based on projection methods
NASA Astrophysics Data System (ADS)
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem both theoretically and from a computational point of view. Robust controller design techniques frequently result in high-order controllers, so it is desirable to obtain reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques have large state-space dimensions, and problems related to storage, accuracy, and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on applying control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: nodal truncation, singular value decomposition methods, and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation, and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach is therefore proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
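As a point of reference for the classical techniques mentioned above, the following is a minimal square-root balanced-truncation sketch for a stable state-space model (A, B, C); the 4-state system is an illustrative stand-in, not one of the structural control benchmarks, and the listing says nothing about the dissipativity-based reduction proposed in the dissertation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)                      # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                            # right projection matrix
    Ti = S @ U[:, :r].T @ Lo.T                       # left projection matrix (Ti @ T = I)
    return Ti @ A @ T, Ti @ B, C @ T, hsv

# Illustrative lightly damped 2-mass system (stand-in, not a benchmark building model)
A = np.array([[0, 1, 0, 0], [-2, -0.02, 1, 0],
              [0, 0, 0, 1], [1, 0, -3, -0.03]], float)
B = np.array([[0.], [1.], [0.], [1.]])
C = np.array([[1., 0., 1., 0.]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(hsv)   # truncated states correspond to the smallest Hankel singular values
```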
Evaluation of Model-Based Training for Vertical Guidance Logic
NASA Technical Reports Server (NTRS)
Feary, Michael; Palmer, Everett; Sherry, Lance; Polson, Peter; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)
1997-01-01
This paper will summarize the results of a study which introduces a structured, model based approach to learning how the automated vertical guidance system works on a modern commercial air transport. The study proposes a framework to provide accurate and complete information in an attempt to eliminate confusion about 'what the system is doing'. This study will examine a structured methodology for organizing the ideas on which the system was designed, communicating this information through the training material, and displaying it in the airplane. Previous research on model-based, computer aided instructional technology has shown reductions in the amount of time to a specified level of competence. The lessons learned from the development of these technologies are well suited for use with the design methodology which was used to develop the vertical guidance logic for a large commercial air transport. The design methodology presents the model from which to derive the training material, and the content of information to be displayed to the operator. The study consists of a 2 X 2 factorial experiment which will compare a new method of training vertical guidance logic and a new type of display. The format of the material used to derive both the training and the display will be provided by the Operational Procedure Methodology. The training condition will compare current training material to the new structured format. The display condition will involve a change of the content of the information displayed into pieces that agree with the concepts with which the system was designed.
Schram-Bijkerk, D; van Kempen, E; Knol, A B; Kruize, H; Staatsen, B; van Kamp, I
2009-10-01
Few quantitative health impact assessments (HIAs) of transport policies have been published so far, and there is a lack of a common methodology for such assessments. To evaluate the usability of existing HIA methodology for quantifying the health effects of transport policies at the local level, the health impact of two simulated but realistic transport interventions - speed limit reduction and traffic re-allocation - was quantified by selecting traffic-related exposures and health endpoints, modelling population exposure, selecting exposure-effect relations, and estimating the number of local traffic-related cases and the disease burden, expressed in disability-adjusted life-years (DALYs), before and after the intervention. Exposure information was difficult to retrieve because of the local scale of the interventions, and exposure-effect relations for subgroups and combined effects were missing. Given the uncertainty in the outcomes originating from this kind of missing information, simulated changes in population health from the two local traffic interventions were estimated to be small (<5%), except for the estimated reduction in DALYs from fewer traffic accidents (60%) due to the speed limit reduction. Quantitative HIA of transport policies at a local scale is possible, provided that data on exposures, the exposed population and their baseline health status are available. The interpretation of the HIA information should be carried out in the context of the quality of input data and the assumptions and uncertainties of the analysis.
NASA Technical Reports Server (NTRS)
Salikuddin, M.; Martens, S.; Shin, H.; Majjigi, R. K.; Krejsa, Gene (Technical Monitor)
2002-01-01
The objective of this task was to develop a design methodology and noise reduction concepts for high-bypass exhaust systems that could be applied to both existing production and new advanced engine designs. Special emphasis was given to engine cycles with bypass ratios in the range of 4:1 to 7:1, where jet mixing noise is a primary noise source at full-power takeoff conditions. The goal of this effort was to develop the design methodology for mixed-flow exhaust systems and other novel noise reduction concepts that would yield a 3 EPNdB noise reduction relative to 1992 baseline technology. Two multi-lobed mixers, a 22-lobed axisymmetric mixer and a 21-lobed mixer with a unique lobe, were designed. These mixers, along with a confluent mixer, were tested with several fan nozzles of different lengths, with and without acoustic treatment, in GEAE's Cell 41 under the current subtask (Subtask C). In addition to the acoustic and LDA tests for the model mixer exhaust systems, a semi-empirical noise prediction method for mixer exhaust systems was developed. Effort was also made to implement flowfield data in noise prediction by utilizing the MGB code. In general, this study established an aero and acoustic diagnostic database to calibrate and refine current aero and acoustic prediction tools.
Simulation as a surgical teaching model.
Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos
2018-01-01
Teaching of surgery has been affected by many factors over the last years, such as the reduction of working hours, the optimization of operating room use, and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on the surgeon's training. Simulation as a teaching model minimizes such impact and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive, or behavioural skills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldemir, Tunc; Denning, Richard; Catalyurek, Umit
Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework, and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
Integrated Controls-Structures Design Methodology: Redesign of an Evolutionary Test Structure
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Joshi, Suresh M.
1997-01-01
An optimization-based integrated controls-structures design methodology for a class of flexible space structures is described, and the phase-0 Controls-Structures-Integration Evolutionary Model (CEM), a laboratory testbed at NASA Langley, is redesigned using this integrated design methodology. The integrated controls-structures design is posed as a nonlinear programming problem to minimize the control effort required to maintain a specified line-of-sight pointing performance under persistent white noise disturbance. Static and dynamic dissipative control strategies are employed for feedback control, and parameters of these controllers are considered as the control design variables. Sizes of strut elements in various sections of the CEM are used as the structural design variables. Design guides for the struts are developed and employed in the integrated design process to ensure that the redesigned structure can be effectively fabricated. The superiority of the integrated design methodology over the conventional design approach is demonstrated analytically by observing a significant reduction in the average control power needed to maintain the specified pointing performance with the integrated design approach.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Martens, Leon; Goode, Grahame; Wold, Johan F H; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas
2014-01-01
To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with focus on geriatric populations was analysed separately from the other four. With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield.
NASA Astrophysics Data System (ADS)
Mould, Richard F.; Asselain, Bernard; DeRycke, Yann
2004-03-01
For breast cancer, the prognosis of early-stage disease is very good, and even when local recurrences do occur they can present several years after treatment; the hospital resources required for annual follow-up examinations of what can be several hundreds of patients are financially significant. If, therefore, there is some method to estimate the maximum necessary length of follow-up, Tmax, then cost savings of physicians' time as well as outpatient workload reductions can be achieved. In modern oncology, where expenses continue to increase exponentially due to staff salaries and the cost of chemotherapy drugs and of new treatment and imaging technology, the economic situation can no longer be ignored. The methodology of parametric modelling based on the lognormal distribution is described, showing that useful estimates of Tmax can be made by making a trade-off between Tmax and the fraction of patients who will experience a delay in the detection of their local recurrence. This trade-off depends on the chosen tail of the lognormal distribution. The methodology is described for stage T1 and T2 breast cancer, and it is found that Tmax = 4 years, which is a significant reduction from the usual maximum of 10 years of follow-up employed by many hospitals for breast cancer patients. The methodology is equally applicable to cancers at other sites where the prognosis is good and some local recurrences may not occur until several years post-treatment.
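The trade-off described above can be written down directly: if recurrence times follow a lognormal distribution, the follow-up horizon Tmax for a tolerated fraction q of late-detected recurrences is the (1 - q) quantile. The sketch below uses illustrative lognormal parameters, not the fitted values for T1/T2 breast cancer.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative log-scale mean and standard deviation of recurrence time (years),
# not the study's fitted parameters.
mu, sigma = np.log(1.5), 0.6

for q in (0.10, 0.05, 0.01):        # fraction of recurrences allowed to be detected late
    tmax = lognorm(s=sigma, scale=np.exp(mu)).ppf(1 - q)
    print(f"tail fraction {q:.2f} -> Tmax = {tmax:.1f} years")
```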
Storage and growth of denitrifiers in aerobic granules: part I. model development.
Ni, Bing-Jie; Yu, Han-Qing
2008-02-01
A mathematical model, based on the Activated Sludge Model No. 3 (ASM3), is developed to describe the storage and growth activities of denitrifiers in aerobic granules under anoxic conditions. In this model, mass transfer, hydrolysis, simultaneous anoxic storage and growth, anoxic maintenance, and endogenous decay are all taken into account. The model is implemented in the well-established AQUASIM simulation software. A combination of the completely mixed reactor and biofilm reactor compartments provided by AQUASIM is used to simulate the mass transport and conversion processes occurring in both the bulk liquid and the granules. The modeling results explicitly show that the external substrate is immediately utilized for storage and growth in the feast phase. More external substrate is diverted to the storage process than to the primary biomass production process. The model simulation indicates that the nitrate utilization rate (NUR) of the granule-based denitrification process includes four linear phases of nitrate reduction. Furthermore, the methodology for determining the most important parameter in this model, the anoxic reduction factor, is established.
A Quantitative Evaluation of SCEC Community Velocity Model Version 3.0
NASA Astrophysics Data System (ADS)
Chen, P.; Zhao, L.; Jordan, T. H.
2003-12-01
We present a systematic methodology for evaluating and improving 3D seismic velocity models using broadband waveform data from regional earthquakes. The operator that maps a synthetic waveform into an observed waveform is expressed in the Rytov form D(ω) = exp[iω δτ_p(ω) − ω δτ_q(ω)]. We measure the phase-delay time δτ_p(ω) and the amplitude-reduction time δτ_q(ω) as functions of frequency ω using Gee & Jordan's [1992] isolation-filter technique, and we correct the data for frequency-dependent interference and frequency-independent source statics. We have applied this procedure to a set of small events in Southern California. Synthetic seismograms were computed using three types of velocity models: the 1D Standard Southern California Crustal Model (SoCaL) [Dreger & Helmberger, 1993], the 3D SCEC Community Velocity Model, Version 3.0 (CVM3.0) [Magistrale et al., 2000], and a set of path-averaged 1D models (A1D) extracted from CVM3.0 by horizontally averaging wave slownesses along source-receiver paths. The 3D synthetics were computed using K. Olsen's finite difference code. More than 1000 measurements were made on both P and S waveforms at frequencies ranging from 0.2 to 1 Hz. Overall, the 3D model provided a substantially better fit to the waveform data than either the laterally homogeneous or the path-dependent 1D models. Relative to SoCaL, CVM3.0 provided a variance reduction of about 64% in δτ_p and 41% in δτ_q. Relative to A1D, the variance reductions are about 46% and 20%, respectively. The same set of measurements can be employed to invert for both seismic source properties and seismic velocity structures. Fully numerical methods are being developed to compute the Fréchet kernels for these measurements [L. Zhao et al., this meeting]. This methodology thus provides a unified framework for regional studies of seismic sources and Earth structure in Southern California and elsewhere.
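A schematic reading of the Rytov form above: for a windowed observed/synthetic waveform pair, the transfer spectrum D(ω) = O(ω)/S(ω) yields δτ_p(ω) = arg D/ω and δτ_q(ω) = −ln|D|/ω. The toy signals below are simply a delayed, attenuated copy of a synthetic wavelet; the study's actual measurements use isolation filters and interference corrections, and the sign of δτ_p depends on the Fourier convention.

```python
import numpy as np

dt = 0.05
t = np.arange(0, 60, dt)
# Synthetic wavelet and an "observed" copy delayed by 1 s and scaled by 0.8 (toy data)
synth = np.exp(-0.5 * ((t - 20) / 2.0) ** 2) * np.sin(2 * np.pi * 0.5 * t)
obs = 0.8 * np.exp(-0.5 * ((t - 21) / 2.0) ** 2) * np.sin(2 * np.pi * 0.5 * (t - 1.0))

S = np.fft.rfft(synth)
O = np.fft.rfft(obs)
w = 2 * np.pi * np.fft.rfftfreq(len(t), dt)
band = (w > 2 * np.pi * 0.2) & (w < 2 * np.pi * 1.0)   # 0.2-1 Hz band, as in the study

D = O[band] / S[band]
dtau_p = np.unwrap(np.angle(D)) / w[band]   # phase-delay time (sign set by FFT convention)
dtau_q = -np.log(np.abs(D)) / w[band]       # amplitude-reduction time
print(dtau_p.mean(), dtau_q.mean())         # recovers the ~1 s shift and the 0.8 factor
```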
A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony.
Zhang, J W; Rangan, A V
2015-04-01
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary-differential-equations. The class of integrate-and-fire networks we focus on are homogeneously-structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing-events' involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the 4-dimensional augmented ODE.
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.
1993-01-01
An approach for an optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed in the form of simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and to a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs that have substantially superior performance over a conventional design wherein the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is indicated with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
A dynamic modelling framework towards the solution of reduction in smoking prevalence
NASA Astrophysics Data System (ADS)
Halim, Tisya Farida Abdul; Sapiri, Hasimah; Abidin, Norhaslinda Zainal
2016-10-01
This paper presents a hypothetical framework towards the solution for reducing smoking prevalence in Malaysia. The framework is designed to assist the decision-making process related to the reduction of smoking prevalence using system dynamics (SD) and OCT. In general, the framework is developed using the SD approach, with OCT embedded in the policy evaluation process. Smoking prevalence is one of the determinants that play an important role in measuring the successful implementation of anti-smoking strategies. Therefore, it is critical to determine the optimal value of smoking prevalence in order to trim down the hazardous effects of smoking on society. At the same time, the smoking problem becomes increasingly complex, since many issues ranging from the behavioural to the economic need to be considered simultaneously. Thus, a hypothetical framework of the control model embedded in the SD methodology is expected to obtain the minimum value of smoking prevalence, and the output will in turn provide a guideline for tobacco researchers as well as decision makers for policy design and evaluation.
Using a relative health indicator (RHI) metric to estimate health risk reductions in drinking water.
Alfredo, Katherine A; Seidel, Chad; Ghosh, Amlan; Roberson, J Alan
2017-03-01
When a new drinking water regulation is being developed, the USEPA conducts a health risk reduction and cost analysis to, in part, estimate the quantifiable and non-quantifiable costs and benefits of the various regulatory alternatives. Numerous methodologies are available for cumulative risk assessment, ranging from primarily qualitative to primarily quantitative. This research developed a summary metric of the relative cumulative health impacts resulting from drinking water, the relative health indicator (RHI). An intermediate level of quantification and modeling was chosen, one which retains the concept of an aggregated metric of public health impact and hence allows comparisons to be made across "cups of water," but avoids the need for development and use of complex models that are beyond the existing state of the science. Using the USEPA Six-Year Review data and available national occurrence surveys of drinking water contaminants, the metric is used to test risk reduction as it pertains to the implementation of the arsenic and uranium maximum contaminant levels and to quantify "meaningful" risk reduction. Uranium represented the threshold risk reduction against which national non-compliance risk reduction was compared for arsenic, nitrate, and radium. Arsenic non-compliance is most significant, and efforts focused on bringing those non-compliant utilities into compliance with the 10 μg/L maximum contaminant level would meet the threshold for meaningful risk reduction.
NASA Astrophysics Data System (ADS)
Silva, Humberto; Fillpot, Baron S.
2018-01-01
A reduction in both power and electricity usage was determined using a previously validated zero-dimensional energy balance model that implements mitigation strategies used to reduce the urban heat island (UHI) effect. The established model has been applied to show the change in urban characteristic temperature when executing four common mitigation strategies: increasing the overall (1) emissivity, (2) vegetated area, (3) thermal conductivity, and (4) albedo of the urban environment in a series of increases by 5, 10, 15, and 20% from baseline values. Separately, a correlation analysis was performed involving meteorological data and total daily energy (TDE) consumption where the 24-h average temperature was shown to have the greatest correlation to electricity service data in the Phoenix, Arizona, USA, metropolitan region. A methodology was then developed for using the model to predict TDE consumption reduction and corresponding cost-saving analysis when implementing the four mitigation strategies. The four modeled UHI mitigation strategies, taken in combination, would lead to the largest percent reduction in annual energy usage, where increasing the thermal conductivity is the single most effective mitigation strategy. The single least effective mitigation strategy, increasing the emissivity by 5% from the baseline value, resulted in an average calculated reduction of about 1570 GWh in yearly energy usage with a corresponding 157 million dollar cost savings. When the four parameters were increased in unison by 20% from baseline values, an average calculated reduction of about 2050 GWh in yearly energy usage was predicted with a corresponding 205 million dollar cost savings.
CellML metadata standards, associated tools and repositories
Beard, Daniel A.; Britten, Randall; Cooling, Mike T.; Garny, Alan; Halstead, Matt D.B.; Hunter, Peter J.; Lawson, James; Lloyd, Catherine M.; Marsh, Justin; Miller, Andrew; Nickerson, David P.; Nielsen, Poul M.F.; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M.; Yu, Tommy
2009-01-01
The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website. PMID:19380315
NASA Astrophysics Data System (ADS)
Shpotyuk, Ya; Cebulski, J.; Ingram, A.; Shpotyuk, O.
2017-12-01
Methodological possibilities of positron annihilation lifetime (PAL) spectroscopy applied to nanostructurized substances treated within a three-term fitting procedure are reconsidered to parameterize their atomic-deficient structural arrangement. In contrast to conventional three-term fitting analysis of the detected PAL spectra based on admixed positron trapping and positronium (Ps) decaying, the nanostructurization due to guest nanoparticles embedded in a host matrix is considered as producing modified trapping, which involves conversion between these channels. The developed approach, referred to as the x3-x2-coupling decomposition algorithm, allows estimation of the free volumes of interfacial voids responsible for positron trapping and of bulk lifetimes in nanoparticle-embedded substances. This methodology is validated using the experimental data of Chakraverty et al. [Phys. Rev. B 71 (2005) 024115] on a PAL study of composites formed by guest NiFe2O4 nanocrystals grown in a host SiO2 matrix.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problems of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A contained in R^d (d ≪ n) is then constructed.
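A small illustration of the nonlinear dimension-reduction step described above, assuming Isomap as the manifold-learning tool and synthetic stand-in "microstructures" built from a few parent fields; the paper's specific embedding construction and isometric-mapping details may differ.

```python
import numpy as np
from sklearn.manifold import Isomap

# Each row is a vectorized microstructure realization; the samples are built so that
# they lie near a low-dimensional manifold (mixtures of a few parent topologies).
rng = np.random.default_rng(0)
n_samples, n_pixels = 200, 32 * 32
base = rng.random((5, n_pixels))                       # hypothetical "parent" topologies
weights = rng.dirichlet(np.ones(5), size=n_samples)    # low-dimensional mixing coordinates
microstructures = weights @ base + 0.01 * rng.standard_normal((n_samples, n_pixels))

embedding = Isomap(n_neighbors=10, n_components=3)
xi = embedding.fit_transform(microstructures)          # low-dimensional stochastic input coordinates
print(xi.shape)                                        # (200, 3): surrogate input space for sampling
```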
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements sharing similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows and to predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to the integrating nature of the system reduction.
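A toy version of the community-detection step is sketched below: point vortices are connected with edge weights scaled like the mutually induced velocity, and modularity-based communities group elements with similar interaction patterns, whose centroids then serve as the reduced degrees of freedom. The weighting and the choice of greedy modularity maximization are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(3)
# Three spatial clusters of point vortices (illustrative configuration)
centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.85]])
pos = np.vstack([c + 0.08 * rng.standard_normal((14, 2)) for c in centers])
gamma = rng.uniform(-1.0, 1.0, len(pos))                 # vortex strengths

G = nx.Graph()
G.add_nodes_from(range(len(pos)))
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        d = np.linalg.norm(pos[i] - pos[j]) + 1e-6
        w = (abs(gamma[i]) + abs(gamma[j])) / (2 * np.pi * d)   # induced-velocity scale
        G.add_edge(i, j, weight=w)

communities = greedy_modularity_communities(G, weight="weight")
centroids = [pos[list(c)].mean(axis=0) for c in communities]    # reduced-order degrees of freedom
print(len(communities), centroids[0])
```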
Efficacy of radiation safety glasses in interventional radiology.
van Rooijen, Bart D; de Haan, Michiel W; Das, Marco; Arnoldussen, Carsten W K P; de Graaf, R; van Zwam, Wim H; Backes, Walter H; Jeukens, Cécile R L P N
2014-10-01
This study was designed to evaluate the reduction of the eye lens dose when wearing protective eyewear in interventional radiology and to identify conditions that optimize the efficacy of radiation safety glasses. The dose reduction provided by different models of radiation safety glasses was measured on an anthropomorphic phantom head, and the influence of the orientation of the phantom head on the dose reduction was studied in detail. The dose reduction in interventional radiological practice was assessed by dose measurements on radiologists wearing either leaded or no glasses or using a ceiling-suspended screen. The different models of radiation safety glasses provided a dose reduction by a factor of 7.9-10.0 for frontal exposure of the phantom. The dose reduction decreased markedly when the head was turned to the side relative to the irradiated volume. The eye closest to the tube was better protected due to side shielding and eyewear curvature. In clinical practice, the mean dose reduction was a factor of 2.1. Using a ceiling-suspended lead glass shield resulted in a mean dose reduction by a factor of 5.7. The efficacy of radiation protection glasses depends on the orientation of the operator's head relative to the irradiated volume. Glasses can offer good protection to the eye under clinically relevant conditions. However, the performance in clinical practice in our study was lower than expected. This is likely related to non-optimized room geometry and staff training, as well as to the measurement methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holttinen, Hannele; Kiviluoma, Juha; McCann, John
2015-10-05
This paper presents ways of estimating the CO2 reductions from wind power using different methodologies. Estimates based on historical data have more methodological pitfalls than estimates based on dispatch simulations. Taking into account the exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown for several countries. Wind power reduces emissions by about 0.3-0.4 tCO2/MWh when replacing mainly gas and by up to 0.7 tCO2/MWh when replacing mainly coal-powered generation. The paper focuses on CO2 emissions from the power system operation phase, but long-term impacts are briefly discussed.
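The dispatch-based estimate described above reduces to a simple sum: avoided emissions equal hourly wind generation times the emission factor of the displaced marginal plant. The four hours and the technology assignments below are made up for illustration; the per-MWh factors echo the ranges quoted above.

```python
# Back-of-envelope dispatch-displacement estimate (illustrative data only)
wind_mwh = [120, 200, 80, 150]                 # hourly wind generation, MWh
marginal = ["gas", "coal", "gas", "coal"]      # displaced marginal technology each hour
ef = {"gas": 0.35, "coal": 0.7}                # tCO2 per MWh, typical ranges

avoided = sum(g * ef[m] for g, m in zip(wind_mwh, marginal))
print(avoided, "tCO2 avoided")
```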
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Durlofsky, L. J.
2016-10-01
A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
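The discrete model the framework produces is just a list of cells and cell-to-cell transmissibilities; a minimal sketch of the two-point flux building block is given below, with illustrative values. The half-transmissibility form and harmonic combination follow standard finite-volume practice; the paper's optimized discretization and flow-based upscaling add further steps on top of this.

```python
def half_transmissibility(perm, area, dist):
    """One cell's contribution: permeability * interface area / cell-center-to-face distance."""
    return perm * area / dist

def transmissibility(t_i, t_j):
    """Harmonic combination of the two half-transmissibilities across a connection."""
    return t_i * t_j / (t_i + t_j)

# Matrix cell <-> fracture cell: the thin fracture has a tiny distance but high permeability
# (illustrative SI-like values, not taken from a real model)
t_matrix = half_transmissibility(perm=1e-15, area=1.0, dist=0.5)
t_frac = half_transmissibility(perm=1e-12, area=1.0, dist=5e-4)
print(transmissibility(t_matrix, t_frac))   # the low-permeability matrix side limits the exchange
```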
A Virtual Aluminum Reduction Cell
NASA Astrophysics Data System (ADS)
Zhang, Hongliang; Zhou, Chenn Q.; Wu, Bing; Li, Jie
2013-11-01
The most important component in the aluminum industry is the aluminum reduction cell; it has received considerable interest and resources for research to improve its productivity and energy efficiency. The current study focused on the integration of numerical simulation data and virtual reality technology to create a scientifically and practically realistic virtual aluminum reduction cell by presenting complex cell structures and physical-chemical phenomena. The multiphysical field simulation models were first built and solved in the ANSYS software (ANSYS Inc., Canonsburg, PA, USA). Then, the methodology of combining the simulation results with virtual reality was introduced, and a virtual aluminum reduction cell was created. The demonstration showed that a computer-based world could be created in which people who are not analysis experts can see the detailed cell structure in a context that they can easily understand. With the application of the virtual aluminum reduction cell, even people who are familiar with aluminum reduction cell operations can gain insights that make it possible to understand the root causes of observed problems and plan design changes in much less time.
Ho, Y C; Norli, I; Alkarkhi, Abbas F M; Morad, N
2009-01-01
The performance of pectin in turbidity reduction and the optimum conditions were determined using Response Surface Methodology (RSM). The effects of pH, cation concentration, and pectin dosage on flocculating activity and turbidity reduction were investigated at three levels and optimized using a Box-Behnken Design (BBD). The coagulation and flocculation processes were assessed with a standard jar test procedure with rapid and slow mixing of a kaolin (aluminium silicate) suspension, at 150 rpm and 30 rpm, respectively, in which a cation, e.g. Al3+, acts as the coagulant and pectin acts as the flocculant. In this research, all factors exhibited a significant effect on flocculating activity and turbidity reduction, and the experimental data and model predictions agreed well. From the 3D response surface graphs, maximum flocculating activity and turbidity reduction are obtained in the region of pH greater than 3, cation concentration greater than 0.5 mM, and pectin dosage greater than 20 mg/L, for synthetic turbid wastewater within the studied range. The flocculating activity for pectin and the turbidity reduction in wastewater reach 99%.
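The response-surface step amounts to fitting a second-order polynomial in the coded factors (pH, cation concentration, pectin dosage) to the jar-test responses. The sketch below does this by least squares on a three-factor Box-Behnken layout; the design points are standard, but the response values are invented for illustration and are not the study's data.

```python
import numpy as np

# Coded Box-Behnken design for three factors (standard layout) and invented responses
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], float)
y = np.array([62, 70, 75, 88, 60, 72, 80, 95, 58, 78, 82, 96, 90, 91, 89], float)

def design_matrix(X):
    """Second-order model: intercept, linear, two-factor interaction and quadratic terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta, 2))   # fitted response-surface coefficients
```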
Apollo, Seth; Onyango, Maurice S; Ochieng, Aoyi
2016-10-01
Anaerobic digestion (AD) is efficient in organic load removal and bioenergy recovery when applied to treating distillery effluent; however, it is ineffective in colour reduction. In contrast, ultraviolet (UV) photodegradation post-treatment of the AD-treated distillery effluent is effective in colour reduction but has a high energy requirement. The effects of operating parameters on bioenergy production and on the energy demand of photodegradation were modelled using response surface methodology (RSM) with a view to developing a sustainable process in which the biological step could supply energy to the energy-intensive photodegradation step. The organic loading rate (OLR_AD) and hydraulic retention time (HRT_AD) of the initial biological step were the variables investigated. It was found that the initial biological step removed about 90% of COD but only about 50% of colour, while photodegradation post-treatment removed 98% of the remaining colour. A maximum bioenergy production of 180.5 kWh/m3 was achieved. The energy demand of the UV lamp was lowest at low OLR_AD irrespective of HRT_AD, with values ranging between 87 and 496 kWh/m3. The bioenergy produced covered 93% of the UV lamp energy demand when the system was operated at an OLR_AD of 3 kg COD/m3 d and an HRT of 20 days. The presumed carbon dioxide emission reduction when electricity from bioenergy was used to power the UV lamp was 28.8 kg CO2e/m3, which could reduce carbon emissions by 31% compared to when electricity from the grid was used, leading to environmental conservation.
Yousefzadeh, Samira; Matin, Atiyeh Rajabi; Ahmadi, Ehsan; Sabeti, Zahra; Alimohammadi, Mahmood; Aslani, Hassan; Nabizadeh, Ramin
2018-04-01
One of the most important aspects of environmental issues is the demand for clean and safe water, and the disinfection process is one of the most important steps in safe water production. The present study aims at estimating the performance of UV, nano zero-valent iron particles (nZVI, nano-Fe0), and UV treatment with the addition of nZVI (combined process) for Bacillus subtilis spore inactivation. The effects of different factors on inactivation, including contact time, initial nZVI concentration, UV irradiance, and various aeration conditions, were investigated. Response surface methodology, based on a five-level, two-variable central composite design, was used to optimize target microorganism reduction and the experimental parameters. The results indicated that the disinfection time had the greatest positive impact on disinfection ability among the selected independent variables. According to the results, it can be concluded that microbial reduction by UV alone was more effective than by nZVI, while the combined UV/nZVI process demonstrated the maximum log reduction. An optimum reduction of about 4 logs was observed at 491 mg/L of nZVI and 60 min of contact time when spores were exposed to UV radiation under deaerated conditions. Therefore, the UV/nZVI process can be suggested as a reliable method for Bacillus subtilis spore inactivation.
Dangerfield, Emma M; Plunkett, Catherine H; Win-Mason, Anna L; Stocker, Bridget L; Timmer, Mattie S M
2010-08-20
New methodology for the protecting-group-free synthesis of primary amines is presented. By optimizing the metal hydride/ammonia mediated reductive amination of aldehydes and hemiacetals, primary amines were selectively prepared with no or minimal formation of the usual secondary and tertiary amine byproduct. The methodology was performed on a range of functionalized aldehyde substrates, including in situ formed aldehydes from a Vasella reaction. These reductive amination conditions provide a valuable synthetic tool for the selective production of primary amines in fewer steps, in good yields, and without the use of protecting groups.
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model those phenomena. To better assess the performance of the climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states in most cases do not have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
Analysis and Reduction of Complex Networks Under Uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.
Reduced order modeling and active flow control of an inlet duct
NASA Astrophysics Data System (ADS)
Ge, Xiaoqing
Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust duct, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable to model laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between CFD response and the identified model response onto a set of POD basis. This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimation for feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of the active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiment.
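As a concrete illustration of the POD step referenced above, the sketch below (Python, with synthetic snapshot data standing in for CFD output) extracts an orthogonal basis from a snapshot matrix via the SVD and projects a flow state onto it. The snapshot dimensions and the 99% energy cutoff are assumptions for illustration only.

```python
# Sketch of the POD step used in reduced-order flow modeling: collect snapshots,
# extract an "optimal" basis with the SVD, and project the state onto it.
import numpy as np

n_dof, n_snap = 2000, 60          # spatial degrees of freedom, snapshots
rng = np.random.default_rng(0)
# Synthetic snapshots built from a few coherent structures plus noise
modes_true = rng.standard_normal((n_dof, 3))
amps = rng.standard_normal((3, n_snap))
snapshots = modes_true @ amps + 0.01 * rng.standard_normal((n_dof, n_snap))

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# POD basis = left singular vectors; singular values give the captured "energy"
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # keep 99% of fluctuation energy
Phi = U[:, :r]

# Reduced coordinates and reconstruction error of one snapshot
a = Phi.T @ fluct[:, [0]]
recon = mean_flow + Phi @ a
rel_err = np.linalg.norm(recon - snapshots[:, [0]]) / np.linalg.norm(snapshots[:, [0]])
print("retained modes:", r, "relative reconstruction error:", rel_err)
```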
Lancelot, Christiane; Thieu, Vincent; Polard, Audrey; Garnier, Josette; Billen, Gilles; Hecq, Walter; Gypens, Nathalie
2011-05-01
Nutrient reduction measures have already been taken by wealthier countries to decrease nutrient loads to coastal waters, in most cases, however, prior to having properly assessed their ecological effectiveness and their economic costs. In this paper we describe an original integrated impact assessment methodology to estimate the direct cost and the ecological performance of realistic nutrient reduction options to be applied in the Southern North Sea watershed to decrease eutrophication, visible as Phaeocystis blooms and foam deposits on the beaches. The mathematical tool couples the idealized biogeochemical GIS-based model of the river system (SENEQUE-RIVERSTRAHLER), implemented in the Eastern Channel/Southern North Sea watershed, to the biogeochemical MIRO model describing Phaeocystis blooms in the marine domain. Model simulations explore how nutrient reduction options regarding diffuse and/or point sources in the watershed would affect the spreading of Phaeocystis colonies in the coastal area. The reference and prospective simulations are performed for the year 2000, characterized by mean meteorological conditions, and nutrient reduction scenarios include and compare upgrading of wastewater treatment plants and changes in agricultural practices, including an idealized shift towards organic farming. A direct cost assessment is performed for each realistic nutrient reduction scenario. Further, the reduction obtained for Phaeocystis blooms is assessed by comparison with ecological indicators (bloom magnitude and duration), and the cost for reducing foam events on the beaches is estimated. Uncertainty brought by the added effect of meteorological conditions (rainfall) on coastal eutrophication is discussed. It is concluded that the reduction obtained by implementing realistic environmental measures in the short term is costly and insufficient to restore well-balanced nutrient conditions in the coastal area, while the replacement of conventional agriculture by organic farming might be an option to consider in the near future. Copyright © 2011 Elsevier B.V. All rights reserved.
Mosaddeghi, Mohammad Reza; Pajoum Shariati, Farshid; Vaziri Yazdi, Seyed Ali; Nabi Bidhendi, Gholamreza
2018-06-21
The wastewater produced in the pulp and paper industry is one of the most polluted industrial wastewaters, and its treatment therefore requires complex processes. One of the simple and feasible processes in pulp and paper wastewater treatment is coagulation and flocculation. Overusing a chemical coagulant can produce a large volume of sludge and increase costs and health concerns. Therefore, the use of natural and plant-based coagulants has recently attracted the attention of researchers. One of the advantages of using Ocimum basilicum as a coagulant is a reduction in the amount of chemical coagulant required. In this study, the effect of basil mucilage as a plant-based coagulant, used together with alum for the treatment of paper recycling wastewater, was investigated. Response surface methodology (RSM) was used to optimize the chemical coagulation process based on a central composite rotatable design (CCRD). Quadratic models for colour reduction and TSS removal with coefficients of determination of R² > 0.96 were obtained using analysis of variance. Under optimal conditions, removal efficiencies of colour and total suspended solids (TSS) were 85% and 82%, respectively.
Development of South Dakota accident reduction factors
DOT National Transportation Integrated Search
1998-08-01
This report offers the methodology and findings of the first project to develop Accident Reduction Factors (ARFs) and Severity Reduction Ratios (SRRs) for the state of South Dakota. The ARFs and SRRs of this project focused on Hazard Elimination and ...
75 FR 46942 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-04
... employers. Should any needed methodological changes be identified, NIOSH will submit a request for modification to OMB. If no substantive methodological changes are required, the phase II study will proceed and... complete the questionnaire on the web or by telephone at that time.) Assuming no methodological changes...
NASA Astrophysics Data System (ADS)
Ji, Liang-Bo; Chen, Fang
2017-07-01
Numerical simulation and intelligent optimization techniques were adopted for the rolling and extrusion of zincked sheet. Using response surface methodology (RSM), a genetic algorithm (GA) and data processing technology, an efficient optimization of process parameters for the rolling of zincked sheet was investigated. The influence of roller gap, rolling speed and friction factor on the reduction rate and plate shortening rate was analyzed first. A predictive response surface model for the comprehensive quality index of the part was then created using RSM, and simulated and predicted values were compared. The genetic algorithm was then used to solve for the optimal process parameters of the rolling operation; these were verified, and the optimum rolling process parameters were obtained. The approach proved feasible and effective.
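The abstract does not give the fitted surface or GA settings, so the sketch below is only a schematic Python illustration of the RSM-plus-GA idea: a simple real-coded genetic algorithm searching a hypothetical quadratic quality-index surface over assumed parameter ranges (roller gap, rolling speed, friction factor).

```python
# Sketch: genetic algorithm search over a fitted response surface.
# The surface coefficients and parameter bounds below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
lo = np.array([0.8, 1.0, 0.05])   # roller gap, rolling speed, friction factor
hi = np.array([1.2, 3.0, 0.20])

def quality_index(x):
    g, v, mu = x.T
    # Hypothetical fitted quadratic surface (stand-in for the RSM model)
    return 5 - 3*(g - 1.0)**2 - 0.5*(v - 2.2)**2 - 40*(mu - 0.12)**2 + 0.2*g*v

pop = lo + (hi - lo) * rng.random((40, 3))
for gen in range(100):
    fit = quality_index(pop)
    parents = pop[np.argsort(fit)[-20:]]                  # truncation selection
    mates = parents[rng.integers(0, 20, size=(40, 2))]
    w = rng.random((40, 1))
    children = w * mates[:, 0] + (1 - w) * mates[:, 1]    # blend crossover
    children += 0.02 * (hi - lo) * rng.standard_normal((40, 3))  # mutation
    pop = np.clip(children, lo, hi)

best = pop[np.argmax(quality_index(pop))]
print("best parameters found:", np.round(best, 3))
```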
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
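As a minimal illustration of the value-improvement technique mentioned above, the Python sketch below runs value iteration on a small, randomly generated discounted Markov decision process; the state/action counts, transition probabilities and rewards are placeholders rather than anything from the report.

```python
# Minimal value-iteration sketch for a finite-state, finite-action, discounted
# infinite-horizon Markov decision process. All problem data are hypothetical.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(2)
# P[a, s, s'] = transition probability, R[s, a] = expected immediate reward
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(10_000):
    Q = R + gamma * np.einsum("asx,x->sa", P, V)   # action values
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:          # stop when values converge
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal value function:", np.round(V, 3))
print("optimal policy:", policy)
```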
Mind-life continuity: A qualitative study of conscious experience.
Hipólito, Inês; Martins, Jorge
2017-12-01
There are two fundamental models for understanding the phenomenon of natural life. One is the computational model, which is based on the symbolic thinking paradigm. The other is the biological organism model. The common difficulty attributed to these paradigms is that their reductive tools allow the phenomenological aspects of experience to remain hidden behind yes/no responses (behavioral tests) or brain 'pictures' (neuroimaging). Hence, one of the problems concerns how to overcome methodological difficulties in a non-reductive investigation of conscious experience. It is our aim in this paper to show how cooperation between Eastern and Western traditions may shed light on a non-reductive study of mind and life. This study focuses on the first-person experience associated with cognitive and mental events. We studied phenomenal data as a crucial fact for the domain of living beings, which, we expect, can provide the ground for a subsequent third-person study. The intervention with Jhana meditation, and its qualitative assessment, provided us with experiential profiles based upon subjects' evaluations of their own conscious experiences. The overall results should move towards an integrated or global perspective on mind in which neither experience nor external mechanisms have the final word. Copyright © 2017. Published by Elsevier Ltd.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faulds, James
We conducted a comprehensive analysis of the structural controls of geothermal systems within the Great Basin and adjacent regions. Our main objectives were to: 1) Produce a catalogue of favorable structural environments and models for geothermal systems. 2) Improve site-specific targeting of geothermal resources through detailed studies of representative sites, which included innovative techniques of slip tendency analysis of faults and 3D modeling. 3) Compare and contrast the structural controls and models in different tectonic settings. 4) Synthesize data and develop methodologies for enhancement of exploration strategies for conventional and EGS systems, reduction in the risk of drilling non-productive wells, and selecting the best EGS sites.
Dexter, Franklin; Abouleish, Amr E; Epstein, Richard H; Whitten, Charles W; Lubarsky, David A
2003-10-01
Potential benefits to reducing turnover times are both quantitative (e.g., complete more cases and reduce staffing costs) and qualitative (e.g., improve professional satisfaction). Analyses have shown the quantitative arguments to be unsound except for reducing staffing costs. We describe a methodology by which each surgical suite can use its own numbers to calculate its individual potential reduction in staffing costs from reducing its turnover times. Calculations estimate optimal allocated operating room (OR) time (based on maximizing OR efficiency) before and after reducing the maximum and average turnover times. At four academic tertiary hospitals, reductions in average turnover times of 3 to 9 min would result in 0.8% to 1.8% reductions in staffing cost. Reductions in average turnover times of 10 to 19 min would result in 2.5% to 4.0% reductions in staffing costs. These reductions in staffing cost are achieved predominantly by reducing allocated OR time, not by reducing the hours that staff work late. Heads of anesthesiology groups often serve on OR committees that are fixated on turnover times. Rather than having to argue based on scientific studies, this methodology provides the ability to show the specific quantitative effects (small decreases in staffing costs and allocated OR time) of reducing turnover time using a surgical suite's own data. Many anesthesiologists work at hospitals where surgeons and/or operating room (OR) committees focus repeatedly on turnover time reduction. We developed a methodology by which the reductions in staffing cost as a result of turnover time reduction can be calculated for each facility using its own data. Staffing cost reductions are generally very small and would be achieved predominantly by reducing allocated OR time to the surgeons.
Song, M K; Kim, H W; Rhee, M S
2016-06-01
We previously reported that a combination of heat and relative humidity (RH) had a marked bactericidal effect on Escherichia coli O157:H7 on radish seeds. Here, response surface methodology with a Box-Behnken design was used to build a model to predict reductions in E. coli O157:H7 populations based on three independent variables: heating temperature (55 °C, 60 °C, or 65 °C), RH (40%, 60%, or 80%), and holding time (8, 15, or 22 h). Optimum treatment conditions were selected using a desirability function. The predictive model for microbial reduction had a high regression coefficient (R² = 0.97), and the accuracy of the model was verified using validation data (R² = 0.95). Among the three variables examined, heating temperature (P < 0.0001) and RH (P = 0.004) were the most significant in terms of bacterial reduction and seed germination, respectively. The optimum conditions for microbial reduction (6.6 log reduction) determined by ridge analysis were as follows: 64.5 °C and 63.2% RH for 17.7 h. However, when both microbial reduction and germination rate were taken into consideration, the desirability function yielded optimal conditions of 65 °C and 40% RH for 8 h (6.6 log reduction in the bacterial population; 94.4% of seeds germinated). This study provides comprehensive data that improve our understanding of the effects of heating temperature, RH, and holding time on the E. coli O157:H7 population on radish seeds. Radish seeds can be exposed to these conditions before sprouting, which greatly increases the microbiological safety of the products. Copyright © 2015 Elsevier Ltd. All rights reserved.
Integrated cost-effectiveness analysis of agri-environmental measures for water quality.
Balana, Bedru B; Jackson-Blake, Leah; Martin-Ortega, Julia; Dunn, Sarah
2015-09-15
This paper presents an application of an integrated methodological approach for identifying cost-effective combinations of agri-environmental measures to achieve water quality targets. The methodological approach involves linking hydro-chemical modelling with the economic costs of mitigation measures. The utility of the approach was explored for the River Dee catchment in North East Scotland, examining the cost-effectiveness of mitigation measures for nitrogen (N) and phosphorus (P) pollutants. In-stream nitrate concentration was modelled using the STREAM-N model and phosphorus using the INCA-P model. Both models were first run for baseline conditions and then their effectiveness for changes in land management was simulated. Costs were based on farm income foregone, capital and operational expenditures. The cost and effect data were integrated using 'Risk Solver Platform' optimization in Excel to produce the most cost-effective combination of measures by which target nutrient reductions could be attained at minimum economic cost. The analysis identified different combinations of measures as most cost-effective for the two pollutants. An important aspect of this paper is the integration of model-based effectiveness estimates with the economic cost of measures for cost-effectiveness analysis of land and water management options. The methodological approach developed is not limited to the two pollutants and the selected agri-environmental measures considered in the paper; the approach can be adapted to the cost-effectiveness analysis of any catchment-scale environmental management options. Copyright © 2015 Elsevier Ltd. All rights reserved.
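The cost-effectiveness integration step lends itself to a small optimization sketch. The Python example below uses a simple linear program as a stand-in for the spreadsheet optimization described above; all costs, effectiveness figures, adoption limits and reduction targets are hypothetical.

```python
# Sketch: choose adoption levels of candidate measures so that target N and P
# load reductions are met at minimum cost. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([120.0, 300.0, 80.0, 450.0])   # per-unit annual cost of each measure
n_red = np.array([2.0, 6.0, 1.0, 9.0])         # per-unit N load reduction (t/yr)
p_red = np.array([0.1, 0.8, 0.3, 0.5])         # per-unit P load reduction (t/yr)
target_n, target_p = 40.0, 4.0                 # required total reductions
max_units = np.array([10, 8, 20, 5])           # adoption limits per measure

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the reduction
# targets are expressed as -(reduction achieved) <= -(target).
res = linprog(c=cost,
              A_ub=-np.vstack([n_red, p_red]),
              b_ub=-np.array([target_n, target_p]),
              bounds=list(zip(np.zeros(4), max_units)))
print("optimal adoption levels:", np.round(res.x, 2))
print("minimum total cost:", round(res.fun, 1))
```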
Modeling methylene chloride exposure-reduction options for home paint-stripper users.
Riley, D M; Small, M J; Fischhoff, B
2000-01-01
Home improvement is a popular activity, but one that can also involve exposure to hazardous substances. Paint stripping is of particular concern because of the high potential exposures to methylene chloride, a solvent that is a potential human carcinogen and neurotoxicant. This article presents a general methodology for evaluating the effectiveness of behavioral interventions for reducing these risks. It doubles as a model that assesses exposure patterns, incorporating user time-activity patterns and risk-mitigation strategies. The model draws upon recent innovations in indoor air-quality modeling to estimate exposure through inhalation and dermal pathways to paint-stripper users. It is designed to use data gathered from home paint-stripper users about room characteristics, amount of stripper used, time-activity patterns and exposure-reduction strategies (e.g., increased ventilation and modification in the timing of stripper application, scraping, and breaks). Results indicate that the effectiveness of behavioral interventions depends strongly on characteristics of the room (e.g., size, number and size of doors and windows, base air-exchange rates). The greatest simple reduction in exposure is achieved by using an exhaust fan in addition to opening windows and doors. These results can help identify the most important information for product labels and other risk-communication materials.
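To make the exposure-modeling idea concrete, here is a minimal, hypothetical one-compartment (well-mixed room) sketch in Python: a solvent emission pulse during stripper application, dilution by air exchange, and the inhaled dose under two ventilation behaviors. The parameter values and the simple box-model form are assumptions for illustration, not the article's full inhalation and dermal model.

```python
# One-compartment indoor air mass balance: dC/dt = E(t)/V - ach * C.
# All parameter values are hypothetical.
import numpy as np

V = 30.0            # room volume, m^3
ach_closed = 0.5    # air changes per hour, doors/windows closed
ach_open = 4.0      # air changes per hour, windows open plus exhaust fan
E = 5000.0          # emission rate during application, mg/h
breathing = 0.8     # inhalation rate, m^3/h

def exposure(ach, t_apply=1.0, t_total=4.0, dt=1/600):
    t = np.arange(0.0, t_total, dt)
    C = np.zeros_like(t)                  # concentration, mg/m^3
    for i in range(1, t.size):
        emit = E if t[i] <= t_apply else 0.0
        dCdt = emit / V - ach * C[i - 1]  # mass balance for a well-mixed zone
        C[i] = C[i - 1] + dCdt * dt
    inhaled_mg = float(np.sum(C) * dt * breathing)
    return C.max(), inhaled_mg

for label, ach in [("closed room", ach_closed), ("ventilated room", ach_open)]:
    peak, dose = exposure(ach)
    print(f"{label}: peak {peak:.0f} mg/m^3, inhaled dose {dose:.0f} mg")
```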
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2015-11-15
Risk management for wastewater treatment and reuse has led to growing interest in understanding and optimising pathogen reduction during biological treatment processes. However, modelling pathogen reduction is often limited by poor characterization of the relationships between variables and incomplete knowledge of removal mechanisms. The aim of this paper was to assess the applicability of Bayesian belief network models to represent associations between pathogen reduction, operating conditions and monitoring parameters, and to predict activated sludge (AS) performance. Naïve Bayes and semi-naïve Bayes networks were constructed from an activated sludge dataset including operating and monitoring parameters, and removal efficiencies for two pathogens (native Giardia lamblia and seeded Cryptosporidium parvum) and five native microbial indicators (F-RNA bacteriophage, Clostridium perfringens, Escherichia coli, coliforms and enterococci). First, we defined the Bayesian network structures for the two pathogen log10 reduction value (LRV) class nodes, discretized into two states (< and ≥ 1 LRV), using two different learning algorithms. Eight metrics, such as Prediction Accuracy (PA) and Area Under the receiver operating Curve (AUC), provided a comparison of model prediction performance, certainty and goodness of fit. This comparison was used to select the optimum models. The optimum tree-augmented naïve models predicted removal efficiency with high AUC when all system parameters were used simultaneously (AUCs for C. parvum and G. lamblia LRVs of 0.95 and 0.87, respectively). However, metrics for individual system parameters showed that only the C. parvum model was reliable. By contrast, individual parameters for G. lamblia LRV prediction typically obtained low AUC scores (AUC < 0.81). Useful predictors for C. parvum LRV included solids retention time, turbidity and total coliform LRV. The methodology developed appears applicable for predicting pathogen removal efficiency in water treatment systems generally. Copyright © 2015 Elsevier Ltd. All rights reserved.
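A naive Bayes classifier of the kind described above is simple to prototype. The Python sketch below uses scikit-learn on a synthetic dataset with made-up relationships between operating parameters and the ≥ 1 LRV outcome; it illustrates the classification and AUC-scoring idea only and does not reproduce the study's tree-augmented structures or data.

```python
# Predict whether an activated sludge plant achieves >= 1 log10 reduction of a
# pathogen from routine parameters, and score the model with AUC (synthetic data).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 300
srt = rng.uniform(2, 20, n)          # solids retention time, days
turbidity = rng.uniform(0.5, 10, n)  # effluent turbidity, NTU
coliform_lrv = rng.uniform(0, 3, n)  # total coliform log reduction

# Synthetic rule: longer SRT, lower turbidity and higher indicator removal
# make a >= 1 LRV outcome more likely.
logit = 0.2 * srt - 0.3 * turbidity + 1.0 * coliform_lrv - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([srt, turbidity, coliform_lrv])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC for predicting >= 1 LRV: {auc:.2f}")
```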
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Nagarajan, Adarsh; Chakraborty, Sudipta
This report presents an impact assessment study of distributed photovoltaic (PV) with smart inverter Volt-VAR control on conservation voltage reduction (CVR) energy savings and distribution system power quality. CVR is a methodology of flattening and lowering a distribution system voltage profile in order to conserve energy. Traditional CVR relies on operating utility voltage regulators and switched capacitors. However, with the increased penetration of distributed PV systems, smart inverters provide a new opportunity to control local voltage and power factor by regulating reactive power output, leading to a potential increase in CVR energy savings. This report proposes a methodology to implement a CVR scheme by operating voltage regulators, capacitors, and autonomous smart inverter Volt-VAR control in order to achieve increased CVR benefit. Power quality is an important consideration when operating a distribution system, especially when implementing CVR. It is easy to measure the individual components that make up power quality, but a comprehensive method to incorporate all of these values into a single score has yet to be developed. As a result, this report proposes a power quality scoring mechanism to measure the relative power quality of distribution systems using a single number, aptly named the 'power quality score' (PQS). Both the CVR and PQS methodologies were applied to two distribution system models, one obtained from the Hawaiian Electric Company (HECO) and another obtained from Pacific Gas and Electric (PG&E). These two models were converted to the OpenDSS platform using model conversion tools previously developed by NREL. Multiple scenarios including various PV penetration levels and smart inverter densities were simulated to analyze the impact of smart inverter Volt-VAR support on CVR energy savings and feeder power quality. In order to analyze the CVR benefit and PQS, an annual simulation was conducted for each scenario.
NASA Astrophysics Data System (ADS)
Varma, R. A. Raveendra
Magnetic fields of naval vessels are widely used all over the world for the detection and localization of naval vessels. Magnetic Anomaly Detectors (MADs) installed on airborne vehicles are used to detect submarines operating in shallow waters. Underwater mines fitted with magnetic sensors are used for the detection and destruction of naval vessels in times of conflict. Reduction of the magnetic signature of naval vessels is carried out by deperming and installation of a degaussing system onboard the vessel. The present paper elaborates on studies carried out at the Magnetics Division of the Naval Science and Technological Laboratory (NSTL) for minimizing the magnetic signature of naval vessels by designing a degaussing system. Magnetic fields of a small ship model are predicted and a degaussing system is designed for reducing magnetic detection. The details of the model, the methodology used for estimation of the magnetic signature of the vessel and the design of the degaussing system are brought out in this paper, with details of the experimental setup and results.
Toward a comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.
Choi, Ickwon; Kattan, Michael W; Wells, Brian J; Yu, Changhong
2012-01-01
In the medical community, prognostic models that use clinicopathologic features to predict prognosis after a certain treatment have been externally validated and used in practice. In recent years, most research has focused on high-dimensional genomic data and small sample sizes. Since clinically similar but molecularly heterogeneous tumors may produce different clinical outcomes, the combination of clinical and genomic information, which may be complementary, is crucial to improve the quality of prognostic predictions. However, there is a lack of an integrating scheme for clinico-genomic models due to the P ≥ N problem, in particular for a parsimonious model. We propose a methodology to build a reduced yet accurate integrative model using a hybrid approach based on the Cox regression model, which uses several dimension reduction techniques, L₂ penalized maximum likelihood estimation (PMLE), and resampling methods to tackle the problem. The predictive accuracy of the modeling approach is assessed by several metrics via an independent and thorough scheme to compare competing methods. In breast cancer data studies on metastasis and death events, we show that the proposed methodology can improve prediction accuracy and build a final model with a hybrid signature that is parsimonious when integrating both types of variables.
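A minimal sketch of the general idea (dimension reduction of the genomic block followed by an L2-penalized Cox fit) is shown below in Python. It uses the lifelines and scikit-learn libraries on synthetic data; the PCA step, penalty value, and column names are illustrative assumptions and do not reproduce the specific hybrid scheme or resampling procedure of the paper.

```python
# Dimension reduction of a high-dimensional genomic block, then a ridge-type
# penalized Cox regression combining clinical and genomic features (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n, p_genes = 200, 500
genes = rng.standard_normal((n, p_genes))          # high-dimensional genomic block
age = rng.uniform(35, 80, n)                       # clinical covariate
risk = 0.03 * age + 0.5 * genes[:, 0] - 0.4 * genes[:, 1]
time = rng.exponential(np.exp(-risk))              # synthetic survival times
event = (rng.random(n) < 0.7).astype(int)          # roughly 70% observed events

# Dimension reduction: compress the genomic block to a few component scores
scores = PCA(n_components=5, random_state=0).fit_transform(genes)
df = pd.DataFrame(scores, columns=[f"pc{i+1}" for i in range(5)])
df["age"] = age
df["time"] = time
df["event"] = event

# L2-penalized partial-likelihood fit (coefficient shrinkage)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.0)
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```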
Lee, Changju; So, Jaehyun Jason; Ma, Jiaqi
2018-01-02
The conflicts among motorists entering a signalized intersection with the red light indication have become a national safety issue. Because of its sensitivity, efforts have been made to investigate the possible causes and effectiveness of countermeasures using comparison sites and/or before-and-after studies. Nevertheless, these approaches are ineffective when comparison sites cannot be found, or crash data sets are not readily available or not reliable for statistical analysis. Considering the random nature of red light running (RLR) crashes, an inventive approach regardless of data availability is necessary to evaluate the effectiveness of each countermeasure face to face. The aims of this research are to (1) review erstwhile literature related to red light running and traffic safety models; (2) propose a practical methodology for evaluation of RLR countermeasures with a microscopic traffic simulation model and surrogate safety assessment model (SSAM); (3) apply the proposed methodology to actual signalized intersection in Virginia, with the most prevalent scenarios-increasing the yellow signal interval duration, installing an advance warning sign, and an RLR camera; and (4) analyze the relative effectiveness by RLR frequency and the number of conflicts (rear-end and crossing). All scenarios show a reduction in RLR frequency (-7.8, -45.5, and -52.4%, respectively), but only increasing the yellow signal interval duration results in a reduced total number of conflicts (-11.3%; a surrogate safety measure of possible RLR-related crashes). An RLR camera makes the greatest reduction (-60.9%) in crossing conflicts (a surrogate safety measure of possible angle crashes), whereas increasing the yellow signal interval duration results in only a 12.8% reduction of rear-end conflicts (a surrogate safety measure of possible rear-end crash). Although increasing the yellow signal interval duration is advantageous because this reduces the total conflicts (a possibility of total RLR-related crashes), each countermeasure shows different effects by RLR-related conflict types that can be referred to when making a decision. Given that each intersection has different RLR crash issues, evaluated countermeasures are directly applicable to enhance the cost and time effectiveness, according to the situation of the target intersection. In addition, the proposed methodology is replicable at any site that has a dearth of crash data and/or comparison sites in order to test any other countermeasures (both engineering and enforcement countermeasures) for RLR crashes.
Assessing the effects of transboundary ozone pollution between Ontario, Canada and New York, USA.
Brankov, Elvira; Henry, Robert F; Civerolo, Kevin L; Hao, Winston; Rao, S T; Misra, P K; Bloxam, Robert; Reid, Neville
2003-01-01
We investigated the effects of transboundary pollution between Ontario and New York using both observations and modeling results. Analysis of the spatial scales associated with ozone pollution revealed the regional and international character of this pollutant. A back-trajectory-clustering methodology was used to evaluate the potential for transboundary pollution trading and to identify potential pollution source regions for two sites: CN tower in Toronto and the World Trade Center in New York City. Transboundary pollution transport was evident at both locations. The major pollution source areas for the period examined were the Ohio River Valley and Midwest. Finally, we examined the transboundary impact of emission reductions through photochemical models. We found that emissions from both New York and Ontario were transported across the border and that reductions in predicted O3 levels can be substantial when emissions on both sides of the border are reduced.
Revealing the underlying drivers of disaster risk: a global analysis
NASA Astrophysics Data System (ADS)
Peduzzi, Pascal
2017-04-01
Disaster events are perfect examples of compound events. Disaster risk lies at the intersection of several independent components such as hazard, exposure and vulnerability. Understanding the weight of each component requires extensive standardisation. Here, I show how footprints of past disastrous events were generated using GIS modelling techniques and used for extracting population and economic exposures based on distribution models. Using past event losses, it was possible to identify and quantify a wide range of socio-politico-economic drivers associated with human vulnerability. The analysis was applied to about nine thousand individual past disastrous events covering earthquakes, floods and tropical cyclones. Using a multiple regression analysis on these individual events, it was possible to quantify each risk component and assess how vulnerability is influenced by various hazard intensities. The results show that hazard intensity, exposure, poverty, governance as well as other underlying factors (e.g. remoteness) can explain the magnitude of past disasters. Analysis was also performed to highlight the role of future trends in population and climate change and how this may impact exposure to tropical cyclones in the future. GIS models combined with statistical multiple regression analysis provided a powerful methodology to identify, quantify and model disaster risk taking into account its various components. The same methodology can be applied to various types of risk at local to global scale. This method was applied and developed for the Global Risk Analysis of the Global Assessment Report on Disaster Risk Reduction (GAR). It was first applied on mortality risk in GAR 2009 and GAR 2011. New models ranging from global asset exposure to global flood hazard models were also recently developed to improve the resolution of the risk analysis and applied through CAPRA software to provide probabilistic economic risk assessments such as Average Annual Losses (AAL) and Probable Maximum Losses (PML) in GAR 2013 and GAR 2015. In parallel, similar methodologies were developed to highlight the role of ecosystems for Climate Change Adaptation (CCA) and Disaster Risk Reduction (DRR). New developments may include slow hazards (e.g. soil degradation and droughts) and natech hazards (by intersecting with georeferenced critical infrastructures). The various global hazard, exposure and risk models can be visualized and downloaded through the PREVIEW Global Risk Data Platform.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper, a modeling method based on data reduction is investigated which uses pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach represents an extension of our previously developed lidar modeling framework and allows effective on- and offline wavelength optimization and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and with full Earth coverage, which was achieved through the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature, water vapor interference, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations that reduce the total errors in the retrieved XCO2 values.
1980-08-01
2. METHODOLOGY. The first step required in this study was to characterize the prone protected posture. Basically, a man in the prone posture differs ... reduction in the presented area of target personnel. Reference 6 contains a concise discussion of the methodology used to generate the shielding functions.
Identifying the features of an exercise addiction: A Delphi study
Macfarlane, Lucy; Owens, Glynn; Cruz, Borja del Pozo
2016-01-01
Objectives There remains limited consensus regarding the definition and conceptual basis of exercise addiction. An understanding of the factors motivating maintenance of addictive exercise behavior is important for appropriately targeting intervention. The aims of this study were twofold: first, to establish consensus on features of an exercise addiction using Delphi methodology and second, to identify whether these features are congruous with a conceptual model of exercise addiction adapted from the Work Craving Model. Methods A three-round Delphi process explored the views of participants regarding the features of an exercise addiction. The participants were selected from sport and exercise relevant domains, including physicians, physiotherapists, coaches, trainers, and athletes. Suggestions meeting consensus were considered with regard to the proposed conceptual model. Results and discussion Sixty-three items reached consensus. There was concordance of opinion that exercising excessively is an addiction, and therefore it was appropriate to consider the suggestions in light of the addiction-based conceptual model. Statements reaching consensus were consistent with all three components of the model: learned (negative perfectionism), behavioral (obsessive–compulsive drive), and hedonic (self-worth compensation and reduction of negative affect and withdrawal). Conclusions Delphi methodology allowed consensus to be reached regarding the features of an exercise addiction, and these features were consistent with our hypothesized conceptual model of exercise addiction. This study is the first to have applied Delphi methodology to the exercise addiction field, and therefore introduces a novel approach to exercise addiction research that can be used as a template to stimulate future examination using this technique. PMID:27554504
COST OF SELECTIVE CATALYTIC REDUCTION (SCR) APPLICATION FOR NOX CONTROL ON COAL-FIRED BOILERS
The report provides a methodology for estimating budgetary costs associated with retrofit applications of selective catalytic reduction (SCR) technology on coal-fired boilers. SCR is a postcombustion nitrogen oxides (NOx) control technology capable of providing NOx reductions >90...
Integrating risk assessment and life cycle assessment: a case study of insulation.
Nishioka, Yurika; Levy, Jonathan I; Norris, Gregory A; Wilson, Andrew; Hofstetter, Patrick; Spengler, John D
2002-10-01
Increasing residential insulation can decrease energy consumption and provide public health benefits, given changes in emissions from fuel combustion, but also has cost implications and ancillary risks and benefits. Risk assessment or life cycle assessment can be used to calculate the net impacts and determine whether more stringent energy codes or other conservation policies would be warranted, but few analyses have combined the critical elements of both methodologies. In this article, we present the first portion of a combined analysis, with the goal of estimating the net public health impacts of increasing residential insulation for new housing from current practice to the latest International Energy Conservation Code (IECC 2000). We model state-by-state residential energy savings and evaluate particulate matter less than 2.5 microm in diameter (PM2.5), NOx, and SO2 emission reductions. We use past dispersion modeling results to estimate reductions in exposure, and we apply concentration-response functions for premature mortality and selected morbidity outcomes using current epidemiological knowledge of effects of PM2.5 (primary and secondary). We find that an insulation policy shift would save 3 x 10^14 British thermal units (BTU; 3 x 10^17 J) over a 10-year period, resulting in reduced emissions of 1,000 tons of PM2.5, 30,000 tons of NOx, and 40,000 tons of SO2. These emission reductions yield an estimated 60 fewer fatalities during this period, with the geographic distribution of health benefits differing from the distribution of energy savings because of differences in energy sources, population patterns, and meteorology. We discuss the methodology to be used to integrate life cycle calculations, which can ultimately yield estimates that can be compared with costs to determine the influence of external costs on benefit-cost calculations.
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to predict the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
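For readers unfamiliar with the underlying technique, the following Python sketch shows single-fidelity non-intrusive polynomial chaos with point collocation for one standard-normal input: Hermite-chaos coefficients are fitted by least squares at sampled collocation points and the output mean and variance are read from the coefficients. The model function, polynomial order and oversampling factor are assumptions for illustration; the paper's multifidelity extension is not reproduced here.

```python
# Non-intrusive polynomial chaos via point collocation for a single
# standard-normal input, using probabilists' Hermite polynomials He_k.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def model(xi):
    # Placeholder "expensive" model evaluated at the collocation points
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 4
rng = np.random.default_rng(5)
xi = rng.standard_normal(2 * (order + 1))     # oversampled collocation points
Psi = hermevander(xi, order)                  # basis evaluations He_0..He_order
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# For He_k with a standard-normal input, E[He_k^2] = k!, so the statistics are:
mean = coeffs[0]
var = sum(coeffs[k]**2 * factorial(k) for k in range(1, order + 1))

mc = model(rng.standard_normal(200_000))      # Monte Carlo check
print(f"PCE mean {mean:.4f}, variance {var:.4f}")
print(f"MC  mean {mc.mean():.4f}, variance {mc.var():.4f}")
```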
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Modern aircraft design at transonic speeds is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this considerably slows down the whole design process, and these analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem; transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, root-locus, etc. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, which was designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.
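Guyan reduction itself is simple to illustrate. The Python sketch below condenses a small, hypothetical spring-mass chain (not the test-wing model) down to two retained degrees of freedom and compares the natural frequencies of the full and reduced models; the matrix values and the master/slave partition are arbitrary choices for demonstration.

```python
# Guyan (static condensation) reduction of stiffness and mass matrices.
import numpy as np

# 4-DOF spring-mass chain, fixed at one end (hypothetical values)
k = 1000.0
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)
M = np.diag([1.0, 1.0, 1.0, 0.5])

masters = [0, 3]                     # retained DOFs
slaves = [1, 2]                      # condensed DOFs

Kss = K[np.ix_(slaves, slaves)]
Ksm = K[np.ix_(slaves, masters)]

# Transformation x = T x_m: masters map to themselves, slaves follow statically
T = np.zeros((K.shape[0], len(masters)))
for i, m in enumerate(masters):
    T[m, i] = 1.0
T[slaves, :] = -np.linalg.solve(Kss, Ksm)

K_red = T.T @ K @ T
M_red = T.T @ M @ T

full = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real))
red = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M_red, K_red)).real))
print("full-model frequencies (rad/s):   ", np.round(full, 2))
print("reduced-model frequencies (rad/s):", np.round(red, 2))
```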
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial-derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation in making these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
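The averaged positive/negative perturbation idea amounts to a central-difference linearization of the simulation about its operating point. Below is a minimal Python sketch of that step on a hypothetical two-state, two-input nonlinear function (a stand-in, not the engine simulation); the perturbation sizes and example dynamics are arbitrary.

```python
# Central-difference extraction of a linear model x_dot ~= A x + B u
# about an operating point of a nonlinear simulation.
import numpy as np

def f(x, u):
    # Hypothetical nonlinear state-derivative function
    return np.array([
        -0.5 * x[0] + 0.1 * x[0] * x[1] + 2.0 * u[0],
        0.3 * x[0] - 0.8 * x[1] + 0.5 * u[1] ** 2 + u[1],
    ])

def linearize(f, x0, u0, dx=1e-4, du=1e-4):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                 # averaged +/- state perturbations
        e = np.zeros(n); e[j] = dx
        A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2 * dx)
    for j in range(m):                 # averaged +/- control perturbations
        e = np.zeros(m); e[j] = du
        B[:, j] = (f(x0, u0 + e) - f(x0, u0 - e)) / (2 * du)
    return A, B

x0, u0 = np.array([1.0, 2.0]), np.array([0.5, 0.3])
A, B = linearize(f, x0, u0)
print("A =\n", np.round(A, 4))
print("B =\n", np.round(B, 4))
```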
PEM Fuel Cells Redesign Using Biomimetic and TRIZ Design Methodologies
NASA Astrophysics Data System (ADS)
Fung, Keith Kin Kei
Two formal design methodologies, biomimetic design and the Theory of Inventive Problem Solving (TRIZ), were applied to the redesign of a Proton Exchange Membrane (PEM) fuel cell. Proof-of-concept prototyping was performed on two of the concepts for water management. The liquid water collection with strategically placed wicks concept demonstrated potential benefits for a fuel cell. Conversely, the periodic flow direction reversal concept might cause a reduction in water removal from a fuel cell; the causes of this reduction remain unclear. In addition, three of the concepts generated with biomimetic design were further studied and demonstrated to stimulate more creative ideas in the thermal and water management of fuel cells. The biomimetic design and TRIZ methodologies were successfully applied to fuel cells and provided different perspectives on the redesign of fuel cells. The methodologies should continue to be used to improve fuel cells.
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resultant system tells us the extent of risk reduction achieved by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.
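To show how successive safety measures can be credited quantitatively in a fault-tree setting, the Python sketch below evaluates a toy two-gate tree (a release OR-gate feeding an AND-gate with ignition) as each of three hypothetical measures lowers one basic-event probability. The event structure and all probabilities are invented for illustration and are not from the case study.

```python
# Toy fault tree: top event (major fire) requires a release (seal failure OR
# vessel overpressure) AND an ignition source. All probabilities are hypothetical.

def top_event(p_ign, p_seal, p_over):
    p_release = 1.0 - (1.0 - p_seal) * (1.0 - p_over)   # OR gate
    return p_ign * p_release                             # AND gate

scenarios = [
    ("baseline",                    1e-2, 1e-2, 5e-3),
    ("+ ignition source control",   1e-3, 1e-2, 5e-3),
    ("+ improved seal maintenance", 1e-3, 1e-3, 5e-3),
    ("+ upgraded relief system",    1e-3, 1e-3, 5e-4),
]
baseline = top_event(*scenarios[0][1:])
for name, *probs in scenarios:
    p = top_event(*probs)
    print(f"{name:30s} P(top) = {p:.2e}  ({100 * (1 - p / baseline):.0f}% reduction)")
```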
The Global Earthquake Model and Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Smolka, A. J.
2015-12-01
Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples for how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua-New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake-engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito/Ecuador. In agreement with GEM's collaborative approach, all projects are undertaken with strong involvement of local scientific and risk reduction communities. Open-source software and careful documentation of the methodologies create full transparency of the modelling process, so that results can be reproduced any time by third parties.
NASA Technical Reports Server (NTRS)
1976-01-01
The methodology used to predict full scale space shuttle solid rocket booster (SRB) water impact loads from scale model test data is described. Tests conducted included 12.5 inch and 120 inch diameter models of the SRB. Geometry and mass characteristics of the models were varied in each test series to reflect the current SRB baseline configuration. Nose first and tail first water entry modes were investigated with full-scale initial impact vertical velocities of 40 to 120 ft/sec, horizontal velocities of 0 to 60 ft/sec., and off-vertical angles of 0 to plus or minus 30 degrees. The test program included a series of tests with scaled atmospheric pressure.
A strategy to optimize CT pediatric dose with a visual discrimination model
NASA Astrophysics Data System (ADS)
Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.
2008-03-01
Technological developments in computed tomography (CT) have led to a drastic increase in its clinical utilization, creating concerns about patient exposure. To better control the dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low-contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. The noise realism of the simulations was verified by comparing the shape and amplitude of the normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of the phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as the reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low-contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.
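The white-noise-addition step above can be sketched simply: if image noise variance is assumed to scale inversely with dose (a quantum-noise-limited assumption made only for this illustration), the amount of noise to add for a target dose follows directly. The Python example below uses hypothetical CTDIvol and noise values and a synthetic uniform image; the published method additionally verified the noise power spectrum, which this sketch does not attempt.

```python
# Emulating a reduced-dose CT image by adding zero-mean white noise,
# assuming noise variance scales as 1/dose. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
full_dose_ctdi = 103.0      # mGy, reference acquisition
target_ctdi = 12.0          # mGy, simulated reduced dose
sigma_ref = 8.0             # HU, noise measured in the full-dose image

# Independent noise adds in variance: sigma_target^2 = sigma_ref^2 * D_ref/D_target
sigma_added = sigma_ref * np.sqrt(full_dose_ctdi / target_ctdi - 1.0)

image_full = 40.0 + sigma_ref * rng.standard_normal((256, 256))   # synthetic "image"
image_low = image_full + sigma_added * rng.standard_normal(image_full.shape)

print(f"added noise sigma: {sigma_added:.1f} HU")
print(f"measured noise: full dose {image_full.std():.1f} HU, "
      f"simulated low dose {image_low.std():.1f} HU "
      f"(expected {sigma_ref * np.sqrt(full_dose_ctdi / target_ctdi):.1f} HU)")
```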
Parking, energy consumption and air pollution.
Höglund, Paul G
2004-12-01
This paper examines the impacts of different ways of parking on environmental effects, mainly vehicle emissions and air pollution. Vehicle energy consumption and urban air quality at street level, related to the location and design of parking establishments, need to be assessed and quantified. In addition, the indoor parking environment needs attention. This paper gives a description of a methodological approach for comparing different parking establishments. The paper also briefly describes a Swedish attempt to create methods and models for assessing and quantifying such problems. The models are the macrolevel model BRAHE, for regional traffic exhaust emissions, and the micromodel SimPark, a parking search model combined with emission models. Until now, very limited knowledge exists regarding the various aspects of vehicle parking and environmental effects, in the technical field as well as in the social and human behaviour aspects. This requires an interdisciplinary approach to this challenging area for research, development and more directly practically implemented surveys and field studies. In order to illustrate the new evaluation methodology, the paper also contains some results from a pilot study in Stockholm. Given certain assumptions, a study of vehicle emissions from parking in an underground garage compared with kerbside parking has given an emission reduction of about 40% in favour of the parking garage. This study has been done using the models mentioned above.
This study develops contingent valuation methods for measuring the benefits of mortality and morbidity drinking water risk reductions. The major effort was devoted to developing and testing a survey instrument to value low-level risk reductions.
2008-12-01
... Return on Investment (ROI) of the Zephyr system. This is achieved by (1) Developing a model to carry out Business Case Analysis (BCA) of JCTDs, including ...
Tretter, F
2016-08-01
Methodological reflections on pain research and pain therapy focussing on addiction risks are addressed in this article. Starting from the incomplete objectification of pain and addiction, phenomena that are fully understandable only subjectively, the relevance of a comprehensive general psychology is underlined. It is shown that the reduction of pain and addiction to a neurobiology that argues mainly in focal terms is only possible if both disciplines have a systemic concept of pain and addiction. With this aim, parallelized conceptual network models are presented.
Combined quantum and molecular mechanics (QM/MM).
Friesner, Richard A
2004-12-01
We describe the current state of the art of mixed quantum mechanics/molecular mechanics (QM/MM) methodology, with a particular focus on modeling of enzymatic reactions. Over the past decade, the effectiveness of these methods has increased dramatically, based on improved quantum chemical methods, advances in the description of the QM/MM interface, and reductions in the cost/performance of computing hardware. Two examples of pharmaceutically relevant applications, cytochrome P450 and class C β-lactamase, are presented. © 2004 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/ recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
Hazard Interactions and Interaction Networks (Cascades) within Multi-Hazard Methodologies
NASA Astrophysics Data System (ADS)
Gill, Joel; Malamud, Bruce D.
2016-04-01
Here we combine research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between 'multi-layer single hazard' approaches and 'multi-hazard' approaches that integrate such interactions. This synthesis suggests that ignoring interactions could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. We proceed to present an enhanced multi-hazard framework, through the following steps: (i) describe and define three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment; (ii) outline three types of interaction relationship (triggering, increased probability, and catalysis/impedance); and (iii) assess the importance of networks of interactions (cascades) through case-study examples (based on literature, field observations and semi-structured interviews). We further propose visualisation frameworks to represent these networks of interactions. Our approach reinforces the importance of integrating interactions between natural hazards, anthropogenic processes and technological hazards/disasters into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential, and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability between successive hazards, and (iii) prioritise resource allocation for mitigation and disaster risk reduction.
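As a toy illustration of the interaction-network idea (not the authors' framework or data), the sketch below encodes a few hypothetical hazards and anthropogenic processes as a directed graph whose edge labels are the three interaction types, and enumerates simple cascades from a starting hazard.

```python
from collections import defaultdict

# Directed interaction network; edge type is one of
# "triggering", "increased_probability", "catalysis_impedance" (all entries illustrative)
interactions = {
    ("earthquake", "landslide"): "triggering",
    ("earthquake", "tsunami"): "triggering",
    ("landslide", "flood"): "increased_probability",
    ("deforestation", "landslide"): "catalysis_impedance",  # anthropogenic process
}

adjacency = defaultdict(list)
for (src, dst), kind in interactions.items():
    adjacency[src].append((dst, kind))

def cascades(start, depth=3, path=None):
    """Enumerate simple interaction chains (cascades) up to a given length."""
    path = [start] if path is None else path
    if depth == 0:
        return [path]
    chains = [path]
    for dst, _kind in adjacency[start]:
        if dst not in path:                      # avoid cycles
            chains += cascades(dst, depth - 1, path + [dst])
    return chains

for chain in cascades("earthquake"):
    if len(chain) > 1:
        print(" -> ".join(chain))
```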
Chew, Sook Chin; Tan, Chin Ping; Nyam, Kar Lin
2017-07-01
Kenaf seed oil has been suggested as a nutritious edible oil due to its unique fatty acid composition and nutritional value. The objective of this study was to optimize the bleaching parameters of the chemical refining process for kenaf seed oil, namely concentration of bleaching earth (0.5 to 2.5% w/w), temperature (30 to 110 °C) and time (5 to 65 min), based on the responses of total oxidation value (TOTOX) and color reduction, using response surface methodology. The results indicated that the corresponding response surface models were highly statistically significant (P < 0.0001) and sufficient to describe and predict TOTOX value and color reduction, with R² values of 0.9713 and 0.9388, respectively. The optimal parameters in the bleaching stage of kenaf seed oil were: 1.5% w/w concentration of bleaching earth, temperature of 70 °C, and time of 40 min. These optimum parameters produced bleached kenaf seed oil with a TOTOX value of 8.09 and color reduction of 32.95%. There were no significant differences (P > 0.05) between experimental and predicted values, indicating the adequacy of the fitted models. © 2017 Institute of Food Technologists®.
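A generic response-surface workflow can be sketched as follows: fit a second-order polynomial to designed-experiment data and then locate the optimum numerically within the factor bounds. The design points and responses below are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(x):
    """Second-order model terms for 3 factors: 1, x_i, x_i^2, x_i*x_j."""
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])

# Illustrative design points (bleaching earth % w/w, temperature C, time min) and a
# synthetic response standing in for, e.g., colour reduction (%).
X = np.array([[0.5, 30, 5], [2.5, 30, 5], [0.5, 110, 5], [2.5, 110, 5],
              [0.5, 30, 65], [2.5, 30, 65], [0.5, 110, 65], [2.5, 110, 65],
              [1.5, 70, 35], [1.5, 70, 35], [1.5, 70, 35]])
y = np.array([15, 22, 20, 27, 18, 25, 24, 30, 33, 32, 34])   # made-up responses

A = np.array([quad_features(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares model coefficients

predict = lambda x: quad_features(x) @ beta
bounds = [(0.5, 2.5), (30, 110), (5, 65)]
res = minimize(lambda x: -predict(x), x0=[1.5, 70, 35], bounds=bounds)
print("optimum factor settings:", res.x, "predicted response:", -res.fun)
```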
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
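In the linear setting referred to above, maximizing the Rayleigh quotient w'Aw / w'Bw reduces to a generalized symmetric eigenproblem. The sketch below illustrates that linear analogue on toy two-class data; it is not QUADRO's sparse quadratic formulation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
# Two-class toy data: A = between-class scatter, B = pooled within-class covariance
X0 = rng.normal(0.0, 1.0, (200, 5))
X1 = rng.normal(0.8, 1.0, (200, 5))
m0, m1 = X0.mean(0), X1.mean(0)
d = (m1 - m0)[:, None]
A = d @ d.T                                   # between-class scatter (rank 1)
B = 0.5 * (np.cov(X0.T) + np.cov(X1.T))       # within-class covariance

# Maximize w'Aw / w'Bw  ->  leading generalized eigenvector of the pencil (A, B)
vals, vecs = eigh(A, B)
w = vecs[:, -1]
rq = (w @ A @ w) / (w @ B @ w)
print("max Rayleigh quotient:", rq)
```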
Weather Observation Systems and Efficiency of Fighting Forest Fires
NASA Astrophysics Data System (ADS)
Khabarov, N.; Moltchanova, E.; Obersteiner, M.
2007-12-01
Weather observation is an essential component of modern forest fire management systems. Satellite and in-situ based weather observation systems might help to reduce forest loss, human casualties and destruction of economic capital. In this paper, we develop and apply a methodology to assess the benefits of various weather observation systems on reductions of burned area due to early fire detection. In particular, we consider a model where the air patrolling schedule is determined by a fire hazard index. The index is computed from gridded daily weather data for an area covering parts of Spain and Portugal. We conduct a number of simulation experiments. First, the resolution of the original data set is artificially reduced. The reduction of the total forest burned area associated with air patrolling based on a finer weather grid indicates the benefit of using higher spatially resolved weather observations. Second, we consider a stochastic model to simulate forest fires and explore the sensitivity of the model with respect to the quality of input data. The analysis of a combination of satellite and ground monitoring reveals potential cost savings due to a "system of systems" effect and a substantial reduction in burned area. Finally, we estimate the marginal improvement schedule for loss of life and economic capital as a function of the improved fire observing system.
Cao, Tanfeng; Russell, Robert L; Durbin, Thomas D; Cocker, David R; Burnette, Andrew; Calavita, Joseph; Maldonado, Hector; Johnson, Kent C
2018-04-13
Hybrid engine technology is a potentially important strategy for reduction of tailpipe greenhouse gas (GHG) emissions and other pollutants that is now being implemented for off-road construction equipment. The goal of this study was to evaluate the emissions and fuel consumption impacts of electric-hybrid excavators using a Portable Emissions Measurement System (PEMS)-based methodology. In this study, three hybrid and four conventional excavators were studied for both real world activity patterns and tailpipe emissions. Activity data was obtained using engine control module (ECM) and global positioning system (GPS) logged data, coupled with interviews, historical records, and video. This activity data was used to develop a test cycle with seven modes representing different types of excavator work. Emissions data were collected over this test cycle using a PEMS. The results indicated the HB215 hybrid excavator provided a significant reduction in tailpipe carbon dioxide (CO₂) emissions (from -13 to -26%), but increased diesel particulate matter (PM) (+26 to +27%) when compared to a similar model conventional excavator over the same duty cycle. Copyright © 2018 Elsevier B.V. All rights reserved.
Influence of model reduction on uncertainty of flood inundation predictions
NASA Astrophysics Data System (ADS)
Romanowicz, R. J.; Kiczko, A.; Osuch, M.
2012-04-01
Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedance, e.g. a 100 or 500 year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of the flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters, which are usually identified by solving the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for an urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted to compensate for the reduction in model structure. However, the price we pay is a loss of model robustness. Apart from a relatively simple question regarding reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by National Science Centre of Poland (grant 2011/01/B/ST10/06866).
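The pseudo-Bayesian treatment can be illustrated with a GLUE-style sketch: sample roughness values, weight each run by an informal likelihood built from its fit to an observed stage, and form weighted uncertainty bounds. The placeholder hydraulic function, threshold and numbers below are hypothetical and stand in for actual HEC-RAS runs; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_max_stage(n_manning):
    """Placeholder for a 1-D hydraulic run (e.g. HEC-RAS) returning peak stage [m]."""
    return 4.0 + 12.0 * n_manning + rng.normal(0.0, 0.05)

observed_stage = 4.45                              # hypothetical observed peak stage [m]
n_samples = rng.uniform(0.02, 0.06, 2000)          # Monte Carlo roughness samples
stages = np.array([simulate_max_stage(n) for n in n_samples])

# GLUE-style informal likelihood: inverse squared error, behavioural threshold of 0.1 m
errors = (stages - observed_stage) ** 2
weights = np.where(errors < 0.1**2, 1.0 / (errors + 1e-6), 0.0)
weights /= weights.sum()

# Weighted uncertainty bounds on a quantity of interest (here: the peak stage itself)
order = np.argsort(stages)
cdf = np.cumsum(weights[order])
lower = stages[order][np.searchsorted(cdf, 0.05)]
upper = stages[order][np.searchsorted(cdf, 0.95)]
print(f"5-95% pseudo-Bayesian bounds on peak stage: {lower:.2f}-{upper:.2f} m")
```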
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3) achieve fault-tolerant simulation.
This is the first phase of a potentially multi-phase project aimed at identifying scientific methodologies that will lead to the development of innovative analytical tools supporting the analysis of control strategy effectiveness, namely accountability. Significant reductions i...
Koron, Neža; Bratkič, Arne; Ribeiro Guevara, Sergio; Vahčič, Mitja; Horvat, Milena
2012-01-01
A highly sensitive laboratory methodology for simultaneous determination of methylation and reduction of spiked inorganic mercury (Hg²⁺) in marine water labelled with a high specific activity radiotracer (¹⁹⁷Hg prepared from enriched ¹⁹⁶Hg stable isotope) was developed. A conventional extraction protocol for methylmercury (CH₃Hg⁺) was modified in order to significantly reduce the partitioning of interfering labelled Hg²⁺ into the final extract, thus allowing the detection of as little as 0.1% of the Hg²⁺ spike transformed to labelled CH₃Hg⁺. The efficiency of the modified CH₃Hg⁺ extraction procedure was assessed by radiolabelled CH₃Hg⁺ spikes corresponding to concentrations of methylmercury between 0.05 and 4 ng L⁻¹. The recoveries were 73.0±6.0% and 77.5±3.9% for marine and MilliQ water, respectively. The reduction potential was assessed by purging and trapping the radiolabelled elemental Hg in a permanganate solution. The method allows detection of the reduction of as little as 0.001% of labelled Hg²⁺ spiked to natural waters. To our knowledge, the optimised methodology is among the most sensitive available to study Hg methylation and reduction potential, therefore allowing experiments to be done at spikes close to natural levels (1-10 ng L⁻¹). Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and currently allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
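A step these projection-based techniques share is the construction of a low-dimensional basis from solution snapshots, typically by proper orthogonal decomposition (POD). The sketch below shows that common building block on a toy parametric field; it is a generic illustration, not the report's GNAT or Lagrangian-structure-preserving implementations.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD basis from a snapshot matrix (n_dof x n_snapshots).

    Keeps the smallest number of left singular vectors capturing the requested
    fraction of the snapshot energy (sum of squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k], s

# Toy snapshots from a parameterized field u(x; mu) = sin(pi * x * mu)
x = np.linspace(0.0, 1.0, 500)
S = np.column_stack([np.sin(np.pi * x * mu) for mu in np.linspace(1.0, 3.0, 40)])
V, s = pod_basis(S)
print("reduced dimension:", V.shape[1])

# A Galerkin ROM then evolves reduced coordinates q with u ~ V q; e.g. for a
# linear system A u = b the reduced problem is (V.T @ A @ V) q = V.T @ b.
```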
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
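The second step relies on variance-based (Sobol) global sensitivity indices. A minimal pick-and-freeze Monte Carlo estimator on a toy model is sketched below; the model, input distributions and sample sizes are illustrative and are not the GTM subsystem models of the challenge problem.

```python
import numpy as np

def first_order_sobol(model, d, n=20000, rng=None):
    """Saltelli-style pick-and-freeze estimator of first-order Sobol indices."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # freeze x_i from B, keep the rest from A
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy response with one dominant input (x0) and a weak one (x2)
def toy_model(X):
    return 5.0 * X[:, 0] + 1.0 * X[:, 1] ** 2 + 0.2 * X[:, 2]

print(first_order_sobol(toy_model, d=3, rng=np.random.default_rng(0)))
```

The variable with the largest first-order index would be refined first, after which the inference and sensitivity computation are repeated, mirroring the sequential refinement loop described above.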
Software for Probabilistic Risk Reduction
NASA Technical Reports Server (NTRS)
Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto
2004-01-01
A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.
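In the same spirit (a toy sketch, not the DDP or probabilistic-risk-reduction implementation), risk-based planning can be cast as choosing, within a budget, the mitigation actions that most reduce expected loss, where each failure mode's risk is its likelihood times its impact and each action cuts some likelihoods by a fraction. All names and numbers below are hypothetical.

```python
# Failure modes: (likelihood, impact); expected loss = likelihood * impact
risks = {"requirements_drift":  (0.30, 100.0),
         "integration_failure": (0.20, 250.0),
         "late_delivery":       (0.40, 60.0)}

# Mitigation actions: (cost, fractional likelihood reduction per failure mode)
actions = {"prototype_early":          (15.0, {"requirements_drift": 0.5, "late_delivery": 0.2}),
           "extra_integration_tests":  (25.0, {"integration_failure": 0.6}),
           "schedule_buffer":          (10.0, {"late_delivery": 0.5})}

def expected_loss(likelihoods):
    return sum(likelihoods[m] * impact for m, (_, impact) in risks.items())

def apply_action(action, likelihoods):
    _, effects = actions[action]
    return {m: p * (1.0 - effects.get(m, 0.0)) for m, p in likelihoods.items()}

budget, chosen = 40.0, []
likelihoods = {m: p for m, (p, _) in risks.items()}
remaining = set(actions)
while remaining:
    # greedily pick the affordable action with the best risk reduction per unit cost
    best = max((a for a in remaining if actions[a][0] <= budget),
               key=lambda a: (expected_loss(likelihoods)
                              - expected_loss(apply_action(a, likelihoods))) / actions[a][0],
               default=None)
    if best is None:
        break
    chosen.append(best)
    budget -= actions[best][0]
    likelihoods = apply_action(best, likelihoods)
    remaining.remove(best)

print("selected actions:", chosen,
      "residual expected loss:", round(expected_loss(likelihoods), 1))
```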
NASA Astrophysics Data System (ADS)
Nyboer, John
Issues related to the reduction of greenhouse gases are encumbered with uncertainties for decision makers. Unfortunately, conventional analytical tools generate widely divergent forecasts of the effects of actions designed to mitigate these emissions. "Bottom-up" models show the costs of reducing emissions attained through the penetration of efficient technologies to be low or negative. In contrast, more aggregate "top-down" models show costs of reduction to be high. The methodological approaches of the different models used to simulate energy consumption generate, in part, the divergence found in model outputs. To address this uncertainty and bring convergence, I use a technology-explicit model that simulates turnover of equipment stock as a function of detailed data on equipment costs and stock characteristics and of verified behavioural data related to equipment acquisition and retrofitting. Such detail can inform the decision maker of the effects of actions to reduce greenhouse gases due to changes in (1) technology stocks, (2) products or services, or (3) the mix of fuels used. This thesis involves two main components: (1) the development of a quantitative model to analyse energy demand and (2) the application of this tool to a policy issue, abatement of CO₂ emissions. The analysis covers all of Canada by sector (8 industrial subsectors, residential, commercial) and region. An electricity supply model to provide local electricity prices supplemented the quantitative model. Forecasts of growth and structural change were provided by national macroeconomic models. Seven different simulations were applied to each sector in each region, including a base case run and three runs simulating emissions charges of 75/tonne, 150/tonne and 225/tonne CO₂. The analysis reveals that there is significant variation in the costs and quantity of emissions reduction by sector and region. Aggregated results show that Canada can meet both stabilisation targets (1990 levels of emissions by 2000) and reduction targets (20% less than 1990 by 2010), but the cost of meeting reduction targets exceeds 225/tonne. After a review of the results, I provide several reasons for concluding that the costs are overestimated and the emissions reduction underestimated. I also provide several future research options.
Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2014-01-01
Objective: While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials: We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results: Of the feature vectors investigated, the best performance was observed with the Minkowski functional 'perimeter', while comparable performance was observed with 'area'. Of the dimension reduction algorithms tested with 'perimeter', the best performance was observed with Sammon's mapping (0.84 ± 0.10), while comparable performance was achieved with the exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions: The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
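The central design point (fit the dimension reduction on the training split only, then map held-out data with the learned transform before classification) can be sketched as below, using PCA and support vector regression from scikit-learn as stand-ins for the non-linear reduction and out-of-sample extension methods of the paper; the data and sizes are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))                                           # stand-in feature vectors
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 120) > 0).astype(float)     # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Dimension reduction is fit on the training split ONLY ...
reducer = PCA(n_components=10).fit(X_tr)
Z_tr = reducer.transform(X_tr)
# ... and the test split is mapped with the learned transform (out-of-sample extension)
Z_te = reducer.transform(X_te)

clf = SVR(kernel="rbf").fit(Z_tr, y_tr)
scores = clf.predict(Z_te)
print("AUC:", roc_auc_score(y_te, scores))
```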
Platt, Tyson L; Zachar, Peter; Ray, Glen E; Lobello, Steven G; Underhill, Andrea T
2007-04-01
Studies have found that Wechsler scale administration and scoring proficiency is not easily attained during graduate training. These findings may be related to methodological issues. Using a single-group repeated measures design, this study documents statistically significant, though modest, error reduction on the WAIS-III and WISC-III during a graduate course in assessment. The study design does not permit the isolation of training factors related to error reduction, or assessment of whether error reduction is a function of mere practice. However, the results do indicate that previous study findings of no or inconsistent improvement in scoring proficiency may have been the result of methodological factors. Implications for teaching individual intelligence testing and further research are discussed.
76 FR 38654 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-01
... Project Fetal-Infant Mortality Review: Human Immunodeficiency Virus Prevention Methodology (FHPM)--New... Mortality Review: Human Immunodeficiency Virus Prevention Methodology (FHPM) is designed to identify and... investigation and improvement strategy. In order to address perinatal HIV transmission at the community level...
Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M
2009-11-01
This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO₂ emissions, whereas trucks contributed the largest share of NOₓ emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NOₓ substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
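The link-based idea can be illustrated with a toy calculation: per-vehicle emissions on a road link come from modal rates weighted by the time spent in each operating mode under a facility- and speed-specific driving cycle, then scaled by traffic volume. The rates and mode shares below are invented for illustration and are not the study's values.

```python
# Illustrative modal NOx emission rates [g/s] by vehicle operating mode
nox_rate = {"idle": 0.002, "cruise": 0.010, "acceleration": 0.030, "deceleration": 0.004}

# Share of time per mode for two facility/speed-specific driving cycles (hypothetical)
cycle_mode_shares = {
    "arterial_40kmh": {"idle": 0.25, "cruise": 0.40, "acceleration": 0.20, "deceleration": 0.15},
    "freeway_90kmh":  {"idle": 0.02, "cruise": 0.78, "acceleration": 0.10, "deceleration": 0.10},
}

def link_emissions_g(cycle, travel_time_s, vehicles_per_hour, hours=1.0):
    """Total NOx on a road link: modal rate x time-in-mode x traffic volume."""
    per_vehicle = sum(nox_rate[m] * share * travel_time_s
                      for m, share in cycle_mode_shares[cycle].items())
    return per_vehicle * vehicles_per_hour * hours

print(link_emissions_g("arterial_40kmh", travel_time_s=90, vehicles_per_hour=800))
print(link_emissions_g("freeway_90kmh", travel_time_s=40, vehicles_per_hour=3000))
```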
Airport emissions quantification: Impacts of electrification. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geba, V.
1998-07-01
Four airports were assessed to demonstrate that electrification of economically viable air- and land-side vehicles and equipment can significantly reduce total airport emissions. Assessments were made using the FAA's Emissions and Dispersion Modeling System and EPRI Airport Electrification Project data. Development and implementation of cost-effective airport emissions reduction strategies can be complex, requiring successful collaboration of local, state, and federal regulatory agencies with airport authorities. The methodology developed in this study helps to simplify this task. The objectives of this study were: to develop a methodology to quantify annual emissions at US airports from all sources--aircraft, vehicles, and infrastructure; and to demonstrate that electrification of economically viable air- and land-side vehicles and equipment can significantly reduce total airport emissions on-site, even when allowing for emissions from the generation of electricity.
Perez Beltran, Saul; Balbuena, Perla B
2018-02-12
A newly designed sulfur/graphene computational model emulates the electrochemical behavior of a Li-S battery cathode, promoting the S-C interaction through the edges of graphene sheets. A random mixture of eight-membered sulfur rings mixed with small graphene sheets is simulated at 64 wt% sulfur loading. Structural stabilization and sulfur reduction calculations are performed with classical reactive molecular dynamics. This methodology allowed the collective behavior of the sulfur and graphene structures to be accounted for. The sulfur encapsulation induces ring opening and the sulfur phase evolves into a distribution of small chain-like structures interacting with C through the graphene edges. This new arrangement of the sulfur phase not only leads to a less pronounced volume expansion during sulfur reduction but also to a different discharge voltage profile, in qualitative agreement with earlier reports on sulfur encapsulation in microporous carbon structures. The Li₂S phase grows around ensembles of parallel graphene nanosheets during sulfur reduction. No diffusion of sulfur or lithium between graphene nanosheets is observed, and extended Li₂S domains bridging the space between carbon ensembles are suppressed. The results emphasize the importance of morphology on the electrochemical performance of the composite material. The sulfur/graphene model outlined here provides new understanding of the graphene effects on the sulfur reduction behavior and the role that van der Waals interactions may play in promoting formation of multilayer graphene ensembles and small Li₂S domains during sulfur reduction. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A framework for quantifying net benefits of alternative prognostic models.
Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G
2012-01-30
New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
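A toy version of the net-benefit calculation (not the authors' individual-participant meta-analysis) is sketched below: allocate treatment above a guideline risk threshold, assume a relative risk reduction and a fixed number of event-free life-years gained per prevented event, and compare two risk models by the resulting life-years per person. The threshold, risk distributions and effect sizes are hypothetical, and the sketch ignores costs, censoring, competing risks and treatment harms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
true_risk = np.clip(rng.beta(2, 18, n), 0, 1)             # "true" 10-year event risk

# Two prognostic models: the richer model tracks the true risk more closely
risk_model_basic = np.clip(true_risk + rng.normal(0, 0.08, n), 0, 1)
risk_model_full  = np.clip(true_risk + rng.normal(0, 0.03, n), 0, 1)

THRESHOLD = 0.20            # guideline: treat (e.g. statin) if predicted risk >= 20%
RRR = 0.25                  # assumed relative risk reduction from treatment
LY_PER_EVENT_AVOIDED = 5.0  # assumed event-free life-years gained per avoided event

def net_benefit_life_years(predicted_risk):
    treated = predicted_risk >= THRESHOLD
    events_prevented = np.sum(true_risk[treated] * RRR)
    return events_prevented * LY_PER_EVENT_AVOIDED / n     # life-years per person screened

gain = net_benefit_life_years(risk_model_full) - net_benefit_life_years(risk_model_basic)
print(f"incremental net benefit: {gain:.4f} life-years per person screened")
```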
Control Oriented Modeling and Validation of Aeroservoelastic Systems
NASA Technical Reports Server (NTRS)
Crowder, Marianne; deCallafon, Raymond (Principal Investigator)
2002-01-01
Lightweight aircraft design emphasizes the reduction of structural weight to maximize aircraft efficiency and agility at the cost of increasing the likelihood of structural dynamic instabilities. To ensure flight safety, extensive flight testing and active structural servo control strategies are required to explore and expand the boundary of the flight envelope. Aeroservoelastic (ASE) models can provide online flight monitoring of dynamic instabilities to reduce flight time testing and increase flight safety. The success of ASE models is determined by the ability to take into account varying flight conditions and the possibility to perform flight monitoring under the presence of active structural servo control strategies. In this continued study, these aspects are addressed by developing specific methodologies and algorithms for control relevant robust identification and model validation of aeroservoelastic structures. The closed-loop model robust identification and model validation are based on a fractional model approach where the model uncertainties are characterized in a closed-loop relevant way.
The HEC RAS model of regulated stream for purposes of flood risk reduction
NASA Astrophysics Data System (ADS)
Fijko, Rastislav; Zeleňáková, Martina
2016-06-01
The work highlights the modeling of water flow in open channels using the 1D mathematical model HEC-RAS in the area of interest, Lopuchov village in eastern Slovakia. We created a digital model from a geodetic survey, which was used to show the area of inundation in ArcGIS software. We point out the modeling methodology with emphasis on the collection of data and their relevance for determining boundary conditions in a 3D model of the study area on a GIS platform. The BIM objects can be exported to the defined model of the area. The obtained results were used for simulation of flooding. The results give clearly and distinctly defined areas of inundation, which we used in a cost-benefit analysis. We used the developed model to estimate the potential damages in flood-vulnerable areas.
2014-01-01
Background: Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results: The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions: Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
Causal Interpretations of Psychological Attributes
ERIC Educational Resources Information Center
Kane, Mike
2017-01-01
In the article "Rethinking Traditional Methods of Survey Validation" Andrew Maul describes a minimalist validation methodology for survey instruments, which he suggests is widely used in some areas of psychology, and then critiques this methodology empirically and conceptually. He provides a reductio ad absurdum argument by showing that…
Pentosan Polysulfate: Oral Versus Subcutaneous Injection in Mucopolysaccharidosis Type I Dogs
Simonaro, Calogera M.; Tomatsu, Shunji; Sikora, Tracy; Kubaski, Francyne; Frohbergh, Michael; Guevara, Johana M.; Wang, Raymond Y.; Vera, Moin; Kang, Jennifer L.; Smith, Lachlan J.; Schuchman, Edward H.; Haskins, Mark E.
2016-01-01
Background: We previously demonstrated the therapeutic benefits of pentosan polysulfate (PPS) in a rat model of mucopolysaccharidosis (MPS) type VI. Reduction of inflammation, reduction of glycosaminoglycan (GAG) storage, and improvement in the skeletal phenotype were shown. Herein, we evaluate the long-term safety and therapeutic effects of PPS in a large animal model of a different MPS type, MPS I dogs. We focused on the arterial phenotype since this is one of the most consistent and clinically significant features of the model. Methodology/Principal Findings: MPS I dogs were treated with daily oral or biweekly subcutaneous (subQ) PPS at a human equivalent dose of 1.6 mg/kg for 17 and 12 months, respectively. Safety parameters were assessed at 6 months and at the end of the study. Following treatment, cytokine and GAG levels were determined in fluids and tissues. Assessments of the aorta and carotid arteries also were performed. No drug-related increases in liver enzymes, coagulation factors, or other adverse effects were observed. Significantly reduced IL-8 and TNF-alpha were found in urine and cerebrospinal fluid (CSF). GAG reduction was observed in urine and tissues. Increases in the luminal openings and reduction of the intimal media thickening occurred in the carotids and aortas of PPS-treated animals, along with a reduction of storage vacuoles. These results were correlated with a reduction of GAG storage, reduction of clusterin 1 staining, and improved elastin integrity. No significant changes in the spines of the treated animals were observed. Conclusions: PPS treatment led to reductions of pro-inflammatory cytokines and GAG storage in urine and tissues of MPS I dogs, which were most evident after subQ administration. SubQ administration also led to significant cytokine reductions in the CSF. Both treatment groups exhibited markedly reduced carotid and aortic inflammation, increased vessel integrity, and improved histopathology. We conclude that PPS may be a safe and useful therapy for MPS I, either as an adjunct or as a stand-alone treatment that reduces inflammation and GAG storage. PMID:27064989
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of hyperplanes where trade off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
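For context, a conventional gradient-based identification of the Bergman minimal model (the kind of least-squares fit the LMQ comparator performs) can be sketched on synthetic data as below; the insulin profile, parameter values and noise level are illustrative, and the DRM itself is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

Gb, Ib = 4.5, 10.0                       # basal glucose [mmol/L] and insulin [mU/L]
t_obs = np.arange(0.0, 180.0, 10.0)      # sampling times [min]

def insulin(t):                          # illustrative plasma-insulin profile after a bolus
    return Ib + 60.0 * np.exp(-t / 30.0)

def minimal_model(t, y, p1, p2, p3):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

def simulate(params):
    p1, p2, p3, G0 = params
    sol = solve_ivp(minimal_model, (0.0, t_obs[-1]), [G0, 0.0],
                    t_eval=t_obs, args=(p1, p2, p3), rtol=1e-8)
    return sol.y[0]

# Synthetic "measurements" generated from known parameters plus noise
true_params = [0.03, 0.02, 1.0e-5, 15.0]
rng = np.random.default_rng(1)
G_meas = simulate(true_params) + rng.normal(0.0, 0.1, t_obs.size)

# Levenberg-Marquardt-style least-squares identification from a rough initial guess
fit = least_squares(lambda p: simulate(p) - G_meas,
                    x0=[0.01, 0.05, 5e-6, 12.0],
                    method="lm", x_scale=[0.01, 0.02, 1e-5, 10.0])
print("identified parameters:", fit.x)
```

Trade-off between p2 and p3 in this model is one of the situations where an approach such as the DRM is reported to help.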
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the needed efficiency to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, as measured by reduction errors that are upper-bounded over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed when conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots' variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots (generated either with the full high-fidelity model or with other, lower-fidelity models), can be assessed; this provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.)
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Advanced Fluid Reduced Order Models for Compressible Flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we overview briefly the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
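The POD/LSPG idea can be shown on a toy steady nonlinear system: build a POD basis from full-order snapshots, then, online, minimize the norm of the full-order residual over the reduced coordinates (a least-squares Petrov-Galerkin projection). This sketch is generic and is unrelated to the SPARC solver itself.

```python
import numpy as np
from scipy.optimize import least_squares

N = 100
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1))
b0 = np.sin(np.linspace(0.0, np.pi, N))

def residual(u, mu):
    """Full-order residual of a toy steady nonlinear problem r(u; mu) = 0."""
    return A @ u + 0.5 * u**3 - mu * b0

def solve_full(mu):
    return least_squares(residual, np.zeros(N), args=(mu,)).x

# Offline: snapshots over a parameter sweep -> POD basis via SVD
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.5, 2.0, 8)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :4]                                     # 4-dimensional reduced basis

# Online (LSPG flavour): minimize the FULL-order residual over reduced coordinates q, u ~ V q
mu_star = 1.3
rom = least_squares(lambda q: residual(V @ q, mu_star), np.zeros(V.shape[1]))
u_rom, u_ref = V @ rom.x, solve_full(mu_star)
print("relative ROM error:", np.linalg.norm(u_rom - u_ref) / np.linalg.norm(u_ref))
```

Minimizing the residual rather than projecting it onto the basis (as a Galerkin ROM would) is the distinguishing feature of the least-squares Petrov-Galerkin approach.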
Methodology for dynamic biaxial tension testing of pregnant uterine tissue.
Manoogian, Sarah; Mcnally, Craig; Calloway, Britt; Duma, Stefan
2007-01-01
Placental abruption accounts for 50% to 70% of fetal losses in motor vehicle crashes. Since automobile crashes are the leading cause of traumatic fetal injury mortality in the United States, research of this injury mechanism is important. Before research can adequately evaluate current and future restraint designs, a detailed model of the pregnant uterine tissues is necessary. The purpose of this study is to develop a methodology for testing the pregnant uterus in biaxial tension at a rate normally seen in a motor vehicle crash. Since the majority of previous biaxial work has established methods for quasi-static testing, this paper combines previous research and new methods to develop a custom designed system to strain the tissue at a dynamic rate. Load cells and optical markers are used for calculating stress strain curves of the perpendicular loading axes. Results for this methodology show images of a tissue specimen loaded and a finite verification of the optical strain measurement. The biaxial test system dynamically pulls the tissue to failure with synchronous motion of four tissue grips that are rigidly coupled to the tissue specimen. The test device models in situ loading conditions of the pregnant uterus and overcomes previous limitations of biaxial testing. A non-contact method of measuring strains combined with data reduction to resolve the stresses in two directions provides the information necessary to develop a three dimensional constitutive model of the material. Moreover, future research can apply this method to other soft tissues with similar in situ loading conditions.
Towards the unification of inference structures in medical diagnostic tasks.
Mira, J; Rives, J; Delgado, A E; Martínez, R
1998-01-01
The central purpose of artificial intelligence applied to medicine is to develop models for diagnosis and therapy planning at the knowledge level, in the Newell sense, and software environments to facilitate the reduction of these models to the symbol level. The usual methodology (KADS, Common-KADS, GAMES, HELIOS, Protégé, etc) has been to develop libraries of generic tasks and reusable problem-solving methods with explicit ontologies. The principal problem which clinicians have with these methodological developments concerns the diversity and complexity of new terms whose meaning is not sufficiently clear, precise, unambiguous and consensual for them to be accessible in the daily clinical environment. As a contribution to the solution of this problem, we develop in this article the conjecture that one inference structure is enough to describe the set of analysis tasks associated with medical diagnoses. To this end, we first propose a modification of the systematic diagnostic inference scheme to obtain an analysis generic task and then compare it with the monitoring and the heuristic classification task inference schemes using as comparison criteria the compatibility of domain roles (data structures), the similarity in the inferences, and the commonality in the set of assumptions which underlie the functionally equivalent models. The equivalences proposed are illustrated with several examples. Note that though our ongoing work aims to simplify the methodology and to increase the precision of the terms used, the proposal presented here should be viewed more in the nature of a conjecture.
Eco-hydrological Modeling in the Framework of Climate Change
NASA Astrophysics Data System (ADS)
Fatichi, Simone; Ivanov, Valeriy Y.; Caporali, Enrica
2010-05-01
A blueprint methodology for studying climate change impacts, as inferred from climate models, on eco-hydrological dynamics at the plot and small catchment scale is presented. Input hydro-meteorological variables for hydrological and eco-hydrological models for present and future climates are reproduced using a stochastic downscaling technique and a weather generator, "AWE-GEN". The generated time series of meteorological variables for the present climate and an ensemble of possible future climates serve as input to a newly developed physically-based eco-hydrological model "Tethys-Chloris". An application of the proposed methodology is realized by reproducing the current (1961-2000) and multiple future (2081-2100) climates for the location of Tucson (Arizona). A general reduction of precipitation and a significant increase of air temperature are inferred. The eco-hydrological model is subsequently applied to detect changes in water recharge and vegetation dynamics for a desert shrub ecosystem, typical of the semi-arid climate of southern Arizona. Results for the future climate account for uncertainties in the downscaling and are produced in terms of probability density functions. A comparison of control and future scenarios is discussed in terms of changes in the hydrological balance components, energy fluxes, and indices of vegetation productivity. An appreciable effect of climate change can be observed in metrics of vegetation performance. The negative impact on vegetation due to amplification of water stress in a warmer and drier climate is offset by a positive effect of increased carbon dioxide. This implies a positive shift in plant capabilities to exploit water. Consequently, the plant water use efficiency and rain use efficiency are expected to increase. Interesting differences in the long-term vegetation productivity are also observed for the ensemble of future climates. The reduction of precipitation and the substantial maintenance of vegetation cover ultimately lead to the depletion of soil moisture and recharge to deeper layers. Such an outcome can affect the long-term water availability in semi-arid systems and expose plants to more severe and frequent periods of stress.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Thermal Profiling of Residential Energy Use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, A; Rajagopal, R
This work describes a methodology for informing targeted demand-response (DR) and marketing programs that focus on the temperature-sensitive part of residential electricity demand. Our methodology uses data that is becoming readily available at utility companies: hourly energy consumption readings collected from "smart" electricity meters, as well as hourly temperature readings. To decompose individual consumption into a thermal-sensitive part and a base load (non-thermally-sensitive), we propose a model of temperature response that is based on thermal regimes, i.e., unobserved decisions of consumers to use their heating or cooling appliances. We use this model to extract useful benchmarks that compose thermal profiles of individual users, i.e., terse characterizations of the statistics of these users' temperature-sensitive consumption. We present example profiles generated using our model on real consumers, and show its performance on a large sample of residential users. This knowledge may, in turn, inform the DR program by allowing scarce operational and marketing budgets to be spent on the right users (those whose influencing will yield the highest energy reductions) at the right time. We show that such segmentation and targeting of users may offer savings exceeding 100% of a random strategy.
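As a rough illustration of the kind of base-load/thermal decomposition the abstract describes, the sketch below regresses hourly consumption on heating- and cooling-degree drivers with assumed balance temperatures. The paper's own model uses latent thermal regimes rather than fixed balance points, so this is a simplified stand-in.

```python
import numpy as np

def thermal_decomposition(temp_F, kwh, t_heat=60.0, t_cool=70.0):
    """Split hourly consumption into a base load and a temperature-sensitive load
    using heating/cooling-degree predictors with assumed balance points."""
    hdd = np.maximum(t_heat - temp_F, 0.0)    # heating driver (degrees below balance point)
    cdd = np.maximum(temp_F - t_cool, 0.0)    # cooling driver (degrees above balance point)
    X = np.column_stack([np.ones_like(temp_F), hdd, cdd])
    beta, *_ = np.linalg.lstsq(X, kwh, rcond=None)
    base, slope_heat, slope_cool = beta
    thermal = slope_heat * hdd + slope_cool * cdd
    return base, thermal                      # base load (kWh/h) and thermal component (kWh/h)
```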
A validated methodology for determination of laboratory instrument computer interface efficacy
NASA Astrophysics Data System (ADS)
1984-12-01
This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instruments and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day if operated in the continuous mode, or 216 points per day if operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost effective, additional information should be considered in making the interface decisions. A reduction in results transcription errors may be a major benefit of instrument interfacing.
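The break-even reasoning can be written as a one-line calculation: the interface pays for itself when the annual value of transcription time saved matches its annualized cost. The time saved per data point, labor rate, and working days below are hypothetical placeholders, not the report's inputs, so the example does not reproduce the 154/216 figures.

```python
def breakeven_points_per_day(annualized_cost, minutes_saved_per_point,
                             labor_rate_per_hour, days_per_year=260):
    """Data points per day needed for labor savings to offset the annualized interface cost."""
    saving_per_point = (minutes_saved_per_point / 60.0) * labor_rate_per_hour
    return annualized_cost / (saving_per_point * days_per_year)

# Hypothetical example: $1,000/yr annualized cost, 0.5 min saved per result, $15/hr labor.
print(round(breakeven_points_per_day(1000, 0.5, 15.0)))   # ~31 points/day under these assumptions
```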
Mexico City Air Quality Research Initiative; Volume 5, Strategic evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-03-01
Members of the Task III (Strategic Evaluation) team were responsible for the development of a methodology to evaluate policies designed to alleviate air pollution in Mexico City. This methodology utilizes information from various reports that examined ways to reduce pollutant emissions, results from models that calculate the improvement in air quality due to a reduction in pollutant emissions, and the opinions of experts as to the requirements and trade-offs that are involved in developing a program to address the air pollution problem in Mexico City. The methodology combines these data to produce comparisons between different approaches to improving Mexico City's air quality. These comparisons take into account not only objective factors such as the air quality improvement or cost of the different approaches, but also subjective factors such as public acceptance or political attractiveness of the different approaches. The end result of the process is a ranking of the different approaches and, more importantly, the process provides insights into the implications of implementing a particular approach or policy.
Vilaprinyo, Ester; Puig, Teresa; Rue, Montserrat
2012-01-01
Background Reductions in breast cancer (BC) mortality in Western countries have been attributed to the use of screening mammography and adjuvant treatments. The goal of this work was to analyze the contributions of both interventions to the decrease in BC mortality between 1975 and 2008 in Catalonia. Methodology/Principal Findings A stochastic model was used to quantify the contribution of each intervention. Age standardized BC mortality rates for calendar years 1975–2008 were estimated in four hypothetical scenarios: 1) Only screening, 2) Only adjuvant treatment, 3) Both interventions, and 4) No intervention. For the 30–69 age group, observed Catalan BC mortality rates per 100,000 women-year rose from 29.4 in 1975 to 38.3 in 1993, and afterwards continuously decreased to 23.2 in 2008. If neither of the two interventions had been used, in 2008 the estimated BC mortality would have been 43.5, which, compared to the observed BC mortality rate, indicates a 46.7% reduction. In 2008 the reduction attributable to screening was 20.4%, to adjuvant treatments was 15.8% and to both interventions 34.1%. Conclusions/Significance Screening and adjuvant treatments similarly contributed to reducing BC mortality in Catalonia. Mathematical models have been useful to assess the impact of interventions addressed to reduce BC mortality that occurred over nearly the same periods. PMID:22272292
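The headline percentages follow directly from comparing the observed 2008 rate with the modeled counterfactual rates; below is a quick check of the arithmetic for the combined-intervention figure, using only the numbers quoted above (the screening-only and treatment-only attributions use the same formula with their respective counterfactual rates).

```python
# Rates quoted in the abstract (age-standardized BC deaths per 100,000 women-years, ages 30-69, year 2008).
observed_both_interventions = 23.2
modeled_no_intervention = 43.5

reduction_pct = 100 * (modeled_no_intervention - observed_both_interventions) / modeled_no_intervention
print(f"{reduction_pct:.1f}%")   # -> 46.7%, the combined screening + adjuvant treatment effect
```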
A POLLUTION REDUCTION METHODOLOGY FOR CHEMICAL PROCESS SIMULATORS
A pollution minimization methodology was developed for chemical process design using computer simulation. It is based on a pollution balance that at steady state is used to define a pollution index with units of mass of pollution per mass of products. The pollution balance has be...
A framework for quantifying net benefits of alternative prognostic models‡
Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G
2012-01-01
New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066
NASA Technical Reports Server (NTRS)
Stephens, Craig A.
2009-01-01
NASA HYP M&S is pursuing the development of SITPS: 1) Working with HYP MDAO to formulate methodology to incorporate SITPS into hypersonic vehicle design trades. 2) SITPS-0 to SITPS-1 (FY10): a) Manufacturing development and weight reduction (5.8 to 3.1 lb(sub m)/sq ft); b) Structural testing to mature SITPS model. 3) SITPS-2 (FY11): a) Focus on panel closeout, panel-to-panel load transfer, and panel curvature. 4) Extend fabrication technology to include alternate cores and insulations (FY12).
NASA Astrophysics Data System (ADS)
Ju, Weimin; Gao, Ping; Wang, Jun; Li, Xianfeng; Chen, Shu
2008-10-01
Soil water content (SWC) is an important factor affecting photosynthesis, growth, and final yields of crops. The information on SWC is of importance for mitigating the reduction of crop yields caused by drought through proper agricultural water management. A variety of methodologies have been developed to estimate SWC at local and regional scales, including field sampling, remote sensing monitoring and model simulations. The reliability of regional SWC simulation depends largely on the accuracy of spatial input datasets, including vegetation parameters, soil and meteorological data. Remote sensing has been proved to be an effective technique for controlling uncertainties in vegetation parameters. In this study, the vegetation parameters (leaf area index and land cover type) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assimilated into a process-based ecosystem model BEPS for simulating the variations of SWC in croplands of Jiangsu province, China. Validation shows that the BEPS model is able to capture 81% and 83% of across-site variations of SWC at 10 and 20 cm depths during the period from September to December, 2006, when a serious autumn drought occurred. The simulated SWC responded well to rainfall events at the regional scale, demonstrating the usefulness of our methodology for SWC simulation and practical agricultural water management at large scales.
Janke, Leandro; Lima, André O S; Millet, Maurice; Radetski, Claudemir M
2013-01-01
In Brazil, Solid Waste Disposal Sites have operated without consideration of environmental criteria, these areas being characterized by methane (CH4) emissions during the anaerobic degradation of organic matter. The United Nations organization has made efforts to control this situation, through the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol, where projects that seek to reduce the emissions of greenhouse gases (GHG) can be financially rewarded through Certified Emission Reductions (CERs) if they respect the requirements established by the Clean Development Mechanism (CDM), such as the use of methodologies approved by the CDM Executive Board (CDM-EB). Thus, a methodology was developed according to the CDM standards related to the aeration, excavation and composting of closed Municipal Solid Waste (MSW) landfills, which was submitted to CDM-EB for assessment and, after its approval, applied to a real case study in Maringá City (Brazil) with a view to avoiding negative environmental impacts due to the production of methane and leachates even after its closure. This paper describes the establishment of this CDM-EB-approved methodology to determine baseline emissions, project emissions and the resultant emission reductions with the application of appropriate aeration, excavation and composting practices at closed MSW landfills. A further result obtained through the application of the methodology in the landfill case study was that it would be possible to achieve an ex-ante emission reduction of 74,013 tCO2 equivalent if the proposed CDM project activity were implemented.
Economic total maximum daily load for watershed-based pollutant trading.
Zaidi, A Z; deMonsabert, S M
2015-04-01
Water quality trading (WQT) is supported by the US Environmental Protection Agency (USEPA) under the framework of its total maximum daily load (TMDL) program. An innovative approach is presented in this paper that proposes post-TMDL trade by calculating pollutant rights for each pollutant source within a watershed. Several water quality trading programs are currently operating in the USA with an objective to achieve overall pollutant reduction impacts that are equivalent or better than TMDL scenarios. These programs use trading ratios for establishing water quality equivalence among pollutant reductions. The inbuilt uncertainty in modeling the effects of pollutants in a watershed from both the point and nonpoint sources on receiving waterbodies makes WQT very difficult. A higher trading ratio carries with it increased mitigation costs, but cannot ensure the attainment of the required water quality with certainty. The selection of an applicable trading ratio, therefore, is not a simple process. The proposed approach uses an Economic TMDL optimization model that determines an economic pollutant reduction scenario that can be compared with actual TMDL allocations to calculate selling/purchasing rights for each contributing source. The methodology is presented using the established TMDLs for the bacteria (fecal coliform) impaired Muddy Creek subwatershed WAR1 in Rockingham County, Virginia, USA. Case study results show that an environmentally and economically superior trading scenario can be realized by using Economic TMDL model or any similar model that considers the cost of TMDL allocations.
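A minimal sketch of what an "economic TMDL" style allocation can look like: choose the cheapest mix of per-source load reductions that meets the required watershed-wide reduction. The costs, caps, and target below are invented for illustration, and the paper's model may include additional constraints (for example, water-quality equivalence between sources).

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: per-unit reduction cost ($/kg) and maximum reducible load (kg) per source.
cost = np.array([12.0, 5.0, 8.0, 20.0])
max_reduction = np.array([400.0, 250.0, 300.0, 150.0])
required_total_reduction = 600.0          # kg, set by the TMDL

# Minimize total cost s.t. sum(reductions) >= required and 0 <= x_i <= max_i.
res = linprog(c=cost,
              A_ub=[-np.ones_like(cost)], b_ub=[-required_total_reduction],
              bounds=list(zip(np.zeros_like(cost), max_reduction)),
              method="highs")
print(res.x, res.fun)   # cheapest allocation of reductions and its total cost
```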
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
2016-01-01
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
Report #13-P-0430, September 24, 2013. The two Region 8 program offices that jointly implement the Lead Renovation, Repair and Painting Program do not have methodology or agreement for sharing SEE funding, which has led to confusion.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-29
...; Comment Request; Environmental Science Formative Research Methodology Studies for the National Children's Study SUMMARY: In compliance with the requirement of Section 3506(c)(2)(A) of the Paperwork Reduction... comment was received. The comment questioned the cost and utility of the study specifically and of...
Rapid Prototyping Methodology in Action: A Developmental Study.
ERIC Educational Resources Information Center
Jones, Toni Stokes; Richey, Rita C.
2000-01-01
Investigated the use of rapid prototyping methodologies in two projects conducted in a natural work setting to determine the nature of its use by designers and customers and the extent to which its use enhances traditional instructional design. Discusses design and development cycle-time reduction, product quality, and customer and designer…
76 FR 39876 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
... Survey--Pretest of Proposed Questions and Methodology.'' In accordance with the Paperwork Reduction Act... Health Plan Survey-- Pretest of Proposed Questions and Methodology The Consumer Assessment of Healthcare... year to year. The CAHPS[supreg] program was designed to: Make it possible to compare survey results...
76 FR 57046 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
... Survey--Pretest of Proposed Questions and Methodology.'' In accordance with the Paperwork Reduction Act... Health Plan Survey-- Pretest of Proposed Questions and Methodology The Consumer Assessment of Healthcare... often changed from year to year. The CAHPS[reg] program was designed to: Make it possible to compare...
Salud Para Su Corazon (Health for Your Heart) Community Health Worker Model
Balcazar, H.; Alvarado, M.; Ortiz, G.
2012-01-01
This article describes 6 Salud Para Su Corazon (SPSC) family of programs that have addressed cardiovascular disease risk reduction in Hispanic communities facilitated by community health workers (CHWs) or Promotores de Salud (PS). A synopsis of the programs illustrates the designs and methodological approaches that combine community-based participatory research for 2 types of settings: community and clinical. Examples are provided as to how CHWs can serve as agents of change in these settings. A description is presented of a sustainability framework for the SPSC family of programs. Finally, implications are summarized for utilizing the SPSC CHW/PS model to inform ambulatory care management and policy. PMID:21914992
Generalized causal mediation and path analysis: Extensions and practical considerations.
Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra
2018-01-01
Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package, gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.
Wiedmann, Thomas O; Suh, Sangwon; Feng, Kuishuang; Lenzen, Manfred; Acquaye, Adolf; Scott, Kate; Barrett, John R
2011-07-01
Future energy technologies will be key for a successful reduction of man-made greenhouse gas emissions. With demand for electricity projected to increase significantly in the future, climate policy goals of limiting the effects of global atmospheric warming can only be achieved if power generation processes are profoundly decarbonized. Energy models, however, have ignored the fact that upstream emissions are associated with any energy technology. In this work we explore methodological options for hybrid life cycle assessment (hybrid LCA) to account for the indirect greenhouse gas (GHG) emissions of energy technologies using wind power generation in the UK as a case study. We develop and compare two different approaches using a multiregion input-output modeling framework - Input-Output-based Hybrid LCA and Integrated Hybrid LCA. The latter utilizes the full-sized Ecoinvent process database. We discuss significance and reliability of the results and suggest ways to improve the accuracy of the calculations. The comparison of hybrid LCA methodologies provides valuable insight into the availability and robustness of approaches for informing energy and environmental policy.
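For the input-output side of a hybrid LCA, total (direct plus upstream) emissions follow from the standard Leontief calculation; below is a small sketch with a made-up three-sector economy, not the multiregion tables used in the study.

```python
import numpy as np

# Illustrative 3-sector technology matrix A (inter-industry requirements per unit of output).
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.05]])
f = np.array([0.9, 0.3, 0.1])     # direct emission intensities (kg CO2e per $ of output)
y = np.array([0.0, 1.0, 0.0])     # final demand: $1 of output from the sector of interest

L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse: total requirements per unit of final demand
total_emissions = f @ L @ y       # direct + indirect (upstream) GHG emissions for this demand
print(total_emissions)
```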
Goga, Joshana K; Depaolo, Antonio; Khushalani, Sunil; Walters, J Ken; Roca, Robert; Zisselman, Marc; Borleis, Christopher
2017-01-01
To evaluate the effects of applying Lean methodology (improving quality and increasing efficiency by eliminating waste and reducing costs) as an approach to decrease the prescribing frequency of antipsychotics for the indication of agitation. Historically controlled study. Sheppard Pratt Health System is the largest private provider of psychiatric care in Maryland, with a total bed capacity of 300. There were 4,337 patient days from November 1, 2012 to October 31, 2013 on the dementia unit. All patients admitted to the dementia unit were 65 years of age and older with a primary diagnosis of dementia. Our multidisciplinary team used Lean methodology to identify the root causes and interventions necessary to reduce inappropriate antipsychotic use. The primary outcome was the rate of inappropriately indicating agitation as the rationale when prescribing antipsychotic medications. There was a 90% (P < 0.001) reduction in the rate of antipsychotic prescribing with an indication of agitation. The Lean methodology interventions led to a 90% (P < 0.001) reduction in the rate of antipsychotic prescribing with an indication of agitation and a 10% rate reduction in overall antipsychotic prescribing. Key words: agitation, Alzheimer's, antipsychotics, behavioral and psychological symptoms of dementia, Centers for Medicare & Medicaid Services, dementia, root-cause analysis. BPSD = behavioral and psychological symptoms of dementia; CATIE-AD = Clinical Antipsychotic Trials of Intervention Effectiveness in Alzheimer's Disease; EMR = electronic medical records; GAO = Government Accountability Office; GNCIS = Geriatric Neuropsychiatric Clinical Indicator Scale.
Qiu, Li; Wang, Xiao; Zhao, Na; Xu, Shiliang; An, Zengjian; Zhuang, Xuhui; Lan, Zhenggang; Wen, Lirong; Wan, Xiaobo
2014-12-05
A newly developed reductive ring closure methodology to heteroacenes bearing a dihydropyrrolo[3,2-b]pyrrole core was systematically studied for its scope and limitations. The methodology involves (i) the cyclization of an o-aminobenzoic acid ester derivative to give an eight-membered cyclic dilactam, and (ii) the conversion of the dilactams into the corresponding diimidoyl chloride, which undergoes (iii) reductive ring closure to install the dihydropyrrolo[3,2-b]pyrrole core. The first step of the methodology plays the key role due to its substrate limitation, which suffers from the competition of oligomerization and hydrolysis. All the dilactams could successfully convert to the corresponding diimidoyl chlorides, most of which succeeded in giving the dihydropyrrolo[3,2-b]pyrrole core. The influence of the substituents and the elongation of conjugated length on the photophysical properties of the obtained heteroacenes were then investigated systematically using UV-vis spectroscopy and cyclic voltammetry. It was found that chlorination and fluorination had quite a different effect on the photophysical properties of the heteroacene, and the ring fusing pattern also had a drastic influence on the band gap of the heteroacene. The successful preparation of a series of heteroacenes bearing a dihydropyrrolo[3,2-b]pyrrole core would provide a wide variety of candidates for further fabrication of organic field-effect transistor devices.
Data Quality Assurance for Supersonic Jet Noise Measurements
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Henderson, Brenda S.; Bridges, James E.
2010-01-01
The noise created by a supersonic aircraft is a primary concern in the design of future high-speed planes. The jet noise reduction technologies required on these aircraft will be developed using scale models mounted to experimental jet rigs designed to simulate the exhaust gases from a full-scale jet engine. The jet noise data collected in these experiments must accurately predict the noise levels produced by the full-scale hardware in order to be a useful development tool. A methodology has been adopted at the NASA Glenn Research Center's Aero-Acoustic Propulsion Laboratory to ensure the quality of the supersonic jet noise data acquired from the facility's High Flow Jet Exit Rig so that it can be used to develop future nozzle technologies that reduce supersonic jet noise. The methodology relies on mitigating extraneous noise sources, examining the impact of measurement location on the acoustic results, and investigating the facility independence of the measurements. The methodology is documented here as a basis for validating future improvements and its limitations are noted so that they do not affect the data analysis. Maintaining a high quality jet noise laboratory is an ongoing process. By carefully examining the data produced and continually following this methodology, data quality can be maintained and improved over time.
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Niederman, Richard
2014-09-01
The Cochrane Oral Health Group's Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Medline, Embase, CINAHL, National Institutes of Health Trials Register and the WHO Clinical Trials Registry Platform for ongoing trials. Reference lists of identified articles were also scanned for relevant papers. Identified manufacturers were contacted for additional information. Only randomised controlled trials comparing manual and powered toothbrushes were considered. Crossover trials were eligible for inclusion if the wash-out period length was more than two weeks. Study assessment and data extraction were carried out independently by at least two reviewers. The primary outcome measures were quantified levels of plaque or gingivitis. Risk of bias assessment was undertaken. Standard Cochrane methodological approaches were taken. Random-effects models were used provided there were four or more studies included in the meta-analysis, otherwise fixed-effect models were used. Data were classed as short term (one to three months) and long term (greater than three months). Fifty-six trials were included with 51 (4624 patients) providing data for meta-analysis. The majority (46) were at unclear risk of bias, five at high risk of bias and five at low risk. There was moderate quality evidence that powered toothbrushes provide a statistically significant benefit compared with manual toothbrushes with regard to the reduction of plaque in both the short and long term. This corresponds to an 11% reduction in plaque for the Quigley Hein index (Turesky) in the short term and a 21% reduction in the long term. There was a high degree of heterogeneity that was not explained by the different powered toothbrush type subgroups. There was also moderate quality evidence that powered toothbrushes again provide a statistically significant reduction in gingivitis when compared with manual toothbrushes both in the short and long term. This corresponds to a 6% and 11% reduction in gingivitis for the Löe and Silness indices respectively. Again there was a high degree of heterogeneity that was not explained by the different powered toothbrush type subgroups. The greatest body of evidence was for rotation oscillation brushes, which demonstrated a statistically significant reduction in plaque and gingivitis at both time points. Powered toothbrushes reduce plaque and gingivitis more than manual toothbrushing in the short and long term. The clinical importance of these findings remains unclear. Observation of methodological guidelines and greater standardisation of design would benefit both future trials and meta-analyses. Cost, reliability and side effects were inconsistently reported. Any reported side effects were localised and only temporary.
Formentini-Schmitt, Dalila Maria; Fagundes-Klen, Márcia Regina; Veit, Márcia Teresinha; Palácio, Soraya Moreno; Trigueros, Daniela Estelita Goes; Bergamasco, Rosangela; Mateus, Gustavo Affonso Pisano
2018-03-02
In this work, the coagulation/flocculation/sedimentation treatment of dairy wastewater samples was investigated through serial factorial designs utilizing the saline extract obtained from Moringa oleifera (Moringa) as a coagulant. The sedimentation time (ST), pH, Moringa coagulant (MC) dose and concentration of CaCl2 have been evaluated through the response surface methodology in order to obtain the ideal turbidity removal (TR) conditions. The empirical quadratic model, in conjunction with the desirability function, demonstrated that it is possible to obtain TRs of 98.35% using a coagulant dose, concentration of CaCl2 and pH of 280 mg L-1, 0.8 mol L-1 and 9, respectively. The saline extract from Moringa presented its best efficiency at an alkaline pH, which influenced the reduction of the ST to a value of 25 min. It was verified that the increase in the solubility of the proteins in the Moringa stimulated the reduction of the coagulant content in the reaction medium, and it is related to the use of calcium chloride as an extracting agent of these proteins. The MC proved to be an excellent alternative for the dairy wastewater treatment, compared to the traditional coagulants.
Chen, Manhua; Sui, Xiao; Ma, Xixiu; Feng, Xiaomei; Han, Yuqian
2015-03-30
Supercritical carbon dioxide (SC-CO2) has been shown to have a good pasteurising effect on food. However, very few research papers have investigated the possibility of exploiting this treatment for solid foods, particularly for seafood. Considering the microbial safety of raw seafood consumption, the study aimed to explore the feasibility of microbial inactivation of shrimp (Metapenaeus ensis) and conch (Rapana venosa) by SC-CO2 treatment. Response surface methodology (RSM) models were established to predict and analyse the SC-CO2 process. A 3.69-log reduction in the total aerobic plate count (TPC) of shrimp was observed by SC-CO2 treatment at 53°C, 15 MPa for 40 min, and the logarithmic reduction in TPC of conch was 3.31 at 55°C, 14 MPa for 42 min. Sensory scores of the products achieved approximately 8 (desirable). The optimal parameters for microbial inactivation of shrimp and conch by SC-CO2 might be 55°C, 15 MPa and 40 min. SC-CO2 exerted a strong bactericidal effect on the TPC of shrimp and conch, and the products maintained good organoleptic properties. This study verified the feasibility of microbial inactivation of shrimp and conch by SC-CO2 treatment. © 2014 Society of Chemical Industry.
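Both of the RSM studies above fit second-order response surfaces to designed experiments; a generic least-squares sketch of such a fit is shown below, with coded factor settings and an arbitrary response (for example, log reduction in plate count) as assumed inputs.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit y = b0 + sum b_i x_i + sum b_ii x_i^2 + sum b_ij x_i x_j by least squares.
    X: (n_runs, n_factors) coded factor settings; y: measured response for each run.
    Needs at least as many runs as model terms (a central composite design typically suffices)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                   # linear terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]    # square + interaction terms
    D = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta, D
```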
Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.
Heinonen, Kristiina
2015-05-01
To describe reduction as a method using van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. A study of Finnish multiple-birth families in which open interviews (n=38) were conducted with public health nurses, family care workers and parents of twins. A systematic literature and knowledge review showed there were no articles on multiple-birth families that used van Manen's method. Discussion: The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method. It opens up reduction for researchers to exploit.
Potential reductions in ambient NO2 concentrations from meeting diesel vehicle emissions standards
NASA Astrophysics Data System (ADS)
von Schneidemesser, Erika; Kuik, Friderike; Mar, Kathleen A.; Butler, Tim
2017-11-01
Exceedances of the concentration limit value for ambient nitrogen dioxide (NO2) at roadside sites are an issue in many cities throughout Europe. This is linked to the emissions of light duty diesel vehicles which have on-road emissions that are far greater than the regulatory standards. These exceedances have substantial implications for human health and economic loss. This study explores the possible gains in ambient air quality if light duty diesel vehicles were able to meet the regulatory standards (including both emissions standards from Europe and the United States). We use two independent methods: a measurement-based and a model-based method. The city of Berlin is used as a case study. The measurement-based method used data from 16 monitoring stations throughout the city of Berlin to estimate annual average reductions in roadside NO2 of 9.0 to 23 µg m-3 and in urban background NO2 concentrations of 1.2 to 2.7 µg m-3. These ranges account for differences in fleet composition assumptions, and the stringency of the regulatory standard. The model simulations showed reductions in urban background NO2 of 2.0 µg m-3, and at the scale of the greater Berlin area of 1.6 to 2.0 µg m-3 depending on the setup of the simulation and resolution of the model. Similar results were found for other European cities. The similarities in results using the measurement- and model-based methods support our ability to draw robust conclusions that are not dependent on the assumptions behind either methodology. The results show the significant potential for NO2 reductions if regulatory standards for light duty diesel vehicles were to be met under real-world operating conditions. Such reductions could help improve air quality by reducing NO2 exceedances in urban areas, but also have broader implications for improvements in human health and other benefits.
Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping
NASA Technical Reports Server (NTRS)
Gibbs, Gary P.; Cabell, Randolph H.
2003-01-01
A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.
Evaluating a policing strategy intended to disrupt an illicit street-level drug market.
Corsaro, Nicholas; Brunson, Rod K; McGarrell, Edmund F
2010-12-01
The authors examined a strategic policing initiative that was implemented in a high crime Nashville, Tennessee neighborhood by utilizing a mixed-methodological evaluation approach in order to provide (a) a descriptive process assessment of program fidelity; (b) an interrupted time-series analysis relying upon generalized linear models; (c) in-depth resident interviews. Results revealed that the initiative corresponded with a statistically significant reduction in drug and narcotics incidents as well as perceived changes in neighborhood disorder within the target community. There was less-clear evidence, however, of a significant impact on other outcomes examined. The implications that an intensive crime prevention strategy corresponded with a reduction in specific forms of neighborhood crime illustrates the complex considerations that law enforcement officials face when deciding to implement this type of crime prevention initiative.
NASA Technical Reports Server (NTRS)
Ferraro, R.; Some, R.
2002-01-01
The growth in data rates of instruments on future NASA spacecraft continues to outstrip the improvement in communications bandwidth and processing capabilities of radiation-hardened computers. Sophisticated autonomous operations strategies will further increase the processing workload. Given the reductions in spacecraft size and available power, standard radiation hardened computing systems alone will not be able to address the requirements of future missions. The REE project was intended to overcome this obstacle by developing a COTS- based supercomputer suitable for use as a science and autonomy data processor in most space environments. This development required a detailed knowledge of system behavior in the presence of Single Event Effect (SEE) induced faults so that mitigation strategies could be designed to recover system level reliability while maintaining the COTS throughput advantage. The REE project has developed a suite of tools and a methodology for predicting SEU induced transient fault rates in a range of natural space environments from ground-based radiation testing of component parts. In this paper we provide an overview of this methodology and tool set with a concentration on the radiation fault model and its use in the REE system development methodology. Using test data reported elsewhere in this and other conferences, we predict upset rates for a particular COTS single board computer configuration in several space environments.
Palla, A; Gnecco, I; La Barbera, P
2017-04-15
In the framework of storm water management, Domestic Rainwater Harvesting (DRWH) systems have recently been recognized as source control solutions according to LID principles. In order to assess the impact of these systems in storm water runoff control, a simple methodological approach is proposed. The hydrologic-hydraulic modelling is undertaken using EPA SWMM; the DRWH is implemented in the model by using a storage unit linked to the building water supply system and to the drainage network. The proposed methodology has been implemented for a residential urban block located in Genoa (Italy). Continuous simulations are performed by using the high-resolution rainfall data series for the "do nothing" and DRWH scenarios. The latter includes the installation of a DRWH system for each building of the urban block. Referring to the test site, the peak and volume reduction rates evaluated for the 2125 rainfall events are, respectively, equal to 33 and 26 percent, on average (with maximum values of 65 percent for peak and 51 percent for volume). In general, the adopted methodology indicates that the hydrologic performance of the storm water drainage network equipped with DRWH systems is noticeable even for the design storm event (T = 10 years) and the rainfall depth seems to affect the hydrologic performance at least when the total depth exceeds 20 mm. Copyright © 2017 Elsevier Ltd. All rights reserved.
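The event metrics reported (peak and volume reduction rates) amount to comparing the two scenarios' outfall hydrographs event by event; below is a minimal sketch, assuming the hydrographs are available as equally sampled arrays, with toy numbers rather than SWMM output.

```python
import numpy as np

def event_reduction_rates(q_do_nothing, q_drwh, dt_s=60.0):
    """Peak and volume reduction (%) for one storm event, given outfall hydrographs
    (flow in m3/s sampled every dt_s seconds) of the two scenarios."""
    peak_red = 100.0 * (q_do_nothing.max() - q_drwh.max()) / q_do_nothing.max()
    v0, v1 = q_do_nothing.sum() * dt_s, q_drwh.sum() * dt_s   # event runoff volumes, m3
    vol_red = 100.0 * (v0 - v1) / v0
    return peak_red, vol_red

# Toy hydrographs (not model output): the DRWH scenario shaves both the peak and the volume.
q0 = np.array([0.0, 0.4, 1.0, 0.7, 0.3, 0.1])
q1 = np.array([0.0, 0.3, 0.7, 0.5, 0.25, 0.1])
print(event_reduction_rates(q0, q1))
```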
ERIC Educational Resources Information Center
Crumbie, Robyn L.
2006-01-01
The reactions use recyclable Magtrieve as the oxidant in a simple reaction sequence illustrating the reciprocity of oxidation and reduction processes. The reciprocity of oxidation and reduction reactions are explored while undertaking the reactions in an environmentally friendly manner.
TRACI - THE TOOL FOR THE REDUCTION AND ASSESSMENT OF CHEMICAL AND OTHER ENVIRONMENTAL IMPACTS
TRACI, The Tool for the Reduction and Assessment of Chemical and other environmental Impacts, is described along with its history, the underlying research, methodologies, and insights within individual impact categories. TRACI facilitates the characterization of stressors that ma...
Performance-based, cost- and time-effective pcb analytical methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarado, J. S.
1998-06-11
Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently with low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new method of intelligent control of the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a linear matrix inequality (LMI) optimization technique, a numerical algorithm used in robust control, is used to determine the permutation matrix [P] so that a complete system transformation {[B̃], [C̃], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
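The singular-perturbation step can be sketched for a generic partitioned linear state-space model: the fast states are assumed to reach quasi-steady state and are eliminated algebraically. The partition size and matrices are placeholders, not the converter's transformed model.

```python
import numpy as np

def singular_perturbation_reduce(A, B, C, n_slow):
    """Reduce x' = A x + B u, y = C x by treating the trailing states as fast (x2' ~ 0).
    Setting 0 = A21 x1 + A22 x2 + B2 u gives x2 = -inv(A22) (A21 x1 + B2 u)."""
    A11, A12 = A[:n_slow, :n_slow], A[:n_slow, n_slow:]
    A21, A22 = A[n_slow:, :n_slow], A[n_slow:, n_slow:]
    B1, B2 = B[:n_slow, :], B[n_slow:, :]
    C1, C2 = C[:, :n_slow], C[:, n_slow:]
    A22_inv = np.linalg.inv(A22)
    Ar = A11 - A12 @ A22_inv @ A21            # slow (reduced) dynamics
    Br = B1 - A12 @ A22_inv @ B2
    Cr = C1 - C2 @ A22_inv @ A21
    Dr = -C2 @ A22_inv @ B2                   # direct feedthrough created by the reduction
    return Ar, Br, Cr, Dr
```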
Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa
2018-04-15
This paper introduces the MAQ (Multi-dimensional Air Quality) model aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear multi-objective, multi-pollutant decision problem where the decision variables are the application levels of emission abatement measures allowing the reduction of energy consumption, end-of-pipe technologies and fuel switch options. The objectives of the decision problem are the minimization of tropospheric secondary pollution exposure and of internal costs. The model assesses CO2 equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.
Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce
2018-01-01
Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions.
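A simplified two-stage sketch of the screening-then-modelling idea: rank candidate drivers with a random forest, then fit a linear model on the top-ranked ones. It assumes predictors are already numerically encoded and omits the Boruta step and the multi-model averaging used in the paper; the column names are invented.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import statsmodels.api as sm

def screen_then_fit(df, response="avoidable_food_waste_kg", n_keep=5):
    """Stage 1: rank candidate drivers by random-forest importance.
    Stage 2: fit an ordinary linear model on the top-ranked drivers."""
    X = df.drop(columns=[response])
    y = df[response]
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    ranked = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
    keep = list(ranked.index[:n_keep])        # e.g. household_size, fussy_eaters, employment, ...
    ols = sm.OLS(y, sm.add_constant(X[keep])).fit()
    return ranked, ols
```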
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses, and, with profit margins for airline operations currently almost nonexistent, potentially negate any possible profit. Although optimization techniques have resulted in many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Research was accomplished, and actual U.S., domestic, flight data from a major airline was utilized, to develop a model to predict airline block times with increased accuracy and smaller variance in the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, and historic block time distributions. The estimation of the block times for commercial, domestic, airline operations requires a probabilistic, general model that can be easily customized for a specific airline’s network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average, actual block times while minimizing the variation. Predictions of block times for the third quarter months of July and August of 2011 were calculated using this new model. The resulting, actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major, U.S., domestic airline.
[Income-related health inequalities in France in 2004: Decomposition and explanations].
Tubeuf, S
2009-10-01
This analysis supplements existing work on social health inequalities at two levels: the measurement of health and the measurement of inequalities. Firstly, individual health status was measured using a subjective health indicator corrected using a promising cardinalisation method that had not previously been applied to French data. Secondly, this study used an innovative methodology to measure income-related health inequalities, to understand the relationships between income, income inequality, various social determinants, and health. The analysis was based on a sample of working-age adults from the 2004 Health and Health Insurance Survey. The methodology used in the study measures the total income-related health inequality using the concentration index. This index is based on a linear model explaining health according to several individual characteristics, such as age, sex, and various socioeconomic characteristics. The method thus takes into account both the causal relationships between the various explanatory factors introduced in the model and their relationship with health. Furthermore, it concretely measures the contribution of the social determinants to income-related health inequalities. The results show an income-related health inequality favouring individuals with a higher income. Moreover, income level, supplementary private health insurance, education level, and social class account for the main contributions to inequality. Therefore, the decomposition method highlights population groups that policies should target. The study suggests that reducing income inequality alone is not sufficient to lower income-related health inequalities in France in 2004; it must be supplemented by weakening the relationship between income and health and by reducing income inequality across socioeconomic groups.
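As a rough illustration of the concentration-index machinery described above, the sketch below computes the index as twice the covariance between health and the fractional income rank divided by mean health, and decomposes it into determinant contributions of the form (beta_k * mean(x_k) / mean(h)) * C_k. The variables, coefficients, and synthetic data are placeholders, not the survey's fields or estimates.

```python
# Hedged sketch of an income-related health concentration index and a
# Wagstaff-style decomposition on synthetic data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
income = rng.lognormal(3, 0.5, n)
educ = rng.integers(0, 4, n)
age = rng.integers(20, 65, n)
health = 50 + 0.002 * income + 2.0 * educ - 0.1 * age + rng.normal(0, 5, n)

def concentration_index(h, rank):
    """C = 2 * cov(h, fractional income rank) / mean(h)."""
    return 2 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

order = np.argsort(income)
rank = np.empty(n)
rank[order] = (np.arange(n) + 0.5) / n          # fractional rank by income

X = sm.add_constant(np.column_stack([income, educ, age]))
fit = sm.OLS(health, X).fit()
names = ["const", "income", "educ", "age"]

print("total concentration index:", round(concentration_index(health, rank), 4))
# Contribution of each determinant k: (beta_k * mean(x_k) / mean(h)) * C_k
for k, name in enumerate(names[1:], start=1):
    xk = X[:, k]
    contrib = fit.params[k] * xk.mean() / health.mean() * concentration_index(xk, rank)
    print(f"contribution of {name}: {contrib:.4f}")
```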
A systematic comparison of the closed shoulder reduction techniques.
Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J
2017-05-01
To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level. A PubMed and EMBASE query was performed, screening all relevant literature of closed reduction techniques mentioning the success rate written in English, Dutch, German, and Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of included studies and excluded studies with a poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened for their title and abstract, of which 217 studies were screened full-text and finally 13 studies were included. These studies included 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that the scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.
Biosecurity Risk Assessment Methodology (BioRAM) v. 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
CASKEY, SUSAN; GAUDIOSO, JENNIFER; SALERNO, REYNOLDS
Sandia National Laboratories International Biological Threat Reduction Dept (SNL/IBTR) has an ongoing mission to enhance biosecurity assessment methodologies, tools, and guidance. These will aid labs seeking to implement biosecurity as advocated in the WHO's recently released Biorisk Management: Laboratory Biosecurity Guidance. BioRAM 2.0 is the software tool developed initially using the SNL LDRD process and designed to complement the "Laboratory Biosecurity Risk Handbook" written by Ren Salerno and Jennifer Gaudioso, which defines biosecurity risk assessment methodologies.
Random forest feature selection approach for image segmentation
NASA Astrophysics Data System (ADS)
Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin
2017-03-01
In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without using any selection criteria. A more reliable model can be built by using a framework that selects the important variables, from the point of view of the classification, and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
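A stripped-down version of this importance-based screening could look like the sketch below, where features whose random-forest importance falls below the uniform-importance level 1/p are dropped before the discriminative model is built; the synthetic data and the threshold are illustrative assumptions, not the authors' segmentation features.

```python
# Minimal sketch of random-forest variable-importance screening (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, p = 2000, 50
X = rng.normal(size=(n, p))
# Only the first three features carry signal in this toy example.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1).fit(X, y)
imp = rf.feature_importances_

# Keep only features whose importance exceeds the level expected under a uniform split.
keep = np.where(imp > 1.0 / p)[0]
print("retained features:", keep)
X_reduced = X[:, keep]   # reduced design matrix for the downstream discriminative model
```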
Comprehensible knowledge model creation for cancer treatment decision making.
Afzal, Muhammad; Hussain, Maqbool; Ali Khan, Wajahat; Ali, Taqdir; Lee, Sungyoung; Huh, Eui-Nam; Farooq Ahmad, Hafiz; Jamshed, Arif; Iqbal, Hassan; Irfan, Muhammad; Abbas Hydari, Manzar
2017-03-01
A wealth of clinical data exists in clinical documents in the form of electronic health records (EHRs). This data can be used for developing knowledge-based recommendation systems that can assist clinicians in clinical decision making and education. One of the big hurdles in developing such systems is the lack of automated mechanisms for knowledge acquisition to enable and educate clinicians in informed decision making. An automated knowledge acquisition methodology with a comprehensible knowledge model for cancer treatment (CKM-CT) is proposed. With the CKM-CT, clinical data are acquired automatically from documents. Quality of data is ensured by correcting errors and transforming various formats into a standard data format. Data preprocessing involves dimensionality reduction and missing value imputation. Predictive algorithm selection is performed on the basis of the ranking score of the weighted sum model. The knowledge builder prepares knowledge for knowledge-based services: clinical decisions and education support. Data is acquired from 13,788 head and neck cancer (HNC) documents for 3447 patients, including 1526 patients of the oral cavity site. In the data quality task, 160 staging values are corrected. In the preprocessing task, 20 attributes and 106 records are eliminated from the dataset. The Classification and Regression Trees (CRT) algorithm is selected and provides 69.0% classification accuracy in predicting HNC treatment plans, consisting of 11 decision paths that yield 11 decision rules. Our proposed methodology, CKM-CT, is helpful to find hidden knowledge in clinical documents. In CKM-CT, the prediction models are developed to assist and educate clinicians for informed decision making. The proposed methodology is generalizable to apply to data of other domains such as breast cancer with a similar objective to assist clinicians in decision making and education. Copyright © 2017 Elsevier Ltd. All rights reserved.
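The predictive-algorithm selection step can be sketched as a simple weighted sum model: each candidate algorithm is scored on normalized criteria and ranked by the weighted total. The candidate list, criteria, and weights below are illustrative assumptions, not the values used in CKM-CT.

```python
# Hedged sketch of weighted-sum-model algorithm ranking (criteria and weights are placeholders).
import numpy as np

algorithms = ["CRT", "SVM", "NaiveBayes", "kNN"]
# columns: accuracy (higher better), interpretability (higher better), runtime in s (lower better)
scores = np.array([
    [0.69, 0.9, 12.0],
    [0.72, 0.3, 40.0],
    [0.64, 0.6,  5.0],
    [0.66, 0.4, 20.0],
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])   # whether a higher raw score is better

# Normalize benefit criteria against the column maximum and cost criteria against the minimum.
norm = np.where(benefit, scores / scores.max(axis=0), scores.min(axis=0) / scores)
wsm = norm @ weights
for name, s in sorted(zip(algorithms, wsm), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```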
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashford, Mike
The report describes the prospects for energy efficiency and greenhouse gas emissions reductions in Mexico, along with renewable energy potential. A methodology for developing emissions baselines is shown, in order to prepare project emissions reductions calculations. An application to the USIJI program was also prepared through this project, for a portfolio of energy efficiency projects.
This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...
Policy implications of uncertainty in modeled life-cycle greenhouse gas emissions of biofuels.
Mullins, Kimberley A; Griffin, W Michael; Matthews, H Scott
2011-01-01
Biofuels have received legislative support recently in California's Low-Carbon Fuel Standard and the Federal Energy Independence and Security Act. Both present new fuel types, but neither provides methodological guidelines for dealing with the inherent uncertainty in evaluating their potential life-cycle greenhouse gas emissions. Emissions reductions are based on point estimates only. This work demonstrates the use of Monte Carlo simulation to estimate life-cycle emissions distributions from ethanol and butanol from corn or switchgrass. Life-cycle emissions distributions for each feedstock and fuel pairing modeled span an order of magnitude or more. Using a streamlined life-cycle assessment, corn ethanol emissions range from 50 to 250 g CO(2)e/MJ, for example, and each feedstock-fuel pathway studied shows some probability of greater emissions than a distribution for gasoline. Potential GHG emissions reductions from displacing fossil fuels with biofuels are difficult to forecast given this high degree of uncertainty in life-cycle emissions. This uncertainty is driven by the importance and uncertainty of indirect land use change emissions. Incorporating uncertainty in the decision making process can illuminate the risks of policy failure (e.g., increased emissions), and a calculated risk of failure due to uncertainty can be used to inform more appropriate reduction targets in future biofuel policies.
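A toy Monte Carlo of the kind described above might look as follows: each life-cycle stage is given an uncertainty distribution, the stages are summed per draw, and the resulting distribution is compared against a gasoline benchmark. The distributions, parameter values, and the 94 g CO2e/MJ comparator are placeholders, not the paper's calibrated inputs.

```python
# Illustrative Monte Carlo sketch of life-cycle GHG uncertainty for a corn-ethanol pathway.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
farming    = rng.triangular(20, 35, 55, n)                         # g CO2e/MJ
conversion = rng.normal(30, 5, n)                                  # g CO2e/MJ
iluc       = rng.lognormal(mean=np.log(40), sigma=0.6, size=n)     # indirect land use change

total = farming + conversion + iluc
gasoline = 94.0                                                    # comparator, g CO2e/MJ

print("median:", round(np.median(total), 1), "g CO2e/MJ")
print("5th-95th percentile:", np.percentile(total, [5, 95]).round(1))
print("P(biofuel emits more than gasoline):", round(np.mean(total > gasoline), 3))
```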
The Global Tsunami Model (GTM)
NASA Astrophysics Data System (ADS)
Thio, H. K.; Løvholt, F.; Harbitz, C. B.; Polet, J.; Lorito, S.; Basili, R.; Volpe, M.; Romano, F.; Selva, J.; Piatanesi, A.; Davies, G.; Griffin, J.; Baptista, M. A.; Omira, R.; Babeyko, A. Y.; Power, W. L.; Salgado Gálvez, M.; Behrens, J.; Yalciner, A. C.; Kanoglu, U.; Pekcan, O.; Ross, S.; Parsons, T.; LeVeque, R. J.; Gonzalez, F. I.; Paris, R.; Shäfer, A.; Canals, M.; Fraser, S. A.; Wei, Y.; Weiss, R.; Zaniboni, F.; Papadopoulos, G. A.; Didenkulova, I.; Necmioglu, O.; Suppasri, A.; Lynett, P. J.; Mokhtari, M.; Sørensen, M.; von Hillebrandt-Andrade, C.; Aguirre Ayerbe, I.; Aniel-Quiroga, Í.; Guillas, S.; Macias, J.
2016-12-01
The large tsunami disasters of the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.
The Global Tsunami Model (GTM)
NASA Astrophysics Data System (ADS)
Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K.
2017-12-01
The tsunamis that occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.
The Global Tsunami Model (GTM)
NASA Astrophysics Data System (ADS)
Løvholt, Finn
2017-04-01
The large tsunami disasters of the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steel (i.e., 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME [0] strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996 which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focussed on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
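The null-space Monte Carlo idea can be sketched on a toy linear problem: after calibration, random parameter perturbations are projected onto the approximate null space of the Jacobian, so each generated parameter set leaves the simulated observations essentially unchanged to first order. The model, dimensions, and tolerance below are illustrative, not the groundwater model itself.

```python
# Conceptual null-space Monte Carlo (NSMC) sketch on a toy linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_par = 30, 100                    # many more parameters than observations
J = rng.normal(size=(n_obs, n_par))       # Jacobian of observations w.r.t. parameters
p_cal = rng.normal(size=n_par)            # calibrated parameter vector

# SVD; directions with negligible singular values span the calibration null space.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = np.sum(s > 1e-8 * s[0])
V_null = Vt[rank:].T                      # shape (n_par, n_par - rank)

samples = []
for _ in range(200):
    delta = rng.normal(size=n_par)                # candidate stochastic perturbation
    delta_null = V_null @ (V_null.T @ delta)      # keep only the null-space component
    samples.append(p_cal + delta_null)

samples = np.array(samples)
# First-order change in simulated observations is ~0 for every generated sample:
print(np.abs(J @ (samples - p_cal).T).max())
```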
Rodovalho, Edmo da Cunha; Lima, Hernani Mota; de Tomi, Giorgio
2016-05-01
The mining operations of loading and haulage have an energy source that is highly dependent on fossil fuels. In mining companies that select trucks for haulage, this input is the main component of mining costs. How can the impact of the operational aspects on the diesel consumption of haulage operations in surface mines be assessed? There are many studies relating the consumption of fuel trucks to several variables, but a methodology that prioritizes higher-impact variables under each specific condition is not available. Generic models may not apply to all operational settings presented in the mining industry. This study aims to create a method of analysis, identification, and prioritization of variables related to fuel consumption of haul trucks in open pit mines. For this purpose, statistical analysis techniques and mathematical modelling tools using multiple linear regressions will be applied. The model is shown to be suitable because the results generate a good description of the fuel consumption behaviour. In the practical application of the method, the reduction of diesel consumption reached 10%. The implementation requires no large-scale investments or very long deadlines and can be applied to mining haulage operations in other settings. Copyright © 2016 Elsevier Ltd. All rights reserved.
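A minimal version of the regression-based prioritization could be set up as below: fit an ordinary least-squares model of diesel use per haul cycle on candidate operational variables and rank them by standardized coefficients. The predictor names, synthetic data, and coefficients are assumptions for illustration, not the mine's measurements.

```python
# Hedged sketch of a multiple-linear-regression screen for haul-truck diesel consumption.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "payload_t": rng.normal(220, 15, n),
    "total_resistance_pct": rng.normal(10, 2, n),   # grade + rolling resistance
    "haul_distance_km": rng.normal(4.0, 0.8, n),
    "idle_time_min": rng.normal(8, 3, n),
})
df["diesel_l_per_cycle"] = (0.30 * df["payload_t"] + 6.0 * df["total_resistance_pct"]
                            + 9.0 * df["haul_distance_km"] + 0.4 * df["idle_time_min"]
                            + rng.normal(0, 5, n))

predictors = df.drop(columns="diesel_l_per_cycle")
fit = sm.OLS(df["diesel_l_per_cycle"], sm.add_constant(predictors)).fit()

# Standardized coefficients give a simple prioritization of the operational variables.
std_coef = fit.params.drop("const") * predictors.std() / df["diesel_l_per_cycle"].std()
print(std_coef.sort_values(ascending=False))
```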
Sreedhar, B; Reddy, P Surendra; Devi, D Keerthi
2009-11-20
This note describes the direct reductive amination of carbonyl compounds with nitroarenes using gum acacia-palladium nanoparticles, employing molecular hydrogen as the reductant. This methodology is found to be applicable to both aliphatic and aromatic aldehydes and a wide range of nitroarenes. The operational simplicity and the mild reaction conditions add to the value of this method as a practical alternative to the reductive amination of carbonyl compounds.
Integrated active and passive control design methodology for the LaRC CSI evolutionary model
NASA Technical Reports Server (NTRS)
Voth, Christopher T.; Richards, Kenneth E., Jr.; Schmitz, Eric; Gehling, Russel N.; Morgenthaler, Daniel R.
1994-01-01
A general design methodology to integrate active control with passive damping was demonstrated on the NASA LaRC CSI Evolutionary Model (CEM), a ground testbed for future large, flexible spacecraft. Vibration suppression controllers designed for Line-of Sight (LOS) minimization were successfully implemented on the CEM. A frequency-shaped H2 methodology was developed, allowing the designer to specify the roll-off of the MIMO compensator. A closed loop bandwidth of 4 Hz, including the six rigid body modes and the first three dominant elastic modes of the CEM was achieved. Good agreement was demonstrated between experimental data and analytical predictions for the closed loop frequency response and random tests. Using the Modal Strain Energy (MSE) method, a passive damping treatment consisting of 60 viscoelastically damped struts was designed, fabricated and implemented on the CEM. Damping levels for the targeted modes were more than an order of magnitude larger than for the undamped structure. Using measured loss and stiffness data for the individual damped struts, analytical predictions of the damping levels were very close to the experimental values in the (1-10) Hz frequency range where the open loop model matched the experimental data. An integrated active/passive controller was successfully implemented on the CEM and was evaluated against an active-only controller. A two-fold increase in the effective control bandwidth and further reductions of 30 percent to 50 percent in the LOS RMS outputs were achieved compared to an active-only controller. Superior performance was also obtained compared to a High-Authority/Low-Authority (HAC/LAC) controller.
NASA Astrophysics Data System (ADS)
Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.
2015-07-01
Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i.e. toluene) oxidation and "more realistic" plant mesocosm systems, demonstrates that such an ensemble of chemometric mapping has the potential to be used for the classification of more complex spectra of unknown origin. More specifically, the addition of mesocosm data from fig and birch tree experiments shows that isoprene and monoterpene emitting sources, respectively, can be mapped onto the statistical model structure and their positional vectors can provide insight into their biological sources and controlling oxidative chemistry. The potential to extend the methodology to the analysis of ambient air is discussed using results obtained from a zero-dimensional box model incorporating mechanistic data obtained from the Master Chemical Mechanism (MCMv3.2). Such an extension to analysing ambient air would prove a powerful asset in assisting with the identification of SOA sources and the elucidation of the underlying chemical mechanisms involved.
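A condensed, hedged sketch of that chemometric pipeline is given below: the composition matrix is standardized, reduced with PCA, grouped by hierarchical clustering, and classified with PLS-DA via a one-hot regression. The synthetic spectra, class labels, and component counts are stand-ins for the chamber data, not the study's inputs.

```python
# Condensed PCA + HCA + PLS-DA sketch on a placeholder composition matrix (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_exp, n_mz = 60, 300                      # experiments x mass-spectral channels
labels = np.repeat([0, 1, 2], n_exp // 3)  # e.g. isoprene / monoterpene / sesquiterpene systems
X = rng.normal(size=(n_exp, n_mz)) + labels[:, None] * rng.normal(1.0, 0.2, n_mz)

Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=5).fit_transform(Xs)            # dimension reduction
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(scores)

# PLS-DA: regress one-hot class membership on the spectra, classify by argmax.
Y = np.eye(3)[labels]
pls = PLSRegression(n_components=5).fit(Xs, Y)
pred = pls.predict(Xs).argmax(axis=1)
print("HCA clusters:", clusters)
print("PLS-DA training accuracy:", (pred == labels).mean())
```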
NASA Technical Reports Server (NTRS)
Huston, R. J. (Compiler)
1982-01-01
The establishment of a realistic plan for NASA and the U.S. helicopter industry to develop a design-for-noise methodology, including plans for the identification and development of promising noise reduction technology, was discussed. Topics included: noise reduction techniques, scaling laws, empirical noise prediction, psychoacoustics, and methods of developing and validating noise prediction methods.
Evaluating a Health Risk Reduction Program.
ERIC Educational Resources Information Center
Nagelberg, Daniel B.
1981-01-01
A health risk reduction program at Bowling Green State University (Ohio) tested the efficacy of peer education against the efficacy of returning (by mail) health questionnaire results. A peer health education program did not appear to be effective in changing student attitudes or lifestyles; however, the research methodology may not have been…
Vibration and noise characteristics of an elevated box girder paved with different track structures
NASA Astrophysics Data System (ADS)
Li, Xiaozhen; Liang, Lin; Wang, Dangxiong
2018-07-01
The vibration and noise of elevated concrete box girders (ECBGs) are now among the most pressing concerns in the field of urban rail transit (URT) systems. The track structure, as a critical load-transfer component, directly affects how loads are transmitted into the bridge and how noise is radiated from the system, and therefore plays a significant role in the vibration and noise reduction of ECBGs. In order to investigate the influence of different track structures on the vibration and structure-borne noise of ECBGs, a frequency-domain theoretical model of the coupled vehicle-track system, taking into account the effect of multiple wheels, is first established in the present work. The analysis of track structures focuses on embedded sleepers, trapezoidal sleepers, and steel-spring floating slabs (SSFS). Next, a vibration and noise field test was performed on a 30 m simply supported ECBG (with the embedded-sleeper track structure) of a URT system. Based on the tested results, two numerical models, involving a finite element model for the vibration analysis, as well as a statistical energy analysis (SEA) model for the prediction of the noise radiation, are established and validated. The results of the numerical simulations and the field tests are well matched, which offers opportunities to predict the vibration and structure-borne noise of ECBGs by the proposed modelling methodology. From the comparison between the different types of track structures, the spatial distribution and reduction effect of vibration and noise are lastly studied. The force applied to the ECBG is substantially determined by both the wheel-rail force (external factor) and the transmission rate of the track structure (internal factor). The SSFS track is the most effective for vibration and noise reduction of ECBGs, followed in descending order by the trapezoidal-sleeper and embedded-sleeper tracks. The above result provides a theoretical basis for the vibration and noise reduction design of urban rail transit systems.
NASA Astrophysics Data System (ADS)
Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie
2009-06-01
Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of different BMPs under consideration. The BMP tool replaces the dynamic linkage of the distributed parameter watershed model during optimization and therefore reduces the computation time considerably. Total pollutant load from the watershed, and net cost increase from the baseline, were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (Soil and Water Assessment Tool (SWAT)), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants from leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.
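The coupling of a BMP lookup table with multi-objective search can be illustrated with a deliberately simplified stand-in for NSGA-II: candidate BMP assignments are sampled at random, scored against a small "BMP tool" of per-field costs and sediment reductions, and filtered to the non-dominated (Pareto) set. All values and BMP names below are illustrative, not the study's data or its genetic-algorithm implementation.

```python
# Simplified random-search Pareto sketch standing in for the NSGA-II + BMP-tool coupling.
import numpy as np

rng = np.random.default_rng(7)
# "BMP tool": per-BMP fractional sediment reduction and annual cost per field ($), placeholders.
bmps = {"none": (0.00, 0), "buffer_strip": (0.45, 1200), "no_till": (0.30, 800),
        "cover_crop": (0.20, 600)}
names = list(bmps)
n_fields = 50
baseline_load = rng.uniform(5, 40, n_fields)     # t sediment per field per year

def evaluate(plan):
    red = np.array([bmps[names[b]][0] for b in plan])
    cost = np.array([bmps[names[b]][1] for b in plan])
    return cost.sum(), (baseline_load * (1 - red)).sum()

candidates = [rng.integers(0, len(names), n_fields) for _ in range(3000)]
objs = np.array([evaluate(p) for p in candidates])

# Keep non-dominated plans (minimize both total cost and remaining sediment load).
pareto = [i for i, o in enumerate(objs)
          if not np.any(np.all(objs <= o, axis=1) & np.any(objs < o, axis=1))]
best_cost = objs[pareto][objs[pareto][:, 0].argmin()]
print(len(pareto), "non-dominated plans; lowest-cost plan (cost $, load t):", best_cost)
```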
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2014-05-01
Modeling complex vibroacoustic systems including poroelastic materials using finite element based methods can be unfeasible for practical applications. For this reason, analytical approaches such as the transfer matrix method are often preferred to obtain a quick estimation of the vibroacoustic parameters. However, the strong assumptions inherent within the transfer matrix method lead to a lack of accuracy in the description of the geometry of the system. As a result, the transfer matrix method is inherently limited to the high frequency range. Nowadays, hybrid substructuring procedures have become quite popular. Indeed, different modeling techniques are typically sought to describe complex vibroacoustic systems over the widest possible frequency range. As a result, the flexibility and accuracy of the finite element method and the efficiency of the transfer matrix method could be coupled in a hybrid technique to obtain a reduction of the computational burden. In this work, a hybrid methodology is proposed. The performance of the method in predicting the vibroacoustic indicators of flat structures with attached homogeneous acoustic treatments is assessed. The results prove that, under certain conditions, the hybrid model allows for a reduction of the computational effort while preserving enough accuracy with respect to the full finite element solution.
Aerodynamic Simulation of Ice Accretion on Airfoils
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Addy, Harold E., Jr.; Bragg, Michael B.; Busch, Greg T.; Montreuil, Emmanuel
2011-01-01
This report describes recent improvements in aerodynamic scaling and simulation of ice accretion on airfoils. Ice accretions were classified into four types on the basis of aerodynamic effects: roughness, horn, streamwise, and spanwise ridge. The NASA Icing Research Tunnel (IRT) was used to generate ice accretions within these four types using both subscale and full-scale models. Large-scale, pressurized windtunnel testing was performed using a 72-in.- (1.83-m-) chord, NACA 23012 airfoil model with high-fidelity, three-dimensional castings of the IRT ice accretions. Performance data were recorded over Reynolds numbers from 4.5 x 10(exp 6) to 15.9 x 10(exp 6) and Mach numbers from 0.10 to 0.28. Lower fidelity ice-accretion simulation methods were developed and tested on an 18-in.- (0.46-m-) chord NACA 23012 airfoil model in a small-scale wind tunnel at a lower Reynolds number. The aerodynamic accuracy of the lower fidelity, subscale ice simulations was validated against the full-scale results for a factor of 4 reduction in model scale and a factor of 8 reduction in Reynolds number. This research has defined the level of geometric fidelity required for artificial ice shapes to yield aerodynamic performance results to within a known level of uncertainty and has culminated in a proposed methodology for subscale iced-airfoil aerodynamic simulation.
The Statistical point of view of Quality: the Lean Six Sigma methodology
Viti, Andrea; Terzi, Alberto
2015-01-01
Six Sigma and Lean are two quality improvement methodologies. The Lean Six Sigma methodology is applicable to repetitive procedures. Therefore, the use of this methodology in the health-care arena has focused mainly on areas of business operations, throughput, and case management and has focused on efficiency outcomes. After the revision of methodology, the paper presents a brief clinical example of the use of Lean Six Sigma as a quality improvement method in the reduction of the complications during and after lobectomies. Using Lean Six Sigma methodology, the multidisciplinary teams could identify multiple modifiable points across the surgical process. These process improvements could be applied to different surgical specialties and could result in a measurement, from statistical point of view, of the surgical quality. PMID:25973253
The Statistical point of view of Quality: the Lean Six Sigma methodology.
Bertolaccini, Luca; Viti, Andrea; Terzi, Alberto
2015-04-01
Six Sigma and Lean are two quality improvement methodologies. The Lean Six Sigma methodology is applicable to repetitive procedures. Therefore, the use of this methodology in the health-care arena has focused mainly on areas of business operations, throughput, and case management and has focused on efficiency outcomes. After the revision of methodology, the paper presents a brief clinical example of the use of Lean Six Sigma as a quality improvement method in the reduction of the complications during and after lobectomies. Using Lean Six Sigma methodology, the multidisciplinary teams could identify multiple modifiable points across the surgical process. These process improvements could be applied to different surgical specialties and could result in a measurement, from statistical point of view, of the surgical quality.
Hazard interactions and interaction networks (cascades) within multi-hazard methodologies
NASA Astrophysics Data System (ADS)
Gill, Joel C.; Malamud, Bruce D.
2016-08-01
This paper combines research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between multi-layer single-hazard approaches and multi-hazard approaches that integrate such interactions. This synthesis suggests that ignoring interactions between important environmental and anthropogenic processes could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. In this paper we proceed to present an enhanced multi-hazard framework through the following steps: (i) description and definition of three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment, (ii) outlining of three types of interaction relationship (triggering, increased probability, and catalysis/impedance), and (iii) assessment of the importance of networks of interactions (cascades) through case study examples (based on the literature, field observations and semi-structured interviews). We further propose two visualisation frameworks to represent these networks of interactions: hazard interaction matrices and hazard/process flow diagrams. Our approach reinforces the importance of integrating interactions between different aspects of the Earth system, together with human activity, into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability between successive hazards, and (iii) prioritise resource allocation for mitigation and disaster risk reduction.
Junyong Zhu; G.S. Wang; X.J. Pan; Roland Gleisner
2009-01-01
Sieving methods have been almost exclusively used for feedstock size-reduction characterization in the biomass refining literature. This study demonstrates a methodology to properly characterize specific surface of biomass substrates through two dimensional measurement of each fiber of the substrate using a wet imaging technique. The methodology provides more...
NASA Astrophysics Data System (ADS)
González-Riancho, P.; Aguirre-Ayerbe, I.; Aniel-Quiroga, I.; Abad, S.; González, M.; Larreynaga, J.; Gavidia, F.; Gutiérrez, O. Q.; Álvarez-Gómez, J. A.; Medina, R.
2013-12-01
Advances in the understanding and prediction of tsunami impacts allow the development of risk reduction strategies for tsunami-prone areas. This paper presents an integral framework for the formulation of tsunami evacuation plans based on tsunami vulnerability assessment and evacuation modelling. This framework considers (i) the hazard aspects (tsunami flooding characteristics and arrival time), (ii) the characteristics of the exposed area (people, shelters and road network), (iii) the current tsunami warning procedures and timing, (iv) the time needed to evacuate the population, and (v) the identification of measures to improve the evacuation process. The proposed methodological framework aims to bridge between risk assessment and risk management in terms of tsunami evacuation, as it allows for an estimation of the degree of evacuation success of specific management options, as well as for the classification and prioritization of the gathered information, in order to formulate an optimal evacuation plan. The framework has been applied to the El Salvador case study, demonstrating its applicability to site-specific response times and population characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, Jason C.; Parker, Graham B.
This report is the second in a series of three reports describing the potential of GE’s DR-enabled appliances to provide benefits to the utility grid. The first report described the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The third report will explore the technical capability of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation. In this report, a series of analytical methods were presented to estimate the potential cost benefit of smart appliances while utilizing demand response. Previous work estimated the potential technical benefit (i.e., peak reduction) of smart appliances, while this report focuses on the monetary value of that participation. The effects on wholesale energy cost and possible additional revenue available by participating in frequency regulation and spinning reserve markets were explored.
Development of a simulated smart pump interface.
Elias, Beth L; Moss, Jacqueline A; Shih, Alan; Dillavou, Marcus
2014-01-01
Medical device user interfaces are increasingly complex, resulting in a need for evaluation in clinically accurate settings. Simulation of these interfaces can allow for evaluation, training, and use for research without the risk of harming patients and with a significant cost reduction over using the actual medical devices. This pilot project was phase 1 of a study to define and evaluate a methodology for development of simulated medical device interface technology to be used for education, device development, and research. Digital video and audio recordings of interface interactions were analyzed to develop a model of a smart intravenous medication infusion pump user interface. This model was used to program a high-fidelity simulated smart intravenous medication infusion pump user interface on an inexpensive netbook platform.
Zhang, Jie; Hodge, Bri-Mathias; Lu, Siyuan; ...
2015-11-10
Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.
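The notion of a baseline can be made concrete with a small sketch: a 24-hour persistence forecast is scored with generic error metrics on a synthetic PV series, and a target would then be stated as a required improvement over that baseline. The series and metrics below are placeholders, not the report's metric suite or reserve calculation.

```python
# Hedged sketch of a persistence baseline and generic forecast-error metrics (illustrative only).
import numpy as np

rng = np.random.default_rng(8)
hours = 24 * 30
t = np.arange(hours)
clear_sky = np.clip(np.sin((t % 24 - 6) / 12 * np.pi), 0, None)   # normalized clear-sky shape
actual = clear_sky * rng.uniform(0.5, 1.0, hours)                  # cloud-modulated PV output

persistence = np.roll(actual, 24)          # "same as 24 h ago" baseline forecast
valid = slice(24, None)                    # drop the first day, which has no prior-day forecast

err = persistence[valid] - actual[valid]
mae = np.abs(err).mean()
rmse = np.sqrt((err ** 2).mean())
print(f"persistence baseline: MAE={mae:.3f}, RMSE={rmse:.3f} (per-unit capacity)")
# A target could then be expressed as, e.g., a required % improvement over this baseline.
```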
NASA Astrophysics Data System (ADS)
Siami, A.; Karimi, H. R.; Cigada, A.; Zappa, E.; Sabbioni, E.
2018-01-01
Preserving cultural heritage against earthquake and ambient vibrations can be an attractive topic in the field of vibration control. This paper proposes a passive vibration isolator methodology based on inerters for improving the performance of the isolation system of the famous statue of Michelangelo Buonarroti Pietà Rondanini. More specifically, a five-degree-of-freedom (5DOF) model of the statue and the anti-seismic and anti-vibration base is presented and experimentally validated. The parameters of this model are tuned according to the experimental tests performed on the assembly of the isolator and the structure. Then, the developed model is used to investigate the impact of actuation devices such as the tuned mass-damper (TMD) and the tuned mass-damper-inerter (TMDI) on vibration reduction of the structure. The effect of implementing the TMDI on the 5DOF model is shown based on physical limitations of the system parameters. Simulation results are provided to illustrate the effectiveness of the passive element of the TMDI in reducing the vibration transmitted to the statue in the vertical direction. Moreover, the optimal design parameters of the passive system such as frequency and damping coefficient will be calculated using two different performance indexes. The obtained optimal parameters have been evaluated by using two different optimization algorithms: the sequential quadratic programming method and the Firefly algorithm. The results prove a significant reduction in the vibration transmitted to the structure in the presence of the proposed tuned TMDI, without imposing a large amount of mass or modification to the structure of the isolator.
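A reduced illustration of the SQP-based tuning is sketched below on a generic 2-DOF primary-plus-absorber model: the peak of the displacement FRF is minimized over the absorber stiffness and damping with scipy's SLSQP solver. The parameter values are placeholders, and the inerter term of the full TMDI is omitted for brevity, so this is a sketch of the optimization step rather than the paper's 5DOF model.

```python
# Hedged sketch: peak-FRF minimization for a classical TMD on a 2-DOF model via SLSQP.
import numpy as np
from scipy.optimize import minimize

m1, k1, c1 = 1000.0, 4.0e5, 200.0      # primary structure (kg, N/m, N s/m), placeholders
m2 = 0.05 * m1                          # absorber mass (5% mass ratio)
w = np.linspace(5, 40, 400)             # rad/s frequency grid around the primary resonance

def peak_frf(params):
    k2, c2 = params
    peak = 0.0
    for wi in w:
        M = np.diag([m1, m2]).astype(complex)
        C = np.array([[c1 + c2, -c2], [-c2, c2]], dtype=complex)
        K = np.array([[k1 + k2, -k2], [-k2, k2]], dtype=complex)
        # Harmonic response to a unit force on the primary mass.
        H = np.linalg.solve(-wi**2 * M + 1j * wi * C + K,
                            np.array([1.0, 0.0], dtype=complex))
        peak = max(peak, abs(H[0]))
    return peak

x0 = np.array([0.05 * k1, 100.0])       # initial guess for (k2, c2)
res = minimize(peak_frf, x0, method="SLSQP", bounds=[(1e3, 1e5), (1.0, 2e3)])
print("optimal k2, c2:", res.x, "  peak FRF:", res.fun)
```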
Pearson, Amber L; van der Deen, Frederieke S; Wilson, Nick; Cobiac, Linda; Blakely, Tony
2015-03-01
To inform endgame strategies in tobacco control, this study aimed to estimate the impact of interventions that markedly reduced availability of tobacco retail outlets. The setting was New Zealand, a developed nation where the government has a smoke-free nation goal in 2025. Various legally mandated reductions in outlets that were phased in over 10 years were modelled. Geographic analyses using the road network were used to estimate the distance and time travelled from centres of small areas to the reduced number of tobacco outlets, and from there to calculate increased travel costs for each intervention. Age-specific price elasticities of demand were used to estimate future smoking prevalence. With a law that required a 95% reduction in outlets, the cost of a pack of 20 cigarettes (including travel costs) increased by 20% in rural areas and 10% elsewhere and yielded a smoking prevalence of 9.6% by 2025 (compared with 9.9% with no intervention). The intervention that permitted tobacco sales at only 50% of liquor stores resulted in the largest cost increase (∼$60/pack in rural areas) and the lowest prevalence (9.1%) by 2025. Elimination of outlets within 2 km of schools produced a smoking prevalence of 9.3%. This modelling merges geographic, economic and epidemiological methodologies in a novel way, but the results should be interpreted cautiously and further research is desirable. Nevertheless, the results still suggest that tobacco outlet reduction interventions could modestly contribute to an endgame goal. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
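The elasticity logic can be reduced to a back-of-the-envelope sketch: the travel-cost-inclusive price increase is multiplied by an assumed price elasticity of demand to give a change in smoking, which is then applied to the no-intervention prevalence. All numbers below are illustrative assumptions, not the study's modelled estimates.

```python
# Back-of-the-envelope sketch of travel-cost-driven price increase -> prevalence change.
base_price = 20.00            # NZ$ per pack including baseline travel cost (placeholder)
travel_cost_increase = 0.20   # +20% effective price in rural areas under a 95% outlet cut
elasticity = -0.4             # assumed price elasticity of demand for adult smokers

demand_change = elasticity * travel_cost_increase    # proportional change in consumption/prevalence
baseline_prevalence = 0.099                          # projected 2025 prevalence with no intervention
new_prevalence = baseline_prevalence * (1 + demand_change)
print(f"indicative rural prevalence: {new_prevalence:.3%}")
```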
The ALMA CONOPS project: the impact of funding decisions on observatory performance
NASA Astrophysics Data System (ADS)
Ibsen, Jorge; Hibbard, John; Filippi, Giorgio
2014-08-01
In times when every penny counts, many organizations are facing the question of how much scientific impact a budget cut can have or, putting it in more general terms, what the science impact of alternative (less costly) operational modes would be. In reply to such questions posed by the governing bodies, the ALMA project had to develop a methodology (ALMA Concepts for Operations, CONOPS) that attempts to measure the impact that alternative operational scenarios may have on the overall scientific production of the Observatory. Although the analysis and the results are ALMA specific, the developed approach is rather general and provides a methodology for a cost-performance analysis of alternatives before any radical alterations to the operations model are adopted. This paper describes the key aspects of the methodology: a) the definition of the Figures of Merit (FoMs) for the assessment of quantitative science performance impacts as well as qualitative impacts, and presents a methodology using these FoMs to evaluate the cost and impact of the different operational scenarios; b) the definition of a REFERENCE operational baseline; c) the identification of Alternative Scenarios each replacing one or more concepts in the REFERENCE by a different concept that has a lower cost and some level of scientific and/or operational impact; d) the use of a Cost-Performance plane to graphically combine the effects that the alternative scenarios can have in terms of cost reduction and affected performance. Although this is a first-order assessment, we believe this approach is useful for comparing different operational models and for understanding the cost-performance impact of these choices. This can be used to make decisions to meet budget cuts as well as to evaluate possible new emergent opportunities.
Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.
Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam
2016-01-01
We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to extract the underlying dynamics present in the time series expression data accurately. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has been first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a reduction of 50% in the number of time points does not have an effect on the accuracy of the proposed methodology significantly, with a maximum of just over 15% deterioration in the worst case.
A comprehensive methodology for intelligent systems life-cycle cost modelling
NASA Technical Reports Server (NTRS)
Korsmeyer, David J.; Lum, Henry, Jr.
1993-01-01
As NASA moves into the last part of the twentieth century, the desire to do 'business as usual' has been replaced with the mantra 'faster, cheaper, better'. Recently, new work has been done to show how the implementation of advanced technologies, such as intelligent systems, will impact the cost of a system design or the operational cost of a spacecraft mission. The impact of the degree of autonomous or intelligent systems and human participation on a given program is manifested most significantly during the program operational phases, while the decisions of who performs what tasks and how much automation is incorporated into the system are all made during the design and development phases. Employing intelligent systems and automation is not an either/or question, but one of degree. The question is what level of automation and autonomy will provide the optimal trade-off between performance and cost. Conventional costing methodologies, however, are unable to show the significance of technologies like these in terms of traceable cost benefits and reductions in the various phases of the spacecraft's lifecycle. The proposed comprehensive life-cycle methodology can address intelligent system technologies as well as others that impact human-machine operational modes.
A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)
2017-10-01
Technical Report 3079, October 2017. Executive summary: this report summarizes the methodology developed to improve radar threshold modeling for a phased array radar configuration using the Advanced Propagation Model (APM).
Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.
2010-01-01
In the last decade, the realization of small, inexpensive, and powerful devices with sensors, computers, and wireless communication has promised the development of massive-sized sensor networks with dense deployments over large areas capable of high fidelity situational assessments. However, most management models have been based on centralized control and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small but, as it is not scalable, this methodology is insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm where communities of sensor/computer devices assess local conditions and make local decisions from which emerges a global behaviour. This approach obviates many of the problems of centralized control as it is not prone to a single point of failure and is more scalable, efficient, robust, and fault tolerant.
Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun
2017-01-01
Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
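A minimal sketch of the general idea described above, reducing both the high-dimensional input and output spaces and emulating the map between the reduced coordinates with Gaussian processes, is given below. PCA is used here as the reduction step and synthetic data stand in for the groundwater simulator, so the dimensions, the specific reduction technique, and all variable names are assumptions for illustration rather than the paper's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic stand-in for an expensive simulator: 200-dimensional inputs
# (e.g. a discretised permeability field) mapped to a 500-point spatial field.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))          # training inputs
Y = np.tanh(X[:, :1]) @ np.ones((1, 500)) + 0.01 * rng.normal(size=(80, 500))

# Reduce both input and output spaces.
pca_in, pca_out = PCA(n_components=5), PCA(n_components=3)
Z = pca_in.fit_transform(X)             # low-dimensional inputs
T = pca_out.fit_transform(Y)            # low-dimensional outputs

# One GP emulator per retained output component.
gps = [GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(Z, T[:, j])
       for j in range(T.shape[1])]

def emulate(x_new):
    """Predict the full spatial field for new simulator inputs."""
    z = pca_in.transform(x_new)
    t = np.column_stack([gp.predict(z) for gp in gps])
    return pca_out.inverse_transform(t)  # back to the original output space

print(emulate(rng.normal(size=(2, 200))).shape)   # (2, 500)
```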
Environmental assessment of an aircraft conversion, Montana Air National Guard, Great Falls, Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, G.; Policastro, A.; Krummel, J.
1986-08-01
It is proposed that the 120th Fighter Interceptor Group of the Montana Air National Guard convert from 18 F-106 to 18 F-16 aircraft. Associated with this conversion are building modifications, land acquisition, and facility construction. The environmental assessment determined that the primary impacts of the conversion would be positive. Noise modeling using the NOISEMAP methodology showed that the maximum noise reduction, resulting from the conversion, at any ground receptor point is about 5 dB on the Ldn scale. The noise reductions vary with the distance of a receptor point from the runways - the greater the distance, the smaller the noise reduction. Conversion to the F-16 prior to completion of a "hush house" would result in a temporary increase in noise to the southeast of the airport over a commercial and industrial area. In addition, total air pollutant emissions from aircraft operations would be reduced as a consequence of the conversion. No significant adverse impacts are predicted as a result of the conversion from F-106s to F-16s.
Indicators to support the dynamic evaluation of air quality models
NASA Astrophysics Data System (ADS)
Thunis, P.; Clappier, A.
2014-12-01
Air quality models are useful tools for the assessment and forecast of pollutant concentrations in the atmosphere. Most of the evaluation process relies on the “operational phase”, in other words the comparison of model results with available measurements, which provides insight into the model's capability to reproduce measured concentrations for a given application. But one of the key advantages of air quality models lies in their ability to assess the impact of precursor emission reductions on air quality levels. Models are then used in a dynamic mode (i.e. the response to a change in a given model input) for which evaluation of the model performance becomes a challenge. The objective of this work is to propose common indicators and diagrams to facilitate the understanding of model responses to emission changes when models are to be used for policy support. These indicators are shown to be useful for retrieving information on the magnitude of the locally produced impacts of emission reductions on concentrations with respect to the “external to the domain” contribution, but also for identifying, distinguishing and quantifying impacts arising from different factors (different precursors). In addition, information about the robustness of the model results is provided. As such, these indicators might prove useful as a first screening methodology to identify the feasibility of a given action as well as to prioritize the factors on which to act for increased efficiency. Finally, all indicators are made dimensionless to facilitate the comparison of results obtained with different models, different resolutions, or on different geographical areas.
Zhu, Yun; Lao, Yanwen; Jang, Carey; Lin, Chen-Jen; Xing, Jia; Wang, Shuxiao; Fu, Joshua S; Deng, Shuang; Xie, Junping; Long, Shicheng
2015-01-01
This article describes the development and implementation of a novel software platform that supports real-time, science-based policy making on air quality through a user-friendly interface. The software, RSM-VAT, uses a response surface modeling (RSM) methodology and serves as a visualization and analysis tool (VAT) for three-dimensional air quality data obtained by atmospheric models. The software features a number of powerful and intuitive data visualization functions for illustrating the complex nonlinear relationship between emission reductions and air quality benefits. A case study of the contiguous U.S. demonstrates that the enhanced RSM-VAT is capable of reproducing the air quality model results with Normalized Mean Bias <2% and assisting in air quality policy making in near real time. Copyright © 2014. Published by Elsevier B.V.
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket analysis step for dimension reduction. This new step enables us to analyze the complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impact of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it achieves better prediction performance.
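The sketch below illustrates a conventional two-part model of the kind the study extends: a logistic model for the purchase decision and a linear model for log expenditure conditional on a purchase. The basket-analysis dimension-reduction step that is the paper's contribution is not reproduced, and the covariates and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

# Hypothetical consumer data: covariates plus expenditure on one medical sector.
rng = np.random.default_rng(2)
df = pd.DataFrame({"age": rng.integers(20, 80, 500),
                   "income": rng.normal(50, 15, 500)})
buy = rng.random(500) < 0.4
df["spend"] = np.where(buy, np.exp(rng.normal(6, 1, 500)), 0.0)

X = df[["age", "income"]]

# Part 1: probability of any purchase in the sector.
part1 = LogisticRegression().fit(X, df["spend"] > 0)

# Part 2: log-expenditure conditional on a purchase.
pos = df["spend"] > 0
part2 = LinearRegression().fit(X[pos], np.log(df.loc[pos, "spend"]))

# Expected expenditure combines both parts (retransformation factor omitted).
expected = part1.predict_proba(X)[:, 1] * np.exp(part2.predict(X))
print(expected[:5])
```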
Safety evaluation methodology for advanced coal extraction systems
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.
1981-01-01
Qualitative and quantitative evaluation methods for coal extraction systems were developed. The analysis examines the soundness of the design, whether or not the major hazards have been eliminated or reduced, and how the reduction would be accomplished. The quantitative methodology establishes the approximate impact of hazards on injury levels. The results are weighted by peculiar geological elements, specialized safety training, peculiar mine environmental aspects, and reductions in labor force. The outcome is compared with injury level requirements based on similar, safer industries to get a measure of the new system's success in reducing injuries. This approach provides a more detailed and comprehensive analysis of hazards and their effects than existing safety analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua
2014-11-01
Passive systems, structures and components (SSCs) will degrade over their operating life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics based models that account for the operating conditions in the plant; however, [1] does not include effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program) also currently under development at INL [3], as well as RELAP 5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7. • Identify the risk-significant passive components, their failure modes and anticipated rates of degradation. • Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress. • Assess aging effects in a dynamic simulation environment. 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, “Incorporating Ageing Effects into Probabilistic Risk Assessment - A Feasibility Study Utilizing Reliability Physics Models,” NUREG/CR-5632, USNRC, (2001). 2. T. ALDEMIR, “A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants,” Annals of Nuclear Energy, 52, 113-124, (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, “Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report,” INL/EXT-12-27351, (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924, (2012).
ERIC Educational Resources Information Center
Munoz, Marco A.
This study evaluated the Class Size Reduction (CSR) program in 34 elementary schools in Kentucky's Jefferson County Public Schools. The CSR program is a federal initiative to help elementary schools improve student learning by hiring additional teachers. Qualitative data were collected using unstructured interviews, site observations, and document…
DOT National Transportation Integrated Search
1977-07-01
The workshop focused on current methods of assessing the effectiveness of crime and vandalism reduction methods that are used in conventional urban mass transit systems, and on how they might be applied to new AGT systems. Conventional as well as nov...
Mathematical Modeling to Reduce Waste of Compounded Sterile Products in Hospital Pharmacies
Dobson, Gregory; Haas, Curtis E.; Tilson, David
2014-01-01
In recent years, many US hospitals embarked on “lean” projects to reduce waste. One advantage of the lean operational improvement methodology is that it relies on process observation by those engaged in the work and requires relatively little data. However, the thoughtful analysis of the data captured by operational systems allows the modeling of many potential process options. Such models permit the evaluation of likely waste reductions and financial savings before actual process changes are made. Thus the most promising options can be identified prospectively, change efforts targeted accordingly, and realistic targets set. This article provides one example of such a data-driven process redesign project focusing on waste reduction in an in-hospital pharmacy. A mathematical model of the medication prepared and delivered by the pharmacy is used to estimate the savings from several potential redesign options (rescheduling the start of production, scheduling multiple batches, or reordering production within a batch) as well as the impact of information system enhancements. The key finding is that mathematical modeling can indeed be a useful tool. In one hospital setting, it estimated that waste could be realistically reduced by around 50% by using several process changes and that the greatest benefit would be gained by rescheduling the start of production (for a single batch) away from the period when most order cancellations are made. PMID:25477580
Leijten, Fenna R M; van den Heuvel, Swenne G; Ybema, Jan Fekke; van der Beek, Allard J; Robroek, Suzan J W; Burdorf, Alex
2014-09-01
This study aimed to assess the influence of chronic health problems on work ability and productivity at work among older employees using different methodological approaches in the analysis of longitudinal studies. Data from employees, aged 45-64, of the longitudinal Study on Transitions in Employment, Ability and Motivation was used (N=8411). Using three annual online questionnaires, we assessed the presence of seven chronic health problems, work ability (scale 0-10), and productivity at work (scale 0-10). Three linear regression generalized estimating equations were used. The time-lag model analyzed the relation of health problems with work ability and productivity at work after one year; the autoregressive model adjusted for work ability and productivity in the preceding year; and the third model assessed the relation of incidence and recovery with changes in work ability and productivity at work within the same year. Workers with health problems had lower work ability at one-year follow-up than workers without these health problems, varying from a 2.0% reduction with diabetes mellitus to a 9.5% reduction with psychological health problems relative to the overall mean (time-lag). Work ability of persons with health problems decreased slightly more during one-year follow-up than that of persons without these health problems, ranging from 1.4% with circulatory to 5.9% with psychological health problems (autoregressive). Incidence related to larger decreases in work ability, from 0.6% with diabetes mellitus to 19.0% with psychological health problems, than recovery related to changes in work ability, from a 1.8% decrease with circulatory to an 8.5% increase with psychological health problems (incidence-recovery). Only workers with musculoskeletal and psychological health problems had lower productivity at work at one-year follow-up than workers without those health problems (1.2% and 5.6%, respectively, time-lag). All methodological approaches indicated that chronic health problems were associated with decreased work ability and, to a much lesser extent, lower productivity at work. The choice for a particular methodological approach considerably influenced the strength of the associations, with the incidence of health problems resulting in the largest decreases in work ability and productivity at work.
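For readers unfamiliar with the modelling approach, the sketch below shows a linear generalized estimating equation in the spirit of the time-lag model: a health problem observed in one year predicting work ability in the following year, with an exchangeable working correlation for repeated measures on the same worker. The variable names and simulated data are assumptions for illustration, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format panel: one row per worker per wave.
rng = np.random.default_rng(3)
n, waves = 300, 3
df = pd.DataFrame({
    "worker": np.repeat(np.arange(n), waves),
    "psych": rng.integers(0, 2, n * waves),                # health problem at t
    "work_ability_next": rng.normal(7.5, 1.5, n * waves),  # outcome at t+1
})

# Time-lag model: problem in year t predicts work ability in year t+1,
# with an exchangeable working correlation for repeated measures.
model = sm.GEE.from_formula("work_ability_next ~ psych",
                            groups="worker", data=df,
                            cov_struct=sm.cov_struct.Exchangeable(),
                            family=sm.families.Gaussian())
print(model.fit().summary())
```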
Generating CO(2)-credits through landfill in situ aeration.
Ritzkowski, M; Stegmann, R
2010-04-01
Landfills are some of the major anthropogenic sources of methane emissions worldwide. The installation and operation of gas extraction systems at many landfills in Europe and the US, often including technical installations for energy recovery, significantly reduced these emissions during the last decades. Residual landfill gas, however, is still continuously produced after energy recovery becomes economically unattractive, thus resulting in ongoing methane emissions for many years. Through landfill in situ aeration, these methane emissions can be largely avoided both during the aeration process and in the subsequent aftercare period. Based on model calculations and online monitoring data, the amount of avoided CO2-eq. can be determined. For an in situ aerated landfill in northern Germany, acting as a case study, 83-95% (depending on the kind and quality of top cover) of the greenhouse gas emission potential could be reduced under strictly controlled conditions. Recently the United Nations Framework Convention on Climate Change (UNFCCC) has approved a new methodology on the "Avoidance of landfill gas emissions by in situ aeration of landfills" (UNFCCC, 2009). Based on this methodology, landfill aeration projects might be considered for the generation of Certified Emission Reductions (CERs) in the course of CDM projects. This paper contributes towards an evaluation of the potential of landfill aeration for methane emissions reduction. Copyright 2009 Elsevier Ltd. All rights reserved.
Methodology for cost analysis of film-based and filmless portable chest systems
NASA Astrophysics Data System (ADS)
Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.
1996-05-01
Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost-analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
Risk analysis based on hazards interactions
NASA Astrophysics Data System (ADS)
Rossi, Lauro; Rudari, Roberto; Trasforini, Eva; De Angeli, Silvia; Becker, Joost
2017-04-01
Despite an increasing need for open, transparent, and credible multi-hazard risk assessment methods, models, and tools, the availability of comprehensive risk information needed to inform disaster risk reduction is limited, and the level of interaction across hazards is not systematically analysed. Risk assessment methodologies for different hazards often produce risk metrics that are not comparable. Hazard interactions (the consecutive occurrence of two or more different events) are generally neglected, resulting in strongly underestimated risk assessments in the most exposed areas. This study presents cases of interaction between different hazards, showing how subsidence can affect coastal and river flood risk (Jakarta and Bandung, Indonesia) or how flood risk is modified after a seismic event (Italy). The analysis of well-documented real study cases, based on a combination of Earth Observation and in-situ data, serves as a basis for the formalisation of a multi-hazard methodology, identifying gaps and research frontiers. Multi-hazard risk analysis is performed through the RASOR platform (Rapid Analysis and Spatialisation Of Risk). A scenario-driven query system allows users to simulate future scenarios based on existing and assumed conditions, to compare with historical scenarios, and to model multi-hazard risk both before and during an event (www.rasor.eu).
Optimization of enzymatic hydrolysis of guar gum using response surface methodology.
Mudgil, Deepak; Barak, Sheweta; Khatkar, B S
2014-08-01
Guar gum is a polysaccharide obtained from the endosperm portion of guar seed. Enzymatically hydrolyzed guar gum is low in viscosity and has several health benefits as dietary fiber. In this study, response surface methodology was used to determine the optimum hydrolysis conditions that give minimum viscosity of guar gum. A central composite design was employed to investigate the effects of pH (3-7), temperature (20-60 °C), reaction time (1-5 h) and cellulase concentration (0.25-1.25 mg/g) on viscosity during enzymatic hydrolysis of guar (Cyamopsis tetragonolobus) gum. A second order polynomial model was developed for viscosity using regression analysis. Results revealed the statistical significance of the model, as evidenced by the high value of the coefficient of determination (R² = 0.9472) and P < 0.05. Viscosity was primarily affected by cellulase concentration, pH and hydrolysis time. Maximum viscosity reduction was obtained when pH, temperature, hydrolysis time and cellulase concentration were 6, 50 °C, 4 h and 1.00 mg/g, respectively. The study is important in optimizing the enzymatic process for hydrolysis of guar gum as a potential source of soluble dietary fiber for human health benefits.
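A minimal sketch of the response-surface step, fitting a full second-order polynomial to design points and predicting the response at a candidate optimum, is given below. The design points and viscosity values are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Illustrative design runs: columns are pH, temperature (deg C), time (h),
# cellulase (mg/g); the response is viscosity (arbitrary units).
X = np.array([[3, 20, 1, 0.25], [7, 20, 1, 1.25], [3, 60, 1, 1.25],
              [7, 60, 5, 0.25], [5, 40, 3, 0.75], [5, 40, 3, 0.75],
              [6, 50, 4, 1.00], [4, 30, 2, 0.50], [5, 60, 5, 1.25],
              [7, 40, 3, 0.75]], dtype=float)
y = np.array([820, 450, 510, 300, 390, 405, 160, 640, 220, 350], dtype=float)

# Full second-order (quadratic) polynomial model, as used in RSM.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression()).fit(X, y)
print("R^2 on the design points:", round(rsm.score(X, y), 3))

# Predicted viscosity at a candidate optimum-like setting.
print(rsm.predict([[6, 50, 4, 1.00]]))
```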
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account angular position of the wheel is completely described. This new methodology is tested on a standard commercial five-spoke car wheel. Obtained results are qualitatively compared to those derived from the application of former methodology leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while contact forces’ estimation error remains at an acceptable level. PMID:29439427
Hays, Walter W.
1979-01-01
In accordance with the provisions of the Earthquake Hazards Reduction Act of 1977 (Public Law 95-124), the U.S. Geological Survey has developed comprehensive plans for producing information needed to assess seismic hazards and risk on a national scale in fiscal years 1980-84. These plans are based on a review of the needs of Federal Government agencies, State and local government agencies, engineers and scientists engaged in consulting and research, professional organizations and societies, model code groups, and others. The Earthquake Hazards Reduction Act provided an unprecedented opportunity for participation in a national program by representatives of State and local governments, business and industry, the design professions, and the research community. The USGS and the NSF (National Science Foundation) have major roles in the national program. The ultimate goal of the program is to reduce losses from earthquakes. Implementation of USGS research in the Earthquake Hazards Reduction Program requires the close coordination of responsibility between Federal, State and local governments. The projected research plan in national seismic hazards and risk for fiscal years 1980-84 will be accomplished by USGS and non-USGS scientists and engineers. The latter group will participate through grants and contracts. The research plan calls for (1) national maps based on existing methods, (2) improved definition of earthquake source zones nationwide, (3) development of improved methodology, (4) regional maps based on the improved methodology, and (5) post-earthquake investigations. Maps and reports designed to meet the needs, priorities, concerns, and recommendations of various user groups will be the products of this research and provide the technical basis for improved implementation.
Prophylactic antibiotics for burns patients: systematic review and meta-analysis
Avni, Tomer; Levcovich, Ariela; Ad-El, Dean D; Leibovici, Leonard
2010-01-01
Objective To assess the evidence for prophylactic treatment with systemic antibiotics in burns patients. Design Systematic review and meta-analysis of randomised or quasi-randomised controlled trials recruiting burns inpatients that compared antibiotic prophylaxis (systemic, non-absorbable, or topical) with placebo or no treatment. Data sources PubMed, Cochrane Library, LILACS, Embase, conference proceedings, and bibliographies. No language, date, or publication status restrictions were imposed. Review methods Two reviewers independently extracted data. The primary outcome was all cause mortality. Risk or rate ratios with 95% confidence intervals were pooled with a fixed effect model if no heterogeneity was present. Results 17 trials were included. Trials that assessed systemic antibiotic prophylaxis given for 4-14 days after admission showed a significant reduction in all cause mortality (risk ratio 0.54, 95% confidence interval 0.34 to 0.87, five trials). The corresponding number needed to treat was 8 (5 to 33), with a control event rate of 26%. Perioperative non-absorbable or topical antibiotics alone did not significantly affect mortality. There was a reduction in pneumonia with systemic prophylaxis and a reduction in wound infections with perioperative prophylaxis. Staphylococcus aureus infection or colonisation was reduced with anti-staphylococcal antibiotics. In three trials, resistance to the antibiotic used for prophylaxis significantly increased (rate ratio 2.84, 1.38 to 5.83). The overall methodological quality of the trials was poor. Conclusions Prophylaxis with systemic antibiotics has a beneficial effect in burns patients, but the methodological quality of the data is weak. As such prophylaxis is currently not recommended for patients with severe burns other than perioperatively, there is a need for randomised controlled trials to assess its use. PMID:20156911
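The fixed-effect pooling used for the mortality outcome can be sketched as inverse-variance weighting of log risk ratios, as below; the per-trial estimates are illustrative numbers, not the trial data from this review.

```python
import numpy as np

# Illustrative per-trial risk ratios with 95% confidence intervals.
rr = np.array([0.50, 0.62, 0.45, 0.70, 0.55])
ci_low = np.array([0.25, 0.35, 0.20, 0.40, 0.28])
ci_high = np.array([1.00, 1.10, 1.01, 1.22, 1.08])

# Work on the log scale; back out the standard errors from the CIs.
log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

# Inverse-variance fixed-effect pooling.
w = 1.0 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print("Pooled RR: %.2f (95%% CI %.2f to %.2f)" %
      (np.exp(pooled),
       np.exp(pooled - 1.96 * pooled_se),
       np.exp(pooled + 1.96 * pooled_se)))
```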
Child Mortality Estimation: Accelerated Progress in Reducing Global Child Mortality, 1990–2010
Hill, Kenneth; You, Danzhen; Inoue, Mie; Oestergaard, Mikkel Z.; Hill, Kenneth; Alkema, Leontine; Cousens, Simon; Croft, Trevor; Guillot, Michel; Pedersen, Jon; Walker, Neff; Wilmoth, John; Jones, Gareth
2012-01-01
Monitoring development indicators has become a central interest of international agencies and countries for tracking progress towards the Millennium Development Goals. In this review, which also provides an introduction to a collection of articles, we describe the methodology used by the United Nations Inter-agency Group for Child Mortality Estimation to track country-specific changes in the key indicator for Millennium Development Goal 4 (MDG 4), the decline of the under-five mortality rate (the probability of dying between birth and age five, also denoted in the literature as U5MR and 5q0). We review how relevant data from civil registration, sample registration, population censuses, and household surveys are compiled and assessed for United Nations member states, and how time series regression models are fitted to all points of acceptable quality to establish the trends in U5MR from which infant and neonatal mortality rates are generally derived. The application of this methodology indicates that, between 1990 and 2010, the global U5MR fell from 88 to 57 deaths per 1,000 live births, and the annual number of under-five deaths fell from 12.0 to 7.6 million. Although the annual rate of reduction in the U5MR accelerated from 1.9% for the period 1990–2000 to 2.5% for the period 2000–2010, it remains well below the 4.4% annual rate of reduction required to achieve the MDG 4 goal of a two-thirds reduction in U5MR from its 1990 value by 2015. Thus, despite progress in reducing child mortality worldwide, and an encouraging increase in the pace of decline over the last two decades, MDG 4 will not be met without greatly increasing efforts to reduce child deaths. PMID:22952441
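The annual rates of reduction quoted here follow the usual exponential-decline convention, which can be checked directly against the figures in the abstract:

```python
import math

def annual_rate_of_reduction(rate_start, rate_end, years):
    """Annual rate of reduction (%) assuming exponential decline."""
    return 100.0 * math.log(rate_start / rate_end) / years

# Figures from the abstract: U5MR fell from 88 to 57 per 1,000 between 1990 and 2010.
print(round(annual_rate_of_reduction(88, 57, 20), 1))      # ~2.2% per year overall

# Target: a two-thirds reduction from the 1990 level by 2015 (88 -> 29.3).
print(round(annual_rate_of_reduction(88, 88 / 3, 25), 1))  # ~4.4% per year required
```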
Jensen, Henning Tarp; Keogh-Brown, Marcus R; Smith, Richard D; Chalabi, Zaid; Dangour, Alan D; Davies, Mike; Edwards, Phil; Garnett, Tara; Givoni, Moshe; Griffiths, Ulla; Hamilton, Ian; Jarrett, James; Roberts, Ian; Wilkinson, Paul; Woodcock, James; Haines, Andy
We employ a single-country dynamically-recursive Computable General Equilibrium model to make health-focussed macroeconomic assessments of three contingent UK Greenhouse Gas (GHG) mitigation strategies, designed to achieve 2030 emission targets as suggested by the UK Committee on Climate Change. In contrast to previous assessment studies, our main focus is on health co-benefits additional to those from reduced local air pollution. We employ a conservative cost-effectiveness methodology with a zero net cost threshold. Our urban transport strategy (with cleaner vehicles and increased active travel) brings important health co-benefits and is likely to be strongly cost-effective; our food and agriculture strategy (based on abatement technologies and reduction in livestock production) brings worthwhile health co-benefits, but is unlikely to eliminate net costs unless new technological measures are included; our household energy efficiency strategy is likely to breakeven only over the long term after the investment programme has ceased (beyond our 20 year time horizon). We conclude that UK policy makers will, most likely, have to adopt elements which involve initial net societal costs in order to achieve future emission targets and longer-term benefits from GHG reduction. Cost-effectiveness of GHG strategies is likely to require technological mitigation interventions and/or demand-constraining interventions with important health co-benefits and other efficiency-enhancing policies that promote internalization of externalities. Health co-benefits can play a crucial role in bringing down net costs, but our results also suggest the need for adopting holistic assessment methodologies which give proper consideration to welfare-improving health co-benefits with potentially negative economic repercussions (such as increased longevity).
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of machining parameters on the energy consumption has been found using the analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has potential to be adopted by the industry for minimum power consumption of machine tools.
Vibration study of a vehicle suspension assembly with the finite element method
NASA Astrophysics Data System (ADS)
Cătălin Marinescu, Gabriel; Castravete, Ştefan-Cristian; Dumitru, Nicolae
2017-10-01
The main steps of the present work represent a methodology for analysing various vibration effects on the suspension mechanical parts of a vehicle. A McPherson type suspension from an existing vehicle was created using CAD software. Using the CAD model as input, a finite element model of the suspension assembly was developed. Abaqus finite element analysis software was used to pre-process, solve, and post-process the results. Geometric nonlinearities are included in the model. Severe sources of nonlinearity such as friction and contact are also included in the model. The McPherson spring is modelled as a linear spring. The analysis includes several steps: preload, modal analysis, the reduction of the model to 200 generalized coordinates, a deterministic external excitation, and a random excitation that comes from different types of roads. The vibration data used as input for the simulation were previously obtained by experimental means. The mathematical expressions used for the simulation are also presented in the paper.
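A small illustration of the model-reduction step mentioned above (reduction to a limited set of generalized coordinates) is modal truncation of a finite element model: solve the generalized eigenproblem and project the mass and stiffness matrices onto the lowest modes. The matrices below are synthetic stand-ins, not the suspension model, and the retained mode count is arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

# Small stand-in for assembled FE mass and stiffness matrices (n DOF).
n = 50
rng = np.random.default_rng(4)
A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)                       # symmetric positive definite mass
K = np.diag(np.full(n, 2.0e4)) - np.diag(np.full(n - 1, 1.0e4), 1) \
    - np.diag(np.full(n - 1, 1.0e4), -1)          # chain-like stiffness

# Generalized eigenproblem K phi = omega^2 M phi; keep the lowest r modes.
r = 10
omega2, Phi = eigh(K, M)
Phi_r = Phi[:, :r]

# Project onto the retained modes: r generalized coordinates instead of n DOF.
M_r = Phi_r.T @ M @ Phi_r
K_r = Phi_r.T @ K @ Phi_r
print(M_r.shape, K_r.shape)                # (10, 10) reduced system matrices
print(np.sqrt(omega2[:3]) / (2 * np.pi))   # first natural frequencies (Hz)
```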
NASA Astrophysics Data System (ADS)
Romo, David Ricardo
Foreign Object Debris/Damage (FOD) has been an issue for military and commercial aircraft manufacturers since the early ages of aviation and aerospace. Currently, aerospace is growing rapidly and the chances of FOD presence are growing as well. One of the principal causes in manufacturing is human error. The cost associated with human error in commercial and military aircraft accounts for approximately 4 billion dollars per year. This problem is currently addressed with prevention programs, elimination techniques, designation of FOD areas, controlled access, restrictions on personal items entering designated areas, tool accountability, and the use of technology such as Radio Frequency Identification (RFID) tags. None of these efforts has shown a significant reduction in occurrences in manufacturing processes. On the contrary, a repetitive pattern of occurrence is present, and the associated cost has not declined significantly. In order to address the problem, this thesis proposes a new approach using statistical analysis. The effort of this thesis is to create a predictive model using historical categorical data from an aircraft manufacturer, focusing only on human error causes. Contingency tables, the natural logarithm of the odds, and a probability transformation are used to provide the predicted probabilities for each aircraft. A case study is presented in this thesis to illustrate the applied methodology. As a result, this approach is able to predict possible FOD outcomes for each workstation/area and to provide monthly predictions per workstation. This thesis is intended to be the starting point of statistical data analysis regarding FOD in human factors. The purpose of this thesis is to identify the areas where human error is the primary cause of FOD occurrence in order to design and implement accurate solutions. The advantages of the proposed methodology range from reductions in production cost, quality issues, and repair cost to shorter assembly process times. Finally, a more reliable process is achieved, and the proposed methodology may be used for other aircraft.
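A minimal sketch of the contingency-table and log-odds step described above is given below; the workstation names and counts are hypothetical, and the actual thesis methodology may differ in detail.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly counts of FOD / no-FOD findings per workstation.
counts = pd.DataFrame({"fod": [12, 4, 25, 7],
                       "no_fod": [188, 96, 175, 293]},
                      index=["wing_join", "fuselage", "final_assembly", "paint"])

# Log-odds of an FOD occurrence per workstation, then back to probabilities.
counts["odds"] = counts["fod"] / counts["no_fod"]
counts["log_odds"] = np.log(counts["odds"])
counts["p_fod"] = counts["odds"] / (1 + counts["odds"])   # logistic transform

print(counts[["log_odds", "p_fod"]].round(3))
```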
Empirical Models for Quantification of Machining Damage in Composite Materials
NASA Astrophysics Data System (ADS)
Machado, Carla Maria Moreira
The tremendous growth which occurs at a global level of demand and use of composite materials brings with the need to develop new manufacturing tools and methodologies. One of the major uses of such materials, in particular plastics reinforced with carbon fibres, is their application in structural components for the aircraft industry with low weight and high stiffness. These components are produced in near-final form but the so-called secondary processes such as machining are often unavoidable. In this type of industry, drilling is the most frequent operation due to the need to obtain holes for riveting and fastening bolt assembly of structures. However, the problems arising from drilling, particularly the damage caused during the operation, may lead to rejection of components because it is an origin of lack of resistance. The delamination is the most important damage, as it causes a decrease of the mechanical properties of the components of an assembly and, irrefutably, a reduction of its reliability in use. It can also raise problems with regard to the tolerances of the assemblies. Moreover, the high speed machining is increasingly recognized to be a manufacturing technology that promotes productivity by reducing production times. However, the investigation whose focus is in high speed drilling is quite limited, and few studies on this subject have been found in the literature review. Thus, this thesis aims to investigate the effects of process variables in high speed drilling on the damage produced. The empirical models that relate the delamination damage, the thrust force and the torque with the process parameters were established using Response Surface Methodology. The process parameters considered as input factors were the spindle speed, the feed per tooth, the tool diameter and the workpiece thickness. A new method for fixing the workpiece was developed and tested. The results proved to be very promising since in the same cutting conditions and with this new methodology, it was observed a significant reduction of the delamination damage. Finally, it has been found that is possible to use high speed drilling, using conventional twist drills, to produce holes with good quality, minimizing the damage.
Mull, Hillary J; Chen, Qi; O'Brien, William J; Shwartz, Michael; Borzecki, Ann M; Hanchate, Amresh; Rosen, Amy K
2013-07-01
The Centers for Medicare and Medicaid Services' (CMS) all-cause readmission measure and the 3M Health Information System Division Potentially Preventable Readmissions (PPR) measure are both used for public reporting. These 2 methods have not been directly compared in terms of how they identify high-performing and low-performing hospitals. To examine how consistently the CMS and PPR methods identify performance outliers, and explore how the PPR preventability component impacts hospital readmission rates, public reporting on CMS' Hospital Compare website, and pay-for-performance under CMS' Hospital Readmission Reduction Program for 3 conditions (acute myocardial infarction, heart failure, and pneumonia). We applied the CMS all-cause model and the PPR software to VA administrative data to calculate 30-day observed FY08-10 VA hospital readmission rates and hospital profiles. We then tested the effect of preventability on hospital readmission rates and outlier identification for reporting and pay-for-performance by replacing the dependent variable in the CMS all-cause model (Yes/No readmission) with the dichotomous PPR outcome (Yes/No preventable readmission). The CMS and PPR methods had moderate correlations in readmission rates for each condition. After controlling for all methodological differences but preventability, correlations increased to >90%. The assessment of preventability yielded different outlier results for public reporting in 7% of hospitals; for 30% of hospitals there would be an impact on Hospital Readmission Reduction Program reimbursement rates. Despite uncertainty over which readmission measure is superior in evaluating hospital performance, we confirmed that there are differences in CMS-generated and PPR-generated hospital profiles for reporting and pay-for-performance, because of methodological differences and the PPR's preventability component.
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
TECHNICAL REPORT, U.S. Army TRADOC Analysis Center-Monterey, January 2017: K-Means Subject Matter Expert Refined Topic Model Methodology, Topic Model Estimation via K-Means. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number W9124N-15-P-0022.
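Since only fragments of the report are reproduced here, the sketch below shows a generic version of topic estimation via K-means: cluster TF-IDF document vectors and surface the top terms of each cluster as candidate topics for subject matter expert refinement. The documents and parameters are illustrative, not the report's data or exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["radar threshold modeling propagation",
        "propagation model radar array",
        "soldier training readiness exercise",
        "training exercise readiness evaluation",
        "logistics supply convoy scheduling",
        "convoy scheduling supply route"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Top terms per cluster act as candidate "topics" for SME review.
terms = np.array(vec.get_feature_names_out())
for k, centre in enumerate(km.cluster_centers_):
    top = terms[np.argsort(centre)[::-1][:3]]
    print(f"topic {k}: {', '.join(top)}")
```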
NASA Astrophysics Data System (ADS)
Kuo, Ching-Wen
2010-06-01
Modern military aircraft jet engines are designed with variable geometry nozzles to provide optimum thrust in different operating conditions within the flight envelope. However, acoustic measurements for such nozzles are scarce, due to the cost involved in making full-scale measurements and the lack of details about the exact geometry of these nozzles. Thus the present effort at The Pennsylvania State University and the NASA Glenn Research Center, in partnership with GE Aviation, aims to study and characterize the acoustic field produced by supersonic jets issuing from converging-diverging military style nozzles. An equally important objective is to develop a scaling methodology for data obtained from small- and moderate-scale experiments which demonstrates that measured noise levels are independent of jet size. The experimental results presented in this thesis show reasonable agreement between small-scale and moderate-scale jet acoustic data, as well as between heated jets and heat-simulated ones. As the scaling methodology is validated, it will be extended to using acoustic data measured with small-scale supersonic model jets for the prediction of the most important components of full-scale engine noise. When comparing the acoustic spectra measured with a microphone array set at different radial locations, the characteristics of the jet noise source distribution may induce subtle inaccuracies, depending on the conditions of jet operation. A close look is taken at the details of the noise generation region in order to better understand the mismatch between spectra measured at various radial locations in the acoustic field. A processing methodology was developed to correct for the effect of the noise source distribution and efficiently compare near-field and far-field spectra with unprecedented accuracy. This technique then demonstrates that the noise levels measured in the physically restricted space of an anechoic chamber can be appropriately extrapolated to represent the expected noise levels at different noise monitoring locations of practical interest. With the emergence of more powerful fighter aircraft, supersonic jet noise reduction devices are being intensely researched. Small-scale measurements are a crucial step in evaluating the potential of noise reduction concepts at an early stage in the design process. With this in mind, the present thesis provides an acoustic assessment methodology for small-scale military-style nozzles with chevrons. Comparisons are made between the present measurements and those made by NASA at moderate scale. The effect of chevrons on supersonic jets was investigated, highlighting the crucial role of the jet operating conditions in the effects of chevrons on the jet flow and the subsequent acoustic benefits. A small-scale heat-simulated jet is investigated in the over-expanded condition and shows no substantial noise reduction from the chevrons. This is contrary to moderate-scale measurements. The discrepancy is attributed to a Reynolds number low enough to sustain an annular laminar boundary layer in the nozzle that separates in the over-expanded flow condition. These results are important in assessing the limitations of small-scale measurements for this particular jet noise reduction method.
Lastly, to successfully present the results from the acoustic measurements of small-scale jets with high quality, a newly developed PSU free-field response was empirically derived to match the specific orientation and grid cap geometry of the microphones. Application to measured data gives encouraging results validating the capability of the method to produce superior accuracy in measurements even at the highest response frequencies of the microphones.
Chen, Yung-Yue
2018-05-08
Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H2 estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.
Mitigating energy loss on distribution lines through the allocation of reactors
NASA Astrophysics Data System (ADS)
Miranda, T. M.; Romero, F.; Meffe, A.; Castilho Neto, J.; Abe, L. F. T.; Corradi, F. E.
2018-03-01
This paper presents a methodology for the automatic allocation of reactors on medium voltage distribution lines to reduce energy loss. In Brazil, some feeders are distinguished by their long lengths and very low load, which results in a high influence of the line capacitance on the circuit's performance, requiring compensation through the installation of reactors. The automatic allocation is accomplished using an optimization meta-heuristic called the Global Neighbourhood Algorithm. Given a set of reactor models and a circuit, it outputs an optimal solution in terms of reduction of energy loss. The algorithm is also able to verify that the voltage limits determined by the user are not being violated, besides checking for energy quality. The methodology was implemented in a software tool, which can also show the allocation graphically. A simulation with four real feeders is presented in the paper. The obtained results show significant reductions in energy loss, ranging from 50.56% in the worst case to 93.10% in the best case.
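The abstract does not detail the Global Neighbourhood Algorithm, so the sketch below is only a generic neighbourhood-search metaheuristic for the discrete allocation problem; the loss function is a placeholder where a real implementation would run a power-flow study of the feeder, and the reactor ratings and bus count are assumptions.

```python
import random

REACTOR_MODELS = [0, 100, 250, 500]      # candidate kvar ratings; 0 means "no reactor"
N_BUSES = 20

def energy_loss(allocation):
    """Placeholder loss model: a real implementation would evaluate the feeder
    power flow for this allocation and return the annual energy loss."""
    ideal = 180
    return sum((kvar - ideal) ** 2 for kvar in allocation) / 1e4

def neighbourhood_search(iterations=5000, seed=0):
    rng = random.Random(seed)
    best = [rng.choice(REACTOR_MODELS) for _ in range(N_BUSES)]
    best_loss = energy_loss(best)
    for _ in range(iterations):
        cand = best[:]
        cand[rng.randrange(N_BUSES)] = rng.choice(REACTOR_MODELS)  # local move
        loss = energy_loss(cand)
        if loss < best_loss:                 # keep improving neighbours
            best, best_loss = cand, loss
    return best, best_loss

allocation, loss = neighbourhood_search()
print("best allocation:", allocation, "loss:", round(loss, 2))
```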
ERIC Educational Resources Information Center
Cole, Elaine J.; Fieselman, Laura
2013-01-01
Purpose: The purpose of this paper is to design a community-based social marketing (CBSM) campaign to foster sustainable behavior change in paper reduction, commingled recycling, and purchasing environmentally preferred products (EPP) with faculty and staff at Pacific University Oregon. Design/methodology/approach: A CBSM campaign was developed…
Meta-analysis of medical intervention for normal tension glaucoma.
Cheng, Jin-Wei; Cai, Ji-Ping; Wei, Rui-Li
2009-07-01
To evaluate the intraocular pressure (IOP) reduction achieved by the most frequently prescribed antiglaucoma drugs in patients with normal tension glaucoma (NTG). Systematic review and meta-analysis. Fifteen randomized clinical trials reported 25 arms for peak IOP reduction, 16 arms for trough IOP reduction, and 13 arms for diurnal curve IOP reduction. Pertinent publications were identified through systematic searches of PubMed, EMBASE, and the Cochrane Controlled Trials Register. The patients had to be diagnosed as having NTG. Methodological quality was assessed by the Delphi list on a scale from 0 to 18. The pooled 1-month IOP-lowering effects were calculated using the 2-step DerSimonian and Laird estimate method of the random effects model. Absolute and relative reductions in IOP from baseline for peak and trough moments. Quality scores of included studies were generally high, with a mean quality score of 12.7 (range, 9-16). Relative IOP reductions were peak, 15% (12%-18%), and trough, 18% (8%-27%) for timolol; peak, 14% (8%-19%), and trough, 12% (-7% to 31%) for dorzolamide; peak, 24% (17%-31%), and trough, 11% (7%-14%) for brimonidine; peak, 20% (17%-24%), and trough, 20% (18%-23%) for latanoprost; peak, 21% (16%-25%), and trough, 18% (14%-22%) for bimatoprost. The differences in absolute IOP reductions between prostaglandin analogues and timolol varied from 0.9 to 1.0 mmHg at peak and -0.1 to 0.2 mmHg at trough. Latanoprost, bimatoprost, and timolol are the most effective IOP-lowering agents in patients with NTG.
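The random-effects pooling named here (the DerSimonian and Laird estimator) can be sketched as below; the effect sizes and variances are illustrative numbers, not the per-arm IOP data from the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird estimator."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Illustrative per-arm relative IOP reductions (%) and their variances.
print(dersimonian_laird([15, 20, 24, 21, 14], [4.0, 2.5, 9.0, 3.0, 6.0]))
```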
Development of Methodology for Programming Autonomous Agents
NASA Technical Reports Server (NTRS)
Erol, Kutluhan; Levy, Renato; Lang, Lun
2004-01-01
A brief report discusses the rationale for, and the development of, a methodology for generating computer code for autonomous-agent-based systems. The methodology is characterized as enabling an increase in the reusability of the generated code among and within such systems, thereby making it possible to reduce the time and cost of development of the systems. The methodology is also characterized as enabling a reduction in the incidence of those software errors that are attributable to the human failure to anticipate distributed behaviors caused by the software. A major conceptual problem said to be addressed in the development of the methodology was that of how to efficiently describe the interfaces between several layers of agent composition by use of a language that is both familiar to engineers and descriptive enough to describe such interfaces unambiguously.
Time Savings and Surgery Task Load Reduction in Open Intraperitoneal Onlay Mesh Fixation Procedure.
Roy, Sanjoy; Hammond, Jeffrey; Panish, Jessica; Shnoda, Pullen; Savidge, Sandy; Wilson, Mark
2015-01-01
This study assessed the reduction in surgeon stress associated with savings in procedure time for mechanical fixation of an intraperitoneal onlay mesh (IPOM) compared to traditional suture fixation in open ventral hernia repair. Nine general surgeons performed 36 open IPOM fixation procedures in a porcine model. Each surgeon conducted two mechanical (using ETHICON SECURESTRAP™ Open) and two suture fixation procedures. Fixation time was measured using a stopwatch, and related surgeon stress was assessed using the validated SURG-TLX questionnaire. T-tests were used to compare between-group differences, and a two-sided 95% confidence interval for the difference in stress levels was established using nonparametric methodology. The mechanical fixation group demonstrated an 89.1% mean reduction in fixation time compared to the suture group (p < 0.00001). Surgeon stress scores measured using SURG-TLX were 55.5% lower in the mechanical than in the suture fixation group (p < 0.001). Scores in five of the six sources of stress were significantly lower for mechanical fixation. Mechanical fixation with ETHICON SECURESTRAP™ Open demonstrated a significant reduction in fixation time and surgeon stress, which may translate into improved operating efficiency, improved performance, improved surgeon quality of life, and reduced overall costs of the procedure.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Automatic Inference of Cryptographic Key Length Based on Analysis of Proof Tightness
2016-06-01
within an attack tree structure, then expand attack tree methodology to include cryptographic reductions. We then provide the algorithms for...maintaining and automatically reasoning about these expanded attack trees. We provide a software tool that utilizes machine-readable proof and attack metadata...and the attack tree methodology to provide rapid and precise answers regarding security parameters and effective security. This eliminates the need
A model and plan for a longitudinal study of community response to aircraft noise
NASA Technical Reports Server (NTRS)
Gunn, W. J.; Patterson, H. P.; Cornog, J.; Klaus, P.; Connor, W. K.
1975-01-01
A new approach is discussed for the study of the effects of aircraft noise on people who live near large airports. The approach was an outgrowth of a planned study of the reactions of individuals exposed to changing aircraft noise conditions around the Dallas-Ft. Worth (DFW) regional airport. The rationale, concepts, and methods employed in the study are discussed. A critical review of major past studies traces the history of community response research in an effort to identify strengths and limitations of the various approaches and methodologies. A stress-reduction model is presented to provide a framework for studying the dynamics of human response to a changing noise environment. The development of the survey instrument is detailed, and preliminary results of pretest data are discussed.
Life-Extending Control for Aircraft Engines Studied
NASA Technical Reports Server (NTRS)
Guo, Te-Huei
2002-01-01
Current aircraft engine controllers are designed and operated to provide both performance and stability margins. However, the standard method of operation results in significant wear and tear on the engine and negatively affects the on-wing life--the time between cycles when the engine must be physically removed from the aircraft for maintenance. The NASA Glenn Research Center and its industrial and academic partners have been working together toward a new control concept that will include engine life usage as part of the control function. The resulting controller will be able to significantly extend the engine's on-wing life with little or no impact on engine performance and operability. The new controller design will utilize damage models to estimate and mitigate the rate and overall accumulation of damage to critical engine parts. The control methods will also provide a means to assess tradeoffs between performance and structural durability on the basis of mission requirements and remaining engine life. Two life-extending control methodologies were studied to reduce the overall life-cycle cost of aircraft engines. The first methodology is to modify the baseline control logic to reduce the thermomechanical fatigue (TMF) damage of cooled stators during acceleration. To accomplish this, an innovative algorithm limits the low-speed rotor acceleration command when the engine has reached a threshold close to the requested thrust. This algorithm allows a significant reduction in TMF damage with only a very small increase in the rise time to reach the commanded rotor speed. The second methodology is to reduce stress rupture/creep damage to turbine blades and uncooled stators by incorporating an engine damage model into the flight mission. Overall operation cost is reduced by an optimization among the flight time, fuel consumption, and component damages. Recent efforts have focused on applying life-extending control technology to an existing commercial turbine engine, and doing so without modifying the hardware or adding sensors. This approach makes it possible to retrofit existing engines with life-extending control technology by changing only the control software in the full-authority digital engine controller (FADEC). The significant results include demonstrating a 20- to 30-percent reduction in TMF damage to the hot section by developing and implementing smart acceleration logic during takeoff. The tradeoff is an increase, from 5.0 to 5.2 sec, in the time required to reach maximum power from ground idle. On a typical flight profile of a cruise at Mach 0.8 at an altitude of 41,000 ft, and cruise time of 104 min, the optimized system showed that a reduction in cruise speed from Mach 0.8 to 0.79 can achieve an estimated 25- to 35-percent creep/rupture damage reduction in the engine's hot section and a fuel savings of 2.1 percent. The tradeoff is an increase in flight time of 1.3 percent (1.4 min).
A Model of Reduced Kinetics for Alkane Oxidation Using Constituents and Species for N-Heptane
NASA Technical Reports Server (NTRS)
Harstad, Kenneth G.; Bellan, Josette
2011-01-01
The reduction of elementary or skeletal oxidation kinetics to a subgroup of tractable reactions for inclusion in turbulent combustion codes has been the subject of numerous studies. The skeletal mechanism is obtained from the elementary mechanism by removing from it reactions that are considered negligible for the intent of the specific study considered. As of now, there are many chemical reduction methodologies. A methodology for deriving a reduced kinetic mechanism for alkane oxidation is described and applied to n-heptane. The model is based on partitioning the species of the skeletal kinetic mechanism into lights, defined as those having a carbon number smaller than 3, and heavies, which are the complement of the species ensemble. For modeling purposes, the heavy species are mathematically decomposed into constituents, which are similar but not identical to groups in the group additivity theory. From analysis of the LLNL (Lawrence Livermore National Laboratory) skeletal mechanism in conjunction with CHEMKIN II, it is shown that a similarity variable can be formed such that the appropriately non-dimensionalized global constituent molar density exhibits a self-similar behavior over a very wide range of equivalence ratios, initial pressures and initial temperatures that is of interest for predicting n-heptane oxidation. Furthermore, the oxygen and water molar densities are shown to display a quasi-linear behavior with respect to the similarity variable. The light species ensemble is partitioned into quasi-steady and unsteady species. The reduced model is based on concepts consistent with those of Large Eddy Simulation (LES) in which functional forms are used to replace the small scales eliminated through filtering of the governing equations; in LES, these small scales are unimportant as far as the overwhelming part of dynamic energy is concerned. Here, the scales thought unimportant for recovering the thermodynamic energy are removed. The concept is tested by using tabular information from the LLNL skeletal mechanism in conjunction with CHEMKIN II utilized as surrogate ideal functions replacing the necessary functional forms. The test reveals that the similarity concept is indeed justified and that the combustion temperature is well predicted, but that the ignition time is over-predicted, a fact traced to neglecting a detailed description of the processes leading to the heavies chemical decomposition. To palliate this deficiency, functional modeling is incorporated into this conceptual reduction in addition to the modeling of the evolution of the global constituent molar density, the enthalpy evolution of the heavies, the contribution to the reaction rate of the unsteady lights from other light species and from the heavies, the molar density evolution of oxygen and water, and the mole fractions of the quasi-steady light species. The model is compact in that there are only nine species-related progress variables. Results are presented showing the performance of the model for predicting the temperature and species evolution. The model reproduces the ignition time over a wide range of equivalence ratios, initial pressure, and initial temperature.
How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!
NASA Astrophysics Data System (ADS)
Hassan Saddagh, Mohammad; Javad Abedini, Mohammad
2010-05-01
Flood forecasting and early warning, as a non-structural measure for flood control, is often considered to be the most effective and suitable alternative to mitigate the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic and/or black-box models, should secure the accuracy of flood values and timing, especially for long lead times. The application of the artificial neural network (ANN) in flood forecasting has received extensive attention in recent years due to its capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from executing a plain ANN as a simulation model demonstrate a dramatic reduction in performance indices as lead time increases. This paper is intended to monitor the performance indices as they relate to flood forecasting and early warning using two different methodologies. While the first method employs a multilayer neural network trained using a back-propagation scheme to forecast the output hydrograph of a hypothetical river for various forecast lead times up to 6.0 hr, the second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, so that the performance indices can be monitored and assessed relative to the ANN alone as lead time increases. Results presented in both graphical and tabular format indicate the superiority of MIKE11 coupled with an ANN updating kernel over the ANN used as a simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors grow rapidly for longer lead times. The second methodology provides more accurate and reliable results for longer forecast lead times.
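The plain-ANN forecasting setup described above can be sketched with a small multilayer perceptron trained on lagged discharge values for several lead times; the synthetic hydrograph, the hourly sampling, the lag count, and the network size below are illustrative assumptions rather than the configuration used in the study.

```python
# Illustrative sketch: a plain MLP forecaster whose skill degrades with lead time.
# The synthetic hydrograph, lag structure, and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic hourly discharge record: baseflow plus a few flood waves plus noise.
t = np.arange(2000)
flow = 50.0 + sum(300.0 * np.exp(-0.5 * ((t - p) / 30.0) ** 2) for p in (300, 900, 1500))
flow = flow + rng.normal(0.0, 5.0, t.size)

def lagged_dataset(series, n_lags, lead):
    """Features = the last n_lags hourly values; target = flow 'lead' hours ahead."""
    X, y = [], []
    for i in range(n_lags, series.size - lead):
        X.append(series[i - n_lags:i])
        y.append(series[i + lead])
    return np.array(X), np.array(y)

n_lags, split = 6, 1400
for lead in (1, 3, 6):                      # forecast lead times in hours
    X, y = lagged_dataset(flow, n_lags, lead)
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    model.fit(X[:split], y[:split])
    rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
    print(f"lead time {lead} h: test RMSE = {rmse:.1f} m^3/s")
```

Run on this synthetic record, the test error grows with the forecast lead time, which is the qualitative behavior the abstract reports for the plain ANN.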
Optimal tyre usage for a Formula One car
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Limebeer, D. J. N.
2016-10-01
Variations in track temperature, surface conditions and layout have led tyre manufacturers to produce a range of rubber compounds for race events. Each compound has unique friction and durability characteristics. Efficient tyre management over a full race distance is a crucial component of a competitive race strategy. A minimum lap time optimal control calculation and a thermodynamic tyre wear model are used to establish optimal tyre warming and tyre usage strategies. Lap time sensitivities demonstrate that relatively small changes in control strategy can lead to significant reductions in the associated wear metrics. The illustrated methodology shows how vehicle setup parameters can be optimised for minimum tyre usage.
Hwang, Jeong-Ha; Han, Dong-Woo
2015-01-01
Economic and rapid reduction of sludge water content in sewage wastewater is difficult and requires special advanced treatment technologies. This study focused on optimizing and modeling decreased sludge water content (Y1) and removing turbidity (Y2) with magnetic iron oxide nanoparticles (Fe3O4, MION) using a central composite design (CCD) and response surface methodology (RSM). CCD and RSM were applied to evaluate and optimize the interactive effects of mixing time (X1) and MION concentration (X2) on chemical flocculent performance. The results show that the optimum conditions were 14.1 min and 22.1 mg L⁻¹ for response Y1 and 16.8 min and 8.85 mg L⁻¹ for response Y2, respectively. The two responses were obtained experimentally under this optimal scheme and fit the model predictions well (R² = 97.2% for Y1 and R² = 96.9% for Y2). A 90.8% decrease in sludge water content and turbidity removal of 29.4% were demonstrated. These results confirm that the statistical models were reliable, and that the magnetic flocculation conditions for decreasing sludge water content and removing turbidity from sewage wastewater were appropriate. The results reveal that MION are efficient for rapid separation and are a suitable alternative for sedimenting sludge during the wastewater treatment process.
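The CCD/RSM workflow can be illustrated by fitting a full quadratic response surface in the two factors (mixing time X1 and MION dose X2) and locating its constrained optimum; the design levels, factor ranges, and response values below are made-up placeholders, not the study's measurements.

```python
# Minimal RSM sketch: fit a quadratic surface in X1 (mixing time, min) and
# X2 (MION dose, mg/L) and find its optimum. The response values are
# hypothetical placeholders, not the experimental data.
import numpy as np
from scipy.optimize import minimize

# A face-centred central composite design in coded units (-1, 0, +1).
coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1, 0], [1, 0], [0, -1], [0, 1],
                  [0, 0], [0, 0], [0, 0]])
X1 = 15.0 + 5.0 * coded[:, 0]        # mixing time, 10-20 min (assumed range)
X2 = 15.0 + 10.0 * coded[:, 1]       # MION dose, 5-25 mg/L (assumed range)
y = np.array([70, 78, 75, 88, 74, 85, 72, 83, 90, 91, 89], float)  # fake response

def quad_terms(x1, x2):
    """Design matrix of the full quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(quad_terms(X1, X2), y, rcond=None)

def predict(x):
    return (quad_terms(np.atleast_1d(x[0]), np.atleast_1d(x[1])) @ beta)[0]

res = minimize(lambda x: -predict(x), x0=[15.0, 15.0],
               bounds=[(10.0, 20.0), (5.0, 25.0)])
print("fitted coefficients:", np.round(beta, 3))
print(f"optimum at X1 = {res.x[0]:.1f} min, X2 = {res.x[1]:.1f} mg/L, "
      f"predicted response = {-res.fun:.1f}")
```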
NASA Astrophysics Data System (ADS)
Rao, S.; Dentener, F. J.; Klimont, Z.; Riahi, K.
2011-12-01
Outdoor air pollution is increasingly recognized as a significant contributor to global health outcomes. This has led to the implementation of a number of air quality policies worldwide, with total air pollution control costs in 2005 estimated at US$195 billion. More than 80% of the world's population is still found to be exposed to PM2.5 concentrations exceeding WHO air quality guidelines, with health impacts resulting from these exposures estimated at around 2-5% of the global disease burden. Key questions to answer are 1) How will pollutant emissions evolve in the future given developments in the energy system, and how will energy and environmental policies influence such emission trends? 2) What implications will this have for resulting exposures and related health outcomes? In order to answer these questions, varying levels of stringency of air quality legislation are analyzed in combination with policies on universal access to clean cooking fuels and limiting global temperature change to 2°C in 2100. Bottom-up methodologies using energy emissions modeling are used to derive sector-based pollutant emission trajectories until 2030. Emissions are spatially downscaled and used in combination with a global transport chemistry model to derive ambient concentrations of PM2.5. Health impacts of these exposures are further estimated consistent with WHO data and methodology. The results indicate that currently planned air quality legislation combined with rising energy demand will be insufficient in controlling future emissions growth in developing countries. In order to achieve significant reductions in pollutant emissions of the order of more than 50% from 2005 levels and reduce exposures to levels consistent with WHO standards, it will be necessary to increase the stringency of such legislation and combine it with policies on energy access and climate change. Combined policies also result in reductions in air pollution control costs as compared to those associated with current legislation. Health-related co-benefits of combined policies are also found to be large, especially in developing countries: a reduction of more than 50% in terms of pollution-related mortality impacts as compared to today.
NASA Astrophysics Data System (ADS)
Brennan-Tonetta, Margaret
This dissertation seeks to provide key information and a decision support tool that states can use to support long-term goals of fossil fuel displacement and greenhouse gas reductions. The research yields three outcomes: (1) A methodology that allows for a comprehensive and consistent inventory and assessment of bioenergy feedstocks in terms of type, quantity, and energy potential. Development of a standardized methodology for consistent inventorying of biomass resources fosters research and business development of promising technologies that are compatible with the state's biomass resource base. (2) A unique interactive decision support tool that allows for systematic bioenergy analysis and evaluation of policy alternatives through the generation of biomass inventory and energy potential data for a wide variety of feedstocks and applicable technologies, using New Jersey as a case study. Development of a database that can assess the major components of a bioenergy system in one tool allows for easy evaluation of technology, feedstock and policy options. The methodology and decision support tool are applicable to other states and regions (with location specific modifications), thus contributing to the achievement of state and federal goals of renewable energy utilization. (3) Development of policy recommendations based on the results of the decision support tool that will help to guide New Jersey into a sustainable renewable energy future. The database developed in this research represents the first ever assessment of bioenergy potential for New Jersey. It can serve as a foundation for future research and modifications that could increase its power as a more robust policy analysis tool. As such, the current database is not able to perform analysis of tradeoffs across broad policy objectives such as economic development vs. CO2 emissions, or energy independence vs. source reduction of solid waste. Instead, it operates one level below that with comparisons of kWh or GGE generated by different feedstock/technology combinations at the state and county level. Modification of the model to incorporate factors that will enable the analysis of broader energy policy issues, such as those mentioned above, is recommended for future research efforts.
Numerical Prediction of Chevron Nozzle Noise Reduction using Wind-MGBK Methodology
NASA Technical Reports Server (NTRS)
Engblom, W.A.; Bridges, J.; Khavarant, A.
2005-01-01
Numerical predictions for single-stream chevron nozzle flow performance and farfield noise production are presented. Reynolds Averaged Navier Stokes (RANS) solutions, produced via the WIND flow solver, are provided as input to the MGBK code for prediction of farfield noise distributions. This methodology is applied to a set of sensitivity cases involving varying degrees of chevron inward bend angle relative to the core flow, for both cold and hot exhaust conditions. The sensitivity study results illustrate the effect of increased chevron bend angle and exhaust temperature on enhancement of fine-scale mixing, initiation of core breakdown, nozzle performance, and noise reduction. Direct comparisons with experimental data, including stagnation pressure and temperature rake data, PIV turbulent kinetic energy fields, and 90 degree observer farfield microphone data are provided. Although some deficiencies in the numerical predictions are evident, the correct farfield noise spectra trends are captured by the WIND-MGBK method, including the noise reduction benefit of chevrons. Implications of these results to future chevron design efforts are addressed.
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off between transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations confirm the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Jet noise and performance comparison study of a Mach 2.55 supersonic cruise aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.; Maglieri, D. J.
1979-01-01
Data provided by the manufacturer relating to noise and performance of a Mach 2.55 supersonic cruise concept employing a post-1985 technology level variable cycle engine was used to identify differences in noise levels and performance between the manufacturer and NASA associated with methodology and groundrules. In addition, economic and noise information is provided consistent with a previous study based on an advanced technology Mach 2.7 configuration. The results indicate that the difference between NASA's and the manufacturer's performance methodology is small. Resizing the aircraft to NASA groundrules also results in small changes in flyover, sideline and approach noise levels. For the power setting chosen, engine oversizing resulted in no reduction in traded noise. In terms of summated noise level, a 10 EPNdB reduction is realized for an 8 percent increase in total operating costs. This corresponds to an average noise reduction of 3.3 EPNdB at the three observer positions.
NASA Astrophysics Data System (ADS)
Bailey, Bernard Charles
Increasing the optical range of target detection and recognition continues to be an area of great interest in the ocean environment. Light attenuation limits radiative and information transfer for image formation in water. These limitations are difficult to surmount in conventional underwater imaging system design. Methods for the formation of images in scattering media generally rely upon temporal or spatial methodologies. Some interesting designs have been developed in an attempt to circumvent or overcome the scattering problem. This document describes a variation of the spatial interferometric technique that relies upon projected spatial gratings with subsequent detection against a coherent return signal for the purpose of noise reduction and image enhancement. A model is developed that simulates the projected structured illumination through turbid water to a target and its return to a detector. The model shows an unstructured backscatter superimposed on a structured return signal. The model can predict the effect of variations in the projected spatial frequency and turbidity on the received signal-to-noise ratio. The model has been extended to predict what a camera would actually see so that various noise reduction schemes can be modeled. Finally, some water tank tests are presented validating the original hypothesis and model predictions. The method is advantageous in not requiring temporal synchronization between reference and signal beams and may use a continuous illumination source. Spatial coherency of the beam allows detection of the direct return, while scattered light appears as a noncoherent noise term. Both model and illumination method should prove to be valuable tools in ocean research.
Formulation of a parametric systems design framework for disaster response planning
NASA Astrophysics Data System (ADS)
Mma, Stephanie Weiya
The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses from Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two-, five-, and ten-year improvement plans for Preparedness, Response, and Development capabilities, and combinations of these capabilities. For Shelby County and for Orleans Parish, the Response improvement plan reduced restoration time the most. For the combined capabilities, Shelby County experienced the greatest reduction in restoration time with the implementation of Development and Response capability improvements, and for Orleans Parish it was the Preparedness and Response capability improvements. Optimization of restoration time with community parameters was tested by using a Particle Swarm Optimization algorithm. Fifty different optimized restoration times were generated using the Particle Swarm Optimization algorithm and ranked using the Technique for Order Preference by Similarity to Ideal Solution. The optimization results indicate that the greatest reduction in restoration time for a community is achieved with a particular combination of different parameter values instead of the maximization of each parameter.
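The final ranking step (Technique for Order Preference by Similarity to Ideal Solution, TOPSIS) applied to the fifty optimized restoration plans can be sketched as follows; the number and nature of the criteria, their weights, and the candidate scores are invented for illustration and are not values from the dissertation.

```python
# Sketch of TOPSIS ranking of candidate plans; criteria, weights, and
# scores are illustrative assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(1)
# Rows = candidate plans, columns = criteria
# (restoration time, cost, residual risk) -- all treated as costs here.
scores = rng.uniform(0.2, 1.0, size=(50, 3))
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, False, False])    # True would mark "larger is better"

# 1. Vector-normalize each criterion, then apply weights.
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 2. Ideal and anti-ideal points depend on whether a criterion is a benefit or a cost.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal over total distance.
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

best = np.argsort(closeness)[::-1][:5]
print("top five plans by TOPSIS closeness:", best, np.round(closeness[best], 3))
```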
Humphries, Angela; Peden, Carol; Jordan, Lesley; Crowe, Josephine; Peden, Carol
2016-01-01
A significant incidence of post-procedural deep vein thrombosis (DVT) and pulmonary embolus (PE) was identified in patients undergoing surgery at our hospital. Investigation showed an unreliable peri-operative process leading to patients receiving incorrect or missed venous thromboembolism (VTE) prophylaxis. The Trust had previously participated in a project funded by the Health Foundation using the "Safer Clinical Systems" methodology to assess, diagnose, appraise options, and implement interventions to improve a high risk medication pathway. We applied the methodology from that study to this cohort of patients demonstrating that the same approach could be applied in a different context. Interventions were linked to the greatest hazards and risks identified during the diagnostic phase. This showed that many surgical elective patients had no VTE risk assessment completed pre-operatively, leading to missed or delayed doses of VTE prophylaxis post-operatively. Collaborative work with stakeholders led to the development of a new process to ensure completion of the VTE risk assessment prior to surgery, which was implemented using the Model for Improvement methodology. The process was supported by the inclusion of a VTE check in the Sign Out element of the WHO Surgical Safety Checklist at the end of surgery, which also ensured that appropriate prophylaxis was prescribed. A standardised operation note including the post-operative VTE plan will be implemented in the near future. At the end of the project VTE risk assessments were completed for 100% of elective surgical patients on admission, compared with 40% in the baseline data. Baseline data also revealed that processes for chemical and mechanical prophylaxis were not reliable. Hospital-wide interventions included standardisation of mechanical prophylaxis devices and anti-thromboembolic stockings (resulting in a cost saving of £52,000), and a Trust-wide awareness and education programme. The education included increased emphasis on use of mechanical prophylaxis when chemical prophylaxis was contraindicated. VTE guidelines were also included in the existing junior doctor guideline app, and a "CLOTS" anticoagulation webpage was developed and published on the hospital intranet. The improvement in VTE processes resulted in an 80% reduction in hospital-associated thrombosis following surgery from 0.2% in January 2014 to 0.04% in December 2015 and a reduction in the number of all hospital-associated VTE from a baseline median of 9 per month as of January 2014 to a median of 1 per month by December 2015.
Lightwood, James; Glantz, Stanton A.
2013-01-01
Background Previous research has shown that tobacco control funding in California has reduced per capita cigarette consumption and per capita healthcare expenditures. This paper refines our earlier model by estimating the effect of California tobacco control funding on current smoking prevalence and cigarette consumption per smoker and the effect of prevalence and consumption on per capita healthcare expenditures. The results are used to calculate new estimates of the effect of the California Tobacco Program. Methodology/Principal Findings Using state-specific aggregate data, current smoking prevalence and cigarette consumption per smoker are modeled as functions of cumulative California and control states' per capita tobacco control funding, cigarette price, and per capita income. Per capita healthcare expenditures are modeled as a function of prevalence of current smoking, cigarette consumption per smoker, and per capita income. One additional dollar of cumulative per capita tobacco control funding is associated with a reduction in current smoking prevalence of 0.0497 (SE 0.00347) percentage points and current smoker cigarette consumption of 1.39 (SE 0.132) packs per smoker per year. Reductions of one percentage point in current smoking prevalence and one pack smoked per smoker are associated with $35.4 (SE $9.85) and $3.14 (SE 0.786) reductions in per capita healthcare expenditure, respectively (2010 dollars), using the National Income and Product Accounts (NIPA) measure of healthcare spending. Conclusions/Significance Between FY 1989 and 2008 the California Tobacco Program cost $2.4 billion and led to cumulative NIPA healthcare expenditure savings of $134 (SE $30.5) billion. PMID:23418411
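The reported coefficients can be chained into a back-of-the-envelope estimate of the healthcare saving from a funding increment; the one-dollar increment and the population figure below are illustrative inputs, and this static calculation ignores the cumulative-funding dynamics the paper actually models.

```python
# Back-of-the-envelope use of the reported coefficients (2010 dollars).
# The funding increment and population are illustrative assumptions.
funding_increase = 1.0          # additional cumulative per capita funding, $
population = 37e6               # assumed California population

d_prevalence = 0.0497 * funding_increase   # percentage-point drop in smoking prevalence
d_packs = 1.39 * funding_increase          # drop in packs per smoker per year

# Expenditure effects per capita: $35.4 per percentage point of prevalence,
# $3.14 per pack smoked per smoker per year.
saving_per_capita = 35.4 * d_prevalence + 3.14 * d_packs
print(f"per capita healthcare saving: ${saving_per_capita:.2f}")
print(f"statewide saving: ${saving_per_capita * population / 1e6:.0f} million")
```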
ERIC Educational Resources Information Center
Hallgren, Mats A.; Kallmen, Hakan; Leifman, Hakan; Sjolund, Torbjorn; Andreasson, Sven
2009-01-01
Purpose: The purpose of this paper is to evaluate the effectiveness of the PRIME for Life risk reduction program in reducing alcohol consumption and improving knowledge and attitudes towards alcohol use in male Swedish military conscripts, aged 18 to 22 years. Design/methodology/approach: A quasi-experimental design was used in which 1,371…
1993-01-01
H. Wegner for developing the tactical air and ground force databases and producing the campaign results. Thanks are also due to Group Captain Michael ... Jackson, RAF, for developing the evaluation criteria for NATO's tactical air force reductions during his stay at RAND.
Using Plate Finite Elements for Modeling Fillets in Design, Optimization, and Dynamic Analysis
NASA Technical Reports Server (NTRS)
Brown, A. M.; Seugling, R. M.
2003-01-01
A methodology has been developed that allows the use of plate elements instead of numerically inefficient solid elements for modeling structures with 90-degree fillets. The technique uses plate bridges with pseudo Young's modulus (Eb) and thickness (tb) values placed between the tangent points of the fillets. These parameters are obtained by solving two nonlinear simultaneous equations in terms of the independent variables r/t and twall/t. These equations are generated by equating the rotation at the tangent point of a bridge system with that of a fillet, where both rotations are derived using beam theory. Accurate surface fits of the solutions are also presented to provide the user with closed-form equations for the parameters. The methodology was verified on the subcomponent level and with a representative filleted structure, where the technique yielded a plate model exhibiting a level of accuracy better than or equal to a high-fidelity solid model and with a 90-percent reduction in the number of DOFs. The application of this method for parametric design studies, optimization, and dynamic analysis should prove extremely beneficial for the finite element practitioner. Although the method does not attempt to produce accurate stresses in the filleted region, it can also be used to obtain stresses elsewhere in the structure for preliminary analysis. A future avenue of study is to extend the theory developed here to other fillet geometries, including fillet angles other than 90 degrees and multifaceted intersections.
Coupling Post-Event and Prospective Analyses for El Niño-related Risk Reduction in Peru
NASA Astrophysics Data System (ADS)
French, Adam; Keating, Adriana; Mechler, Reinhard; Szoenyi, Michael; Cisneros, Abel; Chuquisengo, Orlando; Etienne, Emilie; Ferradas, Pedro
2017-04-01
Analyses in the wake of natural disasters play an important role in identifying how ex ante risk reduction and ex post hazard response activities have both succeeded and fallen short in specific contexts, thereby contributing to recommendations for improving such measures in the future. Event analyses have particular relevance in settings where disasters are likely to reoccur, and especially where recurrence intervals are short. This paper applies the Post Event Review Capability (PERC) methodology to the context of frequently reoccurring El Niño Southern Oscillation (ENSO) events in the country of Peru, where over the last several decades ENSO impacts have generated high levels of damage and economic loss. Rather than analyzing the impacts of a single event, this study builds upon the existing PERC methodology by combining empirical event analysis with a critical examination of risk reduction and adaptation measures implemented both prior to and following several ENSO events in the late 20th and early 21st centuries. Additionally, the paper explores linking the empirical findings regarding the uptake and outcomes of particular risk reduction and adaptation strategies to a prospective, scenario-based approach for projecting risk several decades into the future.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
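The flavor of this approach, propagating uncertainty about the parameters of a deterministic failure model into a failure-probability estimate, can be conveyed with a Monte Carlo sketch of a Paris-law fatigue-crack-growth model; all distributions and parameter values here are invented for illustration and are not the documented PFA engineering models.

```python
# Illustrative only (not the documented PFA models): propagate parameter
# uncertainty through a Paris-law crack-growth model to estimate a failure
# probability for a given number of service cycles.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 5_000
service_cycles = 1.0e6                      # assumed mission requirement, cycles

# Uncertain inputs (all distributions and values are assumptions):
C = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=n_samples)   # Paris coefficient
m = rng.normal(3.0, 0.1, n_samples)                                # Paris exponent
dsigma = rng.normal(120.0, 10.0, n_samples)                        # stress range, MPa
a0 = rng.uniform(0.2e-3, 0.5e-3, n_samples)                        # initial crack, m
a_crit = 5e-3                                                      # critical crack, m
Y = 1.12                                                           # geometry factor

def cycles_to_failure(C_i, m_i, ds_i, a0_i):
    """Integrate da/dN = C (Y ds sqrt(pi a))^m from a0 to a_crit (ds in MPa, a in m)."""
    a = np.linspace(a0_i, a_crit, 400)
    dadN = C_i * (Y * ds_i * np.sqrt(np.pi * a)) ** m_i    # m/cycle, dK in MPa*sqrt(m)
    g = 1.0 / dadN
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(a))     # trapezoidal rule

N_fail = np.array([cycles_to_failure(*p) for p in zip(C, m, dsigma, a0)])
p_fail = np.mean(N_fail < service_cycles)
print(f"estimated probability of failure before {service_cycles:.0e} cycles: {p_fail:.3f}")
```

In the PFA framework this kind of model-based failure distribution would then be updated with test and flight experience; that statistical updating step is not shown here.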
Robust Feedback Control of Flow Induced Structural Radiation of Sound
NASA Technical Reports Server (NTRS)
Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.
1997-01-01
A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow induced structural sound radiation problem. The problem is shown to be broadband in nature with large system uncertainties associated with the various operating conditions. Furthermore the delay associated with sound propagation is shown to restrict the use of microphone feedback. The state of the art control methodologies, IL synthesis and adaptive feedback control, are evaluated and shown to have limited success for solving this problem. A robust frequency domain controller design methodology is developed for the problem of sound radiated from turbulent flow driven plates. The control design methodology uses frequency domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high bandwidth control objectives requiring a low controller DC gain and controller order.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Therkelsen, Peter L.; Rao, Prakash; Aghajanzadeh, Arian
ISO 50001-Energy management systems – Requirements with guidance for use, is an internationally developed standard that provides organizations with a flexible framework for implementing an energy management system (EnMS) with the goal of continual energy performance improvement. The ISO 50001 standard was first published in 2011 and has since seen growth in the number of certificates issued around the world, primarily in the industrial (agriculture, manufacturing, and mining) and service (commercial) sectors. Policy makers in many regions and countries are looking to or are already using ISO 50001 as a basis for energy efficiency, carbon reduction, and other energy performance improvement schemes. The Impact Estimator Tool 50001 (IET 50001 Tool) is a computational model developed to assist researchers and policy makers determine the potential impact of ISO 50001 implementation in the industrial and service (commercial) sectors for a given region or country. The IET 50001 Tool is based upon a methodology initially developed by the Lawrence Berkeley National Laboratory that has been improved upon and vetted by a group of international researchers. By using a commonly accepted and transparent methodology, users of the IET 50001 Tool can easily and clearly communicate the potential impact of ISO 50001 for a region or country.
NASA Astrophysics Data System (ADS)
Marti, Joan; Bartolini, Stefania; Becerril, Laura
2016-04-01
VeTOOLS is a project funded by the European Commission's Humanitarian Aid and Civil Protection department (ECHO), and aims at creating an integrated software platform specially designed to assess and manage volcanic risk. The project facilitates interaction and cooperation between scientists and Civil Protection Agencies in order to share, unify, and exchange procedures, methodologies and technologies to effectively reduce the impacts of volcanic disasters. The project aims at 1) improving and developing volcanic risk assessment and management capacities in active volcanic regions; 2) developing universal methodologies, scenario definitions, response strategies and alert protocols to cope with the full range of volcanic threats; 4) improving quantitative methods and tools for vulnerability and risk assessment; and 5) defining thresholds and protocols for civil protection. With these objectives, the VeTOOLS project points to two of the Sendai Framework resolutions for implementing it: i) Provide guidance on methodologies and standards for risk assessments, disaster risk modelling and the use of data; ii) Promote and support the availability and application of science and technology to decision-making, and offers a good example of how a close collaboration between science and civil protection is an effective way to contribute to DRR. European Commission ECHO Grant SI2.695524
Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions
NASA Astrophysics Data System (ADS)
De Risi, Raffaele; Goda, Katsuichiro
2017-08-01
Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
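The empirical hazard-curve construction described above can be sketched by converting a stochastic set of simulated events into mean annual exceedance rates for an intensity measure such as inundation depth; the event rate and the synthetic depth distribution below are assumptions for illustration, and the Bayesian robust fitting step is not reproduced here.

```python
# Sketch of an empirical tsunami hazard curve: mean annual rate of exceeding
# each inundation depth, from a simulated event set. Event rate and depths
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
annual_rate = 0.1            # assumed mean annual rate of tsunamigenic events
n_events = 5000              # stochastic event set (one depth per simulated event)
depths = rng.lognormal(mean=0.0, sigma=0.8, size=n_events)   # inundation depth, m

thresholds = np.linspace(0.5, 8.0, 16)
# Rate of exceedance = event rate x fraction of simulated events above the threshold.
exceed_rate = annual_rate * np.array([(depths > h).mean() for h in thresholds])
return_period = np.divide(1.0, exceed_rate,
                          out=np.full_like(exceed_rate, np.inf),
                          where=exceed_rate > 0)

for h, lam, T in zip(thresholds, exceed_rate, return_period):
    print(f"depth > {h:4.1f} m : rate = {lam:.4f} /yr, return period ~ {T:8.1f} yr")
```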
Zareski, Rubin; Kapedanovska Nestorovska, A; Grozdanova, A; Dimitrova, B; Suturkova, L J; Sterjev, Z
2016-09-01
The introduction of a new methodology for the pricing of drugs by the Agency of Medicines of the Republic of Macedonia for the period 2012 to 2015 resulted in price reductions for 1386 drugs. This pioneering study evaluated the effects of the price changes during this period of 4 years and the consequent effects on the sale quantities for the segmented Anatomical Therapeutic Chemical groups. The drugs were grouped by the size of the reductions, by segmenting the drugs by generic names, and by the Anatomical Therapeutic Chemical classification, in which the quantities are grouped by generic names and the prices are calculated by average values for a period of 1 year. Analysis of the relations between price changes and quantities sold showed that since the introduction of the new methodology the decrease in the prices pushed down the sales of the drugs. This article presents not only the market developments but also projects the tendencies, concluding clearly that focusing only on the price reduction of drugs and not on the implementation of the pharmacoeconomic studies is distorting the supply of drugs on the market and affecting their quality. The trends indicate that patients are using old-generation drugs, packaging forms that do not fully meet market demand, and policies that significantly affect the suppliers. The presented analysis confirms that if the new methodology is only partially implemented and is not followed in full consideration of the pharmacoeconomic studies, negative consequences will also have an impact on regional pharmaceutical markets, which are benchmarking prices of drugs with the Macedonian market. Copyright © 2016. Published by Elsevier Inc.
Langwig, Kate E; Wargo, Andrew R; Jones, Darbi R; Viss, Jessie R; Rutan, Barbara J; Egan, Nicholas A; Sá-Guimarães, Pedro; Kim, Min Sun; Kurath, Gael; Gomes, M Gabriela M; Lipsitch, Marc
2017-11-21
Heterogeneity in host susceptibility is a key determinant of infectious disease dynamics but is rarely accounted for in assessment of disease control measures. Understanding how susceptibility is distributed in populations, and how control measures change this distribution, is integral to predicting the course of epidemics with and without interventions. Using multiple experimental and modeling approaches, we show that rainbow trout have relatively homogeneous susceptibility to infection with infectious hematopoietic necrosis virus and that vaccination increases heterogeneity in susceptibility in a nearly all-or-nothing fashion. In a simple transmission model with an R0 of 2, the highly heterogeneous vaccine protection would cause a 35 percentage-point reduction in outbreak size over an intervention inducing homogeneous protection at the same mean level. More broadly, these findings provide validation of methodology that can help to reduce biases in predictions of vaccine impact in natural settings and provide insight into how vaccination shapes population susceptibility. IMPORTANCE Differences among individuals influence transmission and spread of infectious diseases as well as the effectiveness of control measures. Control measures, such as vaccines, may provide leaky protection, protecting all hosts to an identical degree, or all-or-nothing protection, protecting some hosts completely while leaving others completely unprotected. This distinction can have a dramatic influence on disease dynamics, yet this distribution of protection is frequently unaccounted for in epidemiological models and estimates of vaccine efficacy. Here, we apply new methodology to experimentally examine host heterogeneity in susceptibility and mode of vaccine action as distinct components influencing disease outcome. Through multiple experiments and new modeling approaches, we show that the distribution of vaccine effects can be robustly estimated. These results offer new experimental and inferential methodology that can improve predictions of vaccine effectiveness and have broad applicability to human, wildlife, and ecosystem health. Copyright © 2017 Langwig et al.
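The contrast between all-or-nothing and leaky protection can be reproduced with the standard SIR final-size relation at R0 = 2; the vaccine coverage and per-dose efficacy used below are illustrative assumptions (not the study's fitted values), so the resulting percentage-point gap will differ from the 35-point figure reported above, but the direction of the effect is the same.

```python
# Final-size comparison of all-or-nothing vs leaky vaccine protection at the
# same mean efficacy, using the standard SIR final-size relation with R0 = 2.
# Coverage and efficacy values are illustrative assumptions.
import numpy as np

R0, coverage, efficacy = 2.0, 0.7, 0.5     # assumed coverage and per-dose efficacy

def final_size(attack_map, a0=0.99, n_iter=500):
    """Fixed-point iteration for the self-consistent overall attack rate."""
    a = a0
    for _ in range(n_iter):
        a = attack_map(a)
    return a

# All-or-nothing: a fraction coverage*efficacy of the population is completely immune.
aon = final_size(lambda a: (1 - coverage * efficacy) * (1 - np.exp(-R0 * a)))

# Leaky: every vaccinee keeps a relative susceptibility of (1 - efficacy).
leaky = final_size(lambda a: (1 - coverage) * (1 - np.exp(-R0 * a))
                   + coverage * (1 - np.exp(-(1 - efficacy) * R0 * a)))

print(f"final outbreak size, all-or-nothing protection: {aon:.3f}")
print(f"final outbreak size, leaky protection:          {leaky:.3f}")
print(f"difference: {100 * (leaky - aon):.1f} percentage points")
```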
Molnár, Sándor; López, Inmaculada; Gámez, Manuel; Garay, József
2016-03-01
The paper is aimed at a methodological development in biological pest control. The considered one pest two-agent system is modelled as a verticum-type system. Originally, linear verticum-type systems were introduced by one of the authors for modelling certain industrial systems. These systems are hierarchically composed of linear subsystems such that a part of the state variables of each subsystem affect the dynamics of the next subsystem. Recently, verticum-type system models have been applied to population ecology as well, which required the extension of the concept of a verticum-type system to the nonlinear case. In the present paper the general concepts and techniques of nonlinear verticum-type control systems are used to obtain biological control strategies in a two-agent system. For the illustration of this verticum-type control, these tools of mathematical systems theory are applied to a dynamic model of interactions between the egg and larvae populations of the sugarcane borer (Diatraea saccharalis) and its parasitoids: the egg parasitoid Trichogramma galloi and the larvae parasitoid Cotesia flavipes. In this application a key role is played by the concept of controllability, which means that it is possible to steer the system to an equilibrium in given time. In addition to a usual linearization, the basic idea is a decomposition of the control of the whole system into the control of the subsystems, making use of the verticum structure of the population system. The main aim of this study is to show several advantages of the verticum (or decomposition) approach over the classical control theoretical model (without decomposition). For example, in the case of verticum control the pest larval density decreases below the critical threshold value much quicker than without decomposition. Furthermore, it is also shown that the verticum approach may be better even in terms of cost effectiveness. The presented optimal control methodology also turned out to be an efficient tool for the "in silico" analysis of the cost-effectiveness of different biocontrol strategies, e.g. by answering the question of how far it is cost-effective to speed up the reduction of the pest larvae density, or along which trajectory this reduction should be carried out. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
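The controllability argument for a hierarchically composed (verticum-type) system can be illustrated, after linearization, with a Kalman rank test on a block-lower-triangular state matrix in which subsystem 1 feeds part of its state into subsystem 2; the small matrices below are invented for illustration and are not the sugarcane-borer model.

```python
# Kalman rank test for a block-lower-triangular (verticum-like) linearized system:
# subsystem 1 drives subsystem 2, and only subsystem 1 is actuated.
# All matrices are invented for illustration.
import numpy as np

A1 = np.array([[0.2, 0.1],
               [0.4, -0.3]])          # subsystem 1 dynamics
A2 = np.array([[-0.1, 0.4],
               [0.2, -0.5]])          # subsystem 2 dynamics
A21 = np.array([[0.3, 0.0],
                [0.0, 0.1]])          # coupling: subsystem 1 states drive subsystem 2

A = np.block([[A1, np.zeros((2, 2))],
              [A21, A2]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])   # control acts on subsystem 1 only

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(ctrb)
print(f"rank of controllability matrix: {rank} of {n} "
      f"({'controllable' if rank == n else 'not controllable'})")
```

In the decomposition approach, the same kind of test would be applied subsystem by subsystem, exploiting the verticum structure rather than the full composite matrix.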
NASA Astrophysics Data System (ADS)
Ayatollahy Tafti, Tayeb
We develop a new method for integrating information and data from different sources. We also construct a comprehensive workflow for characterizing and modeling a fracture network in unconventional reservoirs, using microseismic data. The methodology is based on a combination of several mathematical and artificial intelligence techniques, including geostatistics, fractal analysis, fuzzy logic, and neural networks. The study contributes to the scholarly knowledge base on the characterization and modeling of fractured reservoirs in several ways, including a versatile workflow with novel objective functions. Some of the characteristics of the methods are listed below: 1. The new method is an effective fracture characterization procedure that estimates different fracture properties. Unlike the existing methods, the new approach is not dependent on the location of events. It is able to integrate all multi-scaled and diverse fracture information from different methodologies. 2. It offers an improved procedure to create compressional and shear velocity models as a preamble for delineating anomalies and mapping structures of interest and to correlate velocity anomalies with fracture swarms and other reservoir properties of interest. 3. It offers an effective way to obtain the fractal dimension of microseismic events and identify the pattern complexity, connectivity, and mechanism of the created fracture network. 4. It offers an innovative method for monitoring the fracture movement in different stages of stimulation that can be used to optimize the process. 5. Our newly developed MDFN approach allows the creation of a discrete fracture network model using only microseismic data with potential cost reduction. It also imposes fractal dimension as a constraint on other fracture modeling approaches, which increases the visual similarity between the modeled networks and the real network over the simulated volume.
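The fractal-dimension step mentioned above can be sketched with a 2-D box-counting estimate over event epicentres; the synthetic event cloud below is only a stand-in for real microseismic locations.

```python
# Box-counting estimate of the fractal dimension of a 2-D cloud of event
# locations (a synthetic stand-in for microseismic epicentres).
import numpy as np

rng = np.random.default_rng(3)
# Synthetic events clustered along a few "fracture" lineaments plus scatter.
lines = [rng.uniform(0, 1, (200, 1)) * np.array([[1.0, 0.4]]) + off
         for off in ([0.0, 0.1], [0.0, 0.4], [0.0, 0.7])]
events = np.vstack(lines + [rng.uniform(0, 1, (100, 2))])

box_sizes = np.array([1/4, 1/8, 1/16, 1/32, 1/64])
counts = []
for eps in box_sizes:
    # Count occupied boxes on a grid of cell size eps.
    idx = np.floor(events / eps).astype(int)
    counts.append(len({tuple(ij) for ij in idx}))

# Box-counting dimension = slope of log N(eps) versus log(1/eps).
slope, _ = np.polyfit(np.log(1 / box_sizes), np.log(counts), 1)
print("estimated box-counting dimension:", round(float(slope), 2))
```

A dimension close to 1 would indicate events concentrated along linear fractures, while a value approaching 2 indicates a space-filling, more complex network.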
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes.
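The POD step at the heart of such a ROM can be sketched with an SVD of solution snapshots: a handful of left singular vectors captures most of the snapshot energy and defines the low-order basis. The moving-front snapshot data below is a generic stand-in, not the PSA bed profiles used in the study.

```python
# Minimal POD sketch: build a reduced basis from solution snapshots via SVD.
# The snapshot data (a travelling concentration front) is a stand-in for the
# PSA bed profiles used in the study.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.01, 1.0, 60)
# Snapshot matrix: each column is the spatial profile at one time instant.
snapshots = np.column_stack([0.5 * (1 - np.tanh((x - 0.2 - 0.5 * t) / 0.05))
                             for t in times])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes needed for 99.99% of energy
print(f"POD modes retained: {r} of {len(s)}")

# Reconstruction test: project one snapshot onto the r-mode basis.
phi = U[:, :r]
test = snapshots[:, 30]
reconstruction = phi @ (phi.T @ test)
print("max reconstruction error:", float(np.max(np.abs(reconstruction - test))))
```

In the full methodology a Galerkin projection of the governing PDEs onto this basis yields the low-order DAE system used for optimization; only the basis-construction step is shown here.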
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Gottwald, James A.; Bliss, Donald B.
1990-01-01
The focus is on a noise control method which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. An interior noise reduction concept called alternate resonance tuning (ART) is described both theoretically and experimentally. Problems dealing with tuning single-paneled wall structures for optimum noise reduction using the ART methodology are presented, and three theoretical problems are analyzed. The first analysis is a three dimensional, full acoustic solution for tuning a panel wall composed of repeating sections with four different panel tunings within that section, where the panels are modeled as idealized spring-mass-damper systems. The second analysis is a two dimensional, full acoustic solution for a panel geometry influenced by the effect of a propagating external pressure field such as that which might be associated with propeller passage by a fuselage. To reduce the analysis complexity, idealized spring-mass-damper panels are again employed. The final theoretical analysis presents the general four panel problem with real panel sections, where the effect of higher structural modes is discussed. Results from an experimental program highlight real applications of the ART concept and show the effectiveness of the tuning on real structures.
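The basic ART mechanism can be illustrated with two idealized spring-mass-damper panels driven by the same tonal pressure, one tuned above and one below the excitation frequency: their velocity responses are nearly out of phase, so the net volume velocity of the pair, and hence the radiated sound, is reduced. The panel parameters below are invented for illustration and do not correspond to the configurations analyzed in the paper.

```python
# Two idealized spring-mass-damper panels driven by the same tonal pressure.
# One is tuned below and one above the excitation frequency, so their velocity
# responses are nearly out of phase and the net volume velocity is reduced.
# All parameter values are illustrative assumptions.
import numpy as np

f_exc = 100.0                      # excitation (e.g. propeller tone) frequency, Hz
w = 2 * np.pi * f_exc
m = 1.0                            # panel mass per unit area, kg/m^2 (assumed)
zeta = 0.02                        # damping ratio (assumed)
p = 1.0                            # unit pressure amplitude

def panel_velocity(f_tuned):
    """Complex velocity response of an SDOF panel tuned to f_tuned."""
    wn = 2 * np.pi * f_tuned
    k, c = m * wn**2, 2 * zeta * m * wn
    return 1j * w * p / (k - m * w**2 + 1j * c * w)

v_low = panel_velocity(90.0)       # panel tuned below the excitation frequency
v_high = panel_velocity(109.1)     # panel tuned above the excitation frequency

net = v_low + v_high
print(f"|v_low|  = {abs(v_low):.4e},  phase = {np.angle(v_low, deg=True):7.1f} deg")
print(f"|v_high| = {abs(v_high):.4e},  phase = {np.angle(v_high, deg=True):7.1f} deg")
print(f"|v_low + v_high| = {abs(net):.4e}  (net volume velocity of the pair)")
```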
Bolanča, Tomislav; Strahovnik, Tomislav; Ukić, Šime; Stankov, Mirjana Novak; Rogošić, Marko
2017-07-01
This study describes the development of a tool for testing different policies for the reduction of greenhouse gas (GHG) emissions in the energy sector using artificial neural networks (ANNs). The case study of Croatia was elaborated. Two different energy consumption scenarios were used as a base for calculations and predictions of GHG emissions: the business as usual (BAU) scenario and the sustainable scenario. Both of them are based on predicted energy consumption using different growth rates; the growth rates within the second scenario resulted from the implementation of corresponding energy efficiency measures in final energy consumption and an increasing share of renewable energy sources. Both the ANN architecture and the training methodology were optimized to produce a network that was able to successfully describe the existing data and to achieve reliable prediction of emissions in a forward time sense. The BAU scenario was found to produce continuously increasing emissions of all GHGs. The sustainable scenario was found to decrease the GHG emission levels of all gases with respect to BAU. The observed decrease was attributed to the group of measures termed the reduction of final energy consumption through energy efficiency measures.
VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.
Little, Todd D; Wang, Eugene W; Gorrall, Britt K
2017-06-01
This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.
A hybrid algorithm for coupling partial differential equation and compartment-based dynamics.
Harrison, Jonathan U; Yates, Christian A
2016-09-01
Stochastic simulation methods can be applied successfully to model exact spatio-temporally resolved reaction-diffusion systems. However, in many cases, these methods can quickly become extremely computationally intensive with increasing particle numbers. An alternative description of many of these systems can be derived in the diffusive limit as a deterministic, continuum system of partial differential equations (PDEs). Although the numerical solution of such PDEs is, in general, much more efficient than the full stochastic simulation, the deterministic continuum description is generally not valid when copy numbers are low and stochastic effects dominate. Therefore, to take advantage of the benefits of both of these types of models, each of which may be appropriate in different parts of a spatial domain, we have developed an algorithm that can be used to couple these two types of model together. This hybrid coupling algorithm uses an overlap region between the two modelling regimes. By coupling fluxes at one end of the interface and using a concentration-matching condition at the other end, we ensure that mass is appropriately transferred between PDE- and compartment-based regimes. Our methodology gives notable reductions in simulation time in comparison with using a fully stochastic model, while maintaining the important stochastic features of the system and providing detail in appropriate areas of the domain. We test our hybrid methodology robustly by applying it to several biologically motivated problems including diffusion and morphogen gradient formation. Our analysis shows that the resulting error is small, unbiased and does not grow over time. © 2016 The Authors.
Polcin, Douglas L.
2016-01-01
Communities throughout the U.S. are struggling to find solutions for serious and persistent homelessness. Alcohol and drug problems can be causes and consequences of homelessness, as well as co-occurring problems that complicate efforts to succeed in finding stable housing. Two prominent service models exist: one, known as “Housing First”, takes a harm reduction approach, and the other, known as the “linear” model, typically supports a goal of abstinence from alcohol and drugs. Despite their popularity, the research supporting these models suffers from methodological problems and inconsistent findings. One purpose of this paper is to describe systematic reviews of the homelessness services literature, which illustrate weaknesses in research designs and inconsistent conclusions about the effectiveness of current models. Problems among some of the seminal studies on homelessness include poorly defined inclusion and exclusion criteria, inadequate measures of alcohol and drug use, unspecified or poorly implemented comparison conditions, and lack of procedures documenting adherence to service models. Several recent papers have suggested broader based approaches for homeless services that integrate alternatives and respond better to consumer needs. Practical considerations for implementing a broader system of services are described, and peer-managed recovery homes are presented as examples of services that address some of the gaps in current approaches. Three issues are identified that need more attention from researchers: (1) improving upon the methodological limitations in current studies, (2) assessing the impact of broader based, integrated services on outcome, and (3) assessing approaches to the service needs of homeless persons involved in the criminal justice system. PMID:27092027
Reference Model 5 (RM5): Oscillating Surge Wave Energy Converter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y. H.; Jenne, D. S.; Thresher, R.
This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Surge Wave Energy Converter (OSWEC) reference model design in a complementary manner to Reference Models 1-4 contained in the above report. A conceptual design for a taut moored oscillating surge wave energy converter was developed. The design had an annual electrical power of 108 kilowatts (kW), rated power of 360 kW, and intended deployment at water depths between 50 m and 100 m. The study includes structural analysis, power output estimation, a hydraulic power conversion chain system, and mooring designs. The results were used to estimate device capital cost and annual operation and maintenance costs. The device performance and costs were used for the economic analysis, following the methodology presented in SAND2013-9040 that included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays up to 100 devices. The levelized cost of energy estimated for the Reference Model 5 OSWEC, presented in this report, was for a single device and arrays of 10, 50, and 100 units, and it enabled the economic analysis to account for cost reductions associated with economies of scale. The baseline commercial levelized cost of energy estimate for the Reference Model 5 device in an array comprised of 10 units is $1.44/kilowatt-hour (kWh), and the value drops to approximately $0.69/kWh for an array of 100 units.
NASA Astrophysics Data System (ADS)
Belgasam, Tarek M.; Zbib, Hussein M.
2017-12-01
Dual-phase (DP) steels have received widespread attention for their low density and high strength. This low density is of value to the automotive industry for the weight reduction it offers and the attendant fuel savings and emission reductions. Recent studies on developing DP steels showed that the combination of strength/ductility could be significantly improved by changing the volume fraction and grain size of the phases in the microstructure, depending on the microstructure properties. Consequently, DP steel manufacturers are interested in predicting microstructure properties and in optimizing microstructure design. In this work, a microstructure-based approach using representative volume elements (RVEs) was developed. The approach examined the flow behavior of DP steels using virtual tension tests with an RVE to identify specific mechanical properties. Microstructures with varied martensite and ferrite grain sizes, martensite volume fractions, carbon contents, and morphologies were studied in 3D RVE approaches. The effect of these microstructure parameters on the combination of strength/ductility of DP steels was examined numerically using the finite element method by implementing a dislocation density-based elastic-plastic constitutive model, and a response surface methodology was used to determine the optimum conditions for a required combination of strength/ductility. The results from the numerical simulations are compared with experimental results found in the literature. The developed methodology proves to be a powerful tool for studying the effect and interaction of key microstructural parameters on strength and ductility and thus can be used to identify optimum microstructural conditions.
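As a rough illustration of the response-surface step mentioned above, the sketch below fits quadratic surfaces to synthetic "virtual tension test" outputs over two microstructural variables and searches them for a strength/ductility trade-off. The variables, responses, and trade-off criterion are placeholders, not the RVE results of the study.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic "virtual tension test" results (stand-ins for the RVE simulations):
# inputs are martensite volume fraction Vm and ferrite grain size d (um).
Vm = rng.uniform(0.1, 0.5, 40)
d = rng.uniform(2.0, 15.0, 40)
strength = 500 + 900 * Vm - 12 * d + rng.normal(0, 10, 40)                      # MPa (made up)
ductility = 0.30 - 0.25 * Vm + 0.004 * d - 0.2 * Vm**2 + rng.normal(0, 0.005, 40)

X = np.column_stack([Vm, d])
# Quadratic response surfaces for both responses.
rs_strength = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, strength)
rs_ductility = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, ductility)

# Grid search over the fitted surfaces for the best strength*ductility trade-off.
vv, dd = np.meshgrid(np.linspace(0.1, 0.5, 81), np.linspace(2, 15, 81))
grid = np.column_stack([vv.ravel(), dd.ravel()])
product = rs_strength.predict(grid) * rs_ductility.predict(grid)
best = grid[np.argmax(product)]
print(f"optimum (sketch): Vm = {best[0]:.2f}, grain size = {best[1]:.1f} um")
```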
Micromechanics Fatigue Damage Analysis Modeling for Fabric Reinforced Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Min, J. B.; Xue, D.; Shi, Y.
2013-01-01
A micromechanics analysis modeling method was developed to analyze the damage progression and fatigue failure of fabric reinforced composite structures, especially for the brittle ceramic matrix material composites. A repeating unit cell concept of fabric reinforced composites was used to represent the global composite structure. The thermal and mechanical properties of the repeating unit cell were considered as the same as those of the global composite structure. The three-phase micromechanics, the shear-lag, and the continuum fracture mechanics models were integrated with a statistical model in the repeating unit cell to predict the progressive damages and fatigue life of the composite structures. The global structure failure was defined as the loss of loading capability of the repeating unit cell, which depends on the stiffness reduction due to material slice failures and nonlinear material properties in the repeating unit cell. The present methodology is demonstrated with the analysis results evaluated through the experimental test performed with carbon fiber reinforced silicon carbide matrix plain weave composite specimens.
Probabilistic analysis for fatigue strength degradation of materials
NASA Technical Reports Server (NTRS)
Royce, Lola
1989-01-01
This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
Kelly, Elizabeth W; Kelly, Jonathan D; Hiestand, Brian; Wells-Kiser, Kathy; Starling, Stephanie; Hoekstra, James W
2010-01-01
Rapid reperfusion in patients with ST-elevation myocardial infarction (STEMI) is associated with lower mortality. Reduction in door-to-balloon (D2B) time for percutaneous coronary intervention requires multidisciplinary cooperation, process analysis, and quality improvement methodology. Six Sigma methodology was used to reduce D2B times in STEMI patients presenting to a tertiary care center. Specific steps in STEMI care were determined, time goals were established, and processes were changed to reduce each step's duration. Outcomes were tracked, and timely feedback was given to providers. After process analysis and implementation of improvements, mean D2B times decreased from 128 to 90 minutes. Improvement has been sustained; as of June 2010, the mean D2B was 56 minutes, with 100% of patients meeting the 90-minute window for the year. Six Sigma methodology and immediate provider feedback result in significant reductions in D2B times. The lessons learned may be extrapolated to other primary percutaneous coronary intervention centers. Copyright © 2010 Elsevier Inc. All rights reserved.
Donovan, Elizabeth A; Manta, Christine J; Goldsack, Jennifer C; Collins, Michelle L
2016-01-01
Under value-based purchasing, Medicare withholds reimbursements for hospital-acquired pressure ulcer occurrence and rewards hospitals that meet performance standards. With little evidence of a validated prevention process, nurse managers are challenged to find evidence-based interventions. The aim of this study was to reduce the unit-acquired pressure ulcer (UAPU) rate on targeted intensive care and step-down units by 15% using Lean Six Sigma (LSS) methodology. An interdisciplinary team designed a pilot program using LSS methodology to test 4 interventions: standardized documentation, equipment monitoring, patient out-of-bed-to-chair monitoring, and a rounding checklist. During the pilot, the UAPU rate decreased from 4.4% to 2.8%, exceeding the goal of a 15% reduction. The rate remained below the goal through the program control phase at 2.9%, demonstrating a statistically significant reduction after intervention implementation. The program significantly reduced UAPU rates in high-risk populations. LSS methodologies are a sustainable approach to reducing hospital-acquired conditions that should be broadly tested and implemented.
NASA Astrophysics Data System (ADS)
Papathoma-Köhle, Maria
2016-08-01
The assessment of the physical vulnerability of elements at risk as part of the risk analysis is an essential aspect for the development of strategies and structural measures for risk reduction. Understanding, analysing and, if possible, quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for its reduction. The most common methods for assessing physical vulnerability are vulnerability matrices, vulnerability curves and vulnerability indicators; however, in most of the cases, these methods are used in a conflicting way rather than in combination. The article focuses on two of these methods: vulnerability curves and vulnerability indicators. Vulnerability curves express physical vulnerability as a function of the intensity of the process and the degree of loss, considering, in individual cases only, some structural characteristics of the affected buildings. However, a considerable amount of studies argue that vulnerability assessment should focus on the identification of these variables that influence the vulnerability of an element at risk (vulnerability indicators). In this study, an indicator-based methodology (IBM) for mountain hazards including debris flow (Kappes et al., 2012) is applied to a case study for debris flows in South Tyrol, where in the past a vulnerability curve has been developed. The relatively "new" indicator-based method is being scrutinised and recommendations for its improvement are outlined. The comparison of the two methodological approaches and their results is challenging since both methodological approaches deal with vulnerability in a different way. However, it is still possible to highlight their weaknesses and strengths, show clearly that both methodologies are necessary for the assessment of physical vulnerability and provide a preliminary "holistic methodological framework" for physical vulnerability assessment showing how the two approaches may be used in combination in the future.
From fatalism to resilience: reducing disaster impacts through systematic investments.
Hill, Harvey; Wiener, John; Warner, Koko
2012-04-01
This paper describes a method for reducing the economic risks associated with predictable natural hazards by enhancing the resilience of national infrastructure systems. The three-step generalised framework is described along with examples. Step one establishes economic baseline growth without the disaster impact. Step two characterises economic growth constrained by a disaster. Step three assesses the economy's resilience to the disaster event when it is buffered by alternative resiliency investments. The successful outcome of step three is a disaster-resistant core of infrastructure systems and social capacity more able to maintain the national economy and development post disaster. In addition, the paper considers ways to achieve this goal in data-limited environments. The method provides a methodology to address this challenge via the integration of physical and social data of different spatial scales into macroeconomic models. This supports the disaster risk reduction objectives of governments, donor agencies, and the United Nations International Strategy for Disaster Reduction. © 2012 The Author(s). Disasters © Overseas Development Institute, 2012.
A quality improvement initiative to reduce necrotizing enterocolitis across hospital systems.
Nathan, Amy T; Ward, Laura; Schibler, Kurt; Moyer, Laurel; South, Andrew; Kaplan, Heather C
2018-04-20
Necrotizing enterocolitis (NEC) is a devastating intestinal disease in premature infants. Local rates of NEC were unacceptably high. We hypothesized that utilizing quality improvement methodology to standardize care and apply evidence-based practices would reduce our rate of NEC. A multidisciplinary team used the model for improvement to prioritize interventions. Three neonatal intensive care units (NICUs) developed a standardized feeding protocol for very low birth weight (VLBW) infants, and employed strategies to increase the use of human milk, maximize intestinal perfusion, and promote a healthy microbiome. The primary outcome measure, NEC in VLBW infants, decreased from 0.17 cases/100 VLBW patient days to 0.029, an 83% reduction, while the compliance with a standardized feeding protocol improved. Through reliable implementation of evidence-based practices, this project reduced the regional rate of NEC by 83%. A key outcome and primary driver of success was standardization across multiple NICUs, resulting in consistent application of best practices and reduction in variation.
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
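The sparse-solver side of this workflow can be illustrated on a toy problem: scipy's LSQR solves a damped least-squares system built from a random sparse "ray path" matrix, and a truncated sparse SVD returns the leading singular values of the kind used to guide regularization (the paper's method recovers the complete spectrum). The matrix, noise level, and damping are arbitrary; the graph-theoretic decomposition of the real Japan dataset is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr, svds

rng = np.random.default_rng(0)

# Toy sparse "ray path" matrix G (rows: travel-time data, cols: slowness cells).
m, n = 2000, 500
G = sp.random(m, n, density=0.01, random_state=0, format="csr")
true_model = rng.normal(size=n)
data = G @ true_model + rng.normal(scale=0.01, size=m)

# Damped least squares via LSQR (damp plays the role of simple Tikhonov regularization).
solution = lsqr(G, data, damp=0.1)[0]

# Leading part of the singular spectrum from a truncated sparse SVD.
u, s, vt = svds(G, k=20)
print("largest singular values:", np.sort(s)[::-1][:5])
print("relative model misfit:", np.linalg.norm(solution - true_model) / np.linalg.norm(true_model))
```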
Preliminary noise tradeoff study of a Mach 2.7 cruise aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.; Maglieri, D. J. (Editor); Raney, J. P. (Editor)
1979-01-01
NASA computer codes in the areas of preliminary sizing and enroute performance, takeoff and landing performance, aircraft noise prediction, and economics were used in a preliminary noise tradeoff study for a Mach 2.7 design supersonic cruise concept. Aerodynamic configuration data were based on wind-tunnel model tests and related analyses. Aircraft structural characteristics and weight were based on advanced structural design methodologies, assuming conventional titanium technology. The most advanced noise prediction techniques available were used, and aircraft operating costs were estimated using accepted industry methods. The four engine cycles included in the study were based on assumed 1985 technology levels. Propulsion data was provided by aircraft manufacturers. Additional empirical data is needed to define both noise reduction features and other operating characteristics of all engine cycles under study. Data on VCE design parameters, coannular nozzle inverted flow noise reduction, and advanced mechanical suppressors are urgently needed to reduce the present uncertainties in studies of this type.
Modeling of Alkane Oxidation Using Constituents and Species
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth G.
2010-01-01
It is currently not possible to perform simulations of turbulent reactive flows due in particular to complex chemistry, which may contain thousands of reactions and hundreds of species. This complex chemistry results in additional differential equations, making the numerical solution of the equation set computationally prohibitive. Reducing the chemical kinetics mathematical description is one of several important goals in turbulent reactive flow modeling. A chemical kinetics reduction model is proposed for alkane oxidation in air that is based on a parallel methodology to that used in turbulence modeling in the context of the Large Eddy Simulation. The objective of kinetic modeling is to predict the heat release and temperature evolution. This kinetic mechanism is valid over a pressure range from atmospheric to 60 bar, temperatures from 600 K to 2,500 K, and equivalence ratios from 0.125 to 8. This range encompasses diesel, HCCI, and gas-turbine engines, including cold ignition. A computationally efficient kinetic reduction has been proposed for alkanes that has been illustrated for n-heptane using the LLNL heptane mechanism. This model is consistent with turbulence modeling in that scales were first categorized into either those modeled or those computed as progress variables. Species were identified as being either light or heavy. The heavy species were decomposed into 13 defined constituents, and their total molar density was shown to evolve in a quasi-steady manner. The light species behave either in a quasi-steady or unsteady manner. The modeled scales are the total constituent molar density, Nc, and the molar density of the quasi-steady light species. The progress variables are the total constituent molar density rate evolution and the molar densities of the unsteady light species. The unsteady equations for the light species contain contributions of the type gain/loss rates from the heavy species that are modeled consistent with the developed mathematical forms for the total constituent molar density rate evolution; indeed, examination of these gain/loss rates shows that they also have a good quasi-steady behavior with a functional form resembling that of the constituent rate. This finding highlights the fact that the fitting technique provides a methodology that can be repeatedly used to obtain an accurate representation of full or skeletal kinetic models. Assuming success with the modified reduced model, the advantage of the modeling approach is clear. Because this model is based on the Nc rate rather than on that of individual heavy species, even if the number of species increases with increased carbon number in the alkane group, provided that the quasi-steady rate aspect persists, then extension of this model to higher alkanes should be conceptually straightforward, although it remains to be seen if the functional fits would remain valid or would require reconstruction.
Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson
2012-01-01
This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in question is a custom-made 6-DoF system called NRL66.3, developed at the Naval...
ERIC Educational Resources Information Center
Roche, Jose Manuel
2013-01-01
Important steps have been taken at international summits to set up goals and targets to improve the wellbeing of children worldwide. Now the world also has more and better data to monitor progress. This paper presents a new approach to monitoring progress in child poverty reduction based on the Alkire and Foster adjusted headcount ratio and an…
Archetype modeling methodology.
Moner, David; Maldonado, José Alberto; Robles, Montserrat
2018-03-01
Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although many experiences of using archetypes have been reported in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can also be a reference for CIM development using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.
77 FR 64808 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-23
..., special studies, or methodological studies. The average burden for these special study/pretest respondents... is requested. NHANES programs produce descriptive statistics which measure the health and nutrition...
Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems
2015-12-01
Dissertation by Sang M. Sok, December 2015; distribution is unlimited. ...importance of the CoM. The improved conceptual model methodology (ICoMM) is developed in support of improving the structure of the CoM for both face and...
Russo, Lucia; Russo, Paola; Siettos, Constantinos I.
2016-01-01
Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire. PMID:27780249
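The node-selection step can be sketched with networkx: build a lattice-like network whose edge weights stand in for the Cellular-Automata-derived spread resistances, rank nodes by betweenness centrality, and take the top-ranked patches as candidate fuel-break locations. The landscape, weights, centrality choice, and break budget below are illustrative; the paper couples the node ranking with GIS data and a Markov-field spread model.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)

# 30x30 lattice of land patches; edge weights mimic heterogeneous spread difficulty
# (higher weight = harder for fire to propagate along that edge). Values are made up.
G = nx.grid_2d_graph(30, 30)
for a, b in G.edges:
    G[a][b]["weight"] = rng.uniform(0.2, 1.0)

# Rank patches by weighted betweenness centrality: nodes that sit on many
# least-resistance paths are candidate hubs of fire propagation.
centrality = nx.betweenness_centrality(G, weight="weight")
ranked = sorted(centrality, key=centrality.get, reverse=True)

n_breaks = 30  # fuel-break "budget" (assumed)
fuel_breaks = ranked[:n_breaks]
print("top candidate fuel-break patches:", fuel_breaks[:5])
```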
Bendeck, Murielle; Serrano-Blanco, Antoni; García-Alonso, Carlos; Bonet, Pere; Jordà, Esther; Sabes-Figuera, Ramon; Salvador-Carulla, Luis
2013-04-01
Cost of illness (COI) studies are carried out under conditions of uncertainty and with incomplete information. There are concerns regarding their generalisability, accuracy and usability in evidence-informed care. A hybrid methodology is used to estimate the regional costs of depression in Catalonia (Spain) following an integrative approach. The cross-design synthesis included nominal groups and quantitative analysis of both top-down and bottom-up studies, and incorporated primary and secondary data from different sources of information in Catalonia. Sensitivity analysis used probabilistic Monte Carlo simulation modelling. A dissemination strategy was planned, including a standard form adapted from cost-effectiveness studies to summarise methods and results. The method used allows for a comprehensive estimate of the cost of depression in Catalonia. Health officers and decision-makers concluded that this methodology provided useful information and knowledge for evidence-informed planning in mental health. The mix of methods, combined with a simulation model, contributed to a reduction in data gaps and, in conditions of uncertainty, supplied more complete information on the costs of depression in Catalonia. This approach to COI should be differentiated from other COI designs to allow like-with-like comparisons. A consensus on COI typology, procedures and dissemination is needed.
Mise en oeuvre et caracterisation d'une methode d'injection de pannes a haut niveau d'abstraction [Implementation and Characterization of a Fault-Injection Method at a High Level of Abstraction]
NASA Astrophysics Data System (ADS)
Robache, Remi
Nowadays, the effects of cosmic rays on electronics are well known. Different studies have demonstrated that neutrons are the main cause of non-destructive errors in embedded circuits on airplanes. Moreover, the reduction of transistor sizes is making all circuits more sensitive to these effects. Radiation-tolerant circuits are sometimes used in order to improve the robustness of circuits. However, those circuits are expensive and their technologies often lag a few generations behind those of non-tolerant circuits. Designers prefer to use conventional circuits with mitigation techniques to improve the tolerance to soft errors. It is necessary to analyse and verify the dependability of a circuit throughout its design process, and conventional design methodologies need to be adapted in order to evaluate the tolerance to non-destructive errors caused by radiation. Designers need new tools and new methodologies to validate their mitigation strategies if they are to meet system requirements. In this thesis, we propose a new methodology that captures the faulty behavior of a circuit at a low level of abstraction and applies it at a higher level. To do so, we introduce the new concept of faulty behavior Signatures, which allows the creation, at a high level of abstraction (system level), of models that reflect with high fidelity the faulty behavior of a circuit learned at a low level of abstraction (gate level). We successfully replicated the faulty behavior of an 8-bit adder and an 8-bit multiplier in Simulink, with correlation coefficients of 98.53% and 99.86%, respectively. We also propose a methodology for generating, in Simulink, a library of faulty components that allows designers to verify the dependability of their models early in the design flow. Results obtained for three different circuits are presented and analyzed throughout the thesis. Within the framework of this project, a paper was published at the NEWCAS 2013 conference (Robache et al., 2013); it presents the concept of the faulty behavior Signature, the Signature-generation methodology we developed, and our experiments with an 8-bit multiplier.
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential. Modelling of the process requires input data about material properties and friction. In batch production mode of rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, the present work experimentally verifies a methodology to determine these parameters online from measurements of exit temperature and slip. It is observed that the inverse prediction of input parameters could be done with reasonable accuracy. It was also assessed experimentally that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is not as obvious.
NASA Astrophysics Data System (ADS)
Böttcher, J.; Jahn, M.; Tatzko, S.
2017-12-01
Pseudoelastic shape memory alloys exhibit a stress-induced phase transformation which leads to high strains during deformation of the material. The stress-strain characteristic during this thermomechanical process is hysteretic and results in the conversion of mechanical energy into thermal energy. This energy conversion allows for the use of shape memory alloys in vibration reduction. For the application of shape memory alloys as vibration damping devices, dynamic modeling of the material behavior is necessary. In this context, experimentally determined material parameters which accurately represent the material behavior are essential for a reliable material model. The subject of this publication is the definition of suitable material parameters for pseudoelastic shape memory alloys and the methodology for identifying them from experimental investigations. The test rig used was specifically designed for the characterization of pseudoelastic shape memory alloys.
Electrothermal DC characterization of GaN on Si MOS-HEMTs
NASA Astrophysics Data System (ADS)
Rodríguez, R.; González, B.; García, J.; Núñez, A.
2017-11-01
DC characteristics of AlGaN/GaN on Si single finger MOS-HEMTs, for different gate geometries, have been measured and numerically simulated with substrate temperatures up to 150 °C. Defect density, depending on gate width, and thermal resistance, depending additionally on temperature, are extracted from transfer characteristics displacement and the AC output conductance method, respectively, and modeled for numerical simulations with Atlas. The thermal conductivity degradation in thin films is also included for accurate simulation of the heating response. With an appropriate methodology, the internal model parameters for temperature dependencies have been established. The numerical simulations show a relative error lower than 4.6% overall, for drain current and channel temperature behavior, and account for the measured device temperature decrease with the channel length increase as well as with the channel width reduction, for a set bias.
Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E
2014-09-01
This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components stoichiometry and formation enthalpies, the proposed modelling methodology has integrated successfully the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, as illustrative examples of the capability of the methodology, two case studies have been described. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
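The core bookkeeping the abstract describes, combining a stoichiometric matrix with formation enthalpies so that the enthalpy change of each transformation comes out of the same matrix used for the mass balance (Hess's law), can be shown in a few lines. The component set, stoichiometric coefficients, formation enthalpies, and conversion rate below are placeholder textbook-style numbers, not values from the plant-wide model.

```python
import numpy as np

# Illustrative aerobic substrate-oxidation stoichiometry (one transformation, one row):
# columns: substrate (CH2O as a proxy), O2, CO2, H2O
components = ["CH2O", "O2", "CO2", "H2O"]
stoichiometry = np.array([[-1.0, -1.0, +1.0, +1.0]])

# Standard formation enthalpies (kJ/mol), rough values used only for the sketch.
dHf = np.array([-212.0, 0.0, -393.5, -285.8])

# Hess's law: enthalpy change of each transformation = stoichiometry @ formation enthalpies.
dH_reaction = stoichiometry @ dHf          # kJ per mole of substrate converted
rate = np.array([0.4])                     # mol/h converted (assumed)
heat_release = -dH_reaction * rate         # kJ/h released (positive = exothermic)
print(f"reaction enthalpy: {dH_reaction[0]:.1f} kJ/mol, heat released: {heat_release[0]:.1f} kJ/h")
```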
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
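One way to picture the PFA idea of propagating parameter and modeling-accuracy uncertainty through an analytical failure model is a small Monte Carlo sketch for a fatigue-life calculation. The lognormal assumptions, S-N exponent, stress distribution, required life, and sample size are illustrative and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Simple S-N fatigue life model: N_f = phi * A * S**(-b), with uncertain A, b and a
# modeling-accuracy factor phi (all distributions are illustrative only).
A = rng.lognormal(mean=np.log(1e12), sigma=0.5, size=n_samples)
b = rng.normal(loc=3.0, scale=0.1, size=n_samples)
phi = rng.lognormal(mean=0.0, sigma=0.3, size=n_samples)      # modeling-accuracy uncertainty
stress = rng.normal(loc=400.0, scale=20.0, size=n_samples)    # MPa, duty-cycle scatter

life_cycles = phi * A * stress ** (-b)

service_cycles = 5_000                                         # required mission life (assumed)
p_failure = np.mean(life_cycles < service_cycles)
print(f"estimated failure probability over {service_cycles} cycles: {p_failure:.2e}")
print("5th percentile life (cycles):", np.percentile(life_cycles, 5))
```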
Thunis, P; Degraeuwe, B; Pisoni, E; Meleux, F; Clappier, A
2017-01-01
Regional and local authorities have the obligation to design air quality plans and assess their impacts when concentration levels exceed the limit values. Because these limit values cover both short- (day) and long-term (year) effects, air quality plans also follow these two formats. In this work, we propose a methodology to analyze modeled air quality forecast results, looking at emission reduction for different sectors (residential, transport, agriculture, etc.) with the aim of supporting policy makers in assessing the impact of short-term action plans. Regarding PM10, results highlight the diversity of responses across European cities, in terms of magnitude and type that raises the necessity of designing area-specific air quality plans. Action plans extended from 1 to 3 days (i.e., emissions reductions applied for 24 and 72 h, respectively) point to the added value of trans-city coordinated actions. The largest benefits are seen in central Europe (Vienna, Prague) while major cities (e.g., Paris) already solve a large part of the problem on their own. Eastern Europe would particularly benefit from plans based on emission reduction in the residential sectors; while in northern cities, agriculture seems to be the key sector on which to focus attention. Transport is playing a key role in most cities whereas the impact of industry is limited to a few cities in south-eastern Europe. For NO2, short-term action plans focusing on traffic emission reductions are efficient in all cities. This is due to the local character of this type of pollution. It is important, however, to stress that these results remain dependent on the selected months available for this study.
Amperometric, Bipotentiometric, and Coulometric Titration.
ERIC Educational Resources Information Center
Stock, John T.
1980-01-01
Discusses recent review articles in various kinds of titration. Also discusses new research in apparatus and methodology, acid-base reactions, precipitation and complexing reactions, oxidation-reduction reactions, and nomenclature. Cites 338 references. (CS)
NASA Astrophysics Data System (ADS)
Unfried-Silgado, Jimy; Ramirez, Antonio J.
2014-03-01
This work addresses the numerical modeling and characterization of the as-welded microstructure of Ni-Cr-Fe alloys with additions of Nb, Mo, and Hf as a key to understanding their proven resistance to ductility-dip cracking. Part I deals with as-welded structure modeling, using experimental alloying ranges and the Calphad methodology. The model calculates kinetic phase transformations and partitioning of elements during weld solidification using a cooling rate of 100 K/s, considering their consequences on the solidification mode of each alloy. Calculated structures were compared with experimental observations of as-welded structures, exhibiting good agreement. The numerical calculations estimate a threefold increase in the mass fraction of primary carbide precipitates, a substantial reduction in the mass fraction of M23C6 precipitates and topologically close-packed (TCP) phases, a homogeneous intradendritic distribution, and a slight increase in the interdendritic molybdenum distribution in these alloys. The influence of the metallurgical characteristics of the modeled as-welded structures on the desirable characteristics of Ni-based alloys resistant to DDC is discussed here.
Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava
2012-03-01
Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
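The tuning loop in the abstract can be approximated with a short script: an evolutionary optimizer (scipy's differential evolution, standing in for the paper's GA) minimizes a time-weighted error index for a PID controller on a representative higher-order plant. The plant, index, and bounds are assumptions for the illustration; fractional-order terms, the Nyquist-plane model reduction, and the GP rule extraction are not reproduced.

```python
import numpy as np
from scipy import signal, optimize

# Representative higher-order test plant (an assumption, not from the paper): G(s) = 1/(s+1)^4
plant_num = [1.0]
plant_den = np.polymul(np.polymul([1, 1], [1, 1]), np.polymul([1, 1], [1, 1]))

t = np.linspace(0.0, 30.0, 3000)

def itae(params):
    """Integral of time-weighted absolute error for a unit step; PID C(s) = (kd s^2 + kp s + ki)/s."""
    kp, ki, kd = params
    open_num = np.polymul([kd, kp, ki], plant_num)
    open_den = np.polymul([1.0, 0.0], plant_den)
    closed = (open_num, np.polyadd(open_den, open_num))   # closed loop T = L / (1 + L)
    _, y = signal.step(closed, T=t)
    error = np.abs(1.0 - y)
    return float(np.sum(t * error) * (t[1] - t[0]))

# Differential evolution stands in for the genetic algorithm used in the paper.
result = optimize.differential_evolution(itae, bounds=[(0.01, 5.0)] * 3, seed=1, maxiter=40)
kp, ki, kd = result.x
print(f"tuned PID (sketch): Kp={kp:.3f}, Ki={ki:.3f}, Kd={kd:.3f}, ITAE={result.fun:.3f}")
```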
Hogrefe, Christian; Isukapalli, Sastry S.; Tang, Xiaogang; Georgopoulos, Panos G.; He, Shan; Zalewsky, Eric E.; Hao, Winston; Ku, Jia-Yeong; Key, Tonalee; Sistla, Gopal
2011-01-01
The role of emissions of volatile organic compounds and nitric oxide from biogenic sources is becoming increasingly important in regulatory air quality modeling as levels of anthropogenic emissions continue to decrease and stricter health-based air quality standards are being adopted. However, considerable uncertainties still exist in the current estimation methodologies for biogenic emissions. The impact of these uncertainties on ozone and fine particulate matter (PM2.5) levels for the eastern United States was studied, focusing on biogenic emissions estimates from two commonly used biogenic emission models, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the Biogenic Emissions Inventory System (BEIS). Photochemical grid modeling simulations were performed for two scenarios: one reflecting present day conditions and the other reflecting a hypothetical future year with reductions in emissions of anthropogenic oxides of nitrogen (NOx). For ozone, the use of MEGAN emissions resulted in a higher ozone response to hypothetical anthropogenic NOx emission reductions compared with BEIS. Applying the current U.S. Environmental Protection Agency guidance on regulatory air quality modeling in conjunction with typical maximum ozone concentrations, the differences in estimated future year ozone design values (DVF) stemming from differences in biogenic emissions estimates were on the order of 4 parts per billion (ppb), corresponding to approximately 5% of the daily maximum 8-hr ozone National Ambient Air Quality Standard (NAAQS) of 75 ppb. For PM2.5, the differences were 0.1–0.25 μg/m3 in the summer total organic mass component of DVFs, corresponding to approximately 1–2% of the value of the annual PM2.5 NAAQS of 15 μg/m3. Spatial variations in the ozone and PM2.5 differences also reveal that the impacts of different biogenic emission estimates on ozone and PM2.5 levels are dependent on ambient levels of anthropogenic emissions. PMID:21305893
Finance is good for the poor but it depends where you live
Rewilak, Johan
2013-01-01
I examine whether or not the incomes of the poor systematically grow with average incomes, and whether financial development enhances the incomes of the poorest quintile. Following the methodology of Dollar and Kraay (2002), I find that, once Dollar and Kraay’s data are extended, their findings are robust to the Lucas critique and that economic growth is important for poverty reduction universally. However, in comparison to other authors’ work, I show that financial development aids the incomes of the poor in certain regions, whilst it may be detrimental in others. This provides evidence against a “one size fits all” model, adding a further contribution to the literature on financial development and poverty. PMID:23805027
Algebra for Enterprise Ontology: towards analysis and synthesis of enterprise models
NASA Astrophysics Data System (ADS)
Suga, Tetsuya; Iijima, Junichi
2018-03-01
Enterprise modeling methodologies have made enterprises more likely to be the object of systems engineering rather than craftsmanship. However, the current state of research in enterprise modeling methodologies lacks investigations of the mathematical background embedded in these methodologies. Abstract algebra, a broad subfield of mathematics, and the study of algebraic structures may provide interesting implications in both theory and practice. Therefore, this research takes up the challenge of establishing an algebraic structure for one aspect model proposed in Design & Engineering Methodology for Organizations (DEMO), a major enterprise modeling methodology in the spotlight as a modeling principle for capturing the skeleton of enterprises when developing enterprise information systems. The results show that the aspect model behaves well under the algebraic operations and indeed forms a Boolean algebra. This article also discusses comparisons with other modeling languages and suggests future work.
Mechatronics by Analogy and Application to Legged Locomotion
NASA Astrophysics Data System (ADS)
Ragusila, Victor
A new design methodology for mechatronic systems, dubbed as Mechatronics by Analogy (MbA), is introduced and applied to designing a leg mechanism. The new methodology argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also studied. A series of simulations show that the dynamic behaviour of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently. The first stage of Mechatronics by Analogy is a method of extracting significant features of system dynamics through simpler models. The goal is to determine a set of simpler mechanisms with similar dynamic behaviour to that of the original system in various phases of its motion. A modular bond-graph representation of the system is determined, and subsequently simplified using two simplification algorithms. The first algorithm determines the relevant dynamic elements of the system for each phase of motion, and the second algorithm finds the simple mechanism described by the remaining dynamic elements. In addition to greatly simplifying the controller for the system, using simpler mechanisms with similar behaviour provides a greater insight into the dynamics of the system. This is seen in the second stage of the new methodology, which concurrently optimizes the simpler mechanisms together with a control system based on their dynamics. Once the optimal configuration of the simpler system is determined, the original mechanism is optimized such that its dynamic behaviour is analogous. It is shown that, if this analogy is achieved, the control system designed based on the simpler mechanisms can be directly implemented to the more complex system, and their dynamic behaviours are close enough for the system performance to be effectively the same. Finally it is shown that, for the employed objective of fast legged locomotion, the proposed methodology achieves a better design than Reduction-by-Feedback, a competing methodology that uses control layers to simplify the dynamics of the system.
Joint PET-MR respiratory motion models for clinical PET motion correction
NASA Astrophysics Data System (ADS)
Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David
2016-09-01
Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
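The surrogate-plus-correspondence-model construction can be sketched in a few lines: a principal-component score extracted from framed low-resolution PET-like data acts as the respiratory surrogate, and a linear or polynomial regression then maps that score to motion measured in dynamic MR. The synthetic signals below replace the real PET frames and MR-derived displacements, and the model forms are only two of the variants compared in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-ins: 120 low-resolution "PET frames" (flattened, 256 voxels each)
# whose intensity pattern varies with a quasi-periodic breathing signal.
n_frames, n_voxels = 120, 256
breathing = np.sin(np.linspace(0, 12 * np.pi, n_frames)) + 0.1 * rng.normal(size=n_frames)
template = rng.random(n_voxels)
frames = np.outer(breathing, template) + 0.05 * rng.normal(size=(n_frames, n_voxels))

# Surrogate signal: first principal-component score of the framed data.
surrogate = PCA(n_components=1).fit_transform(frames).ravel()

# "MR-measured" superior-inferior displacement (mm) for the same frames (synthetic).
displacement = 8.0 * breathing + 0.3 * rng.normal(size=n_frames)

# Correspondence models: 1D linear and 1D polynomial regressions of displacement on the surrogate.
linear_model = LinearRegression().fit(surrogate.reshape(-1, 1), displacement)
poly_model = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(
    surrogate.reshape(-1, 1), displacement)

print("linear-model R^2:", linear_model.score(surrogate.reshape(-1, 1), displacement))
print("poly-model   R^2:", poly_model.score(surrogate.reshape(-1, 1), displacement))
```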
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas by zinc or uranium metal for determining the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from use of the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrated that the Zn-reduction method could be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
Avadí, Angel; Fréon, Pierre; Tam, Jorge
2014-01-01
Sustainability assessment of food supply chains is relevant for global sustainable development. A framework is proposed for analysing fishfood (fish products for direct human consumption) supply chains with local or international scopes. It combines a material flow model (including an ecosystem dimension) of the supply chains, calculation of sustainability indicators (environmental, socio-economic, nutritional), and finally multi-criteria comparison of alternative supply chains (e.g. fates of landed fish) and future exploitation scenarios. The Peruvian anchoveta fishery is the starting point for various local and global supply chains, especially via reduction of anchoveta into fishmeal and oil, used worldwide as a key input in livestock and fish feeds. The Peruvian anchoveta supply chains are described, and the proposed methodology is used to model them. Three scenarios were explored: status quo of fish exploitation (Scenario 1), increase in anchoveta landings for food (Scenario 2), and radical decrease in total anchoveta landings to allow other fish stocks to prosper (Scenario 3). It was found that Scenario 2 provided the best balance of sustainability improvements among the three scenarios, but further refinement of the assessment is recommended. In the long term, the best opportunities for improving the environmental and socio-economic performance of Peruvian fisheries are related to sustainability-improving management and policy changes affecting the reduction industry. Our approach provides the tools and quantitative results to identify these best improvement opportunities. PMID:25003196
78 FR 65652 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
..., special studies, or methodological studies (see line 2 of Burden Table). Participation in NHANES is... examination. This information is designed to better understand sodium intake and provide a population baseline...
SRB ascent aerodynamic heating design criteria reduction study, volume 1
NASA Technical Reports Server (NTRS)
Crain, W. K.; Frost, C. L.; Engel, C. D.
1989-01-01
An independent set of solid rocket booster (SRB) convective ascent design environments was produced to serve as a check on the Rockwell IVBC-3 environments used to design the ascent phase of flight. In addition, support was provided for lowering the design environments so that Thermal Protection System (TPS), based on conservative estimates, could be removed, leading to a reduction in SRB refurbishment time and cost. Ascent convective heating rates and loads were generated at locations on the SRB where lowering the thermal environment would impact the TPS design. The ascent thermal environments are documented along with the wind tunnel/flight test database used, as well as the trajectory and environment-generation methodology. The methodology, together with environment summaries compared with the 1980 Design and Rockwell IVBC-3 Design Environments, is presented in this volume (Volume 1).
Surface composition of Mercury from reflectance spectrophotometry
NASA Technical Reports Server (NTRS)
Vilas, Faith
1988-01-01
The controversies surrounding the existing spectra of Mercury are discussed together with the various implications for interpretations of Mercury's surface composition. Special attention is given to the basic procedure used for reducing reflectance spectrophotometry data, the factors that must be accounted for in the reduction of these data, and the methodology for defining the portion of the surface contributing the greatest amount of light to an individual spectrum. The application of these methodologies to Mercury's spectra is presented.
NASA Astrophysics Data System (ADS)
Tesfamichael, Aklilu A.; Caplan, Arthur J.; Kaluarachchi, Jagath J.
2005-05-01
This study provides an improved methodology for investigating the trade-offs between the health risks and economic benefits of using atrazine in the agricultural sector by incorporating public attitudes toward pesticide management in the analysis. Regression models are developed to predict finished-water atrazine concentrations in high-risk community water supplies in the United States. The predicted finished-water atrazine concentrations are then used in a health risk assessment. The computed health risks are compared with the total economic surplus in the U.S. corn market for different atrazine application rates, using estimated demand and supply functions developed in this work. Analysis of different scenarios with consumer price premiums for chemical-free and reduced-chemical corn indicates that if society is willing to pay a price premium, risks can be reduced without a large reduction in total economic surplus, and net benefits may be higher. The results also show that this methodology provides an improved scientific framework for future decision making and policy evaluation in pesticide management.
Usability-driven evolution of a space instrument
NASA Astrophysics Data System (ADS)
McCalden, Alec
2012-09-01
The use of resources in the cradle-to-grave timeline of a space instrument might be significantly improved by considering the concept of usability from the start of the mission. The methodology proposed here includes giving early priority in a programme to the iterative development of a simulator that models instrument operation, and allowing this to evolve ahead of the actual instrument specification and fabrication. The advantages include reduction of risk in software development by shifting much of it to earlier in a programme than is typical, plus a test programme that uses and thereby proves the same support systems that may be used for flight. A new development flow for an instrument is suggested, showing how the system engineering phases used by the space agencies could be reworked in line with these ideas. This methodology is also likely to contribute to a better understanding between the various disciplines involved in the creation of a new instrument. The result should better capture the science needs, implement them more accurately with less wasted effort, and more fully allow the best ideas from all team members to be considered.
A method for estimating Dekkera/Brettanomyces populations in wines.
Benito, S; Palomero, F; Morata, A; Calderón, F; Suárez-Lepe, J A
2009-05-01
The formation of ethylphenols in wines, a consequence of Dekkera/Brettanomyces metabolism, can affect their quality. The main aims of this work were to further our knowledge of Dekkera/Brettanomyces with respect to ethylphenol production, and to develop a methodology for detecting this spoilage yeast and for estimating its population size in wines using differential-selective media and high performance liquid chromatography (HPLC). This work examines the reduction of p-coumaric acid and the formation of 4-vinylphenol and 4-ethylphenol (recorded by HPLC-DAD) in a prepared medium as a result of the activities of different yeast species and population sizes. A regression model was constructed for estimating the population of Dekkera/Brettanomyces at the beginning of fermentation via the conversion of hydroxycinnamic acids into ethylphenols. The proposed methodology allows the population of Dekkera/Brettanomyces at the beginning of fermentation to be estimated in problem wines. Moreover, it avoids false positives caused by yeasts resistant to the effects of the selective elements of the medium. This may help prevent the appearance of organoleptic anomalies in wines at the winery level.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
To publish or not to publish? On the aggregation and drivers of research performance
De Witte, Kristof
2010-01-01
This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and for individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influence of a wide variety of factors outside the evaluated researcher's control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researchers. This allows us to explore whether motivation and background characteristics (such as age, gender, and retention) of the researchers explain variations in measured research performance. PMID:21057573
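A minimal sketch of the endogenous-weighting idea is given below as a "benefit-of-the-doubt" DEA linear programme solved with SciPy: each researcher receives the output weights most favourable to his or her own aggregate score, subject to no researcher exceeding a score of 1 under those weights. The output dimensions and data values are invented, and the paper's corrections for environmental factors are not reproduced.

```python
# Benefit-of-the-doubt DEA sketch (illustrative, not the paper's exact tailored model).
import numpy as np
from scipy.optimize import linprog

# Rows: researchers, columns: output dimensions (e.g. journal papers, books, PhD supervision).
Y = np.array([
    [4.0, 1.0, 2.0],
    [1.0, 3.0, 0.0],
    [2.0, 2.0, 1.0],
    [0.5, 0.5, 4.0],
])

scores = []
for i in range(Y.shape[0]):
    # maximise Y[i] @ w  <=>  minimise -Y[i] @ w
    res = linprog(c=-Y[i],
                  A_ub=Y, b_ub=np.ones(Y.shape[0]),   # every unit scores at most 1
                  bounds=[(0, None)] * Y.shape[1],
                  method="highs")
    scores.append(-res.fun)

for i, s in enumerate(scores):
    print(f"researcher {i}: efficiency score = {s:.3f}")
```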
Zhu, Zhi-Liang; Stackpoole, Sarah
2011-01-01
The Energy Independence and Security Act of 2007 (EISA) requires the U.S. Department of the Interior (DOI) to develop a methodology and conduct an assessment of carbon storage, carbon sequestration, and greenhouse-gas (GHG) fluxes in the Nation's ecosystems. The U.S. Geological Survey (USGS) has developed and published the methodology (U.S. Geological Survey Scientific Investigations Report 2010-5233) and has assembled an interdisciplinary team of scientists to conduct the assessment over the next three to four years, commencing in October 2010. The assessment will fulfill specific requirements of the EISA by (1) quantifying, measuring, and monitoring carbon sequestration and GHG fluxes using national datasets and science tools such as remote sensing, and biogeochemical and hydrological models, (2) evaluating a range of management and restoration activities for their effects on carbon-sequestration capacity and the reduction of GHG fluxes, and (3) assessing effects of climate change and other controlling processes (including wildland fires) on carbon uptake and GHG emissions from ecosystems.
A Monte Carlo analysis of breast screening randomized trials.
Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M
2016-12-01
To analyze breast screening randomized trials with a Monte Carlo simulation tool, a simulation tool previously developed to simulate breast screening programmes was adapted for that purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained by screening detection relative to symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which permitted an analysis of their external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction of breast cancer mortality of around 20% appears to be a reasonable value according to the results of the trials that are methodologically sound. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems. The Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool for investigating breast screening randomized controlled trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, and to design trial strategies and, eventually, adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
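The toy Monte Carlo sketch below conveys the flavour of simulating a two-arm screening trial: screen detection grants a lead time that lowers case fatality, and a mortality relative risk is estimated across many simulated women. The sensitivity, fatality and benefit parameters are invented for illustration and do not come from the simulation tool used in the study.

```python
# Toy Monte Carlo sketch of a two-arm screening trial (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

n_per_arm = 200_000        # women per arm with an eventual cancer diagnosis
sensitivity = 0.85         # probability screening finds the tumour before symptoms
base_fatality = 0.25       # probability of dying from the cancer if symptom-detected
benefit_per_year = 0.04    # assumed absolute risk reduction per year of detection lead time

# Control arm: all cancers are symptom-detected.
deaths_control = rng.random(n_per_arm) < base_fatality

# Screened arm: screen-detected cancers gain a lead time (years) drawn from a lognormal.
screen_detected = rng.random(n_per_arm) < sensitivity
lead_time = rng.lognormal(mean=0.3, sigma=0.5, size=n_per_arm)
fatality = np.where(screen_detected,
                    np.clip(base_fatality - benefit_per_year * lead_time, 0.02, None),
                    base_fatality)
deaths_screened = rng.random(n_per_arm) < fatality

rr = deaths_screened.mean() / deaths_control.mean()
print(f"simulated mortality relative risk (screened vs control): {rr:.3f}")
```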
Mackenzie, S G; Leinonen, I; Ferguson, N; Kyriazakis, I
2016-05-28
The objective of this study was to develop a novel methodology that enables pig diets to be formulated explicitly for environmental impact objectives using a Life Cycle Assessment (LCA) approach. To achieve this, the following methodological issues had to be addressed: (1) accounting for environmental impacts caused by both ingredient choice and nutrient excretion, (2) formulating diets for multiple environmental impact objectives and (3) allowing flexibility to identify the optimal nutritional composition for each environmental impact objective. An LCA model based on Canadian pig farms was integrated into a diet formulation tool to compare the use of different ingredients in Eastern and Western Canada. By allowing the feed energy content to vary, it was possible to identify the optimum energy density for different environmental impact objectives, while accounting for the expected effect of energy density on feed intake. A least-cost diet was compared with diets formulated to minimise the following objectives: non-renewable resource use, acidification potential, eutrophication potential, global warming potential and a combined environmental impact score (using these four categories). The resulting environmental impacts were compared using parallel Monte Carlo simulations to account for shared uncertainty. When optimising diets to minimise a single environmental impact category, reductions in that category were observed in all cases. However, this came at the expense of increased impacts in other categories and higher dietary costs. The methodology can identify nutritional strategies to minimise environmental impacts, such as increasing the nutritional density of the diets, compared with the least-cost formulation.
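The following sketch shows, in the simplest possible form, how a diet can be formulated against an environmental objective with a linear programme: minimise the feed's global warming potential subject to nutrient constraints. Ingredient names, impact factors and nutrient coefficients are assumed values, not the study's Canadian LCA data, and the real tool also handles nutrient excretion and variable energy density.

```python
# Minimal environmental-objective diet formulation sketch (assumed coefficients).
import numpy as np
from scipy.optimize import linprog

ingredients = ["corn", "soybean meal", "canola meal", "synthetic lysine"]
gwp    = np.array([0.45, 0.75, 0.50, 3.00])     # kg CO2e per kg ingredient (assumed)
energy = np.array([14.0, 12.5, 11.0, 0.0])      # MJ NE per kg (assumed)
lysine = np.array([0.003, 0.028, 0.020, 0.98])  # kg digestible lysine per kg (assumed)

# Minimise the GWP of 1 kg of feed subject to minimum energy and lysine contents.
res = linprog(
    c=gwp,
    A_ub=np.vstack([-energy, -lysine]),        # energy >= 13 MJ/kg, lysine >= 0.010 kg/kg
    b_ub=np.array([-13.0, -0.010]),
    A_eq=np.ones((1, len(ingredients))),       # ingredient fractions sum to 1 kg
    b_eq=np.array([1.0]),
    bounds=[(0, 1)] * len(ingredients),
    method="highs",
)
for name, frac in zip(ingredients, res.x):
    print(f"{name:16s} {frac:.3f} kg/kg")
print(f"diet GWP: {gwp @ res.x:.3f} kg CO2e per kg feed")
```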
Lu, Pei-Lin; Lai, Jui-Yang; Tabata, Yasuhiko; Hsiue, Ging-Ho
2008-07-01
In this study, a novel methodology based on the anterior chamber of rabbit eyes model was developed to evaluate the in vivo biocompatibility of biomaterials in an immune privileged site. The 7-mm-diameter membrane implants made from either a biological tissue material (amniotic membrane, AM group) or a biomedical polymeric material (gelatin, GM group) were inserted in rabbit anterior chamber for 36 months and characterized by biomicroscopic examinations, intraocular pressure measurements, and corneal thickness measurements. The noninvasive ophthalmic parameters were scored to provide a quantitative grading system. In this animal model, both AM and GM implants were visible in an ocular immune privileged site during clinical observations. The implants of the AM group appeared as soft tissue patches and have undergone a slow dissolution process resulting in a partial reduction of their size. Additionally, the AM implants did not induce any foreign body reaction or change in ocular tissue response for the studied period. By contrast, in the GM groups, significant corneal edema, elevated intraocular pressure, and increased corneal thickness were noted in the early postoperative phase (within 3 days), but resolved rapidly with in vivo dissolution of the gelatin. The results from the ocular grading system showed that both implants had good long-term biocompatibility in an ocular immune privileged site for up to 3 years. It is concluded that the anterior chamber of rabbit eyes model is an efficient method for noninvasively determining the immune privileged tissue/biomaterial interactions. (c) 2007 Wiley Periodicals, Inc.
Investigation of Error Patterns in Geographical Databases
NASA Technical Reports Server (NTRS)
Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)
2002-01-01
The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification and categorization problems. Many studies show that when the data are complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.
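As a hedged illustration of the ANN component, the sketch below trains a small multilayer perceptron (scikit-learn) to reproduce a synthetic elevation-error signal from terrain descriptors; the feature set, error model and network size are assumptions rather than the project's actual ASMD/DEM setup.

```python
# Minimal sketch: learn an elevation-error pattern from terrain features (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 5000
elevation = rng.uniform(0, 3000, n)     # m
slope     = rng.uniform(0, 40, n)       # degrees
roughness = rng.uniform(0, 1, n)        # unitless terrain roughness index

# Synthetic "database minus DEM" error: grows with slope and roughness (assumption).
error_m = 0.5 + 0.08 * slope + 6.0 * roughness + rng.normal(0, 1.0, n)

X = np.column_stack([elevation, slope, roughness])
X_train, X_test, y_train, y_test = train_test_split(X, error_m, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print(f"R^2 on held-out cells: {model.score(X_test, y_test):.3f}")
```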
Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar
2014-11-01
Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of the bone, which can cause thermal osteonecrosis resulting in an increase in healing time or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determination of the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for the online condition monitoring of the process. © IMechE 2014.
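A small sketch of the Taguchi "smaller-the-better" analysis referred to above is given below: signal-to-noise ratios are computed per factor level and the level with the highest S/N is selected. The runs table is synthetic and covers only two of the four factors; the fuzzy prediction model is not reproduced here.

```python
# Taguchi "smaller-the-better" analysis sketch (synthetic data, two factors only).
import numpy as np

# Columns: feed-rate level (1-3), cutting-speed level (1-3); last column: measured temperature (C).
runs = np.array([
    [1, 1, 41.0], [1, 2, 44.5], [1, 3, 48.0],
    [2, 1, 43.0], [2, 2, 46.5], [2, 3, 50.5],
    [3, 1, 45.5], [3, 2, 49.0], [3, 3, 54.0],
])

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response."""
    return -10.0 * np.log10(np.mean(np.asarray(y, dtype=float) ** 2))

for col, name in [(0, "feed rate"), (1, "cutting speed")]:
    best = max(range(1, 4), key=lambda lvl: sn_smaller_is_better(runs[runs[:, col] == lvl, 2]))
    print(f"best {name} level (highest S/N): {best}")
```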
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analysis of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it is effective in providing support for the management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
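The sketch below illustrates the core null-space Monte Carlo step with a toy Jacobian: random parameter sets are projected onto the calibration null space and recombined with the calibrated parameters so that, to first order, the observations are still honoured. Matrix sizes and the standard-normal prior are assumptions for illustration only.

```python
# Minimal null-space Monte Carlo sketch (numpy only, toy Jacobian).
import numpy as np

rng = np.random.default_rng(3)

n_par, n_obs = 12, 5
J = rng.normal(size=(n_obs, n_par))     # sensitivity (Jacobian) of observations to parameters
p_calibrated = rng.normal(size=n_par)   # calibrated parameter values

# Solution space = right singular vectors with non-negligible singular values;
# the remaining vectors span the (approximate) null space.
_, s, vt = np.linalg.svd(J)
n_solution = np.sum(s > 1e-8 * s[0])
V_null = vt[n_solution:].T              # shape (n_par, n_par - n_solution)

def nsmc_sample():
    p_random = rng.normal(size=n_par)                    # draw from the prior (assumed standard normal)
    delta = p_random - p_calibrated
    return p_calibrated + V_null @ (V_null.T @ delta)    # keep only the null-space component

samples = np.array([nsmc_sample() for _ in range(1000)])
# First-order check: projected sets should reproduce the calibrated observations.
print("max |J (p - p_cal)| over samples:",
      np.abs((samples - p_calibrated) @ J.T).max())
```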
Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K
2017-05-01
Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to high. However, given the lack of dietary simulation reporting guidelines, future work could refine the quality tool to harmonize consistency in the reporting of subsequent dietary modeling studies. © 2017 American Society for Nutrition.
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.
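A minimal first-order (linear) data-worth sketch consistent with the analysis described above is shown below: the posterior variance of a prediction is computed with and without candidate observation groups, so their relative worth can be compared. The covariances, sensitivities and noise level are toy values, not those of the vadose-zone model.

```python
# Linear (first-order) data-worth sketch: variance of a prediction before and after data.
import numpy as np

rng = np.random.default_rng(4)
n_par = 6

C = np.diag(rng.uniform(0.5, 2.0, n_par))   # prior parameter covariance (assumed diagonal)
y = rng.normal(size=(1, n_par))             # sensitivity of the prediction to parameters

def predictive_variance(X, obs_noise_var):
    """Posterior variance of the prediction given observations with sensitivities X."""
    if X is None:                           # no data: prior predictive variance
        return (y @ C @ y.T).item()
    R = obs_noise_var * np.eye(X.shape[0])
    gain = y @ C @ X.T @ np.linalg.inv(X @ C @ X.T + R)
    return (y @ C @ y.T - gain @ X @ C @ y.T).item()

X_heads = rng.normal(size=(4, n_par))       # four head observations
X_et    = rng.normal(size=(2, n_par))       # two ET observations

print(f"prior variance     : {predictive_variance(None, None):.3f}")
print(f"+ head data        : {predictive_variance(X_heads, 0.05):.3f}")
print(f"+ head and ET data : {predictive_variance(np.vstack([X_heads, X_et]), 0.05):.3f}")
```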
NASA Astrophysics Data System (ADS)
Kanta, L.; Berglund, E. Z.
2015-12-01
Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
Odor modeling methodology for determining the odor buffer distance for sanitary landfills
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Dukman.
1991-01-01
The objective of this study is to create a methodology whereby reductions in off-site odor migration resulting from operational and design changes in new or expanded sanitary landfills can be evaluated. The Ann Arbor Sanitary Landfill was chosen as a prototype landfill to test a hypothesis for this study. This study is a unique approach to odor prediction at sanitary landfills using surface flux measurements, odor threshold panel measurements, and dispersion modeling. Flux measurements were made at the open tipping face, temporary cover, final cover, vents, and composting zones of the Ann Arbor Sanitary Landfill. Surface gas velocities and in-ground concentrations were determined to allow a quantification of the total and methane gas flow rates. Odor threshold panel measurements were performed to determine the odor intensity in odor units at the corresponding sites. The flux and odor panel measurements were then used in the Industrial Source Complex Terrain Model to determine the hourly averaged highest and second-highest odor levels at 175 receptors placed at the property boundary and 25 nearby residential locations. Using measured values for velocity, subsurface CH4 concentration and odor intensity, it was determined that the proposed 1990 operations with a buffer distance of 600 feet provided at least a factor of five protection below 1 o.u. of the odor threshold for all receptors, and dilution protection equal to that of the historic 1984 operations with a 1,200 foot isolation distance.
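To make the dispersion step concrete, the toy sketch below evaluates a ground-reflected Gaussian plume in odour units at a receptor; the emission rate, release height, wind speed and dispersion coefficients are assumed round numbers, whereas the study itself used the Industrial Source Complex Terrain Model with measured fluxes and panel-derived odour intensities.

```python
# Toy Gaussian-plume sketch of the dispersion step (illustrative constants only).
import numpy as np

def plume_concentration(q_ou_per_s, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume; returns concentration in odour units per m^3."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q_ou_per_s / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Assumed values: 1e3 odour units per second emitted at 2 m height, 3 m/s wind,
# receptor on the plume centreline at nose height, with dispersion coefficients
# roughly representative of a ~180 m (600 ft) downwind distance.
c = plume_concentration(q_ou_per_s=1e3, u=3.0, y=0.0, z=1.5, h=2.0,
                        sigma_y=20.0, sigma_z=10.0)
print(f"odour level at the buffer distance: {c:.2f} o.u. (threshold = 1 o.u.)")
```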
Overview of Sustainability Studies of CNC Machining and LAM of Stainless Steel
NASA Astrophysics Data System (ADS)
Nyamekye, Patricia; Leino, Maija; Piili, Heidi; Salminen, Antti
Laser additive manufacturing (LAM), also known as 3D printing, is a powder bed fusion (PBF) type of additive manufacturing (AM) technology used to fabricate metal parts from metal powder. The development of the technology from building prototype parts to functional parts has increased remarkably in the 2000s. LAM of metals is a promising technology that offers new opportunities for manufacturing and for resource efficiency. However, there are only a few published articles about its sustainability. The aim of this study was to create supply chain models of LAM and CNC machining and to create a methodology for carrying out life cycle inventory (LCI) data collection for these techniques. The methodology of the study was literature review and scenario modelling. The acquisition of raw material, the production phase and transportation were used as the basis of comparison. The modelled scenarios were fictitious and created for industries, such as aviation and healthcare, that often require swift delivery as well as customized parts. The results of this study showed that the use of LAM offers a possibility to reduce downtime in supply chains of spare parts and to reduce part inventories more effectively than CNC machining. LAM can also shorten the gap between customers and business, thus offering a possibility to reduce emissions through less transportation. The results also indicated a weight-reduction possibility with LAM due to optimized part geometry, which allows a smaller amount of metallic powder to be used in making parts.
NASA Astrophysics Data System (ADS)
Banfi, F.
2017-08-01
The Architecture, Engineering and Construction (AEC) industry is facing a major process re-engineering of the management procedures for new constructions, and recent studies show a significant increase in the benefits obtained through the use of Building Information Modelling (BIM) methodologies. This innovative approach needs new developments in information and communication technologies (ICT) in order to improve cooperation and interoperability among different actors and scientific disciplines. Accordingly, BIM could be described as a new tool capable of collecting and analysing a great quantity of information (big data) and improving the management of a building during its life cycle (LC). The main aim of this research is, in addition to reducing production times and physical and financial resources (economic impact), to demonstrate how technology development can support a complex generative process with new digital tools (modelling impact). This paper reviews recent BIMs of different historical Italian buildings, such as the Basilica of Collemaggio in L'Aquila, Masegra Castle in Sondrio, the Basilica of Saint Ambrose in Milan and the Visconti Bridge in Lecco, and carries out a methodological analysis to optimize output information and results by combining different data and modelling techniques into a single hub (cloud service) through the use of new Grades of Generation (GoG) and Information (GoI) (management impact). Finally, this study shows the need to orient GoG and GoI for different types of analysis, which require a high Grade of Accuracy (GoA) and an Automatic Verification System (AVS) at the same time.
Comparing estimates of EMEP MSC-W and UFORE models in air pollutant reduction by urban trees.
Guidolotti, Gabriele; Salviato, Michele; Calfapietra, Carlo
2016-10-01
There is growing interest in identifying and quantifying the benefits provided by the presence of trees in the urban environment in order to improve environmental quality in cities. However, evaluating and estimating the efficiency of plants in removing atmospheric pollutants is rather complicated, because of the high number of factors involved and the difficulty of estimating the effect of the interactions between the different components. In this study, the EMEP MSC-W model was adapted to scale down to tree level, allowing its application to an industrial-urban green area in Northern Italy. Moreover, the annual outputs were compared with the outputs of UFORE (nowadays i-Tree), a leading model for urban forest applications. Although the EMEP MSC-W model and UFORE are semi-empirical models designed for different applications, the comparison, based on O3, NO2 and PM10 removal, showed good agreement in the estimates and highlights how the down-scaling methodology presented in this study may offer significant opportunities for further development.
Impact of future warming on winter chilling in Australia.
Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E W R
2013-05-01
Increases in temperature as a result of anthropogenically generated greenhouse gas (GHG) emissions are likely to impact key aspects of horticultural production. The potential effect of higher temperatures on fruit and nut trees' ability to break winter dormancy, which requires exposure to winter chilling temperatures, was considered. Three chill models (the 0-7.2°C, Modified Utah, and Dynamic models) were used to investigate changes in chill accumulation at 13 sites across Australia according to localised temperature change related to 1, 2 and 3°C increases in global average temperatures. This methodology avoids reliance on outcomes of future GHG emission pathways, which vary and are likely to change. Regional impacts and rates of decline in chilling differ among the chill models, with the 0-7.2°C model indicating the greatest reduction and the Dynamic model the slowest rate of decline. Elevated and high latitude eastern Australian sites were the least affected while the three more maritime, less elevated Western Australian locations were shown to bear the greatest impact from future warming.
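A sketch of the simplest of the three chill metrics (the 0-7.2 °C chill-hours model) is given below, applied to synthetic hourly temperatures with and without a uniform +2 °C shift; the Modified Utah and Dynamic models accumulate chill in the same way but with temperature-dependent weights that are not reproduced here.

```python
# Minimal 0-7.2 °C chill-hours sketch on synthetic hourly winter temperatures.
import numpy as np

def chill_hours_0_to_7_2(hourly_temps_c):
    """Count hours with temperature between 0 and 7.2 °C over the season."""
    t = np.asarray(hourly_temps_c, dtype=float)
    return int(np.sum((t >= 0.0) & (t <= 7.2)))

rng = np.random.default_rng(5)
winter = rng.normal(loc=8.0, scale=5.0, size=24 * 90)   # 90 days of synthetic hourly temps
warmer = winter + 2.0                                   # same winter under +2 °C warming

print("baseline chill hours:", chill_hours_0_to_7_2(winter))
print("+2 °C chill hours   :", chill_hours_0_to_7_2(warmer))
```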
NASA Astrophysics Data System (ADS)
Ganot, Yonatan; Holtzman, Ran; Weisbrod, Noam; Nitzan, Ido; Katz, Yoram; Kurtzman, Daniel
2017-09-01
We study the relation between surface infiltration and groundwater recharge during managed aquifer recharge (MAR) with desalinated seawater in an infiltration pond, at the Menashe site that overlies the northern part of the Israeli Coastal Aquifer. We monitor infiltration dynamics at multiple scales (up to the scale of the entire pond) by measuring the ponding depth, sediment water content and groundwater levels, using pressure sensors, single-ring infiltrometers, soil sensors, and observation wells. During a month (January 2015) of continuous intensive MAR (2.45 × 10^6 m^3 discharged to a 10.7 ha area), the groundwater level rose by 17 m, attaining full connection with the pond, while average infiltration rates declined by almost 2 orders of magnitude (from ~11 to ~0.4 m d^-1). This reduction can be explained solely by the lithology of the unsaturated zone, which includes relatively low-permeability sediments. Clogging processes at the pond surface, abundant in many MAR operations, are negated by the high-quality desalinated seawater (turbidity ~0.2 NTU, total dissolved solids ~120 mg L^-1) or are negligible compared to the low-permeability layers. Recharge during infiltration was estimated reasonably well by simple analytical models, whereas a numerical model was used for estimating groundwater recharge after the end of infiltration. It was found that a calibrated numerical model with a one-dimensional representative sediment profile is able to capture MAR dynamics, including the temporal reduction of infiltration rates, drainage and groundwater recharge. Measured infiltration rates from an independent MAR event (January 2016) fitted well to those calculated by the calibrated numerical model, showing the model's validity. The successful quantification methodologies of the temporal groundwater recharge are useful for MAR practitioners and can serve as input for groundwater flow models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pyle, Moira L.; Walter, William R.; Pasyanos, Michael E.
2017-10-24
Here, we develop high-resolution, laterally varying attenuation models for the regional crustal phases of Pg and Lg in the area surrounding the Basin and Range Province in the western United States. The models are part of the characterization effort for the Source Physics Experiment (SPE), a series of chemical explosions at the Nevada National Security Site designed to improve our understanding of explosion source phenomenology. To aid in SPE modeling efforts, we focus on improving our ability to accurately predict amplitudes in a set of narrow frequency bands ranging from 0.5 to 16.0 Hz. To explore constraints at higher frequencies, where data become more sparse, we test the robustness of the empirically observed power-law relationship between quality factor Q and frequency (Q = Q0 f^γ). Our methodology uses a staged approach to consider attenuation, physics-based source terms, site terms, and geometrical spreading contributions to amplitude measurements. Tomographic inversion results indicate that the frequency dependence is a reasonable assumption as attenuation varies laterally for this region through all frequency bands considered. Our 2D Pg and Lg attenuation models correlate with underlying physiographic provinces, with the highest Q located in the Sierra Nevada Mountains and the Colorado Plateau. Compared to a best-fitting 1D model for the region, the 2D model provides an 81% variance reduction overall for Lg residuals and a 75% reduction for Pg. These detailed attenuation maps at high frequencies will facilitate further study of local and regional distance P/S amplitude discriminants that are typically used to distinguish between earthquakes and underground explosions.
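The sketch below writes out the kind of amplitude model implied by the description above: log amplitude as a source term plus geometrical spreading plus an attenuation term governed by Q(f) = Q0 f^gamma. The spreading exponent, velocity, Q0 and gamma are illustrative values, not results from the tomography.

```python
# Illustrative amplitude model with a power-law Q(f) (constants are assumed, not from the study).
import numpy as np

def log_amplitude(freq_hz, distance_km, q0, gamma, velocity_km_s=3.5,
                  log_source=0.0, log_site=0.0, spreading_exponent=0.5):
    q_f = q0 * freq_hz**gamma                                # frequency-dependent quality factor
    spreading = -spreading_exponent * np.log(distance_km)    # geometrical spreading term
    attenuation = -np.pi * freq_hz * distance_km / (q_f * velocity_km_s)
    return log_source + spreading + attenuation + log_site

for f in [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    print(f"{f:5.1f} Hz: ln A = {log_amplitude(f, distance_km=300.0, q0=200.0, gamma=0.6):.2f}")
```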
Roibás, Laura; Loiseau, Eléonore; Hospido, Almudena
2018-07-01
In a previous study, the carbon footprint (CF) of all production and consumption activities of Galicia, an Autonomous Community located in the north-west of Spain, was determined and the results were used to devise strategies aimed at the reduction and mitigation of greenhouse gas (GHG) emissions. The territorial LCA methodology was used there to perform the calculations. However, that methodology was initially designed to compute the emissions of all types of polluting substances to the environment (several thousand substances considered in the life cycle inventories), with the aim of performing complete LCA studies. This requirement implies the use of specific modelling approaches and databases that in turn raised some difficulties, i.e., the need for large amounts of data (which increased gathering times), low temporal, geographical and technological representativeness of the study, lack of data, and the presence of double-counting issues when trying to combine the sectorial CF results into those of the total economy. In view of these difficulties, and considering the need to focus only on GHG emissions, it seems important to improve the robustness of the CF computation while proposing a simplified methodology. This study is the result of those efforts to improve the aforementioned methodology. In addition to the territorial LCA approach, several Input-Output (IO) based alternatives have been used here to compute direct and indirect GHG emissions of all Galician production and consumption activities. The results of the different alternatives were compared and evaluated under a multi-criteria approach considering reliability, completeness, temporal and geographical correlation, applicability and consistency. Based on that, an improved and simplified methodology was proposed to determine the CF of Galician consumption and production activities from a total responsibility perspective. This methodology adequately reflects the current characteristics of the Galician economy, thus increasing the representativeness of the results, and can be applied to any region for which IO tables and environmental vectors are available. This methodology could thus provide useful information in decision making processes to reduce and prevent GHG emissions. Copyright © 2018 Elsevier Ltd. All rights reserved.
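For readers unfamiliar with the IO-based alternatives mentioned above, the sketch below shows the core environmentally extended input-output calculation (Leontief inverse applied to final demand, then direct emission intensities applied to total output) for an invented two-sector economy; the coefficients are not Galician data.

```python
# Environmentally extended input-output sketch (illustrative two-sector economy).
import numpy as np

A = np.array([[0.15, 0.25],       # technical coefficients (inter-industry requirements)
              [0.20, 0.05]])
y = np.array([100.0, 50.0])       # final demand by sector (monetary units)
f = np.array([0.8, 0.3])          # direct GHG intensity (kg CO2e per monetary unit of output)

leontief_inverse = np.linalg.inv(np.eye(2) - A)
x = leontief_inverse @ y          # total (direct + indirect) output required
footprint = f @ x

print("total output by sector:", np.round(x, 1))
print(f"consumption-based footprint: {footprint:.1f} kg CO2e")
```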
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used interchangeably for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
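The toy calculation below illustrates the nonlinearity point numerically: with a concentration function that is nonlinear in its two emission sources, the brute-force impacts of switching each source off do not add up to the baseline concentration, so they cannot be read as contributions. The functional form and numbers are purely illustrative.

```python
# Toy demonstration that impacts (brute-force sensitivity) are not contributions
# when the concentration responds nonlinearly to emissions.
import numpy as np

def concentration(e1, e2):
    # Nonlinear toy response to two emission sources (illustrative only).
    return 2.0 * e1 + 1.0 * e2 + 0.8 * np.sqrt(e1 * e2)

e1, e2 = 10.0, 5.0
c_base = concentration(e1, e2)

# Brute-force "impact" of each source: concentration change when that source is switched off.
impact_1 = c_base - concentration(0.0, e2)
impact_2 = c_base - concentration(e1, 0.0)

print(f"baseline concentration: {c_base:.2f}")
print(f"impact of source 1 + impact of source 2 = {impact_1 + impact_2:.2f}"
      f"  (does not equal the baseline: the nonlinear term is counted twice)")
```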
JEDI Methodology | Jobs and Economic Development Impact Models | NREL
The intent of the Jobs and Economic Development Impact (JEDI) models is to demonstrate the employment and economic impacts that will likely result from specific scenarios, providing an estimate of overall economic impacts. Please see Limitations of JEDI Models for further information.
Climate Change in Small Islands
NASA Astrophysics Data System (ADS)
Tomé, Ricardo; Miranda, Pedro M. A.; Brito de Azevedo, Eduardo; Teixeira, Miguel A. C.
2014-05-01
Isolated islands are especially vulnerable to climate change, but their climate is generally not well reproduced in GCMs, due to their small size and complex topography. Here, results from a new generation of climate models, forced by the RCP8.5 and RCP4.5 scenarios of greenhouse gas and atmospheric aerosol concentrations established by the IPCC for its fifth report, are used to characterize the climate of the islands of the Azores and Madeira, and its response to the ongoing global warming. The methodology developed here uses the new global model EC-Earth, data from the ERA-Interim reanalysis and results from an extensive set of simulations with the WRF research model, using, for the first time, a dynamic approach for the regionalization of global fields at sufficiently fine resolutions, in which the effect of topographical complexity is explicitly represented. The results reviewed here suggest increases in temperature above 1 °C by the middle of the 21st century in the Azores and Madeira, reaching values higher than 2.5 °C at the end of the century, accompanied by a reduction in annual rainfall of around 10% in the Azores, which could reach 30% in Madeira. These changes are large enough to justify much broader impacts on island ecosystems and the human population. The results show the advantage of using the proposed methodology, in particular for an adequate representation of the precipitation regime on islands with complex topography, even suggesting the need for higher resolutions in future work. The WRF results are also compared against two different downscaling techniques using an air mass transformation model and a modified version of the upslope precipitation model of Smith and Barstad (2005).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Curtis D.; Zhang, Xuesong; Reddy, Ashwan D.
Agricultural residues are important sources of feedstock for a cellulosic biofuels industry that is being developed to reduce greenhouse gas emissions and improve energy independence. While the US Midwest has been recognized as key to providing maize stover for meeting near-term cellulosic biofuel production goals, there is uncertainty that such feedstocks can produce biofuels that meet federal cellulosic standards. Here, we conducted extensive site-level calibration of the Environmental Policy Integrated Climate (EPIC) terrestrial ecosystems model and applied the model at high spatial resolution across the US Midwest to improve estimates of the maximum production potential and greenhouse gas emissions expected from continuous maize residue-derived biofuels. A comparison of methodologies for calculating the soil carbon impacts of residue harvesting demonstrates the large impact of study duration, depth of soil considered, and inclusion of litter carbon in soil carbon change calculations on the estimated greenhouse gas intensity of maize stover-derived biofuels. Using the most representative methodology for assessing long-term residue harvesting impacts, we estimate that only 5.3 billion liters per year (bly) of ethanol, or 8.7% of the near-term US cellulosic biofuel demand, could be met under common no-till farming practices. However, appreciably more feedstock becomes available at modestly higher emissions levels, with potential for 89.0 bly of ethanol production meeting US advanced biofuel standards. Adjustments to management practices, such as adding cover crops to no-till management, will be required to produce sufficient quantities of residue meeting the greenhouse gas emission reduction standard for cellulosic biofuels. Considering the rapid increase in residue availability with modest relaxations in GHG reduction level, it is expected that management practices with modest benefits to soil carbon would allow considerable expansion of potential cellulosic biofuel production.
Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J
2015-03-15
This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) interviews to elicit mental models; (2) cognitive maps to represent and analyse individual and group mental models; (3) time-sequence diagrams to chronologically structure the decision making process; (4) an all-encompassing conceptual model of decision making; and (5) a computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fryanov, V. N.; Pavlova, L. D.; Temlyantsev, M. V.
2017-09-01
Methodological approaches to the theoretical substantiation of the structure and parameters of robotic coal mines are outlined. The results of mathematical and numerical modeling revealed the features of the manifestation of geomechanical and gas-dynamic processes in the conditions of robotic mines. Technological solutions for the design and manufacture of technical means for a robotic mine are adopted using the method of economic and mathematical modeling and in accordance with the current regulatory documents. For a comparative performance evaluation of the technological schemes of traditional and robotic mines, methods of cognitive modeling and matrix search for subsystem elements in the synthesis of a complex geotechnological system are applied. It is substantiated that the process of technical re-equipment of a traditional mine, with a phased transition to a robotic mine, will reduce unit costs by a factor of almost 1.5, with a significant social effect due to a reduction in the number of personnel engaged in hazardous work.
Self-Organizing Hidden Markov Model Map (SOHMMM).
Ferles, Christos; Stafylopatis, Andreas
2013-12-01
A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Harinipriya, S; Sangaranarayanan, M V
2006-01-31
The evaluation of the free energy of activation pertaining to the electron-transfer reactions occurring at liquid/liquid interfaces is carried out employing a diffuse boundary model. The interfacial solvation numbers are estimated using a lattice gas model under the quasichemical approximation. The standard reduction potentials of the redox couples, appropriate inner potential differences, dielectric permittivities, as well as the width of the interface are included in the analysis. The methodology is applied to the reaction between [Fe(CN)6](3-/4-) and [Lu(biphthalocyanine)](3+/4+) at water/1,2-dichloroethane interface. The rate-determining step is inferred from the estimated free energy of activation for the constituent processes. The results indicate that the solvent shielding effect and the desolvation of the reactants at the interface play a central role in dictating the free energy of activation. The heterogeneous electron-transfer rate constant is evaluated from the molar reaction volume and the frequency factor.
Anti-Obesity Agents and the US Food and Drug Administration.
Casey, Martin F; Mechanick, Jeffrey I
2014-09-01
Despite the growing market for obesity care, the US Food and Drug Administration (FDA) has approved only two new pharmaceutical agents (lorcaserin and combination phentermine/topiramate) for weight reduction since 2000, while removing three agents from the market in the same time period. This article explores the FDA's history and role in the approval of anti-obesity medications within the context of a public health model of obesity. Through the review of obesity literature and FDA approval documents, we identified two major barriers preventing fair evaluation of anti-obesity agents: (1) methodological pitfalls in clinical trials and (2) misaligned values in the assessment of anti-obesity agents. Specific recommendations include the use of adaptive (Bayesian) design protocols, value-based analyses of risks and benefits, and regulatory guidance based on a comprehensive, multi-platform obesity disease model. Positively addressing barriers in the FDA approval process of anti-obesity agents may have many beneficial effects within an obesity disease model.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
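The reduced-expansion idea above can be illustrated with a minimal sketch: fit a low-order polynomial chaos surrogate to model outputs at collocation points, then retain only the terms whose variance contribution is non-negligible. The toy two-parameter model, the Legendre basis (appropriate for uniform inputs), and the 1% variance threshold below are illustrative assumptions, not the hydrologic setup or the ANOVA screening rule used in the study.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Toy "hydrologic model": two uncertain parameters on [-1, 1] (uniform).
def model(x1, x2):
    return 1.0 + 0.8 * x1 + 0.1 * x2 + 0.5 * x1 * x2 + 0.02 * x2**2

# Collocation samples and model evaluations.
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 2))
y = model(X[:, 0], X[:, 1])

# Tensor-product Legendre basis up to order 2 in each parameter.
deg = 2
V1 = legendre.legvander(X[:, 0], deg)          # shape (n, deg + 1)
V2 = legendre.legvander(X[:, 1], deg)
terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1)]
Phi = np.column_stack([V1[:, i] * V2[:, j] for i, j in terms])

# Least-squares PCE coefficients for the full expansion.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Variance contribution of each term; E[P_i^2 P_j^2] = 1/((2i+1)(2j+1)) for uniform inputs.
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in terms])
var_contrib = coef**2 * norms
var_contrib[0] = 0.0                           # constant term carries no variance

# Keep only "significant" terms (here: >1% of total variance), in the spirit of
# screening terms before rebuilding a reduced expansion.
keep = var_contrib > 0.01 * var_contrib.sum()
keep[0] = True
print("retained terms:", [terms[k] for k in np.where(keep)[0]])
```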
Jami, Mohammed S; Rosli, Nurul-Shafiqah; Amosa, Mutiu K
2016-06-01
Availability of quality-certified water is pertinent to the production of food and pharmaceutical products. Adverse effects of manganese content of water on the corrosion of vessels and reactors necessitate that process water is scrutinized for allowable concentration levels before being applied in the production processes. In this research, optimization of the adsorption process conditions germane to the removal of manganese from biotreated palm oil mill effluent (BPOME) using zeolite 3A subsequent to a comparative adsorption with clinoptilolite was studied. A face-centered central composite design (FCCCD) of the response surface methodology (RSM) was adopted for the study. Analysis of variance (ANOVA) for the response surface quadratic model revealed that the model was significant, with dosage and agitation speed being the main significant process factors for the optimization. The R(2) of 0.9478 yielded by the model was in agreement with the predicted R(2). Langmuir and pseudo-second-order fits suggest the adsorption mechanism involved monolayer adsorption and cation exchange.
Syntheses and Functionalizations of Porphyrin Macrocycles
Vicente, Maria da G.H.; Smith, Kevin M.
2014-01-01
Porphyrin macrocycles have been the subject of intense study in the last century because they are widely distributed in nature, usually as metal complexes of either iron or magnesium. As such, they serve as the prosthetic groups in a wide variety of primary metabolites, such as hemoglobins, myoglobins, cytochromes, catalases, peroxidases, chlorophylls, and bacteriochlorophylls; these compounds have multiple applications in materials science, biology and medicine. This article describes current methodology for preparation of simple, symmetrical model porphyrins, as well as more complex protocols for preparation of unsymmetrically substituted porphyrin macrocycles similar to those found in nature. The basic chemical reactivity of porphyrins and metalloporphyrin is also described, including electrophilic and nucleophilic reactions, oxidations, reductions, and metal-mediated cross-coupling reactions. Using the synthetic approaches and reactivity profiles presented, eventually almost any substituted porphyrin system can be prepared for applications in a variety of areas, including in catalysis, electron transport, model biological systems and therapeutics. PMID:25484638
Expert systems for automated maintenance of a Mars oxygen production system
NASA Astrophysics Data System (ADS)
Huang, Jen-Kuang; Ho, Ming-Tsang; Ash, Robert L.
1992-08-01
Application of expert system concepts to a breadboard Mars oxygen processor unit has been studied and tested. The research was directed toward developing the methodology required to enable autonomous operation and control of these simple chemical processors at Mars. Failure detection and isolation was the key area of concern, and schemes using forward chaining, backward chaining, knowledge-based expert systems, and rule-based expert systems were examined. Tests and simulations were conducted that investigated self-health checkout, emergency shutdown, and fault detection, in addition to normal control activities. A dynamic system model was developed using the Bond-Graph technique. The dynamic model agreed well with tests involving sudden reductions in throughput. However, nonlinear effects were observed during tests that incorporated step function increases in flow variables. Computer simulations and experiments have demonstrated the feasibility of expert systems utilizing rule-based diagnosis and decision-making algorithms.
Examining social, physical, and environmental dimensions of tornado vulnerability in Texas.
Siebeneck, Laura
2016-01-01
To develop a vulnerability model that captures the social, physical, and environmental dimensions of tornado vulnerability of Texas counties. Guided by previous research and methodologies proposed in the hazards and emergency management literature, a principal components analysis is used to create a tornado vulnerability index. Data were gathered from open source information available through the US Census Bureau, American Community Surveys, and the Texas Natural Resources Information System. Texas counties. The results of the model yielded three indices that highlight geographic variability of social vulnerability, built environment vulnerability, and tornado hazard throughout Texas. Further analyses suggest that counties with the highest tornado vulnerability include those with high population densities and high tornado risk. This article demonstrates one method for assessing statewide tornado vulnerability and presents how the results of this type of analysis can be applied by emergency managers towards the reduction of tornado vulnerability in their communities.
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2013-01-01
A dual flow-path inlet system is being tested to evaluate methodologies for a Turbine Based Combined Cycle (TBCC) propulsion system to perform a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the closed loop control system, which utilizes a shock location sensor to improve inlet performance and operability. Even though the shock location feedback has a coarse resolution, the feedback allows for a reduction in steady state error and, in some cases, better performance than with previously proposed pressure-ratio-based methods. This paper demonstrates the design and benefits of implementing a proportional-integral controller, an H-Infinity based controller, and a disturbance observer based controller.
Vaccine effects on heterogeneity in susceptibility and implications for population health management
Langwig, Kate E.; Wargo, Andrew R.; Jones, Darbi R.; Viss, Jessie R.; Rutan, Barbara J.; Egan, Nicholas A.; Sá-Guimarães, Pedro; Min Sun Kim; Kurath, Gael; Gomes, M. Gabriela M.; Lipsitch, Marc; Bansal, Shweta; Pettigrew, Melinda M.
2017-01-01
Heterogeneity in host susceptibility is a key determinant of infectious disease dynamics but is rarely accounted for in assessment of disease control measures. Understanding how susceptibility is distributed in populations, and how control measures change this distribution, is integral to predicting the course of epidemics with and without interventions. Using multiple experimental and modeling approaches, we show that rainbow trout have relatively homogeneous susceptibility to infection with infectious hematopoietic necrosis virus and that vaccination increases heterogeneity in susceptibility in a nearly all-or-nothing fashion. In a simple transmission model with an R0 of 2, the highly heterogeneous vaccine protection would cause a 35 percentage-point reduction in outbreak size over an intervention inducing homogenous protection at the same mean level. More broadly, these findings provide validation of methodology that can help to reduce biases in predictions of vaccine impact in natural settings and provide insight into how vaccination shapes population susceptibility.
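The contrast between nearly all-or-nothing and homogeneous (leaky) protection can be illustrated with standard final-size calculations for a simple transmission model with R0 = 2, as in the abstract. The assumed mean protection level of 0.4 is an illustrative placeholder (the abstract does not state the efficacy or coverage), so the computed gap is not expected to reproduce the reported 35 percentage points.

```python
import numpy as np

R0 = 2.0          # reproduction number of the simple transmission model in the abstract
protection = 0.4  # assumed mean reduction in susceptibility (illustrative, not from the paper)

def final_size(attack_guess, update, tol=1e-10):
    """Solve a final-size fixed point A = update(A) by iteration."""
    A = attack_guess
    for _ in range(10000):
        A_new = update(A)
        if abs(A_new - A) < tol:
            break
        A = A_new
    return A

# Homogeneous ("leaky") protection: everyone keeps susceptibility (1 - protection).
leaky = final_size(0.5, lambda A: 1.0 - np.exp(-R0 * (1.0 - protection) * A))

# All-or-nothing protection: a fraction `protection` is fully immune,
# the rest are fully susceptible; A is the attack fraction in the whole population.
aon = final_size(0.5, lambda A: (1.0 - protection) * (1.0 - np.exp(-R0 * A)))

print(f"leaky outbreak size:            {leaky:.3f}")
print(f"all-or-nothing outbreak size:   {aon:.3f}")
print(f"difference (percentage points): {100 * (leaky - aon):.1f}")
```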
NASAL-Geom, a free upper respiratory tract 3D model reconstruction software
NASA Astrophysics Data System (ADS)
Cercos-Pita, J. L.; Cal, I. R.; Duque, D.; de Moreta, G. Sanjuán
2018-02-01
The tool NASAL-Geom, a free upper respiratory tract 3D model reconstruction software, is described here. As free software, it can be obtained, analyzed, improved and redistributed by researchers and professionals, potentially increasing the rate of development and, at the same time, reducing ethical conflicts regarding medical applications which cannot be analyzed. Additionally, the tool has been optimized for the specific task of reading upper respiratory tract Computerized Tomography scans, and producing 3D geometries. The reconstruction process is divided into three stages: preprocessing (including Metal Artifact Reduction, noise removal, and feature enhancement), segmentation (where the nasal cavity is identified), and 3D geometry reconstruction. The tool has been automated (i.e., no human intervention is required), a critical feature to avoid bias in the reconstructed geometries. The applied methodology is discussed, as well as the program robustness and precision.
Leach, A W; Mumford, J D
2008-01-01
The Pesticide Environmental Accounting (PEA) tool provides a monetary estimate of environmental and health impacts per hectare-application for any pesticide. The model combines the Environmental Impact Quotient method and a methodology for absolute estimates of external pesticide costs in the UK, USA and Germany. For many countries resources are not available for intensive assessments of external pesticide costs. The model extrapolates the external costs of a pesticide in the UK, USA and Germany to Mediterranean countries. Economic and policy applications include estimating impacts of pesticide reduction policies or benefits from technologies replacing pesticides, such as the sterile insect technique. The system integrates disparate data and approaches into a single logical method. The assumptions in the system provide transparency and consistency but at the cost of some specificity and precision, a reasonable trade-off for a method that provides both comparative estimates of pesticide impacts and area-based assessments of absolute impacts.
Flow Quality Survey of the NASA Ames 11-by 11-Ft Transonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Amaya, Max A.
2011-01-01
New baseline turbulence levels have been measured using a new CTA and new hot-wire sensors. Levels remain the same as measured in 1999. Data and methodology documented (almost). New baseline acoustics levels have been measured up to Mach 1.35. Levels are higher than reported in 1999. Data and methodology documented (almost). Application of fairings to the strut trailing edge showed up to a 10% reduction in the tunnel background noise. Data analysis and documentation for publishing is ongoing.
NASA Astrophysics Data System (ADS)
Sáez, V.; González-García, J.; Marken, F.
2010-01-01
A new methodology for the sonoelectro-deposition and stripping of highly reactive iron at boron-doped diamond electrodes has been studied. In aqueous 1 M NH4F iron metal readily and reversibly electro-deposits onto boron-doped diamond electrodes. The effects of deposition potential, FeF6(3-) concentration, deposition time, and mass transport are investigated, as is the influence of power ultrasound (24 kHz, 8 W cm-2). Scanning electron microscopy images of iron nanoparticles grown to typically 20-30 nm diameters are obtained. It is shown that a strongly and permanently adhering film of iron at boron-doped diamond can be formed and transferred into other solution environments. The catalytic reactivity of iron deposits at boron-doped diamond is investigated for the reductive dehalogenation of chloroacetate. The kinetically limited multi-electron reduction of trichloroacetate is dependent on the FeF6(3-) deposition conditions and the solution composition. It is demonstrated that a stepwise iron-catalysed dechlorination via dichloroacetate and monochloroacetate to acetate is feasible. This sonoelectrochemical approach offers a novel, clean and very versatile electro-dehalogenation methodology. The role of fluoride in the surface electrochemistry of iron deserves further attention.
Morère, Jacobo; Royuela, Sergio; Asensio, Guillermo; Palomino, Pablo; Enciso, Eduardo; Pando, Concepción; Cabañas, Albertina
2015-12-28
The deposition of Ni nanoparticles into porous supports is very important in catalysis. In this paper, we explore the use of supercritical CO(2) (scCO(2)) as a green solvent to deposit Ni nanoparticles on mesoporous SiO2 SBA-15 and a carbon xerogel. The good transport properties of scCO(2) allowed the efficient penetration of metal precursors dissolved in scCO(2) within the pores of the support without damaging its structure. Nickel hexafluoroacetylacetonate hydrate, nickel acetylacetonate, bis(cyclopentadienyl)nickel, Ni(NO(3))2⋅6H(2)O and NiCl(2)⋅6H(2)O were tried as precursors. Different methodologies were used: impregnation in scCO(2) and reduction in H(2)/N(2) at 400°C and low pressure, reactive deposition using H(2) at 200-250°C in scCO(2) and reactive deposition using ethanol at 150-200°C in scCO(2). The effect of precursor and methodology on the nickel particle size and the material homogeneity (on the different substrates) was analysed. This technology offers many opportunities in the preparation of metal-nanostructured materials. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Albrecht, Kevin J.
Decarbonization of the electric grid is fundamentally limited by the intermittency of renewable resources such as wind and solar. Therefore, energy storage will play a significant role in the future of grid-scale energy generation to overcome the intermittency issues. For this reason, concentrating solar power (CSP) plants have been a renewable energy generation technology of interest due to their ability to participate in cost effective and efficient thermal energy storage. However, the ability to dynamically dispatch a CSP plant to meet energy demands is currently limited by the large quantities of sensible thermal energy storage material needed in a molten salt plant. Perovskite oxides have been suggested as a thermochemical energy storage material to enhance the energy storage capabilities of particle-based CSP plants, which combine sensible and chemical modes of energy storage. In this dissertation, computational models are used to establish the thermochemical energy storage potential of select perovskite compositions, identify system configurations that promote high values of energy storage and solar-to-electric efficiency, assess the kinetic and transport limitation of the chemical mode of energy storage, and create receiver and reoxidation reactor models capable of aiding in component design. A methodology for determining perovskite thermochemical energy storage potential is developed based on point defect models to represent perovskite non-stoichiometry as a function of temperature and gas phase oxygen partial pressure. The thermodynamic parameters necessary for the model are extracted from non-stoichiometry measurements by fitting the model using an optimization routine. The procedure is demonstrated for Ca0.9Sr0.1MnO3-d, which displayed a combined energy storage value of 705.7 kJ/kg by cycling from 773 K and 0.21 bar oxygen to 1173 K and 10^-4 bar oxygen. Thermodynamic system-level models capable of exploiting perovskite redox chemistry for energy storage in CSP plants are presented. Comparisons of sweep gas and vacuum pumping reduction as well as hot storage conditions indicate that solar-to-electric efficiencies are higher for a sweep gas reduction system at equivalent values of energy storage if the energy parasitics of commercially available devices are considered. However, if vacuum pump efficiency between 15% and 30% can be achieved, the reduction methods will be approximately equal. Reducing-condition oxygen partial pressures below 10^-3 bar for sweep gas reduction and 10^-2 bar for vacuum pumping reduction result in large electrical parasitics, which significantly reduce solar-to-electric efficiency. A model based interpretation of experimental measurements made for perovskite redox cycling using sweep gas in a packed bed is presented. The model indicates that long reduction times for equilibrating perovskites with low oxygen partial pressure sweep gas, compared to reoxidation, are primarily due to the oxygen carrying capacity of high purity sweep gas and not surface kinetic limitations. Therefore, achieving rapid reduction in the limited receiver residence time will be controlled by the quantity of sweep gas introduced. Effective kinetic parameters considering surface reaction and radial particle diffusion are fit to the experimental data. Variable order rate expressions without significant particle radial diffusion limitations are shown to be capable of representing the reduction and oxidation data.
Modeling of a particle reduction receiver using continuous flow of perovskite solid and sweep gas in counter-flow configuration has identified issues with managing the oxygen evolved by the solid as well as sweep gas flow rates. Introducing the sweep gas quantities necessary for equilibrating the solid with oxygen partial pressures below 10^-2 bar is shown to result in gas phase velocities above the entrainment velocity of 500 µm particles. Receiver designs with considerations for gas management are investigated and the results indicate that degrees of reduction corresponding to only oxygen partial pressures of 10^-2 bar are attained. Numerical investigation into perovskite thermochemical energy storage indicates that achieving high levels of reduction through sweep gas or vacuum pumping to lower the gas phase oxygen partial pressure below 10^-2 bar displays issues with parasitic energy consumption and gas phase management. Therefore, focus on material development should place a premium on thermal reduction and reduction by shifting oxygen partial pressure between ambient and 10^-2 bar. Such a material would enable the development of a system with high solar-to-electric efficiencies and degrees of reduction which are attainable in realistic component geometries.
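As a rough illustration of how the sensible and chemical contributions quoted above combine, the sketch below evaluates E ≈ cp·ΔT + (Δδ/M)·ΔH for a perovskite cycled between 773 K and 1173 K. The heat capacity, non-stoichiometry swing, and reduction enthalpy are assumed round numbers chosen only to show the arithmetic; they are not the fitted values from the dissertation, although the total lands in the same range as the 705.7 kJ/kg reported for Ca0.9Sr0.1MnO3-d.

```python
# Illustrative combined (sensible + thermochemical) energy-storage estimate
# for a perovskite oxygen carrier cycled between 773 K and 1173 K.
# All material parameters below are assumed round numbers, not fitted values.

cp = 0.65                       # kJ/(kg K), assumed mean heat capacity of the oxide
T_low, T_high = 773.0, 1173.0   # K, cycle temperatures quoted in the abstract
delta_swing = 0.25              # change in oxygen non-stoichiometry per formula unit (assumed)
dH_red = 250.0                  # kJ per mol of monatomic oxygen released (assumed)
M = 0.142                       # kg/mol, approximate molar mass of a CaMnO3-type perovskite

sensible = cp * (T_high - T_low)      # kJ/kg stored as heat
chemical = delta_swing * dH_red / M   # kJ/kg stored in the reduction reaction
print(f"sensible : {sensible:6.1f} kJ/kg")
print(f"chemical : {chemical:6.1f} kJ/kg")
print(f"combined : {sensible + chemical:6.1f} kJ/kg")
```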
The Defense Threat Reduction Agency's Technical Nuclear Forensics Research and Development Program
NASA Astrophysics Data System (ADS)
Franks, J.
2015-12-01
The Defense Threat Reduction Agency (DTRA) Technical Nuclear Forensics (TNF) Research and Development (R&D) Program's overarching goal is to design, develop, demonstrate, and transition advanced technologies and methodologies that improve the interagency operational capability to provide forensics conclusions after the detonation of a nuclear device. This goal is attained through the execution of three focus areas covering the span of the TNF process to enable strategic decision-making (attribution): Nuclear Forensic Materials Exploitation - Development of targeted technologies, methodologies and tools enabling the timely collection, analysis and interpretation of detonation materials. Prompt Nuclear Effects Exploitation - Improve ground-based capabilities to collect prompt nuclear device outputs and effects data for rapid, complementary and corroborative information. Nuclear Forensics Device Characterization - Development of a validated and verified capability to reverse model a nuclear device with high confidence from observables (e.g., prompt diagnostics, sample analysis, etc.) seen after an attack. This presentation will outline DTRA's TNF R&D strategy and current investments, with efforts focusing on: (1) introducing new technical data collection capabilities (e.g., ground-based prompt diagnostics sensor systems; innovative debris collection and analysis); (2) developing new TNF process paradigms and concepts of operations to decrease timelines and uncertainties, and increase results confidence; (3) enhanced validation and verification (V&V) of capabilities through technology evaluations and demonstrations; and (4) updated weapon output predictions to account for the modern threat environment. A key challenge to expanding these efforts to a global capability is the need for increased post-detonation TNF international cooperation, collaboration and peer reviews.
Novel model of stator design to reduce the mass of superconducting generators
NASA Astrophysics Data System (ADS)
Kails, Kevin; Li, Quan; Mueller, Markus
2018-05-01
High temperature superconductors (HTS), with much higher current density than conventional copper wires, make it feasible to develop very powerful and compact power generators. Thus, they are considered as one promising solution for large (10+ MW) direct-drive offshore wind turbines due to their low tower head mass. However, most HTS generator designs are based on a radial topology, which requires an excessive amount of HTS material and suffers from cooling and reliability issues. Axial flux machines on the other hand offer higher torque/volume ratios than the radial machines, which makes them an attractive option where space and transportation become an issue. However, their disadvantage is heavy structural mass. In this paper a novel stator design is introduced for HTS axial flux machines which enables a reduction in their structural mass. The stator is for the first time designed with a 45° angle that deflects the air gap closing forces into the vertical direction, reducing the axial forces. The reduced axial forces improve the structural stability and consequently simplify the structural design. The novel methodology was then validated through an existing design of the HTS axial flux machine, achieving a ∼10% mass reduction from 126 tonnes down to 115 tonnes. In addition, the air gap flux density increases due to the new claw pole shapes, improving the power density from 53.19 to 61.90 W kg^-1. It is expected that HTS axial flux machines designed with the new methodology offer a competitive advantage over other proposed superconducting generator designs in terms of cost, reliability and power density.
A methodology for investigating new nonprecious metal catalysts for PEM fuel cells.
Susac, D; Sode, A; Zhu, L; Wong, P C; Teo, M; Bizzotto, D; Mitchell, K A R; Parsons, R R; Campbell, S A
2006-06-08
This paper reports an approach to investigate metal-chalcogen materials as catalysts for the oxygen reduction reaction (ORR) in proton exchange membrane (PEM) fuel cells. The methodology is illustrated with reference to Co-Se thin films prepared by magnetron sputtering onto a glassy-carbon substrate. Scanning Auger microscopy (SAM), X-ray photoelectron spectroscopy (XPS), energy-dispersive X-ray spectroscopy (EDX), and X-ray diffraction (XRD) have been used, in parallel with electrochemical activity and stability measurements, to assess how the electrochemical performance relates to chemical composition. It is shown that Co-Se thin films with varying Se are active for oxygen reduction, although the open circuit potential (OCP) is lower than for Pt. A kinetically controlled process is observed in the potential range 0.5-0.7 V (vs reversible hydrogen electrode) for the thin-film catalysts studied. An initial exposure of the thin-film samples to an acid environment served as a pretreatment, which modified surface composition prior to activity measurements with the rotating disk electrode (RDE) method. Based on the SAM characterization before and after electrochemical tests, all surfaces demonstrating activity are dominated by chalcogen. XRD shows that the thin films have nanocrystalline character that is based on a Co(1-x)Se phase. Parallel studies on Co-Se powder supported on XC72R carbon show comparable OCP, Tafel region, and structural phase as for the thin-film model catalysts. A comparison for ORR activity has also been made between this Co-Se powder and a commercial Pt catalyst.
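A common way to characterize the kinetically controlled region mentioned above (0.5-0.7 V vs RHE) is a Tafel analysis of the RDE data: fit potential against the logarithm of the kinetic current density and read the slope. The synthetic polarization data below, including the assumed exchange current density and reversible potential, is purely illustrative and is not the paper's Co-Se measurements.

```python
import numpy as np

# Synthetic ORR polarization data in the kinetic region (illustrative only).
E = np.linspace(0.50, 0.70, 21)     # potential vs RHE, V
j0, b = 1e-6, 0.120                 # assumed exchange current density (A/cm^2) and Tafel slope (V/decade)
E_rev = 1.0                         # assumed reversible potential for the ORR, V
j = j0 * 10 ** ((E_rev - E) / b)    # cathodic kinetic current density, A/cm^2

# Tafel fit: E versus log10(j) should be linear with slope -b.
slope, intercept = np.polyfit(np.log10(j), E, 1)
print(f"fitted Tafel slope: {abs(slope) * 1000:.0f} mV/decade")
```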
Shifting from Stewardship to Analytics of Massive Science Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Doyle, R.; Law, E.; Hughes, S.; Huang, T.; Mahabal, A.
2015-12-01
Currently, the analysis of large data collections is executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Data collection, archiving and analysis from future remote sensing missions, be it from earth science satellites, planetary robotic missions, or massive radio observatories, may not scale as more capable instruments stress existing architectural approaches and systems due to more continuous data streams, data from multiple observational platforms, and measurements and models from different agencies. A new paradigm is needed in order to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural choices, data processing, management, analysis, etc. are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections. Future observational systems, including satellite and airborne experiments, and research in climate modeling will significantly increase the size of the data, requiring new methodological approaches towards data analytics where users can more effectively interact with the data and apply automated mechanisms for data reduction and fusion across these massive data repositories. This presentation will discuss architecture, use cases, and approaches for developing a big data analytics strategy across multiple science disciplines.
Antimicrobial Activity of a Neem Cake Extract in a Broth Model Meat System
Del Serrone, Paola; Nicoletti, Marcello
2013-01-01
This work reports on the antimicrobial activity of an ethyl acetate extract of neem (Azadirachta indica) cake (NCE) against bacteria affecting the quality of retail fresh meat in a broth model meat system. NCE (100 µg) was also tested by the agar disc diffusion method. It inhibited the growth of all tested microorganisms. The NCE growth inhibition zone (IZ) ranged from 11.33 to 22.67 mm, while the ciprofloxacin (10 µg) IZ ranged from 23.41 to 32.67 mm. There was no significant difference (p ≤ 0.05) between the antimicrobial activity of NCE and ciprofloxacin vs. C. jejuni and Leuconostoc spp. The NCE antibacterial activity was moreover determined at lower concentrations (1:10–1:100,000) in micro-assays. The percent growth reduction ranged from 61 ± 2.08% to 92 ± 3.21%. The highest bacterial growth reduction was obtained at the 10 µg concentration of NCE. Species-specific PCR and multiplex PCR with the DNA dye propidium monoazide were used to directly detect viable bacterial cells from experimentally contaminated meat samples. The numbers of bacterial cells never significantly (p ≤ 0.05) exceeded the inocula concentration used to experimentally contaminate the NCE treated meat. This report represents a screening methodology to evaluate the antimicrobial capability of a herbal extract to preserve meat. PMID:23917814
Broadband ground-motion simulation using a hybrid approach
Graves, R.W.; Pitarka, A.
2010-01-01
This paper describes refinements to the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2004), which combines a deterministic approach at low frequencies (f < 1 Hz) with a semistochastic approach at high frequencies (f > 1 Hz). In our approach, fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time. The prescribed slip distribution is constrained to follow an inverse wavenumber-squared fall-off and the average rupture speed is set at 80% of the local shear-wave velocity, which is then adjusted such that the rupture propagates faster in regions of high slip and slower in regions of low slip. We use a Kostrov-like slip-rate function having a rise time proportional to the square root of slip, with the average rise time across the entire fault constrained empirically. Recent observations from large surface rupturing earthquakes indicate a reduction of rupture propagation speed and lengthening of rise time in the near surface, which we model by applying a 70% reduction of the rupture speed and increasing the rise time by a factor of 2 in a zone extending from the surface to a depth of 5 km. We demonstrate the fidelity of the technique by modeling the strong-motion recordings from the Imperial Valley, Loma Prieta, Landers, and Northridge earthquakes.
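A minimal sketch of the kinematic parameterization described above: rise time proportional to the square root of local slip and rescaled to an empirical fault-average value, rupture speed tied to 80% of the local shear-wave speed and perturbed by slip, and a shallow zone above 5 km depth with reduced rupture speed and doubled rise time. The fault discretization, the slip-dependence exponent, and the target average rise time are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder fault discretization: slip (m), shear-wave speed (km/s), depth (km).
slip = rng.lognormal(mean=0.0, sigma=0.5, size=(20, 40))
vs = np.full_like(slip, 3.5)
depth = np.tile(np.linspace(0.5, 15.0, 20)[:, None], (1, 40))

avg_rise_target = 1.2   # s, assumed empirical fault-average rise time

# Rise time proportional to sqrt(slip), rescaled to the empirical average.
rise = np.sqrt(slip)
rise *= avg_rise_target / rise.mean()

# Rupture speed: 80% of local Vs, sped up on high-slip patches, slowed on low-slip ones.
vr = 0.8 * vs * (slip / slip.mean()) ** 0.15   # exponent is an illustrative choice

# Shallow weak zone (depth < 5 km): slower rupture, longer rise time.
shallow = depth < 5.0
shallow_speed_factor = 0.3   # "70% reduction" read literally; use 0.7 if read as a multiplier
vr[shallow] *= shallow_speed_factor
rise[shallow] *= 2.0

print(f"mean rise time: {rise.mean():.2f} s, mean rupture speed: {vr.mean():.2f} km/s")
```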
2016-06-01
characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira
Levecke, Bruno; Kaplan, Ray M; Thamsborg, Stig M; Torgerson, Paul R; Vercruysse, Jozef; Dobson, Robert J
2018-04-15
Although various studies have provided novel insights into how to best design, analyze and interpret a fecal egg count reduction test (FECRT), it is still not straightforward to provide guidance that allows improving both the standardization and the analytical performance of the FECRT across a variety of both animal and nematode species. For example, it has been suggested to recommend a minimum number of eggs to be counted under the microscope (not eggs per gram of feces), but we lack the evidence to recommend any number of eggs that would allow a reliable assessment of drug efficacy. Other aspects that need further research are the methodology of calculating uncertainty intervals (UIs; confidence intervals in case of frequentist methods and credible intervals in case of Bayesian methods) and the criteria of classifying drug efficacy into 'normal', 'suspected' and 'reduced'. The aim of this study is to provide complementary insights into the current knowledge, and to ultimately provide guidance in the development of new standardized guidelines for the FECRT. First, data were generated using a simulation in which the 'true' drug efficacy (TDE) was evaluated by the FECRT under varying scenarios of sample size, analytic sensitivity of the diagnostic technique, and level of both intensity and aggregation of egg excretion. Second, the obtained data were analyzed with the aim (i) to verify which classification criteria allow for reliable detection of reduced drug efficacy, (ii) to identify the UI methodology that yields the most reliable assessment of drug efficacy (coverage of TDE) and detection of reduced drug efficacy, and (iii) to determine the required sample size and number of eggs counted under the microscope that optimizes the detection of reduced efficacy. Our results confirm that the currently recommended criteria for classifying drug efficacy are the most appropriate. Additionally, the UI methodologies we tested varied in coverage and ability to detect reduced drug efficacy, thus a combination of UI methodologies is recommended to assess the uncertainty across all scenarios of drug efficacy estimates. Finally, based on our model estimates we were able to determine the required number of eggs to count for each sample size, enabling investigators to optimize the probability of correctly classifying a theoretical TDE while minimizing both financial and technical resources. Copyright © 2018 Elsevier B.V. All rights reserved.
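For concreteness, one common frequentist way to obtain the FECRT point estimate and an uncertainty interval is a percentile bootstrap on the group means, sketched below with simulated negative-binomial egg counts. The sample size, aggregation parameter, and 'true' efficacy are arbitrary illustrative choices and do not correspond to the simulation scenarios used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated pre- and post-treatment eggs per gram for one herd (illustrative only).
n, k, mean_pre, true_eff = 20, 0.7, 300.0, 0.92
# Negative-binomial counts generated as a gamma-Poisson mixture with aggregation k.
pre = rng.poisson(rng.gamma(k, mean_pre / k, size=n))
post = rng.poisson(rng.gamma(k, mean_pre * (1 - true_eff) / k, size=n))

# Point estimate of the fecal egg count reduction.
fecr = 100.0 * (1.0 - post.mean() / pre.mean())

# Percentile bootstrap uncertainty interval on the reduction (paired resampling).
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(100.0 * (1.0 - post[idx].mean() / pre[idx].mean()))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"FECR = {fecr:.1f}%  (95% UI {lo:.1f} to {hi:.1f}%)")
```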
Estimating the Fiscal Effects of Public Pharmaceutical Expenditure Reduction in Greece
Souliotis, Kyriakos; Papageorgiou, Manto; Politi, Anastasia; Frangos, Nikolaos; Tountas, Yiannis
2015-01-01
The purpose of the present study is to estimate the impact of pharmaceutical spending reduction on public revenue, based on data from the national health accounts as well as on reports of Greece’s organizations. The methodology of the analysis is structured in two basic parts. The first part presents the urgency for rapid cutbacks on public pharmaceutical costs due to the financial crisis and provides a conceptual framework for the contribution of the Greek pharmaceutical branch to the country’s economy. In the second part, we perform a quantitative analysis for the estimation of multiplier effects of public pharmaceutical expenditure reduction on main revenue sources, such as taxes and social contributions. We also fit projection models with multipliers as regressands for the evaluation of the efficiency of the particular fiscal measure in the short run. According to the results, nearly half of the gains from the measure’s application is offset by financially equivalent decreases in the government’s revenue, i.e., losses in tax revenues and social security contributions alone, not considering any other direct or indirect costs. The findings of multipliers’ high value and increasing short-term trend imply the measure’s inefficiency henceforward and signal the risk of vicious circles that will provoke the economy’s deprivation of useful resources. PMID:26380249
Kelly, Jarod C; Sullivan, John L; Burnham, Andrew; Elgowainy, Amgad
2015-10-20
This study examines the vehicle-cycle and vehicle total life-cycle impacts of substituting lightweight materials into vehicles. We determine part-based greenhouse gas (GHG) emission ratios by collecting material substitution data and evaluating that alongside known mass-based GHG ratios (using and updating Argonne National Laboratory's GREET model) associated with material pair substitutions. Several vehicle parts are lightweighted via material substitution, using substitution ratios from a U.S. Department of Energy report, to determine GHG emissions. We then examine fuel-cycle GHG reductions from lightweighting. The fuel reduction value methodology is applied using FRV estimates of 0.15-0.25 and 0.25-0.5 L/(100 km·100 kg), with and without powertrain adjustments, respectively. GHG breakeven values are derived for both driving distance and material substitution ratio. While material substitution can reduce vehicle weight, it often increases vehicle-cycle GHGs. It is likely that replacing steel (the dominant vehicle material) with wrought aluminum, carbon fiber reinforced plastic (CFRP), or magnesium will increase vehicle-cycle GHGs. However, lifetime fuel economy benefits often outweigh the vehicle-cycle impacts, resulting in a net total life-cycle GHG benefit. This is the case for steel replaced by wrought aluminum in all assumed cases, and for CFRP and magnesium except for high substitution ratio and low FRV.
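The fuel reduction value (FRV) logic lends itself to a simple breakeven calculation: divide the extra vehicle-cycle GHG burden of the lighter but more GHG-intensive material by the per-kilometer fuel-cycle saving. The mass saving, vehicle-cycle GHG penalty, and gasoline emission factor below are illustrative placeholders rather than values from the GREET analysis.

```python
# Breakeven driving distance for a lightweighted part (illustrative numbers).
frv = 0.25                  # L per (100 km * 100 kg), within the paper's FRV range
mass_saved = 50.0           # kg of vehicle mass removed by the material substitution (assumed)
extra_vehicle_ghg = 150.0   # kg CO2e of additional vehicle-cycle emissions (assumed)
ef_gasoline = 2.8           # kg CO2e per liter of gasoline, well-to-wheel (assumed)

# Fuel saved per km driven, in liters.
fuel_saved_per_km = frv * (mass_saved / 100.0) / 100.0

# GHG saved per km and the distance needed to pay back the vehicle-cycle penalty.
ghg_saved_per_km = fuel_saved_per_km * ef_gasoline
breakeven_km = extra_vehicle_ghg / ghg_saved_per_km
print(f"fuel saved: {fuel_saved_per_km * 100:.3f} L/100 km")
print(f"breakeven distance: {breakeven_km:,.0f} km")
```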
NASA Astrophysics Data System (ADS)
Pawar, Prashant M.; Jung, Sung Nam
2009-03-01
In this work, an active vibration reduction of hingeless composite rotor blades with dissimilarity is investigated using the active twist concept and the optimal control theory. The induced shear strain on the actuation mechanism by the piezoelectric constant d15 from the PZN-8% PT-based single-crystal material is used to achieve more active twisting to suppress the extra vibrations. The optimal control algorithm is based on the minimization of an objective function comprised of quadratic functions of vibratory hub loads and voltage control harmonics. The blade-to-blade dissimilarity is modeled using the stiffness degradation of composite blades. The optimal controller is applied to various possible dissimilarities arising from different damage patterns of composite blades. The governing equations of motion are derived using Hamilton's principle. The effects of composite materials and smart actuators are incorporated into the comprehensive aeroelastic analysis system. Numerical results showing the impact of addressing the blade dissimilarities on hub vibrations and voltage inputs required to suppress the vibrations are demonstrated. It is observed that all vibratory shear forces are reduced considerably and the major harmonics of moments are reduced significantly. However, the controller needs further improvement to suppress 1/rev moment loads. A mechanism to achieve vibration reduction for the dissimilar rotor system has also been identified.
Grau, P; Vanrolleghem, P; Ayesa, E
2007-01-01
In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations-approach proposed in this methodology does not require the development of specific transformers to interface unit process models and allows the construction of tailored models for a particular WWTP guaranteeing the mass and charge continuity for the whole model. The BSM2 PWM, constructed as a case study, is evaluated by means of simulations under different scenarios and its validity in reproducing the water and sludge lines in a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared to other approaches for integrated modelling are verified in terms of flexibility and coherence.
Force 2025 and Beyond Strategic Force Design Analytic Model
2017-01-12
depiction of the core ideas of our force design model. Figure 1: Description of Force Design Model Figure 2 shows an overview of our methodology ...the F2025B Force Design Analytic Model research conducted by TRAC-MTRY and the Naval Postgraduate School. Our research develops a methodology for... designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Acevedo, Walter; Fallah, Bijan; Reich, Sebastian; Cubasch, Ulrich
2017-05-01
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
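The core of the assimilation step described above is an ensemble Kalman filter update in which a forward model maps the (time-averaged) climate state to pseudo-TRW observations. The toy state dimension, linear observation operator, and error variance in the sketch below are assumptions for illustration; they stand in for, but do not reproduce, the process-based tree-growth operator used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

n_ens, n_state = 30, 8
X = rng.normal(0.0, 1.0, size=(n_state, n_ens))   # prior ensemble of (time-averaged) states

def forward_operator(x):
    """Toy observation operator: pseudo-TRW responds to the first two state variables."""
    return np.array([0.6 * x[0] + 0.4 * x[1]])

obs = np.array([0.8])        # pseudo-TRW observation
obs_var = 0.1                # observation error variance (assumed)

# Ensemble-predicted observations and anomalies.
Y = np.column_stack([forward_operator(X[:, m]) for m in range(n_ens)])
Xa = X - X.mean(axis=1, keepdims=True)
Ya = Y - Y.mean(axis=1, keepdims=True)

# Kalman gain from ensemble covariances.
Pxy = Xa @ Ya.T / (n_ens - 1)
Pyy = Ya @ Ya.T / (n_ens - 1) + obs_var * np.eye(1)
K = Pxy @ np.linalg.inv(Pyy)

# Stochastic (perturbed-observation) EnKF update of every member.
perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), size=(1, n_ens))
X_post = X + K @ (perturbed - Y)
print("prior mean of x[0]: %.3f, posterior mean: %.3f" % (X[0].mean(), X_post[0].mean()))
```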
Combining information from multiple flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Amperometric, Bipotentiometric, and Coulometric Titration.
ERIC Educational Resources Information Center
Stock, John T.
1984-01-01
Reviews literature on amperometric, bipotentiometric, and coulometric titration methods examining: apparatus and methodology; acid-base reactions; precipitation and complexing reactions (considering methods involving silver, mercury, EDTA or analogous reagents, and other organic compounds); and oxidation-reduction reactions (considering methods…
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
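The HEART side of the calculation can be sketched directly: a generic task unreliability is multiplied, for each applicable error-producing condition (EPC), by ((EPC multiplier - 1) x assessed proportion of affect + 1) to give the predicted HEP. The generic value, the EPC multipliers, and the assessed proportions below are illustrative placeholders, not the values assessed for the NASA ground processing scenarios.

```python
# HEART-style human error probability sketch (illustrative values only).
generic_task_unreliability = 0.003   # nominal HEP for the chosen generic task type (assumed)

# (EPC multiplier, assessed proportion of affect) pairs for the scenario (assumed).
epcs = [
    (11.0, 0.4),   # e.g. shortage of time for error detection and correction
    (4.0, 0.2),    # e.g. poor, ambiguous or ill-matched system feedback
    (3.0, 0.1),    # e.g. operator inexperience
]

# Each EPC scales the nominal unreliability by ((multiplier - 1) * proportion + 1).
hep = generic_task_unreliability
for multiplier, proportion in epcs:
    hep *= (multiplier - 1.0) * proportion + 1.0

print(f"predicted HEP: {hep:.4f}")
```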
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
Cooper, Guy Paul; Yeager, Violet; Burkle, Frederick M.; Subbarao, Italo
2015-01-01
Background: This article describes a novel triangulation methodological approach for identifying the Twitter activity of regionally active Twitter users during the 2013 Hattiesburg EF-4 Tornado. Methodology: A data extraction and geographically centered filtration approach was utilized to generate Twitter data for 48 hrs pre- and post-Tornado. The data were further validated using a six sigma approach utilizing GPS data. Results: The regional analysis revealed a total of 81,441 tweets, 10,646 Twitter users, 27,309 retweets and 2637 tweets with GPS coordinates. Conclusions: Twitter tweet activity increased 5-fold during the response to the Hattiesburg Tornado. Retweeting activity increased 2.2-fold. Tweets with a hashtag increased 1.4-fold. Twitter was an effective disaster risk reduction tool for the Hattiesburg EF-4 Tornado 2013. PMID:26203396
Noise and performance calibration study of a Mach 2.2 supersonic cruise aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.; Maglieri, D. J.
1979-01-01
The baseline configuration of a Mach 2.2 supersonic cruise concept employing a 1980 - 1985 technology level, dry turbojet, mechanically suppressed engine, was calibrated to identify differences in noise levels and performance as determined by the methodology and ground rules used. In addition, economic and noise information is provided consistent with a previous study based on an advanced technology Mach 2.7 configuration, reported separately. Results indicate that the difference between NASA and manufacturer performance methodology is small. Resizing the aircraft to NASA groundrules results in negligible changes in takeoff noise levels (less than 1 EPNdB) but approach noise is reduced by 5.3 EPNdB as a result of increasing approach speed. For the power setting chosen, engine oversizing resulted in no reduction in traded noise. In terms of summated noise level, a 6 EPNdB reduction is realized for a 5% increase in total operating costs.
A comprehensive plan for helicopter drag reduction
NASA Technical Reports Server (NTRS)
Williams, R. M.; Montana, P. S.
1975-01-01
Current helicopters have parasite drag levels 6 to 10 times as great as fixed wing aircraft. The commensurate poor cruise efficiency results in a substantial degradation of potential mission capability. The paper traces the origins of helicopter drag and shows that the problem (primarily due to bluff body flow separation) can be solved by the adoption of a comprehensive research and development plan. This plan, known as the Fuselage Design Methodology, comprises both nonaerodynamic and aerodynamic aspects. The aerodynamics are discussed in detail and experimental and analytical programs are described which will lead to a solution of the bluff body problem. Some recent results of work conducted at the Naval Ship Research and Development Center (NSRDC) are presented to illustrate these programs. It is concluded that a 75-per cent reduction of helicopter drag is possible by the full implementation of the Fuselage Design Methodology.
A framework for assessing the adequacy and effectiveness of software development methodologies
NASA Technical Reports Server (NTRS)
Arthur, James D.; Nance, Richard E.
1990-01-01
Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.
NASA Astrophysics Data System (ADS)
Tzabiras, John; Spiliotopoulos, Marios; Kokkinos, Kostantinos; Fafoutis, Chrysostomos; Sidiropoulos, Pantelis; Vasiliades, Lampros; Papaioannou, George; Loukas, Athanasios; Mylopoulos, Nikitas
2015-04-01
The overall objective of this work is the development of an Information System which could be used by stakeholders for the purposes of water management as well as for planning and strategic decision-making in semi-arid areas. An integrated modeling system has been developed and applied to evaluate the sustainability of water resources management strategies in Lake Karla watershed, Greece. The modeling system, developed in the framework of the "HYDROMENTOR" research project, is based on a GIS modelling approach which uses remote sensing data and includes coupled models for the simulation of surface water and groundwater resources, the operation of hydrotechnical projects (reservoir operation and irrigation works) and the estimation of water demands at several spatial scales. Lake Karla basin was the region where the system was tested, but the methodology may be the basis for future analysis elsewhere. Two (2) base and three (3) management scenarios were investigated. In total, eight (8) water management scenarios were evaluated: i) the base scenario without operation of the reservoir and the designed Lake Karla district irrigation network (actual situation), together with variants for reduction of channel losses, alteration of irrigation methods, and introduction of greenhouse cultivation; ii) the base scenario including the operation of the reservoir and the Lake Karla district irrigation network, together with the same three variants. The results show that, under the existing water resources management, the water deficit of Lake Karla watershed is very large. However, the operation of the reservoir and the cooperative Lake Karla district irrigation network coupled with water demand management measures, like reduction of water distribution system losses and alteration of irrigation methods, could alleviate the problem and lead to sustainable and ecological use of water resources in the study area. Acknowledgements: This study has been supported by the research project "Hydromentor" funded by the Greek General Secretariat of Research and Technology in the framework of the E.U. co-funded National Action "Cooperation".
Lindner, M; Gramer, G; Garbade, S F; Burgard, P
2009-08-01
Tetrahydrobiopterin (BH(4)) cofactor loading is a standard procedure to differentiate defects of BH(4) metabolism from phenylalanine hydroxylase (PAH) deficiency. BH(4) responsiveness also exists in PAH-deficient patients with high residual PAH activity. Unexpectedly, single cases with presumed nil residual PAH activity have been reported to be BH(4) responsive, too. BH(4) responsiveness has been defined either by a ≥30% reduction of blood Phe concentration after a single BH(4) dose or by a decline greater than the individual circadian Phe level variation. Since both methods have methodological disadvantages, we present a model of statistical process control (SPC) to assess BH(4) responsiveness. Phe levels in 17 adult PKU patients of three phenotypic groups off diet were compared without and with three different single oral dosages of BH(4) applied in a double-blind randomized cross-over design. Results are compared for ≥30% reduction and SPC. The effect of BH(4) by ≥30% reduction was significant for groups (p < 0.01) but not for dose (p = 0.064), with no interaction of group with dose (p = 0.24). SPC revealed significant effects for group (p < 0.01) and the interaction for group with dose (p < 0.05) but not for dose alone (p = 0.87). After one or more loadings, seven patients would be judged to be BH(4) responsive either by the 30% criterion or by the SPC model, but only three by both. Results for patients with identical PAH genotype were not very consistent within (for different BH(4) doses) and between the two models. We conclude that a comparison of protein loadings without and with BH(4) combined with a standardized procedure for data analysis and decision would increase the reliability of diagnostic results.
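The two decision rules compared above can be mimicked on a single patient's data: the fixed-percentage rule flags responsiveness when the post-load Phe mean falls by at least 30% from baseline, while a simple statistical-process-control rule flags it when post-load values drop below a lower control limit derived from the patient's own baseline variation. The numbers and the 3-sigma limit below are generic SPC conventions used for illustration and are not necessarily the exact chart the authors applied; the example also shows how the two criteria can disagree for the same patient.

```python
import numpy as np

# Illustrative blood Phe concentrations (umol/L) for one adult PKU patient off diet.
baseline = np.array([1180, 1225, 1150, 1210, 1190, 1170, 1205])  # circadian profile without BH4
post_bh4 = np.array([980, 940, 910])                             # samples after a single BH4 load

# Rule 1: >=30% reduction of blood Phe relative to the baseline mean.
reduction = 1.0 - post_bh4.mean() / baseline.mean()
rule_30 = reduction >= 0.30

# Rule 2: simple SPC rule; post-load values fall below the lower 3-sigma control limit
# estimated from the patient's own baseline variation.
lcl = baseline.mean() - 3.0 * baseline.std(ddof=1)
rule_spc = np.all(post_bh4 < lcl)

print(f"mean reduction: {100 * reduction:.1f}%  -> 30% rule responsive: {rule_30}")
print(f"lower control limit: {lcl:.0f} umol/L -> SPC rule responsive: {rule_spc}")
```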
Opiate treatment for opiate withdrawal in newborn infants.
Osborn, David A; Jeffery, Heather E; Cole, Michael J
2010-10-06
Neonatal abstinence syndrome (NAS) due to opiate withdrawal may result in disruption of the mother-infant relationship, sleep-wake abnormalities, feeding difficulties, weight loss and seizures. To assess the effectiveness and safety of using an opiate compared to a sedative or non-pharmacological treatment for treatment of NAS due to withdrawal from opiates. The review was updated in 2010 with additional searches of CENTRAL, MEDLINE and EMBASE, supplemented by searches of conference abstracts and citation lists of published articles. Randomized or quasi-randomized controlled trials of opiate treatment in infants with NAS born to mothers with opiate dependence. Each author assessed study quality and extracted data independently. Nine studies enrolling 645 infants met inclusion criteria. There were substantial methodological concerns in all studies comparing an opiate with a sedative. Two small studies comparing different opiates were of good methodology. Opiate (morphine) versus supportive care (one study): A reduction in time to regain birth weight and duration of supportive care and a significant increase in hospital stay was noted. Opiate versus phenobarbitone (four studies): Meta-analysis found no significant difference in treatment failure. One study reported opiate treatment resulted in a significant reduction in treatment failure in infants of mothers using only opiates. One study reported a significant reduction in days of treatment and admission to the nursery for infants receiving morphine. One study reported a reduction in seizures, of borderline statistical significance, with the use of opiate. Opiate versus diazepam (two studies): Meta-analysis found a significant reduction in treatment failure with the use of opiate. Different opiates (six studies): there is insufficient data to determine safety or efficacy of any specific opiate compared to another opiate. Opiates compared to supportive care may reduce time to regain birth weight and duration of supportive care but increase duration of hospital stay. When compared to phenobarbitone, opiates may reduce the incidence of seizures but there is no evidence of effect on treatment failure. One study reported a reduction in duration of treatment and nursery admission for infants on morphine. Compared to diazepam, opiates reduce the incidence of treatment failure. A post-hoc analysis generates the hypothesis that initial opiate treatment may be restricted to infants of mothers who used opiates only. In view of the methodologic limitations of the included studies, the conclusions of this review should be treated with caution.
Villanti, Andrea C; Feirman, Shari P; Niaura, Raymond S; Pearson, Jennifer L; Glasser, Allison M; Collins, Lauren K; Abrams, David B
2018-03-01
To propose a hierarchy of methodological criteria to consider when determining whether a study provides sufficient information to answer the question of whether e-cigarettes can facilitate cigarette smoking cessation or reduction. A PubMed search to 1 February 2017 was conducted of all studies related to e-cigarettes and smoking cessation or reduction. Australia, Europe, Iran, Korea, New Zealand and the United States. 91 articles. Coders organized studies according to six proposed methodological criteria: (1) examines outcome of interest (cigarette abstinence or reduction), (2) assesses e-cigarette use for cessation as exposure of interest, (3) employs appropriate control/comparison groups, (4) ensures that measurement of exposure precedes the outcome, (5) evaluates dose and duration of the exposure and (6) evaluates the type and quality of the e-cigarette used. Twenty-four papers did not examine the outcomes of interest. Forty did not assess the specific reason for e-cigarette use as an exposure of interest. Twenty papers did not employ prospective study designs with appropriate comparison groups. The few observational studies meeting some of the criteria (duration, type, use for cessation) triangulated with findings from three randomized trials to suggest that e-cigarettes can help adult smokers quit or reduce cigarette smoking. Only a small proportion of studies seeking to address the effect of e-cigarettes on smoking cessation or reduction meet a set of proposed quality standards. Those that do are consistent with randomized controlled trial evidence in suggesting that e-cigarettes can help with smoking cessation or reduction. © 2017 Society for the Study of Addiction.
Using systems science for population health management in primary care.
Li, Yan; Kong, Nan; Lawley, Mark A; Pagán, José A
2014-10-01
Population health management is becoming increasingly important to organizations managing and providing primary care services given ongoing changes in health care delivery and payment systems. The objective of this study is to show how systems science methodologies could be incorporated into population health management to compare different interventions and improve health outcomes. The New York Academy of Medicine Cardiovascular Health Simulation model (an agent-based model) and data from the Behavioral Risk Factor Surveillance System were used to evaluate a lifestyle program that could be implemented in primary care practice settings. The program targeted Medicare-age adults and focused on improving diet and exercise and reducing weight. The simulation results suggest significant projected reductions in the proportion of the Medicare-age population with diabetes after implementation of the proposed lifestyle program over a relatively long term (3 and 5 years). Similar results were found for the subpopulations with high cholesterol, but the proposed intervention would not have a significant effect on the proportion of the population with hypertension over a time period of <5 years. Systems science methodologies can be useful to compare the health outcomes of different interventions. These tools can become an important component of population health management because they can help managers and other decision makers evaluate alternative programs in primary care settings. © The Author(s) 2014.
Yogurt for treating antibiotic-associated diarrhea: Systematic review and meta-analysis.
Patro-Golab, Bernadeta; Shamir, Raanan; Szajewska, Hania
2015-06-01
Antibiotic-associated diarrhea (AAD) is a common complication in individuals treated with antibiotics. The aim of this review was to systematically evaluate the efficacy of yogurt consumption for the prevention of AAD. In this systematic review, a number of databases including MEDLINE, EMBASE, and the Cochrane Library, with no language restrictions, were searched up to September 2014 for randomized controlled trials (RCTs) evaluating the effect of yogurt consumption in adults and children who were receiving antibiotics. The risk for bias was assessed using the Cochrane risk of bias tool. Two RCTs, both low in methodological quality, were included. Compared with no intervention, yogurt consumption reduced the risk for diarrhea in the fixed effect model (two RCTs, n = 314, relative risk [RR], 0.56; 95% confidence interval [CI], 0.31-1.00). Significant heterogeneity between the trials was detected (I(2) = 67%). The significant reduction in the risk for diarrhea was lost in the random effects model (RR, 0.45; 95% CI, 0.11-1.75). Given the simple nature of the intervention, the scarcity of data is noteworthy. No consistent effect of yogurt consumption for preventing AAD was shown. However, the data are limited and the included trials had methodological limitations. Results from large, rigorously designed RCTs are needed to assess the effect of yogurt consumption on AAD prevention. Copyright © 2015 Elsevier Inc. All rights reserved.
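For orientation, the sketch below shows the generic fixed-effect and DerSimonian-Laird random-effects pooling of log relative risks that underlies this kind of comparison; the per-trial effect sizes and variances are invented, not the data from the two included RCTs.

```python
# Generic inverse-variance pooling of log relative risks (illustrative numbers).
import numpy as np

log_rr = np.log(np.array([0.40, 0.75]))   # hypothetical per-trial relative risks
var = np.array([0.08, 0.05])              # hypothetical variances of the log RRs

w_fixed = 1.0 / var
pooled_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)

# DerSimonian-Laird between-trial variance tau^2, then random-effects weights.
q = np.sum(w_fixed * (log_rr - pooled_fixed) ** 2)
tau2 = max(0.0, (q - (len(var) - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))
w_random = 1.0 / (var + tau2)
pooled_random = np.sum(w_random * log_rr) / np.sum(w_random)

# With high heterogeneity (tau^2 > 0) the random-effects weights even out,
# which typically widens the confidence interval around the pooled RR.
print(np.exp(pooled_fixed), np.exp(pooled_random))
```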
NASA Astrophysics Data System (ADS)
Güth, Dirk; Schamoni, Markus; Maas, Jürgen
2013-09-01
No-load losses within brakes and clutches based on magnetorheological fluids are unavoidable and represent a major barrier towards their wide-spread commercial adoption. Completely torque free rotation is not yet possible due to persistent fluid contact within the shear gap. In this paper, a novel concept is presented that facilitates the controlled movement of the magnetorheological fluid from an active, torque-transmitting region into an inactive region of the shear gap. This concept enables complete decoupling of the fluid engaging surfaces such that viscous drag torque can be eliminated. In order to achieve the desired effect, motion in the magnetorheological fluid is induced by magnetic forces acting on the fluid, which requires an appropriate magnetic circuit design. In this investigation, we propose a methodology to determine suitable magnetic circuit designs with well-defined fail-safe behavior. The magnetically induced motion of magnetorheological fluids is modeled by the use of the Kelvin body force, and a multi-physics domain simulation is performed to elucidate various transitions between an engaged and disengaged operating mode. The modeling approach is validated by captured high-speed video frames which show the induced motion of the magnetorheological fluid due to the magnetic field. Finally, measurements performed with a prototype actuator prove that the induced viscous drag torque can be reduced significantly by the proposed magnetic fluid control methodology.
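For reference, the magnetically induced body force mentioned here is commonly written in the Kelvin form (assuming a non-conducting, magnetizable fluid)

\[ \mathbf{f}_{\mathrm{K}} = \mu_{0}\,\left(\mathbf{M}\cdot\nabla\right)\mathbf{H}, \]

where M is the fluid magnetization, H the magnetic field strength and μ0 the vacuum permeability, so field gradients draw the fluid toward regions of higher field strength; the paper's multi-physics simulation may employ an equivalent formulation.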
Analysis and design of a 3rd order velocity-controlled closed-loop for MEMS vibratory gyroscopes.
Wu, Huan-ming; Yang, Hai-gang; Yin, Tao; Jiao, Ji-wei
2013-09-18
The time-average method currently available is limited to analyzing the specific performance of the automatic gain control-proportional and integral (AGC-PI) based velocity-controlled closed-loop in a micro-electro-mechanical systems (MEMS) vibratory gyroscope, since it is hard to solve nonlinear functions in the time domain when the control loop reaches 3rd order. In this paper, we propose a linearization design approach to overcome this limitation by establishing a 3rd order linear model of the control loop and transferring the analysis to the frequency domain. Order reduction is applied to the built linear model's transfer function by constructing a zero-pole doublet, and therefore a mathematical expression for each of the control loop's performance specifications is obtained. Then an optimization methodology is summarized, which reveals that a robust, stable and swift control loop can be achieved by carefully selecting the system parameters following a priority order. Closed-loop drive circuits are designed and implemented using a 0.35 μm complementary metal oxide semiconductor (CMOS) process, and experiments carried out on a gyroscope prototype verify the optimization methodology: an optimized stability of the control loop can be achieved by constructing the zero-pole doublet, and the disturbance rejection capability (D.R.C) of the control loop can be improved by increasing the integral term.
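A minimal sketch of the order-reduction idea via a near pole-zero (doublet) cancellation is shown below, using SciPy and invented numbers rather than the paper's loop parameters:

```python
# Illustrative sketch (parameter values assumed): order reduction of a 3rd-order
# loop transfer function by cancelling a closely spaced zero-pole doublet.
import numpy as np
from scipy import signal

# Hypothetical 3rd-order closed-loop model: the zero at -9.8 rad/s nearly
# cancels the pole at -10 rad/s, leaving an effectively 2nd-order response.
full = signal.TransferFunction([50.0, 50.0 * 9.8],
                               np.polymul([1.0, 10.0], [1.0, 4.0, 25.0]))
reduced = signal.TransferFunction([50.0 * 9.8 / 10.0], [1.0, 4.0, 25.0])  # DC gain preserved

t = np.linspace(0, 5, 1000)
_, y_full = signal.step(full, T=t)
_, y_red = signal.step(reduced, T=t)
print(np.max(np.abs(y_full - y_red)))  # small residual error left by the doublet
```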
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
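The flavour of combining an engineering failure model with parameter uncertainties to estimate a failure probability can be conveyed with a simple Monte Carlo example; the Basquin-type life model and all distributions below are hypothetical and are not the PFA software:

```python
# Minimal Monte Carlo sketch in the spirit of probabilistic failure assessment:
# propagate parameter uncertainty through a simple fatigue-life model and
# estimate the probability of not reaching a required life.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Hypothetical Basquin-type life model N = A * S**(-b) with uncertain inputs.
A = rng.lognormal(mean=np.log(2.0e12), sigma=0.3, size=n)   # material scatter
b = rng.normal(4.0, 0.1, size=n)                            # exponent uncertainty
S = rng.normal(300.0, 15.0, size=n)                         # stress amplitude, MPa

life_cycles = A * S ** (-b)
required_life = 100.0                                       # assumed requirement, cycles
p_failure = np.mean(life_cycles < required_life)
print(f"estimated failure probability: {p_failure:.2e}")
```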
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are define, measure, analyse, improve and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytic phases was analysed. Improvement strategies were reviewed in the monthly intradepartmental meetings, and control of the units with high error rates was provided. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytic and 6 errors (5.6%) as post-analytic. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates, mainly in the pre-analytic and analytic phases.
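The defects-per-million and sigma-level arithmetic behind such figures can be sketched as follows; the opportunity counts are assumed for illustration and the 1.5-sigma shift is the usual industry convention:

```python
# Sketch of Six Sigma defects-per-million-opportunities (DPMO) and sigma level
# (counts below are illustrative, not the laboratory's exact figures).
from scipy.stats import norm

def dpmo(defects, opportunities):
    return 1_000_000 * defects / opportunities

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level with the conventional 1.5-sigma shift."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

first_half = dpmo(defects=68, opportunities=10_000_000)    # assumed opportunity count
second_half = dpmo(defects=13, opportunities=10_000_000)
print(first_half, sigma_level(first_half))    # 6.8 DPMO -> ~5.9 sigma
print(second_half, sigma_level(second_half))  # 1.3 DPMO -> ~6.2 sigma
```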
NASA Astrophysics Data System (ADS)
Westermayer, C.; Schirrer, A.; Hemedi, M.; Kozek, M.
2013-12-01
An ℋ∞ full information feedforward design approach for longitudinal motion prefilter design of a large flexible blended wing body (BWB) aircraft is presented. An existing approach is extended such that specifications concerning command tracking, limited control energy, and manoeuvre load reduction can be addressed simultaneously. To this end, the utilized design architecture is provided and manual tuning aspects are considered. In order to increase controller tuning efficiency, an automated tuning process based on several optimization criteria is proposed. Moreover, two design methodologies for the parameter-varying design case are investigated. The obtained controller is validated on a high-order nonlinear model, indicating the high potential of the presented approach for flexible aircraft control.
Photovoltaic System Pricing Trends: Historical, Recent, and Near-Term Projections 2015 Edition
Feldman, David; Barbose, Galen; Margolis, Robert; Bolinger, Mark; Chung, Donald; Fu, Ran; Seel, Joachim; Davidson, Carolyn; Wiser, Ryan
2016-05-13
This is the fourth edition in an annual briefing prepared jointly by LBNL and NREL intended to provide a high-level overview of historical, recent, and projected near-term PV system pricing trends in the United States. The briefing draws on several ongoing research activities at the two labs, including LBNL's annual Tracking the Sun report series, NREL's bottom-up PV cost modeling, and NREL's synthesis of PV market data and projections. The briefing examines progress in PV price reductions to help DOE and other PV stakeholders manage the transition to a market-driven PV industry, and integrates different perspectives and methodologies for characterizing PV system pricing, in order to provide a broader perspective on underlying trends within the industry.
Mukherji, Suresh K
2014-04-01
An accountable care organization is a form of a managed care organization in which a group of networked health care providers, which may include hospitals, group practices, networks of practices, hospital-provider partnerships, or joint ventures, are accountable for the health care of a defined group of patients. Initial results of the institutions participating in CMS's Physician Group Demonstration Project did not demonstrate a substantial reduction in imaging that could be directly attributed to the accountable care organization model. However, the initial results suggest that incentive-based methodology appears to be successful for increasing compliance for measuring quality metrics. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Methodology for the Assessment of 3D Conduction Effects in an Aerothermal Wind Tunnel Test
NASA Technical Reports Server (NTRS)
Oliver, Anthony Brandon
2010-01-01
This slide presentation reviews a method for the assessment of three-dimensional conduction effects during tests in an aerothermal wind tunnel. The test objectives were to duplicate and extend tests performed during the 1960s on thermal conduction on a protuberance on a flat plate. Slides review the 1D versus 3D conduction data reduction error, the analysis process, CFD-based analysis, the loose coupling method that simulates a wind tunnel test run, verification of the CFD solution, grid convergence, Mach number trends, size trends, and a summary of the CFD conduction analysis. Other slides show comparisons to pretest CFD at Mach 1.5 and 2.16 and the geometries of the models and grids.
Advanced electric motor technology: Flux mapping
NASA Technical Reports Server (NTRS)
Doane, George B., III; Campbell, Warren; Brantley, Larry W.; Dean, Garvin
1992-01-01
This report contains the assumptions, mathematical models, design methodology, and design points involved with the design of an electromechanical actuator (EMA) suitable for directing the thrust vector of a large MSFC/NASA launch vehicle. Specifically the design of such an actuator for use on the upcoming liquid fueled National Launch System (NLS) is considered culminating in a point design of both the servo system and the electric motor needed. A major thrust of the work is in selecting spur gear and roller screw reduction ratios to achieve simultaneously wide bandwidth, maximum power transfer, and disturbance rejection while meeting specified horsepower requirements at a given stroking speed as well as a specified maximum stall force. An innovative feedback signal is utilized in meeting these diverse objectives.
Utilization of non-conventional systems for conversion of biomass to food components
NASA Technical Reports Server (NTRS)
Karel, M.; Nakhost, Z.
1989-01-01
The potential use of micro-algae in yielding useful macronutrients for the CELSS is investigated. Algal proteins were isolated and characterized from green algae (Scenedesmus obliquus) grown under controlled conditions. The RNA and DNA contents were determined, and a methodology for reduction of the nucleic acid content to acceptable levels was developed. Lipid extraction procedures using supercritical fluids were tailored to removal of undesirable lipids and pigments. Initial steps toward preparation of model foods for potential use in the CELSS were taken. The goal was to fabricate food products which contain isolated algal macronutrients such as proteins and lipids and also some components derived from higher plants, including wheat flour, soy flour, potato powder (flakes), soy oil, and corn syrup.
NASA Technical Reports Server (NTRS)
Braen, C.
1978-01-01
The economic experiment, the results obtained to date and the work which still remains to be done are summarized. Specifically, the experiment design is described in detail as are the developed data collection methodology and procedures, sampling plan, data reduction techniques, cost and loss models, establishment of frost severity measures, data obtained from citrus growers, National Weather Service and Federal Crop Insurance Corp. Resulting protection costs and crop losses for the control group sample, extrapolation of results of control group to the Florida citrus industry and the method for normalization of these results to a normal or average frost season so that results may be compared with anticipated similar results from test group measurements are discussed.
Azami-Aghdash, Saber; Sadeghi-Bazarghani, Homayoun; Heydari, Mahdiyeh; Rezapour, Ramin; Derakhshani, Naser
2018-04-01
To review the effectiveness of interventions implemented for prevention of Road Traffic Injuries (RTIs) in Iran and to introduce some methodological issues. Required data for this systematic review study were collected by searching the following keywords: "Road Traffic Injuries", "Road Traffic accidents", "Road Traffic crashes", "prevention", and Iran in the PubMed and Cochrane Library electronic databases, Google Scholar, Scopus, MagIran, SID and IranMedex. Some of the relevant journals and web sites were searched manually. Reference lists of the selected articles were also checked. A grey literature search and expert contact were also conducted. Out of 569 retrieved articles, 8 articles were finally included. Among the included studies, the effectiveness of 10 interventions was assessed, including: seat belts, enforcement of laws and legislation, educational programs, helmet wearing, the Antilock Braking System (ABS), motorcyclists' penalty enforcement, pupil liaisons' education, provisional driver licensing, road bumps and traffic improvement plans. A reduction of the RTI rate was reported in 7 studies (9 interventions). Decreased rates of mortality from RTIs were reported in three studies. Only one study mentioned financial issues (the Anti-lock Brake System intervention). Inadequate data sources, inappropriate selection of statistical indices and failure to mention control of confounding variables (CV) were the most common methodological issues. The results of most interventional studies conducted in Iran supported the effect of the interventions on reduction of RTIs. However, due to some methodological or reporting shortcomings, the results of these studies should be interpreted cautiously.
Modeling the Impacts of Solar Distributed Generation on U.S. Water Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amanda, Smith; Omitaomu, Olufemi A; Jaron, Peck
2015-01-01
Distributed electric power generation technologies typically use little or no water per unit of electrical energy produced; in particular, renewable energy sources such as solar PV systems do not require cooling systems and present an opportunity to reduce water usage for power generation. Within the US, the fuel mix used for power generation varies regionally, and certain areas use more water for power generation than others. The need to reduce water usage for power generation is even more urgent in view of climate change uncertainties. In this paper, we present an example case within the state of Tennessee, one of the top four states in water consumption for power generation and one of the states with little or no potential for developing centralized renewable energy generation. The potential for developing PV generation within Knox County, Tennessee, is studied, along with the potential for reducing water withdrawal and consumption within the Tennessee Valley stream region. Electric power generation plants in the region are quantified for their electricity production and expected water withdrawal and consumption over one year, where electrical generation data is provided over one year and water usage is modeled based on the cooling system(s) in use. Potential solar PV electrical production is modeled based on LiDAR data and weather data for the same year. Our proposed methodology can be summarized as follows: First, the potential solar generation is compared against the local grid demand. Next, electrical generation reductions are specified that would result in a given reduction in water withdrawal and a given reduction in water consumption, and these are compared with the current water withdrawal and consumption rates for the existing fuel mix. The increase in solar PV development that would produce an equivalent amount of power is then determined. In this way, we consider how targeted local actions may affect the larger stream region through thoughtful energy development. This model can be applied to other regions and other types of distributed generation, and used as a framework for modeling alternative growth scenarios in power production capacity in addition to modeling adjustments to existing capacity.
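A back-of-the-envelope sketch of the water-offset idea follows; the generation figure and water-intensity factors are placeholders, not the values used in this study:

```python
# Illustrative water-offset arithmetic: PV generation displacing thermal
# generation avoids the water that the displaced generation would have used.
pv_generation_mwh = 250_000          # assumed annual PV generation displacing thermal power
withdrawal_gal_per_mwh = 20_000      # assumed withdrawal intensity of the displaced fuel mix
consumption_gal_per_mwh = 500        # assumed evaporative consumption intensity

avoided_withdrawal = pv_generation_mwh * withdrawal_gal_per_mwh
avoided_consumption = pv_generation_mwh * consumption_gal_per_mwh
print(f"avoided withdrawal:  {avoided_withdrawal:,.0f} gal/yr")
print(f"avoided consumption: {avoided_consumption:,.0f} gal/yr")
```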
Korostil, Igor A; Peters, Gareth W; Law, Matthew G; Regan, David G
2013-04-08
Deterministic dynamic compartmental transmission models (DDCTMs) of human papillomavirus (HPV) transmission have been used in a number of studies to estimate the potential impact of HPV vaccination programs. In most cases, the models were built under the assumption that an individual who cleared HPV infection develops (life-long) natural immunity against re-infection with the same HPV type (this is known as SIR scenario). This assumption was also made by two Australian modelling studies evaluating the impact of the National HPV Vaccination Program to assist in the health-economic assessment of male vaccination. An alternative view denying natural immunity after clearance (SIS scenario) was only presented in one study, although neither scenario has been supported by strong evidence. Some recent findings, however, provide arguments in favour of SIS. We developed HPV transmission models implementing life-time (SIR), limited, and non-existent (SIS) natural immunity. For each model we estimated the herd immunity effect of the ongoing Australian HPV vaccination program and its extension to cover males. Given the Australian setting, we aimed to clarify the extent to which the choice of model structure would influence estimation of this effect. A statistically robust and efficient calibration methodology was applied to ensure credibility of our results. We observed that for non-SIR models the herd immunity effect measured in relative reductions in HPV prevalence in the unvaccinated population was much more pronounced than for the SIR model. For example, with vaccine efficacy of 95% for females and 90% for males, the reductions for HPV-16 were 3% in females and 28% in males for the SIR model, and at least 30% (females) and 60% (males) for non-SIR models. The magnitude of these differences implies that evaluations of the impact of vaccination programs using DDCTMs should incorporate several model structures until our understanding of natural immunity is improved. Copyright © 2013 Elsevier Ltd. All rights reserved.
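A toy compartmental sketch of the two immunity assumptions being contrasted (SIS versus SIR) is given below; the rates are illustrative and these models are far simpler than the calibrated HPV transmission models used in the study:

```python
# Toy SIS vs SIR comparison (assumed rates, not calibrated HPV parameters).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.30, 0.10   # per-year transmission and clearance rates (assumed)

def sis(t, y):
    s, i = y
    return [-beta * s * i + gamma * i, beta * s * i - gamma * i]

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_eval = np.linspace(0, 100, 500)
sol_sis = solve_ivp(sis, (0, 100), [0.99, 0.01], t_eval=t_eval)
sol_sir = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0], t_eval=t_eval)
# With clearance conferring immunity (SIR), prevalence eventually burns out;
# without it (SIS), prevalence approaches an endemic level near 1 - gamma/beta.
print(sol_sis.y[1, -1], sol_sir.y[1, -1])
```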
Modeling of crude oil biodegradation using two phase partitioning bioreactor.
Fakhru'l-Razi, A; Peyda, Mazyar; Ab Karim Ghani, Wan Azlina Wan; Abidin, Zurina Zainal; Zakaria, Mohamad Pauzi; Moeini, Hassan
2014-01-01
In this work, crude oil biodegradation has been optimized in a solid-liquid two phase partitioning bioreactor (TPPB) by applying a response surface methodology based d-optimal design. Three key factors, including phase ratio, substrate concentration in the solid organic phase, and sodium chloride concentration in the aqueous phase, were taken as independent variables, while the efficiency of the biodegradation of absorbed crude oil on polymer beads was considered to be the dependent variable. Commercial thermoplastic polyurethane (Desmopan®) was used as the solid phase in the TPPB. The designed experiments were carried out batchwise using a mixed acclimatized bacterial consortium. Optimum combinations of key factors with a statistically significant cubic model were used to maximize biodegradation in the TPPB. The validity of the model was successfully verified by the good agreement between the model-predicted and experimental results. When applying the optimum parameters, gas chromatography-mass spectrometry showed a significant reduction in n-alkanes and low molecular weight polycyclic aromatic hydrocarbons. This consequently highlights the practical applicability of TPPB in crude oil biodegradation. © 2014 American Institute of Chemical Engineers.
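The response-surface idea can be sketched generically as a least-squares fit of a second-order polynomial in two of the factors; the synthetic data and quadratic form below are illustrative only (the study itself used a d-optimal design and a cubic model in three factors):

```python
# Generic least-squares fit of a second-order response surface in two factors.
import numpy as np

rng = np.random.default_rng(2)
phase_ratio = rng.uniform(0.05, 0.30, 30)          # assumed factor range
nacl = rng.uniform(0.0, 30.0, 30)                  # g/L, assumed factor range
# Synthetic biodegradation efficiency with an interior optimum plus noise.
y = 80 - 300 * (phase_ratio - 0.15) ** 2 - 0.05 * (nacl - 10) ** 2 \
    + rng.normal(0, 1.0, 30)

X = np.column_stack([np.ones_like(y), phase_ratio, nacl,
                     phase_ratio ** 2, nacl ** 2, phase_ratio * nacl])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # the fitted surface can then be optimized over the factor ranges
```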
Impact of uncertainty on modeling and testing
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Brown, Kendall K.
1995-01-01
A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) will greatly aid decisions concerning hardware performance and future development efforts. This report will describe the determination of the uncertainties in the modeling and testing of the Space Shuttle Main Engine test program at the Technology Test Bed facility at Marshall Space Flight Center. Section 2 will present a summary of the uncertainty analysis methodology used and discuss the specific applications to the TTB SSME test program. Section 3 will discuss the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relative to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentation, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.
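As a generic illustration of the propagation step for a venturi-type data reduction equation, the sketch below applies root-sum-square propagation with finite-difference sensitivities; the flow relation and all uncertainty values are assumed and do not represent the TTB data-reduction code:

```python
# Root-sum-square uncertainty propagation sketch for a generic venturi relation.
import numpy as np

def mdot(cd, a, rho, dp):
    """Generic venturi relation m_dot = Cd * A * sqrt(2 * rho * dP)."""
    return cd * a * np.sqrt(2.0 * rho * dp)

x = dict(cd=0.98, a=5.0e-3, rho=70.0, dp=2.0e5)   # nominal values (assumed)
u = dict(cd=0.005, a=2.0e-5, rho=0.7, dp=2.0e3)   # standard uncertainties (assumed)

nominal = mdot(**x)
var = 0.0
for name in x:
    step = 1e-6 * x[name]
    xp = dict(x)
    xp[name] += step
    sens = (mdot(**xp) - nominal) / step           # finite-difference sensitivity
    var += (sens * u[name]) ** 2
print(nominal, np.sqrt(var))                       # m_dot and its combined uncertainty
```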
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
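A minimal sketch of the KPCA step on synthetic data is shown below (scikit-learn's KernelPCA with an RBF kernel); the dimensions, kernel width and data are arbitrary, and the MCMC coupling is omitted:

```python
# Kernel PCA sketch: find a low-dimensional feature space for a high-dimensional
# "parameter field" before sampling (synthetic data only).
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
# 500 realizations of a 1000-dimensional field with nonlinear low-rank structure.
latent = rng.normal(size=(500, 3))
fields = np.tanh(latent @ rng.normal(size=(3, 1000))) + 0.01 * rng.normal(size=(500, 1000))

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
z = kpca.fit_transform(fields)             # low-dimensional coordinates for proposals
reconstructed = kpca.inverse_transform(z)  # map proposals back to the full space
print(z.shape, np.mean((fields - reconstructed) ** 2))
```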
NASA Astrophysics Data System (ADS)
Zhang, Xi; Lu, Jinling; Yuan, Shifei; Yang, Jun; Zhou, Xuan
2017-03-01
This paper proposes a novel parameter identification method for the lithium-ion (Li-ion) battery equivalent circuit model (ECM) considering the electrochemical properties. An improved pseudo-two-dimensional (P2D) model is established on the basis of partial differential equations (PDEs), in which the electrolyte potential is simplified from a nonlinear to a linear expression and the terminal voltage is divided into the electrolyte potential, open circuit voltage (OCV), overpotential of the electrodes, internal resistance drop, and so on. The model order reduction process is implemented by simplification of the PDEs using the Laplace transform, inverse Laplace transform, Pade approximation, etc. A unified second-order transfer function between cell voltage and current is obtained for comparability with that of the ECM. The final objective is to obtain the relationship between the ECM resistances/capacitances and the electrochemical parameters such that, under various conditions, ECM precision can be improved by incorporating the battery's interior properties for further applications, e.g., SOC estimation. Finally, simulation and experimental results confirm the correctness and validity of the proposed methodology.
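The second-order ECM transfer function being matched can be sketched directly: for a 2RC equivalent circuit, V(s)/I(s) = R0 + R1/(1 + R1 C1 s) + R2/(1 + R2 C2 s). The snippet below assembles this transfer function for assumed parameter values; matching its coefficients to the reduced P2D transfer function is the spirit of the proposed identification, not its actual implementation:

```python
# Build the 2RC equivalent-circuit transfer function with assumed parameters.
import numpy as np
from scipy import signal

R0, R1, C1, R2, C2 = 0.015, 0.020, 2000.0, 0.030, 15000.0   # assumed values (ohm, F)

tau1, tau2 = R1 * C1, R2 * C2
num = np.polyadd(
    np.polyadd(R0 * np.polymul([tau1, 1.0], [tau2, 1.0]),
               R1 * np.array([tau2, 1.0])),
    R2 * np.array([tau1, 1.0]))
den = np.polymul([tau1, 1.0], [tau2, 1.0])
ecm_tf = signal.TransferFunction(num, den)
print(ecm_tf)   # both numerator and denominator are second order in s
```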
Reduction of adverse aerodynamic effects of large trucks, Volume I. Technical report
DOT National Transportation Integrated Search
1978-09-01
The overall objective of this study has been to develop methods of minimizing three aerodynamic-related phenomena: truck-induced aerodynamic disturbances, splash, and spray. An analytical methodology has been developed and used to characterize aerody...
LORENZ: a system for planning long-bone fracture reduction
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Burgstaller, Wolfgang; Wirth, Joachim; Baumann, Bernard; Jacob, Augustinus L.; Bieri, Kurt; Traud, Stefan; Strub, Michael; Regazzoni, Pietro; Messmer, Peter
2003-05-01
Long bone fractures belong to the most common injuries encountered in clinical routine trauma surgery. Preoperative assessment and decision making is usually based on standard 2D radiographs of the injured limb. Taking into account that a 3D imaging modality such as computed tomography (CT) is not used for diagnosis in clinical routine, we have designed LORENZ, a fracture reduction planning tool based on such standard radiographs. Taking into account the considerable success of so-called image-free navigation systems for total knee replacement in orthopaedic surgery, we assume that a similar tool for long bone fracture reposition should have considerable impact on computer-aided trauma surgery in a standard clinical routine setup. The case for long bone fracture reduction is, however, somewhat more complicated, since more than scale-independent angles indicating biomechanical measures such as varus and valgus are involved. Reduction path planning requires that the individual anatomy and the classification of the fracture are taken into account. In this paper, we present the basic ideas of this planning tool, its current state, and the methodology chosen. LORENZ takes one or more conventional radiographs of the broken limb as input data. In addition, one or more x-rays of the opposite healthy bone are taken and mirrored if necessary. The most adequate CT model is selected from a database; currently, this is achieved by using a scale space approach on the digitized x-ray images and comparing standard perspective renderings to these x-rays. After finding a CT volume with a similar bone, a triangulated surface model is generated, and the surgeon can break the bone and arrange the fragments in 3D according to the x-ray images of the broken bone. Common osteosynthesis plates and implants can be loaded from CAD datasets and are visualized as well. In addition, LORENZ renders virtual x-ray views of the fracture reduction process. The hybrid surface/voxel rendering engine of LORENZ also features full collision detection of fragments and implants by using the RAPID collision detection library. The reduction path is saved, and a TCP/IP interface to a robot for executing the reduction was added. LORENZ is platform independent and was programmed using Qt, AVW and OpenGL. We present a prototype for computer-aided fracture reduction planning based on standard radiographs. First tests on clinical CT/X-ray image pairs showed good performance; a current effort focuses on improving the speed of model retrieval by using orthonormal image moment decomposition, and on clinical evaluation for both training and surgical planning purposes. Furthermore, user-interface aspects are currently under evaluation and will be discussed.
Reduction of streamflow monitoring networks by a reference point approach
NASA Astrophysics Data System (ADS)
Cetinkaya, Cem P.; Harmancioglu, Nilgun B.
2014-05-01
Adoption of an integrated approach to water management strongly forces policy and decision-makers to focus on hydrometric monitoring systems as well. Existing hydrometric networks need to be assessed and revised against the requirements on water quantity data to support integrated management. One of the questions that a network assessment study should resolve is whether a current monitoring system can be consolidated in view of the increased expenditures in time, money and effort imposed on the monitoring activity. Within the last decade, governmental monitoring agencies in Turkey have foreseen an audit on all their basin networks in view of prevailing economic pressures. In particular, they question how they can decide whether monitoring should be continued or terminated at a particular site in a network. The presented study is initiated to address this question by examining the applicability of a method called “reference point approach” (RPA) for network assessment and reduction purposes. The main objective of the study is to develop an easily applicable and flexible network reduction methodology, focusing mainly on the assessment of the “performance” of existing streamflow monitoring networks in view of variable operational purposes. The methodology is applied to 13 hydrometric stations in the Gediz Basin, along the Aegean coast of Turkey. The results have shown that the simplicity of the method, in contrast to more complicated computational techniques, is an asset that facilitates the involvement of decision makers in application of the methodology for a more interactive assessment procedure between the monitoring agency and the network designer. The method permits ranking of hydrometric stations with regard to multiple objectives of monitoring and the desired attributes of the basin network. Another distinctive feature of the approach is that it also assists decision making in cases with limited data and metadata. These features of the RPA approach highlight its advantages over the existing network assessment and reduction methods.
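A toy version of ranking candidate stations by weighted distance to a reference (ideal) point is sketched below; the criteria, weights and scores are invented and do not reproduce the Gediz Basin application:

```python
# Illustrative reference-point ranking of monitoring stations over
# normalized criteria (all scores and weights are made up).
import numpy as np

stations = ["S1", "S2", "S3", "S4"]
# columns: record length, basin coverage, data quality, downstream use (0-1, higher is better)
scores = np.array([[0.9, 0.4, 0.8, 0.7],
                   [0.5, 0.9, 0.6, 0.3],
                   [0.3, 0.2, 0.4, 0.9],
                   [0.8, 0.7, 0.9, 0.6]])
weights = np.array([0.3, 0.3, 0.2, 0.2])
reference = scores.max(axis=0)                     # best observed value per criterion

dist = np.sqrt(((weights * (reference - scores)) ** 2).sum(axis=1))
ranking = [stations[i] for i in np.argsort(dist)]  # closest to the reference rank first
print(ranking)   # stations at the bottom become candidates for discontinuation
```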
Jung, Eunkyung; Joo, Nami
2013-07-01
Response surface methodology was used to investigate the effects and interactions of processing variables such as roselle extract (0.1-1.3%) and soybean oil (5-20%) on the physicochemical, textural and sensory properties of cooked pork patties. It was found that reduction in thickness, pH, L* and b* values decreased, whereas water-holding capacity, reduction in diameter and a* values increased as the amount of roselle increased. Soybean oil addition increased water-holding capacity, reduction in thickness, and b* values of the patties. The hardness depended on the roselle and soybean oil added, as its linear effect was negative at p<0.01. The preference for color, tenderness, juiciness, and overall quality depended on the addition of roselle and soybean oil. The maximum overall quality score (5.42) was observed when 12.5 g of soybean oil and 0.7 g of roselle extract were added. The results of this optimization study would be useful for the meat industry, which tends to increase the product yield for patties using the optimum levels of ingredients by RSM. Copyright © 2013 Elsevier Ltd. All rights reserved.
Giménez, Estela; Sanz-Nebot, Victòria; Rizzi, Andreas
2013-09-01
Glycan reductive isotope labeling (GRIL) using [(12)C]- and [(13)C]-coded aniline was used for relative quantitation of N-glycans. In a first step, the labeling method by reductive amination was optimized for this reagent. It could be demonstrated that selecting aniline as limiting reactant and using the reductant in excess is critical for achieving high derivatization yields (over 95 %) and good reproducibility (relative standard deviations ∼1-5 % for major and ∼5-10 % for minor N-glycans). In a second step, zwitterionic-hydrophilic interaction liquid chromatography in capillary columns coupled to electrospray mass spectrometry with time-of-flight analyzer (μZIC-HILIC-ESI-TOF-MS) was applied for the analysis of labeled N-glycans released from intact glycoproteins. Ovalbumin, bovine α1-acid-glycoprotein and bovine fetuin were used as test glycoproteins to establish and evaluate the methodology. Excellent separation of isomeric N-glycans and reproducible quantitation via the extracted ion chromatograms indicate a great potential of the proposed methodology for glycoproteomic analysis and for reliable relative quantitation of glycosylation variants in biological samples.
NASA Astrophysics Data System (ADS)
Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi
2015-04-01
The global flood risk assessment is a prerequisite for setting global measurable targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) launched a Flagship Project in 2013 to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate the risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration between ICHARM and BfG was the first attempt, producing the first-step (1a) results on flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model on 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both calibrated BTOP models had similar statistical performance and represented observed flood river discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, the flood discharges of the selected return periods were estimated using flood frequency analysis and were comparable to the river gauging station data for the German part of the Rhine river basin. This is an important finding: both the 0.5-km and 20-km BTOP models produce similar flood peak discharges, although the 0.5-km BTOP model results indicate the importance of scale in local flood hazard assessment. In summary, we highlight that this study serves as a demonstrative example of institutional collaboration and is a stepping stone for the next step of implementation of the IFI Flagship Project.
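The flood-frequency step can be illustrated generically by fitting a Gumbel distribution to an annual-maximum series and reading off T-year discharges; the synthetic series below is not Rhine data, and the study's own frequency analysis may use a different distribution:

```python
# Generic flood-frequency sketch: fit a Gumbel distribution to annual maxima
# and estimate T-year floods (synthetic discharges, not observed data).
from scipy import stats

annual_max = stats.gumbel_r.rvs(loc=5000.0, scale=1200.0, size=25, random_state=3)  # m3/s

loc, scale = stats.gumbel_r.fit(annual_max)
for T in (10, 50, 100):
    q_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"{T:>3}-year flood: {q_T:,.0f} m3/s")
```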
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
2001-01-01
Given the predicted growth in air transportation, the potential exists for significant market niches for rotary wing subsonic vehicles. Technological advances which optimise rotorcraft aeromechanical behaviour can contribute significantly to both their commercial and military development, acceptance, and sales. Examples of the optimisation of rotorcraft aeromechanical behaviour which are of interest include the minimisation of vibration and/or loads. The reduction of rotorcraft vibration and loads is an important means to extend the useful life of the vehicle and to improve its ride quality. Although vibration reduction can be accomplished by using passive dampers and/or tuned masses, active closed-loop control has the potential to reduce vibration and loads throughout a wider flight regime whilst requiring less additional weight on the aircraft than that obtained by using passive methods. It is emphasised that the analysis described herein is applicable to all those rotorcraft aeromechanical behaviour optimisation problems for which the relationship between the harmonic control vector and the measurement vector can be adequately described by a neural-network model.
Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.
Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed
2018-03-01
Speed ripple at low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is considered one of the major issues to be addressed. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low speed operation. The main idea of the proposed control algorithm is to propose a specific objective function in order to incorporate the advantage of the fast optimization process of the GW optimizer. The role of the GW optimizer is to find the optimal input controls that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller and the parameters of its objective function are easy to tune. The GW controller is compared to a PI controller on a real test bench, and the superiority of the former is highlighted. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
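A compact sketch of the Grey Wolf Optimizer on a toy objective is given below; the two-parameter quadratic stands in for the speed-tracking cost, and this is not the authors' controller or objective function:

```python
# Minimal Grey Wolf Optimizer sketch minimizing a toy objective.
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, size=(n_wolves, lo.size))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, pos)
        alpha, beta, delta = pos[np.argsort(fitness)[:3]]   # three best wolves lead
        a = 2.0 * (1.0 - t / n_iter)                         # exploration decays to 0
        candidates = []
        for leader in (alpha, beta, delta):
            r1 = rng.random(pos.shape)
            r2 = rng.random(pos.shape)
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            candidates.append(leader - A * np.abs(C * leader - pos))
        pos = np.clip(sum(candidates) / 3.0, lo, hi)          # average of the three pulls
    fitness = np.apply_along_axis(objective, 1, pos)
    return pos[np.argmin(fitness)]

# Toy use: tune two gains against a quadratic surrogate of a tracking cost.
best = gwo(lambda g: (g[0] - 2.5) ** 2 + (g[1] - 0.8) ** 2,
           bounds=[(0.0, 10.0), (0.0, 5.0)])
print(best)   # should approach [2.5, 0.8]
```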
[Methodological deficits in neuroethics: do we need theoretical neuroethics?].
Northoff, G
2013-10-01
Current neuroethics can be characterized best as empirical neuroethics: it is strongly empirically oriented in that it not only includes empirical findings from neuroscience but also searches for applications within neuroscience. This, however, neglects the social and political contexts which could be subject to a future social neuroethics. In addition, methodological issues need to be considered as in theoretical neuroethics. The focus in this article is on two such methodological issues: (1) the analysis of the different levels and their inferences among each other which is exemplified by the inference of consciousness from the otherwise purely neuronal data in patients with vegetative state and (2) the problem of linking descriptive and normative concepts in a non-reductive and non-inferential way for which I suggest the mutual contextualization between both concepts. This results in a methodological strategy that can be described as contextual fact-norm iterativity.
Impact of entrainment on cloud droplet spectra: theory, observations, and modeling
NASA Astrophysics Data System (ADS)
Grabowski, W.
2016-12-01
Understanding the impact of entrainment and mixing on microphysical properties of warm boundary layer clouds is an important aspect of the representation of such clouds in large-scale models of weather and climate. Entrainment leads to a reduction of the liquid water content in agreement with the fundamental thermodynamics, but its impact on the droplet spectrum is difficult to quantify in observations and modeling. For in-situ (e.g., aircraft) observations, it is impossible to follow air parcels and observe processes that lead to changes of the droplet spectrum in different regions of a cloud. For similar reasons traditional modeling methodologies (e.g., the Eulerian large eddy simulation) are not useful either. Moreover, both observations and modeling can resolve only relatively narrow range of spatial scales. Theory, typically focusing on differences between idealized concepts of homogeneous and inhomogeneous mixing, is also of a limited use for the multiscale turbulent mixing between a cloud and its environment. This presentation will illustrate the above points and argue that the Lagrangian large-eddy simulation with appropriate subgrid-scale scheme may provide key insights and eventually lead to novel parameterizations for large-scale models.
Manufacturing data analytics using a virtual factory representation.
Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun
2017-01-01
Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. Manufacturing simulation models are presented both as data analytics applications themselves and as support for other data analytics applications, serving as data generators and as tools for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.