Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper presents a dynamic-model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method captures dynamic test requirements in dynamic models, so that dynamic test requirement tracing can be generated easily; it automatically produces standardized test requirements and test documentation, remedies inconsistency and incompleteness in related document content, and improves efficiency.
Software Surface Modeling and Grid Generation Steering Committee
NASA Technical Reports Server (NTRS)
Smith, Robert E. (Editor)
1992-01-01
It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
The FE-modeling of complex anatomical structures is not solved satisfyingly so far. Voxel-based as opposed to contour-based algorithms allow an automated mesh generation based on the image data. Nonetheless their geometric precision is limited. We developed an automated mesh-generator that combines the advantages of voxel-based generation with improved representation of the geometry by displacement of nodes on the object-surface. Models of an artificial 3D-pipe-section and a skullbase were generated with different mesh-densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation of the 3D-pipe-section model the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error and 0.126-0.273 for the geometric models. The highest element-energy error as a criterion for the mesh quality was 2.61×10^-2 N mm, 2.46×10^-2 N mm and 1.81×10^-2 N mm for unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D-skullbase resulted in the lowest element-energy error and volume error. This algorithm also allowed the best representation of anatomical details. The presented geometric mesh-generator is universally applicable and allows an automated and accurate modeling by combining the advantages of the voxel-technique and of improved surface-modeling.
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
In order to select effective samples from the large volume of multi-year PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Based on three types of weather (sunny, cloudy and rainy days), this research screens samples of historical data using the clustering analysis method. After screening, it establishes BP neural network prediction models using the screened data as training data. The six types of photovoltaic power generation prediction models are then compared before and after the data screening. Results show that the prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
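A minimal sketch of the screening-plus-forecasting pipeline described above, assuming k-means as the clustering step and scikit-learn's MLPRegressor as a stand-in for the BP neural network; the features, data, and cluster count are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Illustrative historical samples: [daily irradiance, temperature] -> PV output.
X = rng.uniform([100, 5], [1000, 35], size=(500, 2))
y = 0.15 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 5, 500)

# Step 1: cluster historical days into weather types (~ sunny / cloudy / rainy).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: train one BP-style network per weather type on the screened samples.
models = {}
for k in range(3):
    mask = km.labels_ == k
    models[k] = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=0).fit(X[mask], y[mask])

# Forecast: route a new day to its cluster's model.
x_new = np.array([[800.0, 25.0]])
k = km.predict(x_new)[0]
print(models[k].predict(x_new))
```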
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
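As a hedged illustration of automatically constructing a Markov reliability model from a top-down system description, the sketch below assembles a generator matrix for a simple redundant system from per-component failure rates; the three-unit structure and rates are invented for the example, not drawn from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical system description: failure rates (per hour) of 3 redundant units.
rates = [1e-4, 1e-4, 2e-4]
n = len(rates)

# States are bitmasks of working units; the Markov generator is built automatically.
Q = np.zeros((2**n, 2**n))
for s in range(2**n):
    for i, lam in enumerate(rates):
        if s & (1 << i):                 # unit i still working in state s
            Q[s, s ^ (1 << i)] += lam    # transition: unit i fails
    Q[s, s] = -Q[s].sum()

p0 = np.zeros(2**n); p0[2**n - 1] = 1.0  # start with all units working
p_t = p0 @ expm(Q * 1000.0)              # state distribution after 1000 h
print("P(all units failed) =", p_t[0])   # state 0: every unit has failed
```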
NASA Astrophysics Data System (ADS)
Das, L.; Dutta, M.; Akhter, J.; Meher, J. K.
2016-12-01
It is a challenging task to create station-level (local-scale) climate change information over the mountainous locations of the Western Himalayan Region (WHR) in India because of limited data availability and poor data quality. In the present study, missing values in station data were handled through the Multiple Imputation Chained Equation (MICE) technique. Finally, 22 rain gauge stations and 16 temperature stations with continuous records during the periods 1901-2005 and 1969-2009, respectively, were considered as reference stations for developing downscaled rainfall and temperature time series from five commonly available GCMs in the IPCC's different generation assessment reports, namely the 2nd, 3rd, 4th and 5th, hereafter known as SAR, TAR, AR4 and AR5, respectively. Downscaled models were developed using the combined data from the ERA-Interim reanalysis and GCM historical runs (although the forcings were not identical across generations) as predictors and station-level rainfall and temperature as predictands. Station-level downscaled rainfall and temperature time series were constructed for the five GCMs available in each generation. A regional averaged downscaled time series comprising all stations was prepared for each model and generation, and the downscaled results were compared with the observed time series. Finally, an Overall Model Improvement Index (OMII) was developed using the downscaling results, which was used to investigate model improvement across generations as well as the improvement of downscaling results obtained from the Empirical Statistical Downscaling (ESD) methods. In the case of temperature, models have improved from SAR to AR5 over the study area. In almost all the GCMs, TAR shows the worst performance over the WHR according to the different statistical indices used in this study. In the case of precipitation, no model has shown gradual improvement from SAR to AR5, for either interpolated or downscaled values.
Improvement of the model for surface process of tritium release from lithium oxide
NASA Astrophysics Data System (ADS)
Yamaki, Daiju; Iwamoto, Akira; Jitsukawa, Shiro
2000-12-01
Among the various tritium transport processes in lithium ceramics, the importance and the detailed mechanism of surface reactions remain to be elucidated. A dynamic adsorption and desorption model for tritium desorption from lithium ceramics, especially Li2O, was constructed. From the experimental results, it was considered that both H2 and H2O are dissociatively adsorbed on Li2O and generate OH- on the surface. In the first model, developed in 1994, it was assumed that the dissociative adsorption of either H2 or H2O on Li2O generates two OH- on the surface. However, recent calculation results show that the generation of one OH- and one H- is more stable than that of two OH-s for the dissociative adsorption of H2. Therefore, the assumption of H2 adsorption and desorption in the first model is revised, and the tritium release behavior from the Li2O surface is evaluated again using the improved model. The tritium residence time on the Li2O surface is calculated using the improved model, and the results are compared with the experimental results. The calculation results using the improved model agree better with the experimental results than those using the first model.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of reservoir deterministic optimal operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecasted inflow is treated as a variable in a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) is first applied to express the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) is proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, which presents more flexible options for decision makers; the highest assurance rate reaches 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
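A minimal sketch of the VaR idea used above, assuming a sample of output errors is available; it computes the empirical Value at Risk of the generation shortfall at a chosen confidence level (the paper's extreme value fitting and genetic algorithm are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical output errors (MW) caused by inflow forecast uncertainty.
output_error = rng.normal(loc=0.0, scale=50.0, size=10_000)

def value_at_risk(errors, confidence=0.95):
    """Empirical VaR: the shortfall (negative error) not exceeded
    with the given confidence."""
    losses = -errors
    return np.quantile(losses, confidence)

print("95% VaR of generation shortfall:",
      round(value_at_risk(output_error, 0.95), 1), "MW")
```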
NASA Astrophysics Data System (ADS)
Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu
2017-05-01
Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement the "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, with supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
NASA Astrophysics Data System (ADS)
Tan, Yimin; Lin, Kejian; Zu, Jean W.
2018-05-01
The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper proposes a generalized analytical model for linear generators in which slotted stator pole-shifting and the implementation of a Halbach array are combined for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis was conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array was constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model was developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.
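A small sketch of the first analytical step named above: decomposing an idealized Halbach-array magnetization into its spatial Fourier harmonics with numpy. The four-segment-per-wavelength pattern, pole pitch, and unit amplitude are illustrative assumptions, not the paper's geometry:

```python
import numpy as np

# Idealized 4-segment Halbach magnetization over one wavelength (y-component).
tau = 0.04                                   # pole pitch [m], assumed
L = 2 * tau                                  # spatial period
x = np.linspace(0, L, 400, endpoint=False)
dx = x[1] - x[0]
# Segment pattern up / sideways / down / sideways -> y-component shown.
My = np.select([x < 0.5 * tau, x < tau, x < 1.5 * tau],
               [1.0, 0.0, -1.0], default=0.0)

# Fourier sine coefficients of the magnetization (odd harmonics dominate).
for n in (1, 3, 5):
    bn = (2.0 / L) * np.sum(My * np.sin(2 * np.pi * n * x / L)) * dx
    print(f"harmonic n={n}: b_n = {bn:+.3f}")
```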
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
An optimal design of coreless direct-drive axial flux permanent magnet generator for wind turbine
NASA Astrophysics Data System (ADS)
Ahmed, D.; Ahmad, A.
2013-06-01
Different types of generators are currently being used in wind power technology. The commonly used ones are the induction generator (IG), doubly-fed induction generator (DFIG), electrically excited synchronous generator (EESG) and permanent magnet synchronous generator (PMSG). However, the use of the PMSG is rapidly increasing because of advantages such as higher power density, better controllability and higher reliability. This paper presents an innovative design of a low-speed, modular, direct-drive axial flux permanent magnet (AFPM) generator with coreless stator and rotor for a wind turbine power generation system, developed using mathematical and analytical methods. This innovative design is implemented in the MATLAB / Simulink environment using dynamic modelling techniques. The main focus of this research is to improve the efficiency of the wind power generation system by investigating electromagnetic and structural features of the AFPM generator during its operation in a wind turbine. The design is validated by comparing its performance with standard models of existing wind power generators. The comparison results demonstrate that the proposed model for the wind power generator exhibits a number of advantages such as improved efficiency with variable-speed operation, higher energy yield, lighter weight and better wind power utilization.
Generate an Argument: An Instructional Model
ERIC Educational Resources Information Center
Sampson, Victor; Grooms, Jonathon
2010-01-01
The Generate an Argument instructional model was designed to engage students in scientific argumentation. By using this model, students develop complex reasoning and critical-thinking skills, understand the nature and development of scientific knowledge, and improve their communication skills (Duschl and Osborne 2002). This article describes the…
Model verification of large structural systems. [space shuttle model response
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1978-01-01
A computer program for the application of parameter identification on the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
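A hedged sketch of the Bayesian-style updating loop described above: a linearized (Kalman-type) estimator is applied iteratively so stiffness scaling parameters pull the model's modal frequencies toward measured ones. The two-DOF system, prior covariances, and "measured" frequencies are invented for illustration:

```python
import numpy as np

# Hypothetical 2-DOF model: masses fixed, stiffness scaled by parameters theta.
M = np.diag([1.0, 1.0])
K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])

def frequencies(theta):
    K = theta[0] * K0 + theta[1] * np.diag([0.5, 0.5])
    w2 = np.linalg.eigvalsh(np.linalg.solve(M, K))
    return np.sqrt(np.abs(w2))

f_meas = np.array([1.05, 1.90])   # "measured" modal frequencies, assumed
theta = np.array([0.8, 0.8])      # prior parameter estimate
P = np.diag([0.2, 0.2])           # prior parameter covariance
R = np.diag([1e-3, 1e-3])         # measurement noise covariance

for _ in range(10):               # iterate the linear estimator
    # Finite-difference sensitivity (Jacobian) of frequencies w.r.t. theta.
    H = np.column_stack([(frequencies(theta + e) - frequencies(theta)) / 1e-6
                         for e in 1e-6 * np.eye(2)])
    G = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Bayesian gain
    theta = theta + G @ (f_meas - frequencies(theta))

print("updated stiffness scaling:", theta.round(3))
```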
NASA Workshop on future directions in surface modeling and grid generation
NASA Technical Reports Server (NTRS)
Vandalsem, W. R.; Smith, R. E.; Choo, Y. K.; Birckelbaw, L. D.; Vogel, A. A.
1992-01-01
Given here is a summary of the paper sessions and panel discussions of the NASA Workshop on Future Directions in Surface Modeling and Grid Generation held at NASA Ames Research Center, Moffett Field, California, December 5-7, 1989. The purpose was to assess U.S. capabilities in surface modeling and grid generation and take steps to improve the focus and pace of these disciplines within NASA. The organization of the workshop centered around overviews from NASA centers and expert presentations from U.S. corporations and universities. Small discussion groups were held and summarized by group leaders. Brief overviews and a panel discussion by representatives from the DoD were held, and a NASA-only session concluded the meeting. In the NASA Program Planning Session summary there are five recommended steps for NASA to take to improve the development and application of surface modeling and grid generation.
You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...
2016-01-12
This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information on wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
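A minimal sketch of the correlated time-series generation idea mentioned above: regional load and wind deviations are drawn with a specified cross-correlation via a Cholesky factor. The correlation matrix, means, and spreads are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
hours = 8760

# Assumed correlations between two regional loads and one wind series.
C = np.array([[1.0, 0.6, -0.2],
              [0.6, 1.0, -0.1],
              [-0.2, -0.1, 1.0]])   # order: load_A, load_B, wind_A
L = np.linalg.cholesky(C)

z = rng.standard_normal((3, hours))
corr = L @ z                        # correlated standard-normal deviations

load_A = 1000 + 150 * corr[0]       # MW, illustrative mean and spread
load_B = 800 + 120 * corr[1]
wind_A = np.clip(300 + 200 * corr[2], 0, None)

# Realized correlations closely track the target matrix C.
print(np.corrcoef(np.vstack([load_A, load_B, wind_A])).round(2))
```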
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models have high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make it possible to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing the nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature and C-rate range.
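A hedged sketch of the blending step described at the end of the abstract: outputs of reduced-order models pre-computed at neighboring state-of-charge setpoints are combined by linear weighting. The state-space matrices here are invented placeholders, not actual electrochemical ROMs, and linear output blending is only one of several possible blending strategies:

```python
import numpy as np

# Hypothetical pre-computed reduced-order models (discrete state-space) at
# two SOC setpoints; in practice these come from realization algorithms.
roms = {
    0.4: {"A": np.array([[0.95]]), "B": np.array([[0.01]]), "C": np.array([[1.0]])},
    0.6: {"A": np.array([[0.90]]), "B": np.array([[0.02]]), "C": np.array([[1.2]])},
}

def blended_output(soc, x, u):
    """Blend the outputs of the two bracketing setpoint models."""
    s0, s1 = 0.4, 0.6
    w = np.clip((soc - s0) / (s1 - s0), 0.0, 1.0)
    y = 0.0
    for s, wt in ((s0, 1.0 - w), (s1, w)):
        m = roms[s]
        y += wt * (m["C"] @ (m["A"] @ x + m["B"] * u)).item()
    return y

x = np.array([[0.5]])                  # shared reduced state, illustrative
print(blended_output(soc=0.55, x=x, u=1.0))
```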
Titan I propulsion system modeling and possible performance improvements
NASA Astrophysics Data System (ADS)
Giusti, Oreste
This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations---pertinent to rocket engine design---were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs that include, for example, bi-propellant combinations, pressure, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea level thrust and ISP. Experimental data are provided to compare the original engine configuration models to the derivative suggested improvement models.
NASA Astrophysics Data System (ADS)
Lawrence, D. M.; Fisher, R.; Koven, C.; Oleson, K. W.; Swenson, S. C.; Hoffman, F. M.; Randerson, J. T.; Collier, N.; Mu, M.
2017-12-01
The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to assess and help improve land models. The current package includes assessment of more than 25 land variables across more than 60 global, regional, and site-level (e.g., FLUXNET) datasets. ILAMB employs a broad range of metrics including RMSE, mean error, spatial distributions, interannual variability, and functional relationships. Here, we apply ILAMB to assess several generations of the Community Land Model (CLM4, CLM4.5, and CLM5). Encouragingly, CLM5, which is the result of model development over the last several years by more than 50 researchers from 15 different institutions, shows broad improvements across many ILAMB metrics including LAI, GPP, vegetation carbon stocks, and the historical net ecosystem carbon balance, among others. We will also show that considerable uncertainty arises from the historical climate forcing data used (GSWP3v1 and CRUNCEPv7). ILAMB score variations due to forcing data can be as large for many variables as those due to model structural differences. Strengths and weaknesses and persistent biases across model generations will also be presented.
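A simplified sketch of one benchmarking-style metric of the kind listed above, assuming the common pattern of mapping a normalized RMSE to a 0-1 skill score via an exponential; ILAMB's actual scoring functions differ in detail, so treat this as a schematic only:

```python
import numpy as np

def rmse_score(model, obs):
    """Map RMSE, normalized by observed variability, to a 0-1 score (1 = perfect)."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return float(np.exp(-rmse / np.std(obs)))

obs = np.sin(np.linspace(0, 12 * np.pi, 720))              # stand-in observations
old_gen = obs + np.random.default_rng(0).normal(0, 0.5, obs.size)
new_gen = obs + np.random.default_rng(0).normal(0, 0.2, obs.size)

print("older-generation score:", round(rmse_score(old_gen, obs), 3))
print("newer-generation score:", round(rmse_score(new_gen, obs), 3))
```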
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
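A toy Monte Carlo sketch in the spirit of the abstract: truncation selection is simulated over five generations while tracking genetic gain and the number of selected parents (a crude stand-in for effective population size). The single additive trait and fixed heritability are deliberate simplifications, not the study's genetic model:

```python
import numpy as np

rng = np.random.default_rng(7)
N, keep, h2, generations = 200, 40, 0.3, 5

breeding_values = rng.normal(0.0, 1.0, N)
for g in range(1, generations + 1):
    # Phenotype = breeding value + environmental noise set by heritability h2.
    phenotypes = breeding_values + rng.normal(0.0, np.sqrt(1 / h2 - 1), N)
    parents = breeding_values[np.argsort(phenotypes)[-keep:]]  # truncation selection
    # Random mating among selected parents; offspring get mid-parent value
    # plus Mendelian sampling noise.
    sires = rng.choice(parents, N)
    dams = rng.choice(parents, N)
    breeding_values = 0.5 * (sires + dams) + rng.normal(0, 0.5, N)
    print(f"gen {g}: mean gain = {breeding_values.mean():.2f}, "
          f"selected parents = {keep} (crude Ne proxy)")
```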
NASA Astrophysics Data System (ADS)
Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe
2017-04-01
Understanding the role of groundwater for runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully-integrated surface-subsurface modelling has shown potential in increasing process understanding for runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude their use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background in empirical data. To set up the model we used minimal data that could be reasonably expected to exist for any experimental catchment. A novel aspect of our approach was in using simplified model parameterisation and including parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in calibration objective functions. We identified sensitive parameters in all model domains (subsurface, surface, evapotranspiration), demonstrating that model calibration should be inclusive of parameters from these different model domains. Incorporating groundwater data in calibration objectives improved the model fit for groundwater levels, but simulations did not reproduce well the remotely sensed evapotranspiration time series even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley bottom riparian peatlands, leading to overland flow even during dry periods. Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling alongside with rigorous model calibration to better understand and visualise the role of groundwater in runoff generation even with limited datasets.
NASA Astrophysics Data System (ADS)
Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing
2014-11-01
Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.
DOT National Transportation Integrated Search
2014-08-01
Workshop Objectives: present the Texas Trip Generation Manual (how it was developed; how it can be used and built upon); provide examples and discuss; present generic WP attraction rates; review trip attractions and advanced models.
Model-driven approach to data collection and reporting for quality improvement
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J.; Majeed, Azeem; Bell, Derek
2014-01-01
Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. PMID:24874182
Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model.
Bi, Size; Liang, Xiao; Huang, Ting-Lei
2016-01-01
Word embedding, a lexical vector representation generated via a neural linguistic model (NLM), is empirically demonstrated to be appropriate for improving the performance of traditional language models. However, the high dimensionality inherent in NLMs contributes to problems with hyperparameters and long training times in modeling. Here, we propose a force-directed method to alleviate these problems by simplifying the generation of word embeddings. In this framework, each word is treated as a point in the real world; thus it can approximately simulate physical movement following certain mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of meaning combined in a 2-gram word group. In experiments on the natural language tasks of part-of-speech tagging, named entity recognition and semantic role labeling, the results demonstrate that the 2-dimensional word embedding can rival the word embeddings generated by classic NLMs in terms of accuracy, recall, and text visualization.
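A minimal sketch of the force-directed intuition: words are points in 2D, co-occurring 2-gram pairs attract, and all pairs weakly repel, echoing the physical-movement analogy. The toy corpus, force constants, and update rule are illustrative and do not reproduce the paper's fracture-mechanics formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = ["cat", "dog", "pet", "car", "road"]
pairs = [("cat", "pet"), ("dog", "pet"), ("car", "road")]   # 2-gram co-occurrences
idx = {w: i for i, w in enumerate(vocab)}
pos = rng.normal(0, 1, (len(vocab), 2))                     # 2D embeddings

for _ in range(500):
    force = np.zeros_like(pos)
    for i in range(len(vocab)):                             # weak global repulsion
        d = pos[i] - pos
        dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        force[i] += 0.01 * (d / dist**2).sum(axis=0)
    for a, b in pairs:                                      # attraction on 2-grams
        d = pos[idx[b]] - pos[idx[a]]
        force[idx[a]] += 0.05 * d
        force[idx[b]] -= 0.05 * d
    pos += force

for w in vocab:                      # co-occurring words end up close together
    print(w, pos[idx[w]].round(2))
```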
Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators
NASA Astrophysics Data System (ADS)
Nesarajah, Marco; Frey, Georg
This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model any TEG can be described and simulated given the material properties and the physical dimensions. Now, this model has been extended with the surrounding components to form a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports the results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
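A back-of-the-envelope sketch of the standard constant-property TEG electrical model underlying such component models: open-circuit voltage from the Seebeck coefficient, and power delivered into a load through the internal resistance. The parameter values are placeholders, not from the paper:

```python
# Simple constant-property TEG model:
#   V_oc = alpha * dT
#   P_load = V_oc^2 * R_load / (R_int + R_load)^2
alpha = 0.05      # effective Seebeck coefficient [V/K], assumed
R_int = 2.0       # internal resistance [ohm], assumed
dT = 80.0         # hot-side / cold-side temperature difference [K]

v_oc = alpha * dT
for r_load in (1.0, 2.0, 4.0):
    p = v_oc**2 * r_load / (R_int + r_load) ** 2
    print(f"R_load = {r_load:.1f} ohm -> P = {p:.2f} W")
# Maximum power transfer occurs at R_load == R_int, as the printout shows.
```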
The Use of Ambient Humidity Conditions to Improve Influenza Forecast
NASA Astrophysics Data System (ADS)
Shaman, J. L.; Kandula, S.; Yang, W.; Karspeck, A. R.
2017-12-01
Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing. These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecast and provide further evidence that humidity modulates rates of influenza transmission.
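A hedged sketch of humidity forcing in a transmission model, using the commonly cited exponential dependence of the basic reproductive number on specific humidity inside a discrete-time SIRS loop; all constants are illustrative, not the operational forecast system's calibrated values:

```python
import numpy as np

days = 365
t = np.arange(days)
q = 0.008 + 0.006 * np.sin(2 * np.pi * (t - 120) / 365.0)  # specific humidity [kg/kg]

R0_min, R0_max, a = 1.2, 2.5, 180.0                        # assumed constants
R0 = R0_min + (R0_max - R0_min) * np.exp(-a * q)           # humidity-modulated R0

N, D, Lim = 1e6, 4.0, 700.0    # population, infectious period [d], immunity [d]
S, I = 0.6 * N, 100.0
I_hist = []
for day in range(days):       # daily Euler steps of an SIRS model
    beta = R0[day] / D
    new_inf = beta * S * I / N
    S += (N - S - I) / Lim - new_inf
    I += new_inf - I / D
    I_hist.append(I)
print("epidemic peaks on day", int(np.argmax(I_hist)))
```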
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Kaufmann, Eric; Levene, Mark; Loizou, George
Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into social phenomena. Contextual and selection effects tend to produce extreme values in the tails of rank-ordered distributions of both census data and district-level election outcomes. Models that account for this nonlinearity generally outperform linear models. Fitting nonlinear functions based on rank-ordering census and election data therefore improves the fit of aggregate voting models. This may help improve ecological inference, as well as election forecasting in majoritarian systems. We propose a generative multiplicative decrease model that gives rise to a rank-order distribution and facilitates the analysis of the recent UK EU referendum results. We supply empirical evidence that the beta-like survival function, which can be generated directly from our model, is a close fit to the referendum results, and also may have predictive value when covariate data are available.
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
Model-driven approach to data collection and reporting for quality improvement.
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek
2014-12-01
Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G
NASA Technical Reports Server (NTRS)
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1972-01-01
A new and versatile method has been developed to supplement or replace the use of the original CINDA thermal analyzer program in order to take advantage of the improved systems software and machine speeds of third generation computers. The CINDA-3G program options offer a variety of methods for the solution of thermal analog models presented in network format.
Singlet Delta oxygen generation for chemical oxygen-iodine lasers
NASA Astrophysics Data System (ADS)
Georges, E.; Mouthon, A.; Barraud, R.
To improve the overall efficiency of chemical oxygen-iodine lasers, it is necessary to increase the generator production and yield of singlet delta oxygen at low and high pressure, respectively, for subsonic and supersonic lasers. The water vapor content must also be as low as possible. A generator model based on gas-liquid reaction and liquid-vapor equilibrium theories is presented. From model predictions, operating conditions have been drawn to attain the following experimental results in a bubble-column: by increasing the superficial gas velocity, the production of singlet delta oxygen is largely improved at low pressure; by mixing chlorine with an inert gas before injection in the reactor, this yield is maintained constant up to higher pressure.
Positive and negative generation effects in source monitoring.
Riefer, David M; Chien, Yuchin; Reimer, Jason F
2007-10-01
Research is mixed as to whether self-generation improves memory for the source of information. We propose the hypothesis that positive generation effects (better source memory for self-generated information) occur in reality-monitoring paradigms, while negative generation effects (better source memory for externally presented information) tend to occur in external source-monitoring paradigms. This hypothesis was tested in an experiment in which participants read or generated words, followed by a memory test for the source of each word (read or generated) and the word's colour. Meiser and Bröder's (2002) multinomial model for crossed source dimensions was used to analyse the data, showing that source memory for generation (reality monitoring) was superior for the generated words, while source memory for word colour (external source monitoring) was superior for the read words. The model also revealed the influence of strong response biases in the data, demonstrating the usefulness of formal modelling when examining generation effects in source monitoring.
Improving the Horizontal Transport in the Lower Troposphere with Four Dimensional Data Assimilation
The physical processes involved in air quality modeling are governed by dynamically-generated meteorological model fields. This research focuses on reducing the uncertainty in the horizontal transport in the lower troposphere by improving the four dimensional data assimilation (F...
Next generation agricultural system data, models and knowledge products: Introduction.
Antle, John M; Jones, James W; Rosenzweig, Cynthia E
2017-07-01
Agricultural system models have become important tools to provide predictive and assessment capability to a growing array of decision-makers in the private and public sectors. Despite ongoing research and model improvements, many of the agricultural models today are direct descendants of research investments initially made 30-40 years ago, and many of the major advances in data, information and communication technology (ICT) of the past decade have not been fully exploited. The purpose of this Special Issue of Agricultural Systems is to lay the foundation for the next generation of agricultural systems data, models and knowledge products. The Special Issue is based on a "NextGen" study led by the Agricultural Model Intercomparison and Improvement Project (AgMIP) with support from the Bill and Melinda Gates Foundation.
Next Generation Agricultural System Data, Models and Knowledge Products: Introduction
NASA Technical Reports Server (NTRS)
Antle, John M.; Jones, James W.; Rosenzweig, Cynthia E.
2016-01-01
Agricultural system models have become important tools to provide predictive and assessment capability to a growing array of decision-makers in the private and public sectors. Despite ongoing research and model improvements, many of the agricultural models today are direct descendants of research investments initially made 30-40 years ago, and many of the major advances in data, information and communication technology (ICT) of the past decade have not been fully exploited. The purpose of this Special Issue of Agricultural Systems is to lay the foundation for the next generation of agricultural systems data, models and knowledge products. The Special Issue is based on a 'NextGen' study led by the Agricultural Model Intercomparison and Improvement Project (AgMIP) with support from the Bill and Melinda Gates Foundation.
NASA Astrophysics Data System (ADS)
Aksoy, Hafzullah; Dahamsheh, Ahmad
2018-07-01
For forecasting monthly precipitation in an arid region, feed-forward back-propagation, radial basis function and generalized regression artificial neural networks (ANNs) are used in this study. The ANN models are improved after incorporation of a Markov chain-based algorithm (MC-ANNs) with which the percentage of dry months is forecasted perfectly, thus eliminating the generation of any non-physical negative precipitation. Because recorded precipitation time series are usually shorter than the length needed for a proper calibration of ANN models, synthetic monthly precipitation data are generated by the Thomas-Fiering model to further improve the forecasting performance. For case studies from Jordan, it is seen that only a slightly better performance is achieved with the use of MC and synthetic data. A conditional statement is, therefore, established and embedded into the ANN models after the incorporation of MC and the support of synthetic data, to substantially improve the ability of the models to forecast monthly precipitation in arid regions.
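A small sketch of the Markov-chain component described above: a two-state (dry/wet) monthly chain first decides whether a month is dry, and only wet months receive a positive amount, which removes non-physical negative forecasts. The transition probabilities are invented, and a gamma draw stands in for the ANN amount model:

```python
import numpy as np

rng = np.random.default_rng(5)
# Assumed monthly transition probabilities P[state_now][state_next]; 0=dry, 1=wet.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def simulate_months(n, state=0):
    series = []
    for _ in range(n):
        state = rng.choice(2, p=P[state])
        # Amount model (stand-in for the ANN): positive rain only in wet months.
        series.append(0.0 if state == 0 else rng.gamma(2.0, 15.0))
    return np.array(series)

sim = simulate_months(120)
print("simulated fraction of dry months:", (sim == 0).mean())
```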
NASA Technical Reports Server (NTRS)
Antle, John M.; Basso, Bruno; Conant, Richard T.; Godfray, H. Charles J.; Jones, James W.; Herrero, Mario; Howitt, Richard E.; Keating, Brian A.; Munoz-Carpena, Rafael; Rosenzweig, Cynthia
2016-01-01
This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve the goal of achieving sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.
Antle, John M; Basso, Bruno; Conant, Richard T; Godfray, H Charles J; Jones, James W; Herrero, Mario; Howitt, Richard E; Keating, Brian A; Munoz-Carpena, Rafael; Rosenzweig, Cynthia; Tittonell, Pablo; Wheeler, Tim R
2017-07-01
This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve the goal of achieving sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.
Assessment of MERRA-2 Land Surface Energy Flux Estimates
NASA Technical Reports Server (NTRS)
Draper, Clara; Reichle, Rolf; Koster, Randal
2017-01-01
In MERRA-2, observed precipitation is inserted in place of model-generated precipitation at the land surface. The use of observed precipitation was originally developed for MERRA-Land (a land-only replay of MERRA with model-generated precipitation replaced by observations). It was previously shown that the land hydrology in MERRA-2 and MERRA-Land is better than in MERRA. We test whether the improved land surface hydrology in MERRA-2 leads to the expected improvements in the land surface energy fluxes and 2 m air temperatures (T2m).
NASA Astrophysics Data System (ADS)
Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.
2012-04-01
The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both for recorded past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, which is a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes in many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within model iterations in little time. This is a significant improvement over linear processing on dedicated desktop machines or servers. It allows for accelerated and improved visual quality-checking iterations, which in turn can provide positive feedback into the overall model improvement. An approach to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The challenges addressed include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services to generate derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing of specific threat parameters and to accommodate model improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodson, Elke L.; Brown, Maxwell; Cohen, Stuart
We study the impact of achieving technology innovation goals, representing significant technology cost reductions and performance improvements, in both the electric power and end-use sectors by comparing outputs from four energy-economic models through the year 2050. We harmonize model input assumptions and then compare results in scenarios that vary natural gas prices, technology cost and performance metrics, and the implementation of a representative national electricity sector carbon dioxide (CO2) policy. Achieving the representative technology innovation goals decreases CO2 emissions in all models, regardless of natural gas price, due to increased energy efficiency and low-carbon generation becoming more cost competitive. For the models that include domestic natural gas markets, achieving the technology innovation goals lowers wholesale electricity prices, but this effect diminishes as projected natural gas prices increase. Higher natural gas prices lead to higher wholesale electricity prices but fewer coal capacity retirements. Some of the models include energy efficiency improvements as part of achieving the high-technology goals. Absent these energy efficiency improvements, low-cost electricity facilitates greater electricity consumption. The effect of implementing a representative electricity sector CO2 policy differs considerably depending on the cost and performance of generating and end-use technologies. The CO2 policy influences electric sector evolution in the cases with reference technology assumptions but has little to no influence in the cases that achieve the technology innovation goals. This outcome implies that meeting the representative technology innovation goals achieves a generation mix with similar CO2 emissions to the representative CO2 policy but with smaller increases in wholesale electricity prices. Finally, higher natural gas prices, achieving the representative technology innovation goals, and the combination of the two increase the amount of renewable generation that is cost-effective to build and operate while slowing the growth of natural gas-fired generation, which is the predominant generation type in 2050 under reference conditions.
NASA Astrophysics Data System (ADS)
Aziz, Nur Liyana Afiqah Abdul; Siah Yap, Keem; Afif Bunyamin, Muhammad
2013-06-01
This paper presents a new approach to fault detection for improving the efficiency of the circulating water system (CWS) in a power generation plant, using a hybrid Fuzzy Logic System (FLS) and Extreme Learning Machine (ELM) neural network. The FLS is a mathematical tool for handling the uncertainties where precision and significance are applied in the real world. It is based on natural language and has the ability of "computing with words". The ELM is an extremely fast learning algorithm for neural networks that can complete the training cycle in a very short time. By combining the FLS and ELM, a new hybrid model, i.e., FLS-ELM, is developed. The applicability of this proposed hybrid model is validated for fault detection in the CWS, which may help improve the overall efficiency of the power generation plant, hence consuming fewer natural resources and producing less pollution.
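A compact sketch of ELM training, which is what makes the hybrid fast: hidden-layer weights are random and fixed, and only the output weights are solved in closed form by a pseudoinverse. The data, feature count, and fault-label rule are illustrative, not the plant's actual signals:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                            # e.g. CWS sensor readings
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)   # assumed fault label

hidden = 50
W = rng.normal(size=(4, hidden))     # random input weights (never trained)
b = rng.normal(size=hidden)

H = np.tanh(X @ W + b)               # hidden-layer activations
beta = np.linalg.pinv(H) @ y         # one-shot least-squares output weights

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```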
The use of ambient humidity conditions to improve influenza forecast.
Shaman, Jeffrey; Kandula, Sasikiran; Yang, Wan; Karspeck, Alicia
2017-11-01
Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance (at 1-4 lead weeks, 3.8% more peak week and 4.4% more peak intensity forecasts are accurate than with no forcing) and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing (4.4% and 2.6% respectively). These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecast.
The use of ambient humidity conditions to improve influenza forecast
Kandula, Sasikiran; Karspeck, Alicia
2017-01-01
Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance (at 1–4 lead weeks, 3.8% more peak week and 4.4% more peak intensity forecasts are accurate than with no forcing) and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing (4.4% and 2.6% respectively). These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecast. PMID:29145389
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion was reached that a single grid generation methodology is not universally suited to all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
Teaching Scientific Practices: Meeting the Challenge of Change
ERIC Educational Resources Information Center
Osborne, Jonathan
2014-01-01
This paper provides a rationale for the changes advocated by the Framework for K-12 Science Education and the Next Generation Science Standards. It provides an argument for why the model embedded in the Next Generation Science Standards is seen as an improvement. The case made here is that the underlying model that the new Framework presents of…
Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.
Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F
2017-10-01
Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations, based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre-/post-test design in which students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for the use of sketching in supporting STEM learning are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM
NASA Astrophysics Data System (ADS)
Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan
2018-02-01
Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the correct prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, combining RNN-LSTM (Long Short-term Memory) dynamic prediction with a priori information sequence generation for public events. In prediction tasks, the model is capable of determining trends, and its accuracy was validated. The model yields better performance and prediction results than the previous one. Using a priori information increases prediction accuracy; LSTM adapts better to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task as well as to other prediction tasks involving time sequences.
Freight model improvement project for ECWRPC.
DOT National Transportation Integrated Search
2011-08-01
In early 2009 WisDOT, HNTB and ECWRPC completed the first phase of the Northeast Region Travel Demand Model. While the model includes a truck trip generation based on the quick response freight manual, the model lacks enough truck classification ...
Model for Increasing the Power Obtained from a Thermoelectric Generator Module
NASA Astrophysics Data System (ADS)
Huang, Gia-Yeh; Hsu, Cheng-Ting; Yao, Da-Jeng
2014-06-01
We have developed a model for finding the most efficient way of increasing the power obtained from a thermoelectric generator (TEG) module with a variety of operating conditions and limitations. The model is based on both thermoelectric principles and thermal resistance circuits, because a TEG converts heat into electricity consistent with these two theories. It is essential to take into account thermal contact resistance when estimating power generation. Thermal contact resistance causes overestimation of the measured temperature difference between the hot and cold sides of a TEG in calculation of the theoretical power generated, i.e. the theoretical power is larger than the experimental power. The ratio of the experimental open-loop voltage to the measured temperature difference, the effective Seebeck coefficient, can be used to estimate the thermal contact resistance in the model. The ratio of the effective Seebeck coefficient to the theoretical Seebeck coefficient, the Seebeck coefficient ratio, represents the contact conditions. From this ratio, a relationship between performance and different variables can be developed. The measured power generated by a TEG module (TMH400302055; Wise Life Technology, Taiwan) is consistent with the result obtained by use of the model; the relative deviation is 10%. Use of this model to evaluate the most efficient means of increasing the generated power reveals that the TEG module generates 0.14 W when the temperature difference is 25°C and the Seebeck coefficient ratio is 0.4. Several methods can be used to triple the amount of power generated. For example, increasing the temperature difference to 43°C generates 0.41 W power; improving the Seebeck coefficient ratio to 0.65 increases the power to 0.39 W; simultaneously increasing the temperature difference to 34°C and improving the Seebeck coefficient ratio to 0.5 increases the power to 0.41 W. Choice of the appropriate method depends on the limitations of the system, the cost, and the environment.
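A minimal sketch of the power estimate implied by the effective Seebeck coefficient idea; the module constants below are illustrative assumptions, so the printed values only reproduce the trend (more power via a larger temperature difference, a better Seebeck coefficient ratio, or both), not the paper's exact wattages:

```python
def teg_power(delta_T, S_th=0.05, s_ratio=0.4, R_int=2.0):
    """Estimate TEG module power at matched load (R_load = R_int).
    S_th: theoretical module Seebeck coefficient (V/K); s_ratio scales it
    to the effective value that absorbs thermal contact resistance losses."""
    S_eff = s_ratio * S_th            # effective Seebeck coefficient
    V_oc = S_eff * delta_T            # open-loop voltage
    return V_oc**2 / (4.0 * R_int)    # maximum power transfer at matched load

# The trade-off discussed above (numbers are assumptions, not the datasheet):
print(teg_power(25, s_ratio=0.4))     # baseline
print(teg_power(43, s_ratio=0.4))     # larger temperature difference
print(teg_power(25, s_ratio=0.65))    # better contact conditions
```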
Modeling bladder cancer in mice: opportunities and challenges
Kobayashi, Takashi; Owczarek, Tomasz B.; McKiernan, James M.; Abate-Shen, Cory
2015-01-01
The prognosis and treatment of bladder cancer have hardly improved in the last 20 years. Bladder cancer remains a debilitating and often fatal disease, and among the most costly cancers to treat. The generation of informative mouse models has the potential to improve our understanding of bladder cancer progression, as well as impact its diagnosis and treatment. However, relatively few mouse models of bladder cancer have been described and particularly few that develop invasive cancer phenotypes. This review focuses on opportunities for improving the landscape of mouse models of bladder cancer. PMID:25533675
Singlet delta oxygen generation for Chemical Oxygen-Iodine Lasers
NASA Astrophysics Data System (ADS)
Georges, E.; Mouthon, A.; Barraud, R.
1991-10-01
The development of Chemical Oxygen-Iodine Lasers is based on the generation of singlet delta oxygen. To improve the overall efficiency of these lasers, it is necessary to increase the generator production and yield of singlet delta oxygen at low and high pressure, respectively, for subsonic and supersonic lasers. Furthermore, the water vapor content must be as low as possible. A generator model, based on gas-liquid reaction and liquid-vapor equilibrium theories associated with thermophysical evaluations is presented. From model predictions, operating conditions have been drawn to attain the following experimental results in a bubble-column: by increasing the superficial gas velocity, the production of singlet delta oxygen is largely improved at low pressure; by mixing chlorine with an inert gas before injection in the reactor, this yield is maintained constant up to higher pressure. A theoretical analysis of these experimental results and their consequences for both subsonic and supersonic lasers are presented.
Improving plant bioaccumulation science through consistent reporting of experimental data.
Fantke, Peter; Arnot, Jon A; Doucette, William J
2016-10-01
Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent data collection and reporting requirements, the information generated is often less useful than it could be for direct applications in chemical assessments and for model development and refinement. We review existing testing guidelines, common data reporting practices, and provide recommendations for revising testing guidelines and reporting requirements to improve bioaccumulation knowledge and models. This analysis provides a list of experimental parameters that will help to develop high quality datasets and support modeling tools for assessing bioaccumulation of organic chemicals in plants and ultimately addressing uncertainty in ecological and human health risk assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Flow Control on Low-Pressure Turbine Airfoils Using Vortex Generator Jets
NASA Technical Reports Server (NTRS)
Volino, Ralph J.; Ibrahim, Mounir B.; Kartuzova, Olga
2010-01-01
Motivation: higher loading on Low-Pressure Turbine (LPT) airfoils can reduce airfoil count, weight, and cost and increase efficiency, but is limited by suction-side separation. A growing understanding of transition, separation, and wake effects has yielded improved models, exploitation of wake effects, and higher-lift airfoils in service. Further loading increases may require flow control, either passive (trips, dimples, etc.) or active (plasma actuators, vortex generator jets (VGJs)); the open question is whether increased loading can offset the higher losses of high-lift airfoils. Objectives: advance knowledge of boundary layer separation and transition under LPT conditions; demonstrate and improve understanding of separation control with pulsed VGJs; produce a detailed experimental database; and test and develop computational models.
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
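As a toy illustration of the 'model-driven' idea (the model syntax below is a hypothetical stand-in, not MOLGENIS's actual XML language or generator templates), a declarative entity model can be expanded into SQL DDL by a generator:

```python
# Hypothetical declarative model: entity name -> {field: SQL type}.
model = {
    "Sample": {"id": "INTEGER PRIMARY KEY", "tissue": "TEXT", "donor": "TEXT"},
    "Measurement": {"id": "INTEGER PRIMARY KEY", "sample_id": "INTEGER",
                    "value": "REAL"},
}

def generate_sql(model):
    """Expand each entity in the model into a CREATE TABLE statement,
    the way a generator template turns a model into boilerplate code."""
    stmts = []
    for entity, fields in model.items():
        cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in fields.items())
        stmts.append(f"CREATE TABLE {entity} (\n  {cols}\n);")
    return "\n\n".join(stmts)

print(generate_sql(model))
```

Regenerating from an amended model propagates a fix to every generated artifact at once, which is the quality argument made above.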
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management strategy which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return, at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved strategy which aims to generate a higher portfolio return than the benchmark index return while minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in portfolio mean return as compared to the existing model.
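A minimal sketch of generic enhanced index tracking (not the paper's sum weighted formulation): minimize tracking error while rewarding excess return, under long-only, fully-invested constraints:

```python
import numpy as np
from scipy.optimize import minimize

def enhanced_index_tracking(R, r_idx, lam=0.5):
    """R: T x N matrix of stock returns; r_idx: length-T index returns.
    Minimize tracking error minus lam * mean excess return; weights are
    long-only and sum to one. lam trades tracking fidelity for enhancement."""
    T, N = R.shape
    def objective(w):
        diff = R @ w - r_idx                    # active return per period
        return np.sqrt(np.mean(diff**2)) - lam * np.mean(diff)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    w0 = np.full(N, 1.0 / N)
    res = minimize(objective, w0, bounds=[(0, 1)] * N, constraints=cons)
    return res.x

# Synthetic returns for 8 stocks and an index built from them (illustrative)
rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(250, 8))
r_idx = R.mean(axis=1) + rng.normal(0, 0.002, 250)
w = enhanced_index_tracking(R, r_idx)
```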
Link, W.A.; Barker, R.J.
2008-01-01
Judicious choice of candidate generating distributions improves efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions.
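A minimal sketch of the idea for a one-dimensional posterior: Metropolis-Hastings with an independence proposal drawn from an approximation to the target, whose density enters the acceptance ratio:

```python
import numpy as np
from scipy import stats

def mh_independence(log_target, proposal, n=5000, seed=0):
    """Metropolis-Hastings with an independence proposal: candidates are
    drawn from an approximation to the target posterior, which typically
    yields far better acceptance rates than a blind random walk."""
    rng = np.random.default_rng(seed)
    x = proposal.rvs(random_state=rng)
    samples = []
    for _ in range(n):
        cand = proposal.rvs(random_state=rng)
        # acceptance ratio corrects for the proposal density
        log_a = (log_target(cand) - log_target(x)
                 + proposal.logpdf(x) - proposal.logpdf(cand))
        if np.log(rng.uniform()) < log_a:
            x = cand
        samples.append(x)
    return np.array(samples)

# Example: a skewed target; the proposal is a rough normal approximation
# to it (the 'approximate posterior' of the abstract).
log_target = lambda x: stats.skewnorm.logpdf(x, a=3)
approx = stats.norm(loc=0.76, scale=0.65)   # approximate moments of the target
draws = mh_independence(log_target, approx)
```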
A Reserve-based Method for Mitigating the Impact of Renewable Energy
NASA Astrophysics Data System (ADS)
Krad, Ibrahim
The fundamental operating paradigm of today's power systems is undergoing a significant shift. This is partially motivated by the increased desire for incorporating variable renewable energy resources into generation portfolios. While these generating technologies offer clean energy at zero marginal cost, i.e. no fuel costs, they also offer unique operating challenges for system operators. Perhaps the biggest operating challenge these resources introduce is accommodating their intermittent fuel source availability. For this reason, these generators increase the system-wide variability and uncertainty. As a result, system operators are revisiting traditional operating strategies to more efficiently incorporate these generation resources to maximize the benefit they provide while minimizing the challenges they introduce. One way system operators have accounted for system variability and uncertainty is through the use of operating reserves. Operating reserves can be simplified as excess capacity kept online during real time operations to help accommodate unforeseen fluctuations in demand. With new generation resources, a new class of operating reserves has emerged that is generally known as flexibility, or ramping, reserves. This new reserve class is meant to better position systems to mitigate severe ramping in the net load profile. The best way to define this new requirement is still under investigation. Typical requirement definitions focus on the additional uncertainty introduced by variable generation and there is room for improvement regarding explicit consideration for the variability they introduce. An exogenous reserve modification method is introduced in this report that can improve system reliability with minimal impacts on total system wide production costs. Another potential solution to this problem is to formulate the problem as a stochastic programming problem. The unit commitment and economic dispatch problems are typically formulated as deterministic problems due to fast solution times and the solutions being sufficient for operations. Improvements in technical computing hardware have reignited interest in stochastic modeling. The variability of wind and solar naturally lends itself to stochastic modeling. The use of explicit reserve requirements in stochastic models is an area of interest for power system researchers. This report introduces a new reserve modification implementation based on previous results to be used in a stochastic modeling framework. With technological improvements in distributed generation technologies, microgrids are currently being researched and implemented. Microgrids are small power systems that have the ability to serve their demand with their own generation resources and may have a connection to a larger power system. As battery technologies improve, they are becoming a more viable option in these distributed power systems and research is necessary to determine the most efficient way to utilize them. This report will investigate several unique operating strategies for batteries in small power systems and analyze their benefits. These new operating strategies will help reduce operating costs and improve system reliability.
Scientific Benchmarks for Guiding Macromolecular Energy Function Improvement
Leaver-Fay, Andrew; O’Meara, Matthew J.; Tyka, Mike; Jacak, Ron; Song, Yifan; Kellogg, Elizabeth H.; Thompson, James; Davis, Ian W.; Pache, Roland A.; Lyskov, Sergey; Gray, Jeffrey J.; Kortemme, Tanja; Richardson, Jane S.; Havranek, James J.; Snoeyink, Jack; Baker, David; Kuhlman, Brian
2013-01-01
Accurate energy functions are critical to macromolecular modeling and design. We describe new tools for identifying inaccuracies in energy functions and guiding their improvement, and illustrate the application of these tools to improvement of the Rosetta energy function. The feature analysis tool identifies discrepancies between structures deposited in the PDB and low energy structures generated by Rosetta; these likely arise from inaccuracies in the energy function. The optE tool optimizes the weights on the different components of the energy function by maximizing the recapitulation of a wide range of experimental observations. We use the tools to examine three proposed modifications to the Rosetta energy function: improving the unfolded state energy model (reference energies), using bicubic spline interpolation to generate knowledge-based torsional potentials, and incorporating the recently developed Dunbrack 2010 rotamer library (Shapovalov and Dunbrack, 2011). PMID:23422428
NASA Astrophysics Data System (ADS)
Mao, Y.; Crow, W. T.; Nijssen, B.
2017-12-01
Soil moisture (SM) plays an important role in runoff generation, both by partitioning infiltration and surface runoff during rainfall events and by controlling the rate of subsurface flow during inter-storm periods. Therefore, more accurate SM state estimation in hydrologic models is potentially beneficial for streamflow prediction. Various previous studies have explored the potential of assimilating SM data into hydrologic models for streamflow improvement. These studies have drawn inconsistent conclusions, ranging from significantly improved runoff via SM data assimilation (DA) to limited or degraded runoff. These studies commonly treat the whole assimilation procedure as a black box without separating the contribution of each step in the procedure, making it difficult to attribute the underlying causes of runoff improvement (or the lack thereof). In this study, we decompose the overall DA process into three steps by answering the following questions (3-step framework): 1) how much can assimilation of surface SM measurements improve the surface SM state in a hydrologic model? 2) how much does surface SM improvement propagate to deeper layers? 3) how much does (surface and deeper-layer) SM improvement propagate into runoff improvement? A synthetic twin experiment is carried out in the Arkansas-Red River basin (~600,000 km2), where a synthetic "truth" run, an open-loop run (without DA) and a DA run (where synthetic surface SM measurements are assimilated) are generated. All model runs are performed at 1/8 degree resolution and over a 10-year period using the Variable Infiltration Capacity (VIC) hydrologic model at a 3-hourly time step. For the DA run, the ensemble Kalman filter (EnKF) method is applied. The updated surface and deeper-layer SM states with DA are compared to the open-loop SM to quantitatively evaluate the first two steps in the framework. To quantify the third step, a set of perfect-state runs is generated where the "true" SM states are directly inserted into the model to assess the maximum possible runoff improvement that can be achieved by improving SM states alone. Our results show that the 3-step framework is able to effectively identify the potential as well as the bottlenecks of runoff improvement and to point out cases where runoff improvement via assimilation of surface SM is prone to failure.
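A minimal sketch of the EnKF analysis step at the core of such an experiment (a generic textbook perturbed-observation update, not the study's VIC configuration; dimensions and error values are illustrative):

```python
import numpy as np

def enkf_update(ens, obs, obs_err_var, H):
    """One ensemble Kalman filter analysis step, as used to assimilate
    surface soil moisture. ens: n_state x n_members ensemble matrix;
    obs: observation vector; H: linear observation operator."""
    n_obs = len(obs)
    X = ens - ens.mean(axis=1, keepdims=True)
    P = X @ X.T / (ens.shape[1] - 1)              # sample covariance
    R = np.eye(n_obs) * obs_err_var
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    rng = np.random.default_rng(0)
    analysis = ens.copy()
    for m in range(ens.shape[1]):
        perturbed = obs + rng.normal(0, np.sqrt(obs_err_var), n_obs)
        analysis[:, m] += K @ (perturbed - H @ ens[:, m])
    return analysis

# Two-layer soil column, 20 members: only the surface layer is observed,
# so improvement of the root-zone state comes solely from the ensemble
# correlation between layers (step 2 of the framework above).
rng = np.random.default_rng(1)
ens = np.vstack([rng.normal(0.25, 0.05, 20),   # surface SM
                 rng.normal(0.30, 0.04, 20)])  # root-zone SM
H = np.array([[1.0, 0.0]])
updated = enkf_update(ens, np.array([0.22]), 0.02**2, H)
```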
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Frequency control of wind turbine in power system
NASA Astrophysics Data System (ADS)
Xu, Huawei
2018-06-01
In order to improve the overall frequency stability of the power system, automatic power generation control and secondary frequency adjustment were applied. Automatic power generation control was introduced into power generation planning, and a doubly-fed wind generator power regulation model suitable for secondary frequency regulation was established. The results showed that this method satisfied the basic requirements of frequency regulation control for power systems with large-scale wind power access and improved the stability and reliability of power system operation. The frequency control method and strategy are relatively simple, the effect is significant, and the system frequency can quickly reach a steady state, so the approach is worth applying and promoting.
Principles of health economic evaluations of lipid-lowering strategies.
Ara, Roberta; Basarir, Hasan; Ward, Sue Elizabeth
2012-08-01
Policy decision-making in cardiovascular disease is increasingly informed by the results generated from decision-analytic models (DAMs). The methodological approaches and assumptions used in these DAMs impact on the results generated and can influence a policy decision based on a cost per quality-adjusted life year (QALY) threshold. Decision makers need to be provided with a clear understanding of the key sources of evidence and how they are used in the DAM to make an informed judgement on the quality and appropriateness of the results generated. Our review identified 12 studies exploring the cost-effectiveness of pharmaceutical lipid-lowering interventions published since January 2010. All studies used Markov models with annual cycles to represent the long-term clinical pathway. Important differences in the model structures and evidence base used within the DAMs were identified. Whereas the reporting standards were reasonably good, there were many instances when reporting of methods could be improved, particularly relating to baseline risk levels, long-term benefit of treatment and health state utility values. There is scope for improvement in the reporting of evidence and modelling approaches used within DAMs to provide decision makers with a clearer understanding of the quality and validity of the results generated. This would be assisted by fuller publication of models, perhaps through detailed web appendices.
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Existing ant colony optimization (ACO) for software testing case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO algorithms for software testing case generation: an improved local pheromone update strategy, an improved pheromone volatilization coefficient (IPVACO), and an improved global path pheromone update strategy (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO) based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
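A minimal sketch of the underlying idea, pheromone-guided test input generation with an adaptive volatilization coefficient (a drastic simplification of the IPVACO/IGPACO strategies; the program under test and all constants are assumptions):

```python
import random

def branch_coverage(x):
    """Branches covered by input x in a tiny hypothetical program under test."""
    covered = set()
    covered.add("b1" if x > 0 else "b2")
    covered.add("b3" if x % 2 == 0 else "b4")
    return covered

def aco_testgen(domain=range(-50, 51), ants=10, iters=30):
    """Ants pick inputs biased by pheromone; deposits reward inputs that
    cover new branches; the volatilization coefficient decays adaptively."""
    pher = {x: 1.0 for x in domain}
    rho, seen = 0.5, set()
    for _ in range(iters):
        for _ in range(ants):
            xs, ws = zip(*pher.items())
            x = random.choices(xs, weights=ws)[0]
            gain = len(branch_coverage(x) - seen)   # novel branches found
            seen |= branch_coverage(x)
            pher[x] += gain                         # reward novel coverage
        rho = max(0.1, rho * 0.95)                  # adaptive volatilization
        pher = {x: (1 - rho) * p for x, p in pher.items()}
        if len(seen) == 4:                          # all branches covered
            break
    return seen

print(aco_testgen())
```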
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
Development of 70 MW class superconducting generator with quick-response excitation
NASA Astrophysics Data System (ADS)
Miyaike, Kiyoshi; Kitajima, Toshio; Ito, Tetsuo
2002-03-01
The development of a superconducting generator was carried out over 12 years under the first stage of the Super GM project. The 70 MW class model machine with quick-response excitation was manufactured and evaluated in the project. This type of superconducting generator improves power system stability against rapid load fluctuations during power system faults. The model machine achieved all development targets, including high stability during rapid excitation control. It was also connected to the actual 77 kV electrical power grid as a synchronous condenser and demonstrated the advantages and high operational reliability of the superconducting generator.
NASA Astrophysics Data System (ADS)
Yamagishi, Y.; Yanaka, H.; Tsuboi, S.
2009-12-01
We have developed a conversion tool, called the KML generator, that converts seismic tomography data into KML; it is available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data giving longitude, latitude, and seismic velocity anomaly, with one data file per depth. Metadata, such as bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing a tomographic model. Recently the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. It proposes a new format based on JSON (JavaScript Object Notation), one of the data-interchange formats, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible by a script; in addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as the FDSN standard format for seismic tomographic models. This format may come to be accepted not only by European seismologists but also as a world standard. We have therefore improved our KML generator for seismic tomography to accept data files in the JSON format, and improved the web application so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena for comparing various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for a geoscience browser.
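A minimal sketch of the conversion, assuming a hypothetical JSON layout along the lines described above (the real FDSN schema will differ in field names):

```python
import json

# Hypothetical JSON layout: metadata plus grid-point records.
doc = json.loads("""
{"metadata": {"model": "ExampleTomo", "depth_km": 100},
 "grid": [{"lon": 140.0, "lat": 35.0, "dvp": -1.2},
          {"lon": 141.0, "lat": 35.0, "dvp": 0.8}]}
""")

def to_kml(doc):
    """Turn each grid point into a KML Placemark for display on Google Earth."""
    pm = []
    for g in doc["grid"]:
        pm.append(
            "  <Placemark>\n"
            f"    <name>dVp {g['dvp']:+.1f}%</name>\n"
            f"    <Point><coordinates>{g['lon']},{g['lat']},0</coordinates></Point>\n"
            "  </Placemark>")
    body = "\n".join(pm)
    return ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
            "<kml xmlns=\"http://www.opengis.net/kml/2.2\"><Document>\n"
            f"{body}\n</Document></kml>")

print(to_kml(doc))
```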
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Clegg, S. M.; Frydenvang, J.
2015-12-01
One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
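The selection-and-blending logic can be sketched as follows (ranges follow the SiO2 example above; the predict step of each PLS submodel is stubbed out with example numbers):

```python
# Sketch of the submodel selection/blending described above; each entry in
# sub_preds would come from a PLS model trained on its composition range.
RANGES = {"low": (0, 50), "mid": (30, 70), "high": (60, 100)}

def blended_prediction(full_pred, sub_preds):
    """full_pred: prediction of the 0-100 wt.% model, used only to pick or
    blend submodels; sub_preds: dict of submodel predictions."""
    # collect submodels whose range contains the full-model estimate
    hits = [k for k, (lo, hi) in RANGES.items() if lo <= full_pred <= hi]
    if len(hits) == 1:
        return sub_preds[hits[0]]
    # overlap region: linear weighted sum based on position in the overlap
    a, b = hits                      # e.g. "low" and "mid"
    lo = RANGES[b][0]                # overlap start
    hi = RANGES[a][1]                # overlap end
    w = (full_pred - lo) / (hi - lo)
    return (1 - w) * sub_preds[a] + w * sub_preds[b]

# Full model says ~40 wt.%: halfway through the low/mid overlap, so the
# low and mid submodel predictions are averaged with equal weight.
print(blended_prediction(40.0, {"low": 38.5, "mid": 42.0, "high": 65.0}))
```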
NASA Astrophysics Data System (ADS)
Andreadis, K.; Margulis, S. A.; Li, D.; Lettenmaier, D. P.
2017-12-01
The Surface Water and Ocean Topography (SWOT) satellite will provide critical surface water observations for the hydrologic community. However, production of key SWOT variables, such as river discharge and surface inundation, as well as lake, reservoir, and wetland storage change will be complicated by the discontinuity of the observations in space and time. A methodology that generates products with spatially and temporally continuous fields based on SWOT observables would be highly desirable. Data assimilation provides a mechanism for merging observations from SWOT with model predictions in order to produce estimates of quantities such as river discharge, storage change, and water heights for locations and times when there is no satellite overpass or other constraints (such as layover) render the measurement unusable. We describe here a prototype assimilation system with application to the Upper Mississippi basin, implemented using synthetic SWOT observations. We use a hydrologic model (VIC) coupled with a hydrodynamic model (LISFLOOD-FP) which generates "true" fields of surface water variables. The true fields are then used to generate synthetic SWOT observations using the SWOT Instrument Simulator. We also perform a "first-guess" (or open-loop) simulation with the coupled model using a configuration that contains errors representative of the imperfect knowledge of parameters and input data, including channel topography, bankfull widths and depths, and inflows, to create an ensemble of 20 model trajectories. Subsequently we assimilate the synthetic SWOT observations into the open-loop model results to estimate water surface elevation, discharge, and storage change. Our preliminary results using three data assimilation strategies show that all three improve water surface elevation estimate accuracy by 25-35% for a river reach of the upper Mississippi River. Ongoing work is examining whether the improved water surface elevation estimates propagate to improvements in river discharge.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
Transient analysis of a superconducting AC generator using the compensated 2-D model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, Y.D.; Lee, H.W.; Lee, J.
1999-09-01
An SCG has many advantages over conventional generators, such as reduction in width and size, improvement in efficiency, and better steady-state stability. The paper presents a 2-D transient analysis of a superconducting AC generator (SCG) using the finite element method (FEM). The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% relative to experimental data. The transient characteristics of the 30 KVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.
Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook
2013-12-01
The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chao ..; Singh, Vijay P.; Mishra, Ashok K.
2013-02-06
This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulae and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among the several applications of the improved distribution, we present here its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by generalizing daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three other alternative models: the conventional two-state Markov chain generator, the transition probability matrix model and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. Analyses establish that overall the developed generator is capable of reproducing characteristics of historical extreme rainfall events and is apt at extrapolating rare values beyond the upper range of available observed data. Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way. Another interesting observation is that the recognized 'overdispersion' problem in daily rainfall simulation is attributable more to the loss of rainfall extremes than to the under-representation of first-order persistence. The developed generator appears to be a sound option for daily rainfall simulation, especially in hydrologic planning situations when rare rainfall events are of great importance.
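A minimal sketch of the generator's core idea, assuming a Gaussian copula for the day-to-day dependence (the paper selects among 10 copula families) and a gamma marginal standing in for the hybrid marginal:

```python
import numpy as np
from scipy import stats

def simulate_rainfall(n_days, p_wet=0.3, rho=0.6, shape=0.7, scale=8.0, seed=0):
    """Daily rainfall as a Markov process: an AR(1) in Gaussian space is
    equivalent to a bivariate Gaussian copula between consecutive days,
    so occurrence and amount are generated by one unified process."""
    rng = np.random.default_rng(seed)
    z = rng.normal()
    rain = np.empty(n_days)
    for t in range(n_days):
        z = rho * z + np.sqrt(1 - rho**2) * rng.normal()
        u = stats.norm.cdf(z)                     # uniform with persistence
        if u < 1 - p_wet:
            rain[t] = 0.0                         # dry day (point mass at 0)
        else:
            u_wet = (u - (1 - p_wet)) / p_wet     # rescale to (0, 1)
            rain[t] = stats.gamma.ppf(u_wet, a=shape, scale=scale)
    return rain

series = simulate_rainfall(365)
print((series > 0).mean(), series.max())
```

Because occurrence and amount share the one latent process, wet-day persistence and wet-spell amount correlation emerge automatically, which is the property the abstract highlights.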
NASA Astrophysics Data System (ADS)
Das, Lalu; Meher, Jitendra K.; Akhter, Javed
2017-04-01
Assessing climate change information over the Western Himalayan Region (WHR) of India is a crucial but challenging task due to the limited number of stations and the large number of missing values in their records. Missing values in the station data were filled using the Multiple Imputation by Chained Equations (MICE) technique. Finally, 22 rain gauge stations with continuous data during 1901-2005 and 16 stations with continuous temperature data during 1969-2009 were considered as reference stations for assessing rainfall and temperature trends, in addition to evaluating the GCMs available in the Coupled Model Intercomparison Project, Phase 3 (CMIP3) and Phase 5 (CMIP5), over the WHR. Station data indicate that winter warming is higher and more rapid (1.05°C over the last 41 years) than in other seasons, with less warming in the post-monsoon season. Area averages of the 22 station records indicate that monsoon and winter rainfall decreased by -5 mm and -320 mm during 1901-2000, while pre-monsoon and post-monsoon rainfall showed increasing trends of 21 mm and 13 mm, respectively. The present study constructs downscaled climate change information at station locations (22 stations for rainfall and 16 for temperature) over the WHR from the GCMs available in the IPCC's different generations of assessment reports, namely the 2nd, 3rd, 4th and 5th, known as SAR, TAR, AR4 and AR5, respectively. Once the downscaled results are obtained for each generation's model outputs, a comparison is carried out among the results of each generation. Finally, an overall model improvement index (OMII) is developed from the downscaling results and used to investigate model improvement across generations as well as the improvement of downscaling results obtained from empirical statistical downscaling (ESD) methods. In general, the results indicate a gradual improvement of GCM simulations as well as downscaling results across generations. Key words: MICE Techniques, CMIP3, CMIP5, ESD and OMII
USDA-ARS?s Scientific Manuscript database
In this paper we generated DNA fingerprints and end sequences from bacterial artificial chromosomes (BACs) from two new libraries to improve the first generation integrated physical and genetic map of the rainbow trout (Oncorhynchus mykiss) genome. The current version of the physical map is compose...
King, Zachary A; O'Brien, Edward J; Feist, Adam M; Palsson, Bernhard O
2017-01-01
The metabolic byproducts secreted by growing cells can be easily measured and provide a window into the state of a cell; they have been essential to the development of microbiology, cancer biology, and biotechnology. Progress in computational modeling of cells has made it possible to predict metabolic byproduct secretion with bottom-up reconstructions of metabolic networks. However, owing to a lack of data, it has not been possible to validate these predictions across a wide range of strains and conditions. Through literature mining, we were able to generate a database of Escherichia coli strains and their experimentally measured byproduct secretions. We simulated these strains in six historical genome-scale models of E. coli, and we report that the predictive power of the models has increased as they have expanded in size and scope. The latest genome-scale model of metabolism correctly predicts byproduct secretion for 35/89 (39%) of designs. The next-generation genome-scale model of metabolism and gene expression (ME-model) correctly predicts byproduct secretion for 40/89 (45%) of designs, and we show that ME-model predictions could be further improved through kinetic parameterization. We analyze the failure modes of these simulations and discuss opportunities to improve prediction of byproduct secretion. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
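As a toy illustration of how such models predict byproduct secretion (a three-reaction made-up network solved as a linear program, not a genome-scale reconstruction or the COBRA toolbox API):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize growth subject to steady state
# (S v = 0) and an uptake bound, then read off the secretion flux.
# Real genome-scale models have thousands of reactions; the coupling
# coefficient here is invented to force overflow-style secretion.
#             uptake  growth  secrete
S = np.array([[ 1.0,  -1.0,  -1.0],    # carbon metabolite balance
              [ 0.0,   1.0,  -0.5]])   # cofactor coupling (illustrative)
c = np.array([0.0, -1.0, 0.0])         # linprog minimizes, so -growth
bounds = [(0, 10.0), (0, None), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
uptake, growth, byproduct = res.x
print(f"growth {growth:.2f}, predicted byproduct secretion {byproduct:.2f}")
```

Validating such predictions against the measured secretions in the literature-mined database is what allows the historical models to be scored as described above.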
NASA Astrophysics Data System (ADS)
Jawak, Shridhar D.; Luis, Alvarinho J.
2016-05-01
A digital elevation model (DEM) is indispensable for analyses such as topographic feature extraction, ice sheet melting, slope stability analysis, landscape analysis and so on. Such analyses require a highly accurate DEM. Available DEMs of the Antarctic region compiled using radar altimetry and the Antarctic digital database indicate elevation variations of up to hundreds of meters, which necessitates the generation of an improved local DEM. An improved DEM of the Schirmacher Oasis, East Antarctica has been generated by synergistically fusing satellite-derived laser altimetry data from the Geoscience Laser Altimetry System (GLAS), Radarsat Antarctic Mapping Project (RAMP) elevation data and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global elevation data (GDEM). This is a distinctive attempt to generate a DEM of a part of Antarctica by fusing multiple elevation datasets, which is essential for modeling ice elevation change and addressing ice mass balance. We analyzed a suite of interpolation techniques for constructing a DEM from GLAS, RAMP and ASTER DEM-based point elevation datasets, in order to determine the level of confidence with which the interpolation techniques can generate a better interpolated continuous surface, and eventually improve the elevation accuracy of the DEM derived from the synergistically fused RAMP, GLAS and ASTER point elevation datasets. The DEM presented in this work has a vertical accuracy (≈ 23 m) better than the RAMP DEM (≈ 57 m) and the ASTER DEM (≈ 64 m) individually. The RAMP DEM and ASTER DEM elevations were corrected using differential GPS elevations as ground reference data, and the accuracy obtained after fusing the multitemporal datasets is better than that of existing DEMs constructed using RAMP or ASTER alone. This is our second attempt at fusing multitemporal, multisensor and multisource elevation data to generate a DEM of Antarctica. Our approach focuses on the strengths of each elevation data source to produce an accurate elevation model.
Thermal Analysis of Step 2 GPHS for Next Generation Radioisotope Power Source Missions
NASA Astrophysics Data System (ADS)
Pantano, David R.; Hill, Dennis H.
2005-02-01
The Step 2 General Purpose Heat Source (GPHS) is a slightly larger and more robust version of the heritage GPHS modules flown on previous Radioisotope Thermoelectric Generator (RTG) missions like Galileo, Ulysses, and Cassini. The Step 2 GPHS is to be used in future small radioisotope power sources, such as the Stirling Radioisotope Generator (SRG110) and the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). New features include an additional central web of Fine Weave Pierced Fabric (FWPF) graphite in the aeroshell between the two Graphite Impact Shells (GIS) to improve accidental reentry and impact survivability and an additional 0.1-inch of thickness to the aeroshell broad faces to improve ablation protection. This paper details the creation of the thermal model using Thermal Desktop and AutoCAD interfaces and provides comparisons of the model to results of previous thermal analysis models of the heritage GPHS. The results of the analysis show an anticipated decrease in total thermal gradient from the aeroshell to the iridium clads compared to the heritage results. In addition, the Step 2 thermal model is investigated under typical SRG110 boundary conditions, with cover gas and gravity environments included where applicable, to provide preliminary guidance for design of the generator. Results show that the temperatures of the components inside the GPHS remain within accepted design limits during all envisioned mission phases.
Evaluation of gravitational gradients generated by Earth's crustal structures
NASA Astrophysics Data System (ADS)
Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad
2013-02-01
Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented in the manuscript. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite mission. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments, upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth's model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.
Integrated Mode Choice, Small Aircraft Demand, and Airport Operations Model User's Guide
NASA Technical Reports Server (NTRS)
Yackovetsky, Robert E. (Technical Monitor); Dollyhigh, Samuel M.
2004-01-01
A mode choice model that generates on-demand air travel forecasts at a set of GA airports based on changes in economic characteristics, vehicle performance characteristics such as speed and cost, and demographic trends has been integrated with a model to generate itinerant aircraft operations by airplane category at a set of 3227 airports. Numerous intermediate outputs can be generated, such as the number of additional trips diverted from automobiles and scheduled air service by the improved performance and cost of on-demand air vehicles. The total number of transported passenger miles that are diverted is also available. From these results the number of new aircraft needed to service the increased demand can be calculated. Output from the models discussed is in the format needed to generate the origin and destination traffic flow between the 3227 airports based on solutions to a gravity model.
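A minimal sketch of the kind of gravity model such output feeds, with hypothetical productions, attractions, and distances (here origin-constrained, so each airport's outbound trips sum to its production total):

```python
import numpy as np

def gravity_trips(prod, attr, dist, beta=1.5):
    """Gravity model sketch: trips T_ij proportional to productions P_i and
    attractions A_j, decayed by distance d_ij**-beta, then scaled so each
    origin's trips sum to its production total."""
    f = dist ** -beta                       # impedance function
    raw = np.outer(prod, attr) * f
    np.fill_diagonal(raw, 0.0)              # no intra-airport trips
    return raw * (prod / raw.sum(axis=1))[:, None]

# Three hypothetical GA airports: annual on-demand trip productions,
# attraction weights, and inter-airport distances (illustrative numbers).
P = np.array([1200.0, 800.0, 500.0])
A = np.array([1.0, 0.6, 0.4])
D = np.array([[1, 300, 500], [300, 1, 250], [500, 250, 1.0]])
T = gravity_trips(P, A, D)
```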
NASA Astrophysics Data System (ADS)
Bensaida, K.; Alie, Colin; Elkamel, A.; Almansoori, A.
2017-08-01
This paper presents a novel techno-economic optimization model for assessing the effectiveness of CO2 mitigation options for the electricity generation sub-sector, including renewable energy generation. The optimization problem was formulated as a MINLP model using the GAMS modeling system. The model seeks to minimize power generation costs under CO2 emission constraints by dispatching power from low CO2 emission-intensity units. The model considers the detailed operation of the electricity system to effectively assess the performance of GHG mitigation strategies and integrates load balancing, carbon capture and carbon taxes as methods for reducing CO2 emissions. Two case studies are discussed to analyze the benefits and challenges of the CO2 reduction methods in the electricity system. The proposed mitigation options would not only benefit the environment, but would also improve the marginal cost of producing energy, which represents an advantage for stakeholders.
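As a rough sketch of the dispatch core of such a model (a linear program without the MINLP's unit commitment binaries or detailed operation; all costs, intensities, and the cap are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def dispatch(cost, co2, cap_mw, demand, co2_cap):
    """Least-cost dispatch under a CO2 emission cap. cost: $/MWh,
    co2: tCO2/MWh, cap_mw: per-unit output limits (MW)."""
    n = len(cost)
    A_ub = [co2]                          # total emissions <= cap
    b_ub = [co2_cap]
    A_eq = [np.ones(n)]                   # generation == demand
    b_eq = [demand]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, m) for m in cap_mw])
    return res.x

# Coal, gas, wind (illustrative): tightening the cap shifts dispatch
# toward the low CO2 emission-intensity units, as described above.
gen = dispatch(cost=[25.0, 45.0, 5.0], co2=[1.0, 0.45, 0.0],
               cap_mw=[500, 400, 200], demand=800, co2_cap=400)
```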
Improved techniques for thermomechanical testing in support of deformation modeling
NASA Technical Reports Server (NTRS)
Castelli, Michael G.; Ellis, John R.
1992-01-01
The feasibility of generating precise thermomechanical deformation data to support constitutive model development was investigated. Here, the requirement is for experimental data that is free from anomalies caused by less than ideal equipment and procedures. A series of exploratory tests conducted on Hastelloy X showed that generally accepted techniques for strain controlled tests were lacking in at least three areas. Specifically, problems were encountered with specimen stability, thermal strain compensation, and temperature/mechanical strain phasing. The source of these difficulties was identified and improved thermomechanical testing techniques to correct them were developed. These goals were achieved by developing improved procedures for measuring and controlling thermal gradients and by designing a specimen specifically for thermomechanical testing. In addition, innovative control strategies were developed to correctly proportion and phase the thermal and mechanical components of strain. Subsequently, the improved techniques were used to generate deformation data for Hastelloy X over the temperature range, 200 to 1000 C.
Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang
2004-12-01
This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.
The next generation Antarctic digital magnetic anomaly map
von Frese, R.R.B.; Golynsky, A.V.; Kim, H.R.; Gaya-Piqué, L.; Thébault, E.; Chiappini, M.; Ghidella, M.; Grunow, A.
2007-01-01
The first Antarctic digital magnetic anomaly map covered the region south of 60°S (Golynsky et al., 2001). This map synthesized over 7.1 million line-km of survey data available up through 1999 from marine, airborne and Magsat satellite observations. Since the production of the initial map, a large number of new marine and airborne surveys and improved magnetic observations from the Ørsted and CHAMP satellite missions have become available. In addition, an improved core field model for the Antarctic has been developed to better isolate crustal anomalies in these data. The next generation compilation also will likely represent the magnetic survey observations of the region in terms of a high-resolution spherical cap harmonic model. In this paper, we review the progress and problems of developing an improved magnetic anomaly map to facilitate studies of the Antarctic crustal magnetic field.
Hamzah, Azhar; Thoa, Ngo Phu; Nguyen, Nguyen Hong
2017-11-01
Quantitative genetic analysis was performed on 10,919 data records collected over three generations from the selection programme for increased body weight at harvest in red tilapia (Oreochromis spp.). They were offspring of 224 sires and 226 dams (50 sires and 60 dams per generation, on average). Linear mixed models were used to analyse body traits (weight, length, width and depth), whereas threshold generalised models assuming probit distribution were employed to examine genetic inheritance of survival rate, sexual maturity and body colour. The estimates of heritability for the traits studied (body weight, standard length, body width, body depth, body colour, early sexual maturation and survival) across statistical models were moderate to high (0.13-0.45). Genetic correlations among body traits and survival were high and positive (0.68-0.96). Body length and width exhibited negative genetic correlations with body colour (-0.47 to -0.25). Sexual maturity was genetically correlated positively with measurements of body traits (weight and length). Direct and correlated genetic responses to selection were measured as estimated breeding values in each generation and expressed in genetic standard deviation units (σG). The cumulative improvement achieved for harvest body weight was 1.72 σG after three generations, or 12.5% per generation when the gain was expressed as a percentage of the base population. Selection for improved body weight also resulted in correlated increases in other body traits (length, width and depth) and survival rate (ranging from 0.25 to 0.81 genetic standard deviation units). Avoidance of black-spot parent matings also improved the overall red colour of the selected population. It is concluded that the selective breeding programme for red tilapia has succeeded in achieving significant genetic improvement for a range of commercially important traits in this species, and the large genetic variation in body colour and survival also shows that there are prospects for future improvement of these traits in this population of red tilapia.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC, which indicates that reducing the uncertainty of the input parameters, rather than adding phases or parameters, will make such models more accurate.
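The single-phase FOD calculation and the weighted-average translation can be sketched as follows; the phase parameters and tonnages are illustrative, not the defaults of any particular model:

    # First-order decay sketch: Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i))
    # for yearly deposits M_i placed in year t_i. Values are invented.
    import numpy as np

    years = np.arange(2000, 2031)
    M = np.full(years.size, 100_000.0)        # t/yr of waste landfilled

    # Two waste phases: (L0 [m3 CH4/t], k [1/yr], mass fraction)
    phases = [(170.0, 0.09, 0.6),             # rapidly degrading fraction
              (60.0,  0.02, 0.4)]             # slowly degrading fraction

    def fod(L0, k, frac, t):
        dep = years[years <= t]
        return np.sum(k * L0 * frac * M[:dep.size] * np.exp(-k * (t - dep)))

    t_eval = np.arange(2000, 2081)
    multi = [sum(fod(L0, k, f, t) for L0, k, f in phases) for t in t_eval]

    # Single-phase surrogate with mass-weighted average parameters
    L0_avg = sum(L0 * f for L0, k, f in phases)
    k_avg  = sum(k * f for L0, k, f in phases)
    single = [fod(L0_avg, k_avg, 1.0, t) for t in t_eval]

    print(np.trapz(single, t_eval) / np.trapz(multi, t_eval))  # cumulative ratio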
Contact-assisted protein structure modeling by global optimization in CASP11.
Joo, Keehyoung; Joung, InSuk; Cheng, Qianyi; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
We have applied the conformational space annealing method to the contact-assisted protein structure modeling in CASP11. For Tp targets, where predicted residue-residue contact information was provided, the contact energy term in the form of the Lorentzian function was implemented together with the physical energy terms used in our template-free modeling of proteins. Although we observed some structural improvement of Tp models over the models predicted without the Tp information, the improvement was not substantial on average. This is partly due to the inaccuracy of the provided contact information, where only about 18% of it was correct. For Ts targets, where the information of ambiguous NOE (Nuclear Overhauser Effect) restraints was provided, we formulated the modeling in terms of the two-tier optimization problem, which covers: (1) the assignment of NOE peaks and (2) the three-dimensional (3D) model generation based on the assigned NOEs. Although solving the problem in a direct manner appears to be intractable at first glance, we demonstrate through CASP11 that remarkably accurate protein 3D modeling is possible by brute force optimization of a relevant energy function. For 19 Ts targets of the average size of 224 residues, generated protein models were of about 3.6 Å Cα atom accuracy. Even greater structural improvement was observed when additional Tc contact information was provided. For 20 out of the total 24 Tc targets, we were able to generate protein structures which were better than the best model from the rest of the CASP11 groups in terms of GDT-TS.
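The abstract does not spell out the exact contact term, but a Lorentzian-shaped restraint is commonly written as a finite-depth well that, unlike a harmonic penalty, saturates for badly violated contacts. A hedged sketch with hypothetical well depth and width:

    # Lorentzian-shaped contact restraint: rewards residue pairs whose distance
    # falls near the predicted contact distance d0 while saturating for large
    # violations. The exact form/constants used in CASP11 are not given in the
    # abstract; d0, sigma, and w here are hypothetical.
    import numpy as np

    def lorentzian_contact(d, d0=8.0, sigma=2.0, w=1.0):
        return -w * sigma**2 / (sigma**2 + (d - d0)**2)

    def contact_energy(distances, contacts):
        # distances: residue-pair distance matrix; contacts: predicted (i, j)
        return sum(lorentzian_contact(distances[i, j]) for i, j in contacts)

    D = np.random.rand(50, 50) * 20.0       # stand-in distance matrix
    predicted = [(3, 17), (5, 40), (12, 33)]
    print(contact_energy(D, predicted))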
Field-circuit analysis and measurements of a single-phase self-excited induction generator
NASA Astrophysics Data System (ADS)
Makowski, Krzysztof; Leicht, Aleksander
2017-12-01
The paper deals with a single-phase induction machine operating as a stand-alone self-excited single-phase induction generator for generation of electrical energy from renewable energy sources. By changing the number of turns and the size of wires in the auxiliary stator winding, an improvement of the performance characteristics of the generator was obtained as regards the no-load and load voltages of the stator windings as well as the stator winding currents of the generator. Field-circuit simulation models of the generator were developed using the Flux2D software package for the generator with a shunt capacitor in the main stator winding. The obtained results have been validated experimentally on a laboratory setup using a single-phase capacitor induction motor of 1.1 kW rated power and 230 V voltage as the base model of the generator.
Aeroacoustic Improvements to Fluidic Chevron Nozzles
NASA Technical Reports Server (NTRS)
Henderson, Brenda; Kinzie, Kevin; Whitmire, Julia; Abeysinghe, Amal
2006-01-01
Fluidic chevrons use injected air near the trailing edge of a nozzle to emulate mixing and jet noise reduction characteristics of mechanical chevrons. While previous investigations of "first generation" fluidic chevron nozzles showed only marginal improvements in effective perceived noise levels when compared to nozzles without injection, significant improvements in noise reduction characteristics were achieved through redesigned "second generation" nozzles on a bypass ratio 5 model system. The second-generation core nozzles had improved injection passage contours, external nozzle contour lines, and nozzle trailing edges. The new fluidic chevrons resulted in reduced overall sound pressure levels over that of the baseline nozzle for all observation angles. Injection ports with steep injection angles produced lower overall sound pressure levels than those produced by shallow injection angles. The reductions in overall sound pressure levels were the result of noise reductions at low frequencies. In contrast to the first-generation nozzles, only marginal increases in high frequency noise over that of the baseline nozzle were observed for the second-generation nozzles. The effective perceived noise levels of the new fluidic chevrons are shown to approach those of the core mechanical chevrons.
Gait Planning and Stability Control of a Quadruped Robot
Li, Junmin; Wang, Jinge; Yang, Simon X.; Zhou, Kedong; Tang, Huijuan
2016-01-01
In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gait and shorten the adjusting time of the model oscillation system, a new CPG model controller and its gait switching strategy based on Wilson-Cowan model are presented in the paper. The control signals of knee-hip joints are obtained by the improved multi-DOF reduced order control theory. To realize stability control, the adaptive speed adjustment and gait switch are completed by the real-time computing of ZMP. Experiment results show that the quadruped robot's gaits are efficiently generated and the gait switch is smooth in the CPG control algorithm. Meanwhile, the stability of robot's movement is improved greatly with the CPG-ZMP algorithm. The algorithm in this paper has good practicability, which lays a foundation for the production of the robot prototype.
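A minimal Wilson-Cowan oscillator, the building block behind such CPG controllers, can be sketched as follows; the gains and time constants are illustrative, not the paper's tuned values:

    # Wilson-Cowan CPG unit: one excitatory (u) and one inhibitory (v)
    # population generate a rhythmic signal that can drive a hip/knee joint.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def wilson_cowan(T=20.0, dt=0.001, tau=0.1,
                     a=12.0, b=10.0, c=12.0, d=4.0, p=1.2, q=0.2):
        n = int(T / dt)
        u, v = 0.1, 0.0
        out = np.empty(n)
        for i in range(n):
            du = (-u + sigmoid(a * u - b * v + p)) / tau
            dv = (-v + sigmoid(c * u - d * v + q)) / tau
            u, v = u + dt * du, v + dt * dv
            out[i] = u                  # rhythmic joint command
        return out

    signal = wilson_cowan()             # oscillates for suitable gain settings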
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madronich, Sasha
2015-12-09
The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.
An experimental study of factors affecting the selective inhibition of sintering process
NASA Astrophysics Data System (ADS)
Asiabanpour, Bahram
Selective Inhibition of Sintering (SIS) is a new rapid prototyping method that builds parts on a layer-by-layer basis. SIS works by joining powder particles through sintering in the part's body, and by sintering inhibition of some selected powder areas. The objective of this research has been to improve the new SIS process, which was invented at USC. The process improvement is based on statistical design of experiments. Conducting the needed experiments required a working machine and related path-generator software. The machine and its control software were available prior to this research; the path-generator algorithms and software had to be created. This program obtains model geometry data from a CAD file and generates an appropriate path file for the printer nozzle, as well as a simulation file for path-file inspection using virtual prototyping. The activities related to the path generator constitute the first part of this research, which has resulted in an efficient path generator. In addition, to reach an acceptable level of accuracy, strength, and surface quality in the fabricated parts, all effective factors in the SIS process should be identified and controlled. Simultaneous analytical and experimental studies were conducted to recognize effective factors and to control the SIS process. Polystyrene was found to be the most appropriate polymer powder and saturated potassium iodide the most effective inhibitor among the available candidate materials. In addition, statistical tools were applied to improve the desirable properties of the parts fabricated by the SIS process. An investigation of part strength was conducted using Response Surface Methodology (RSM), and a region of acceptable operating conditions for part strength was found. Then, through analysis of the experimental results, the impact of the factors on final part surface quality and dimensional accuracy was modeled. After developing a desirability function model, process operating conditions for maximum desirability were identified. Finally, the desirability model was validated.
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco
2005-03-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space, together with a demo application, are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. Any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization, for reliable automated object learning and discrimination, can benefit deeply from the GEOGINE progressive automated model generation computational kernel. Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
A Statistical Multimodel Ensemble Approach to Improving Long-Range Forecasting in Pakistan
2012-03-01
Long-range forecasts for Pakistan are generated by multiple regression models that relate globally distributed oceanic and atmospheric predictors to local predictands.
Improved trip generation data for Texas using work place and special generator survey data.
DOT National Transportation Integrated Search
2015-05-01
Travel estimates from models and manuals developed from trip attraction rates having high variances due to few survey observations can reduce confidence and accuracy in estimates. This project compiled and analyzed data from more than a decade of...
NUCFRG2: An evaluation of the semiempirical nuclear fragmentation database
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Shinn, J. L.; Badavi, F. F.; Chun, S. Y.; Norbury, J. W.; Zeitlin, C. J.; Heilbronn, L.; Miller, J.
1995-01-01
A semiempirical abrasion-ablation model has been successful in generating a large nuclear database for the study of high charge and energy (HZE) ion beams, radiation physics, and galactic cosmic ray shielding. The cross sections that are generated are compared with measured HZE fragmentation data from various experimental groups. A research program for improvement of the database generator is also discussed.
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries to add functionality to the application and drivers to fire the application execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry-points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
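The record/replay idea can be sketched in a few lines (illustrative Python, not JPF-Android's actual Java API): wrap a method so that parameter/return pairs observed at runtime are logged, then serve those values from a stub during verification instead of empty defaults.

    # Record at runtime, replay from a stub during verification. All names
    # here are hypothetical stand-ins for instrumented library methods.
    import functools, json

    LOG = {}

    def record(method):
        @functools.wraps(method)
        def wrapper(*args):
            result = method(*args)
            LOG.setdefault(method.__name__, {})[json.dumps(args)] = result
            return result
        return wrapper

    def make_stub(name, default=None):
        def stub(*args):
            return LOG.get(name, {}).get(json.dumps(args), default)
        return stub

    @record
    def get_network_type(device_id):
        return "LTE"                      # stand-in for a real native call

    get_network_type("dev-1")             # runtime: logs ("dev-1") -> "LTE"
    stub = make_stub("get_network_type")
    print(stub("dev-1"))                  # verification: replays "LTE"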
Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N
2017-12-01
Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable) we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300x while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
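For reference, the dispersed fiber-reinforced strain-energy function of Gasser, Ogden and Holzapfel is commonly written as follows (isochoric part; the form and symbols are taken from the general literature, since the abstract does not spell out the implementation):

    \Psi = \frac{\mu}{2}\left(\bar{I}_1 - 3\right)
         + \frac{k_1}{2k_2}\sum_{\alpha=1,2}\left[e^{k_2 \bar{E}_\alpha^{2}} - 1\right],
    \qquad
    \bar{E}_\alpha = \kappa\left(\bar{I}_1 - 3\right)
                   + \left(1 - 3\kappa\right)\left(\bar{I}_{4\alpha} - 1\right),

where \mu is the matrix shear modulus, k_1 and k_2 are fiber stiffness parameters, \kappa \in [0, 1/3] is the fiber dispersion parameter, and \bar{I}_{4\alpha} is the squared stretch along the mean direction of fiber family \alpha; the anisotropic term is typically active only when \bar{E}_\alpha > 0 (fibers under tension).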
NGA West 2
Pacific Earthquake Engineering Research Center
NGA-West2 is a large, multi-year research program to improve Next Generation Attenuation models for active tectonic regions. Research topics for earthquake engineering include modeling of directivity and directionality; verification of NGA-West models; epistemic uncertainty; and evaluation of soil amplification factors in NGA models versus NEHRP site factors.
Future requirements in surface modeling and grid generation
NASA Technical Reports Server (NTRS)
Cosner, Raymond R.
1995-01-01
The past ten years have seen steady progress in surface modeling procedures, and wholesale changes in grid generation technology. Today, it seems fair to state that a satisfactory grid can be developed to model nearly any configuration of interest. The issues at present focus on operational concerns such as cost and quality. Continuing evolution of the engineering process is placing new demands on the technologies of surface modeling and grid generation. In the evolution toward a multidisciplinary analysis-based design environment, methods developed for Computational Fluid Dynamics are finding acceptance in many additional applications. These two trends, the normal evolution of the process and a watershed shift toward concurrent and multidisciplinary analysis, will be considered in assessing current capabilities and needed technological improvements.
Vortex Generators in a Two-Dimensional, External-Compression Supersonic Inlet
NASA Technical Reports Server (NTRS)
Baydar, Ezgihan; Lu, Frank K.; Slater, John W.
2016-01-01
Vortex generators within a two-dimensional, external-compression supersonic inlet for Mach 1.6 were investigated to determine their ability to increase total pressure recovery, reduce total pressure distortion, and improve the boundary layer. The vortex generators studied included vanes and ramps. The geometric factors of the vortex generators studied included height, length, spacing, and positions upstream and downstream of the inlet terminal shock. The flow through the inlet was simulated through the computational solution of the steady-state Reynolds-averaged Navier-Stokes equations on multi-block, structured grids. The vortex generators were simulated by either gridding the geometry of the vortex generators or modeling the vortices generated by the vortex generators. The inlet performance was characterized by the inlet total pressure recovery, total pressure distortion, and incompressible shape factor of the boundary-layer at the engine face. The results suggested that downstream vanes reduced the distortion and improved the boundary layer. The height of the vortex generators had the greatest effect of the geometric factors.
Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.
Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang
2017-01-01
Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve the computational efficiency in acoustic virtual applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's Singular Value Decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds that were emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
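Kung's method itself is compact: stack the impulse response into a Hankel matrix, truncate its SVD, and read off a state-space realization. A sketch under the simplifying assumption of a single-input, single-output impulse response (the paper's models take azimuth as an input and produce two ear responses):

    # Kung's SVD-based realization: Hankel matrix -> truncated SVD -> (A,B,C,D).
    # Illustrative, not the paper's exact pipeline or model orders.
    import numpy as np

    def kung_realization(h, order):
        """h: impulse response h[0..N]; returns A, B, C, D of given order."""
        n = (len(h) - 1) // 2
        H = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(H)
        sq = np.sqrt(s[:order])
        Obs = U[:, :order] * sq                     # observability factor
        Con = (Vt[:order, :].T * sq).T              # controllability factor
        A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]      # shift-invariance of Obs
        B = Con[:, :1]
        C = Obs[:1, :]
        D = np.array([[h[0]]])
        return A, B, C, D

    hrir = np.random.randn(129)     # placeholder for a measured 128-tap HRIR
    A, B, C, D = kung_realization(hrir, 12)   # e.g. reduce to a 12-state model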
3D molecular models of whole HIV-1 virions generated with cellPACK
Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology.
Min, Yul Ha; Park, Hyeoun-Ae; Lee, Joo Yun; Jo, Soo Jung; Jeon, Eunjoo; Byeon, Namsoo; Choi, Seung Yong; Chung, Eunja
2014-01-01
The aim of this study is to develop and evaluate a natural language generation system to populate nursing narratives using detailed clinical models. Semantic, contextual, and syntactic knowledge was extracted, and a natural language generation system linking these knowledge sources was developed. The quality of the generated nursing narratives was evaluated by three nurse experts using a five-point rating scale. With 82 detailed clinical models, a total of 66,888 nursing narratives in four different types of statement were generated. The mean score for overall quality was 4.66; for content, 4.60; for grammaticality, 4.40; for writing style, 4.13; and for correctness, 4.60. The system developed in this study generated nursing narratives with different levels of granularity. The generated nursing narratives can improve semantic interoperability of nursing data documented in nursing records.
Modeling of urban solid waste management system: The case of Dhaka city
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sufian, M.A.; Bala, B.K.
2007-07-01
This paper presents a system dynamics computer model to predict solid waste generation, collection capacity and electricity generation from solid waste and to assess the needs for waste management of the urban city of Dhaka, Bangladesh. Simulated results show that solid waste generation, collection capacity and electricity generation potential from solid waste increase with time. Population, uncleared waste, untreated waste, composite index and public concern are projected to increase with time for Dhaka city. Simulated results also show that increasing the budget for collection capacity alone does not improve environmental quality; rather an increased budget is required for both collection and treatment of solid wastes of Dhaka city. Finally, this model can be used as a computer laboratory for urban solid waste management (USWM) policy analysis.
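The stock-and-flow logic of such a system dynamics model can be sketched minimally; all coefficients below are invented for illustration and are not calibrated to Dhaka:

    # Toy stock-and-flow model: uncleared waste accumulates when generation
    # outpaces budget-limited collection capacity.
    pop, waste_per_cap = 12e6, 0.45          # people, kg/person/day
    uncleared = 0.0                          # kg, stock of uncleared waste
    collect_capacity = 4.5e6                 # kg/day, budget-limited
    growth = 0.03 / 365                      # daily population growth rate

    for day in range(365 * 5):               # simulate five years
        generated = pop * waste_per_cap
        collected = min(generated + uncleared, collect_capacity)
        uncleared += generated - collected   # stock integrates the imbalance
        pop *= 1 + growth

    print(f"uncleared after 5 yr: {uncleared / 1e9:.2f} Mt")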
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice
2014-03-01
In this study, a correlation-based imaging technique called "Excitelet" is used to monitor an aerospace grade aluminum plate, representative of an aircraft component. The principle is based on ultrasonic guided wave generation and sensing using three piezoceramic (PZT) transducers, and measurement of reflections induced by potential defects. The method uses a propagation model to correlate measured signals with a bank of signals, and imaging is performed using a round-robin procedure (Full-Matrix Capture). The formulation compares two models for the complex transducer dynamics: one where the shear stress at the tip of the PZT is considered to vary as a function of the frequency generated, and one where the PZT is discretized in order to consider the shear distribution under the PZT. This method allows the imaging to take into account the transducer dynamics and finite dimensions, the multi-modal and dispersive characteristics of the material, and complex interactions between guided waves and damage. Experimental validation has been conducted on an aerospace grade aluminum joint instrumented with three circular PZTs of 10 mm diameter. A magnet, acting as a reflector, is used in order to simulate a local reflection in the structure. It is demonstrated that the defect can be accurately detected and localized. The two models proposed are compared to the classical pin-force model, using narrow and broad-band excitations. The results demonstrate the potential of the proposed imaging techniques for damage monitoring of aerospace structures considering improved models for guided wave generation and propagation.
From Wake Steering to Flow Control
Fleming, Paul A.; Annoni, Jennifer; Churchfield, Matthew J.; ...
2017-11-22
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Sensor-Based Optimization Model for Air Quality Improvement in Home IoT.
Kim, Jonghyuk; Hwangbo, Hyunwoo
2018-03-23
We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market.
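The spline step can be sketched as follows: fit a natural cubic spline to sparse sensor readings, then sample synthetic values around the smoothed curve (the paper couples this with spectrum/density analysis; the numbers below are invented):

    # Densify sparse indoor air-quality readings with a natural cubic spline
    # and generate jittered synthetic samples around the smoothed baseline.
    import numpy as np
    from scipy.interpolate import CubicSpline

    t_obs = np.array([0, 30, 60, 90, 120, 150])               # minutes
    pm25 = np.array([35.0, 42.0, 55.0, 48.0, 40.0, 38.0])     # ug/m3 readings

    cs = CubicSpline(t_obs, pm25, bc_type="natural")          # natural spline
    t_fine = np.arange(0, 151)
    baseline = cs(t_fine)

    rng = np.random.default_rng(0)
    synthetic = baseline + rng.normal(0.0, 1.5, t_fine.size)  # generated data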
Experimental and numerical investigation of hydro power generator ventilation
NASA Astrophysics Data System (ADS)
Jamshidi, H.; Nilsson, H.; Chernoray, V.
2014-03-01
Improvements in ventilation and cooling offer means to run hydro power generators at higher power output and at varying operating conditions. The electromagnetic, frictional and windage losses generate heat. The heat is removed by an air flow that is driven by fans and/or the rotor itself. The air flow goes through ventilation channels in the stator, to limit the electrical insulation temperatures. The temperature should be kept limited and uniform in both time and space, avoiding thermal stresses and hot-spots. For that purpose it is important that the flow of cooling air is distributed uniformly, and that flow separation and recirculation are minimized. Improvements of the air flow properties also lead to an improvement of the overall efficiency of the machine. A significant part of the windage losses occurs at the entrance of the stator ventilation channels, where the air flow turns abruptly from tangential to radial. The present work focuses exclusively on the air flow inside a generator model, and in particular on the flow inside the stator channels. The generator model design of the present work is based on a real generator that was previously studied. The model is manufactured taking into consideration the needs of both the experimental and numerical methodologies. Computational Fluid Dynamics (CFD) results have been used in the process of designing the experimental setup. The rotor and stator are manufactured using rapid-prototyping and plexi-glass, yielding a high geometrical accuracy, and optical experimental access. A special inlet section is designed for accurate air flow rate and inlet velocity profile measurements. The experimental measurements include Particle Image Velocimetry (PIV) and total pressure measurements inside the generator. The CFD simulations are performed based on the OpenFOAM CFD toolbox, and the steady-state frozen rotor approach. Specific studies are performed on the effect of adding "pick-up" to spacers and on the effects of the inlet fan blades on the flow rate through the model. The CFD results capture the experimental flow details to a reasonable level of accuracy.
Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.
2015-01-01
Objectives: The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision.
Methods: In our models we used pooled outcome data for three measure groups: (1) poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision.
Results: The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent.
Conclusion: Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health.
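Given posterior samples of county rates, ranks and rank-quartile probabilities follow mechanically; a minimal sketch with stand-in posterior draws rather than fitted model output:

    # Rank uncertainty from posterior samples: rank counties within each draw,
    # then estimate the probability each county lands in its assigned quartile.
    import numpy as np

    rng = np.random.default_rng(1)
    S, C = 4000, 72                                       # draws, counties
    rates = rng.gamma(shape=50, scale=0.2, size=(S, C))   # stand-in posteriors

    ranks = rates.argsort(axis=1).argsort(axis=1) + 1     # 1 = best, per draw
    quartiles = np.ceil(4 * ranks / C).astype(int)        # quartile per draw

    # Assigned quartile from the posterior-mean ranking
    assigned = np.ceil(4 * (rates.mean(axis=0).argsort().argsort() + 1) / C)
    prob_in_assigned = (quartiles == assigned).mean(axis=0)
    print(prob_in_assigned.round(2))                      # per-county certainty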
Integrating Unified Gravity Wave Physics into the NOAA Next Generation Global Prediction System
NASA Astrophysics Data System (ADS)
Alpert, J. C.; Yudin, V.; Fuller-Rowell, T. J.; Akmaev, R. A.
2017-12-01
The Unified Gravity Wave Physics (UGWP) project for the Next Generation Global Prediction System (NGGPS) is a NOAA collaborative effort between the National Centers for Environmental Prediction (NCEP), Environmental Modeling Center (EMC) and the University of Colorado, Cooperative Institute for Research in Environmental Sciences (CU-CIRES) to support upgrades and improvements of GW dynamics (resolved scales) and physics (sub-grid scales) in the NOAA Environmental Modeling System (NEMS)†. As envisioned the global climate, weather and space weather models of NEMS will substantially improve their predictions and forecasts with the resolution-sensitive (scale-aware) formulations planned under the UGWP framework for both orographic and non-stationary waves. In particular, the planned improvements for the Global Forecast System (GFS) model of NEMS are: calibration of model physics for higher vertical and horizontal resolution and an extended vertical range of simulations, upgrades to GW schemes, including the turbulent heating and eddy mixing due to wave dissipation and breaking, and representation of the internally-generated QBO. The main priority of the UGWP project is unified parameterization of orographic and non-orographic GW effects including momentum deposition in the middle atmosphere and turbulent heating and eddies due to wave dissipation and breaking. The latter effects are not currently represented in NOAA atmosphere models. The team has tested and evaluated four candidate GW solvers integrating the selected GW schemes into the NGGPS model. Our current work and planned activity is to implement the UGWP schemes in the first available GFS/FV3 (open FV3) configuration including adapted GFDL modification for sub-grid orography in GFS. Initial global model results will be shown for the operational and research GFS configuration for spectral and FV3 dynamical cores. †http://www.emc.ncep.noaa.gov/index.php?branch=NEMS
Inference of genetic network of Xenopus frog egg: improved genetic algorithm.
Wu, Shinq-Jen; Chou, Chia-Hsien; Wu, Cheng-Tao; Lee, Tsu-Tian
2006-01-01
An improved genetic algorithm (IGA) is proposed to achieve S-system gene network modeling of the Xenopus frog egg. Using time-course training datasets from a Michaelis-Menten model, the optimal parameters are learned. The S-system can clearly describe activative and inhibitory interactions between genes as generating and consuming processes. We consider mitotic control in the Xenopus frog egg cell cycle, modeling cyclin-Cdc2 and Cdc25 for MPF activity. The proposed IGA can achieve global search with migration and keep the best chromosome with an elitism operation. The generated gene regulatory networks can provide biological researchers with guidance for further experiments on Xenopus frog egg cell cycle control.
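For reference, an S-system writes each gene's rate as a difference of two power laws, so the signs of the kinetic orders expose activation and inhibition directly. A two-gene toy example with invented parameter values, not the fitted Xenopus network:

    # S-system: dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij
    import numpy as np

    alpha = np.array([2.0, 1.5])         # production rate constants
    beta  = np.array([1.2, 1.0])         # degradation rate constants
    g = np.array([[0.0, -0.8],           # g[i][j]: effect of X_j on making X_i
                  [0.5,  0.0]])          # (negative = inhibition)
    h = np.array([[0.6,  0.0],           # h[i][j]: effect of X_j on removing X_i
                  [0.0,  0.7]])

    def dXdt(X):
        prod = alpha * np.prod(X ** g, axis=1)
        degr = beta * np.prod(X ** h, axis=1)
        return prod - degr

    X = np.array([0.5, 0.5])
    for _ in range(20000):               # forward-Euler time course
        X = X + 1e-3 * dXdt(X)
    print(X)                             # steady state of the toy network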
Influence Of Momentum Excess On The Pattern And Dynamics Of Intermediate-Range Stratified Wakes
2016-06-01
Momentum excess is introduced in order to model the fundamental differences between signatures generated by towed and self-propelled bodies in various ocean states. The results can be used at the operational level for developing and improving algorithms for non-acoustic signature prediction and detection.
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan
2015-02-01
Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model of particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model was then used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Such reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
TOWARDS AN IMPROVED UNDERSTANDING OF SIMULATED AND OBSERVED CHANGES IN EXTREME PRECIPITATION
The evaluation of climate model precipitation is expected to reveal biases in simulated mean and extreme precipitation which may be a result of coarse model resolution or inefficiencies in the precipitation generating mechanisms in models. The analysis of future extreme precip...
NASA Astrophysics Data System (ADS)
Mwakabuta, Ndaga Stanslaus
Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and to economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on the Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their systems economically. This would enable modular and flexible investments that have real benefits to the electric distribution system: improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services.
Next Generation Electromagnetic Pump Analysis Tools (PLM DOC-0005-2188). Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stregy, Seth; Dasilva, Ana; Yilmaz, Serkan
2015-10-29
This report provides the broad historical review of EM Pump development and details of MATRIX development under this project. This report summarizes the efforts made to modernize the legacy performance models used in previous EM Pump designs and the improvements made to the analysis tools. This report provides information on Tasks 1, 3, and 4 of the entire project. The research for Task 4 builds upon Task 1: Update EM Pump Databank and Task 3: Modernize the Existing EM Pump Analysis Model, which are summarized within this report. Where research for Task 2: Insulation Materials Development and Evaluation identified parameters applicable to the analysis model with Task 4, the analysis code was updated, and analyses were made for additional materials. The important design variables for the manufacture and operation of an EM Pump that the model improvement can evaluate are: space constraints; voltage capability of insulation system; maximum flux density through iron; flow rate and outlet pressure; efficiency and manufacturability. The development of the next-generation EM Pump analysis tools during this two-year program provides information in three broad areas: status of analysis model development; improvements made to older simulations; and comparison to experimental data.
Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources
NASA Astrophysics Data System (ADS)
Novakovskaia, E.; Hayes, C.; Collier, C.
2014-12-01
The accuracy of solar and wind forecasts is becoming increasingly essential as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews and energy traders through increases in prices, project downtime or lost revenue. While extensive and beneficial efforts were undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and the power generation data. Forecast accuracy can be dependent on specific weather regimes for a given location. To account for these dependencies it is important that parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed for varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.
Wu, Yiping; Chen, Ji
2013-01-01
The ever-increasing demand for water due to population growth and socioeconomic development in the past several decades has posed a worldwide threat to water supply security and to the environmental health of rivers. This study aims to derive reservoir operating rules through establishing a multi-objective optimization model for the Xinfengjiang (XFJ) reservoir in the East River Basin in southern China to minimize water supply deficit and maximize hydropower generation. Additionally, to enhance the estimation of irrigation water demand from the agricultural area downstream of the XFJ reservoir, a conventional method for calculating crop water demand is improved using hydrological model simulation results. Although the optimal reservoir operating rules are derived for the XFJ reservoir with three priority scenarios (water supply only, hydropower generation only, and equal priority), river environmental health is set as the basic demand no matter which scenario is adopted. The results show that the new rules derived under the three scenarios can improve reservoir operation for both water supply and hydropower generation when compared to historical performance. Moreover, these alternative reservoir operating policies give the reservoir authority the flexibility to choose the most appropriate one. Although changing the current operating rules may influence the reservoir's hydropower-oriented functions, the new rules can be significant for coping with increasingly prominent water shortages and degradation of the aquatic environment. Overall, our results and methods (improved estimation of irrigation water demand and formulation of the reservoir optimization model) can be useful for local watershed managers and valuable for other researchers worldwide.
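A weighted-sum toy version of this multi-objective trade-off might look like the following; the inflows, demand, head proxy, and release-fraction policy are all invented simplifications of the paper's optimization model, with the environmental flow kept as a hard basic demand.

```python
import numpy as np

# Toy monthly inflows and demand (10^6 m^3) for a hypothetical reservoir;
# the weights mimic the paper's three priority scenarios.
inflow = np.array([60, 55, 70, 90, 120, 150, 160, 140, 110, 90, 75, 65], float)
demand = np.full(12, 85.0)
env_flow, capacity = 20.0, 400.0   # basic environmental release, storage cap

def evaluate(release_frac, w_supply):
    storage, deficit, energy = 200.0, 0.0, 0.0
    for q, dem in zip(inflow, demand):
        release = env_flow + release_frac * max(storage + q - env_flow, 0.0)
        release = min(release, storage + q)
        storage = min(storage + q - release, capacity)
        deficit += max(dem - release, 0.0)
        energy += release * (1.0 + storage / capacity)   # crude head proxy
    return w_supply * deficit - (1.0 - w_supply) * energy  # to be minimized

for w in (1.0, 0.0, 0.5):   # supply only, hydropower only, equal priority
    best = min(np.linspace(0.05, 0.9, 18), key=lambda f: evaluate(f, w))
    print(f"supply weight {w}: best release fraction {best:.2f}")
```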
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button
2010-01-01
Background There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979
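The model-driven idea (declare a data model once, generate the artifacts from it) can be sketched in a few lines. This mini-generator is hypothetical and far simpler than MOLGENIS's actual modeling language and generator suite; it only illustrates how one declarative model can expand into several kinds of code.

```python
# Hypothetical mini-generator: a short declarative model expands into SQL
# and HTML artifacts; MOLGENIS's real language and generators are richer.
model = {
    "entity": "Sample",
    "fields": [("id", "int"), ("donor", "string"), ("collected", "date")],
}

SQL_TYPES = {"int": "INTEGER", "string": "VARCHAR(255)", "date": "DATE"}

def generate_sql(m):
    cols = ",\n  ".join(f"{n} {SQL_TYPES[t]}" for n, t in m["fields"])
    return f"CREATE TABLE {m['entity']} (\n  {cols}\n);"

def generate_form(m):
    inputs = "\n".join(f'  <input name="{n}" placeholder="{n}">'
                       for n, _ in m["fields"])
    return f'<form action="/{m["entity"].lower()}">\n{inputs}\n</form>'

print(generate_sql(model))
print(generate_form(model))
```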
Testolin, Alberto; Zorzi, Marco
2016-01-01
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
Use of Temperature to Improve West Nile Virus Forecasts
NASA Astrophysics Data System (ADS)
Shaman, J. L.; DeFelice, N.; Schneider, Z.; Little, E.; Barker, C.; Caillouet, K.; Campbell, S.; Damian, D.; Irwin, P.; Jones, H.; Townsend, J.
2017-12-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether the inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that were on average 5%, 10%, 12%, and 6% more accurate, respectively, than the baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperatures influence rates of WNV transmission. The findings help build a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs.
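The following toy sketch shows only what "temperature forcing" means mechanically: the transmission rate is scaled with temperature, and the forced and unforced versions peak at different times. All parameters are invented; the authors' actual framework is a coupled mosquito-human model optimized by data assimilation against observed data streams.

```python
import numpy as np

# Invented parameters throughout; this is a mechanical illustration of
# temperature forcing, not the authors' model-inference system.
def simulate(temps, beta0=0.05, forced=True):
    infectious_mosq, cases = 0.01, []
    for t in temps:
        beta = beta0 * (1.0 + 0.08 * (t - 20.0)) if forced else beta0
        infectious_mosq = min(1.0, infectious_mosq * (0.8 + 5.0 * beta))
        cases.append(1000 * beta * infectious_mosq)  # expected weekly cases
    return np.array(cases)

weeks = np.arange(26)
temps = 18 + 10 * np.sin(np.pi * weeks / 26)         # seasonal warming curve
print("peak week, temperature-forced:", simulate(temps, forced=True).argmax())
print("peak week, baseline:          ", simulate(temps, forced=False).argmax())
```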
Study on DFIG wind turbines control strategy for improving frequency response characteristics
NASA Astrophysics Data System (ADS)
Zhao, Dongmei; Wu, Di; Liu, Yanhua; Zhou, Zhiyu
2012-01-01
The active and reactive power decoupling control of double-fed induction generator (DFIG) wind turbines does not contribute positively to the frequency response capability of the power grid, because the turbine inertia remains hidden from the grid. To improve the transient frequency stability of a wind turbine integrated with the system, its frequency response characteristics must be improved. The inability of frequency control due to DFIG decoupling control can be overcome by releasing (or absorbing) part of the kinetic energy stored in the rotor, so as to increase (or decrease) the active power injected into the power system when a deviation of system frequency appears. This paper discusses the mathematical model of the variable-speed DFIG, including the aerodynamic model, pitch control system model, shaft model, generator model, inverter control model and other key components, focusing on the mathematical models of the converters on the rotor side and grid side. Based on the existing wind generator model, the paper implements the frequency control model on the platform of the simulation software DIgSILENT/PowerFactory. The simulation results show that the proposed control strategy responds quickly to transient frequency deviations and prove that wind farms can participate in system frequency regulation to a certain extent. Finally, the result verifies the accuracy and plausibility of the inverter control model incorporating the frequency control module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry
2013-05-01
Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
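A minimal sketch of EKF-based parameter calibration, assuming a toy first-order dynamic model x' = -d*x with unknown damping d estimated by state augmentation; the real application calibrates generator model parameters against recorded PMU disturbance data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "PMU" record of a disturbance decaying with true damping d.
dt, d_true = 0.1, 0.5
xs = [1.0]
for _ in range(100):
    xs.append(xs[-1] + dt * (-d_true * xs[-1]))
z = np.array(xs) + rng.normal(0, 0.02, 101)

# Augmented state [x, d]: the filter estimates the trajectory and the
# parameter simultaneously.
x = np.array([1.0, 0.1])                 # initial guesses: state, damping
P = np.diag([0.1, 1.0])
Q, R = np.diag([1e-6, 1e-6]), 0.02 ** 2
H = np.array([[1.0, 0.0]])

for zk in z[1:]:
    # predict: f(x) = [x + dt*(-d*x), d]; F is its Jacobian
    F = np.array([[1 - dt * x[1], -dt * x[0]], [0.0, 1.0]])
    x = np.array([x[0] + dt * (-x[1] * x[0]), x[1]])
    P = F @ P @ F.T + Q
    # update with the measured trajectory point
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (zk - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"calibrated damping: {x[1]:.3f} (true {d_true})")
```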
Experience With Bayesian Image Based Surface Modeling
NASA Technical Reports Server (NTRS)
Stutz, John C.
2005-01-01
Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven both by self-organization of demand and by rigid engineering solutions. Engineering complex systems therefore requires a method that balances holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules with engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand-node distribution and the co-evolution of demand nodes and network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement of efficiency achievable through engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
The community multiscale air quality (CMAQ) model of the U.S. Environmental Protection Agency is one of the most widely used air quality models worldwide; it is employed for both research and regulatory applications at major universities and government agencies for improving under...
Learning Molecular Behaviour May Improve Student Explanatory Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Harris, Sara E.; Gold, Anne U.
2018-01-01
We assessed undergraduates' representations of the greenhouse effect, based on student-generated concept sketches, before and after a 30-min constructivist lesson. Principal component analysis of features in student sketches revealed seven distinct and coherent explanatory models including a new "Molecular Details" model. After the…
2015-08-27
and 2) preparing for the post-MODIS/MISR era using the Geostationary Operational Environmental Satellite (GOES). 3. Improve model representations of...meteorological property retrievals. In this study, using collocated data from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geostationary
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follows exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient for describing a transmission process with mass-action kinetics using differential equations and for generating analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias in the growth parameter, and 3) short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
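The core point can be reproduced in a few lines: simulate the generalized-growth model dC/dt = r*C^p with p < 1, then fit a pure exponential to the early phase and observe the biased rate estimate. Parameter values below are illustrative, not taken from the article.

```python
import numpy as np

# Simulate sub-exponential ("near-exponential") growth dC/dt = r * C**p
# with p < 1, then fit a pure exponential; the rate estimate is biased.
r, p, dt = 0.8, 0.7, 0.1
C = [1.0]
for _ in range(50):                      # roughly a few generation intervals
    C.append(C[-1] + dt * r * C[-1] ** p)
C = np.array(C)

t = dt * np.arange(len(C))
r_exp = np.polyfit(t, np.log(C), 1)[0]   # slope of log-incidence
print(f"true r = {r} with p = {p}; exponential fit estimates r = {r_exp:.2f}")
```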
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structures of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of the template structures, and the Cα atoms of the superposed templates form a point cloud for each position of the target protein, which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of Cα atoms for residues whose positions are uncertain from the distribution, and accepts or rejects the new position according to a simulated annealing protocol, which effectively removes the atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by the new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
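A schematic version of the sampling loop, under simplifying assumptions: the straight-chain toy "templates", the quadratic clash penalty, and the annealing schedule below are all invented, standing in for MTMG's actual superposition, energy terms, and protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "templates": a straight 20-residue chain (3.8 A Ca spacing) plus
# noise; per-residue mean and covariance define the sampling distribution.
base = np.arange(20)[:, None] * np.array([3.8, 0.0, 0.0])
templates = base + rng.normal(0, 1.0, (3, 20, 3))
mean = templates.mean(0)
cov = np.array([np.cov(templates[:, i].T) for i in range(20)])

def clash_energy(coords, min_dist=3.8):
    d = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sum(np.maximum(min_dist - d, 0.0) ** 2) / 2.0

coords, T = mean.copy(), 1.0
for _ in range(500):
    i = rng.integers(20)                 # resample one uncertain residue
    trial = coords.copy()
    trial[i] = rng.multivariate_normal(mean[i], cov[i] + 1e-6 * np.eye(3))
    dE = clash_energy(trial) - clash_energy(coords)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # simulated annealing
        coords = trial
    T *= 0.995
print(f"final clash energy: {clash_energy(coords):.3f}")
```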
New generation humanized mice for virus research: Comparative aspects and future prospects
Akkina, Ramesh
2014-01-01
Work with human-specific viruses will greatly benefit from the use of an in vivo system that provides human target cells and tissues in a physiological setting. In this regard humanized mice (hu-Mice) have played an important role in our understanding of viral pathogenesis and the testing of therapeutic strategies. Limitations of earlier versions of hu-Mice that lacked a functioning human immune system are currently being overcome. The new generation of hu-Mouse models is capable of multilineage human hematopoiesis and generates the T cells, B cells, macrophages and dendritic cells required for an adaptive human immune response. Now any human-specific pathogen that can infect humanized mice can be studied in the context of ongoing infection and immune responses. Two leading humanized mouse models are currently employed: the hu-HSC model is created by transplantation of human hematopoietic stem cells (HSC), whereas the BLT mouse model is prepared by transplantation of human fetal liver, thymus and HSC. A number of human-specific viruses such as HIV-1, dengue, EBV and HCV are being studied intensively in these systems. Both models permit infection by mucosal routes with viruses such as HIV-1, thus allowing transmission prevention studies. Cellular and humoral immune responses are seen in both models. While there is efficient antigen-specific IgM production, IgG responses are suboptimal due to inefficient immunoglobulin class switching. With the maturation of T cells occurring in the autologous human thymus, BLT mice permit human HLA-restricted T cell responses, in contrast to hu-HSC mice. However, the strength of the immune responses needs further improvement in both models to reach the levels seen in humans. The scope of hu-Mice use is further broadened by transplantation of additional tissues like human liver, thus permitting immunopathogenesis studies of hepatotropic viruses such as HCV. Numerous studies that encompass antivirals, gene therapy, viral evolution, and the generation of human monoclonal antibodies have been conducted with promising results in these mice. For further improvement of the new hu-Mouse models, ongoing work is focused on generating new strains of immunodeficient mice transgenic for human HLA molecules, to strengthen immune responses, and for human cytokines and growth factors, to improve human cell reconstitution and homeostatic maintenance. PMID:23217612
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes of varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), software that uses optimization algorithms to generate landscapes matching user-defined target values. LG was originally developed for small-scale participatory spatial planning; we enhanced its usability and demonstrated how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
Minimizing Actuator-Induced Residual Error in Active Space Telescope Primary Mirrors
2010-09-01
Actuator geometry and rib-to-facesheet intersection geometry are exploited to achieve improved performance in silicon carbide (SiC) mirrors. A parametric (MOST) finite element model is used to explore the trade space. The move to lightweight actively-controlled SiC mirrors is traced back to previous generations of space...
Distributed Generation Market Demand Model (dGen): Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigrin, Benjamin; Gleason, Michael; Preus, Robert
The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.
Generation of monochloropropanediols (MCPDs) in model dough systems. 1. Leavened doughs.
Hamlet, Colin G; Sadd, Peter A; Gray, David A
2004-04-07
The effect of dough recipe ingredients and processing on the generation of monochloropropanediol isomers (MCPDs) in leavened wheat doughs has been investigated. Commercial ingredients having no effect on MCPD formation were acetic acid and baking fats (triacylglycerols). Ingredients making a significant contribution to MCPD levels were yeast and flour improver [ascorbic acid, diacetyl tartaric acid esters of mono- and diglycerides (DATEM), and soya flour]. The results showed that free glycerol is a key precursor of MCPDs in leavened doughs. This glycerol is primarily generated by the yeast during proving but is also present in the flour, the yeast, and the improver. Under conditions of high dough moisture content (45%), MCPD formation was approximately proportional to glycerol concentration but showed a weaker dependence on chloride level, suggesting that the mechanisms of formation involved at least some reversible stages. MCPD generation increased with decreasing dough moisture to a point where the formation reaction was limited by chloride solubility and competing reactions involving glycerol and key precursor intermediates. These results could be predicted by a kinetic model derived from the experimental data. Glycerol was shown to account for 68% of MCPDs generated in proved full recipe dough.
Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Llames, Rene Lim
1991-01-01
Generation-based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effect of the number of generations and generation capacities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.
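An illustrative simulation, not the dissertation's analytic survival model: objects die with an invented age-dependent probability at each collection, and we count how many survive long enough to be promoted past each generation, which is the quantity the survival model bounds.

```python
import random

random.seed(0)

# Objects die with probability 0.8/(1+age) at each collection (young
# objects die young); count promotions past each of three generations.
promotions = [0, 0, 0]
for _ in range(10_000):
    age = 0
    while random.random() > 0.8 / (1 + age):
        age += 1
        if age > 3:
            break
    for g in range(min(age, 3)):
        promotions[g] += 1

for g, n in enumerate(promotions):
    print(f"objects promoted past generation {g}: {n}")
```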
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-09-01
This document presents a modeling and control study of the Fluid Bed Gasification (FBG) unit at the Morgantown Energy Technology Center (METC). The work is performed under contract no. DE-FG21-94MC31384. The purpose of this study is to generate a simple FBG model from process data and then use the model to suggest an improved control scheme for the gasifier. The work first develops a simple linear model of the gasifier, then suggests an improved gasifier pressure and MGCR control configuration, and finally suggests the use of a multivariable control strategy for the gasifier.
Astashkina, Anna; Grainger, David W
2014-04-01
Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of improved next-generation 3-D cell culture models are expected to increase the predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research, significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges and future directions for the field to improve drug assay predictive power and the reliability of 3-D models are reviewed.
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
2008-09-30
retrievals, Geophysical Research Abstracts, Vol. 10, EGU2008-A-11193, 2008, SRef-ID: 1607-7962/gra/EGU2008-A 11193, EGU General Assembly 2008. Liu, M...Application of Earth Sciences Products for use in Next Generation Numerical Aerosol...can be generated and predicted. Through this system, we will be able to advance a number of US Navy Applied Science needs in the areas of improved
Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany
We present current and improved methods for estimating the capacity value and curtailment impacts of variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high-load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and the shutdown of select thermal generators. Using CEM model outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved representations of VG capacity value and curtailment in the CEM over existing approximation methods, without additional computational burden.
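The hourly calculations have simple cores, sketched below on synthetic data: capacity value as mean VG output in the top load hours, curtailment as VG in excess of load net of a must-run floor, and a crude storage bucket for curtailment recovery. All numbers and the storage logic are invented stand-ins for the report's methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly year: load and variable-generation (VG) output in MW.
hours = np.arange(8760)
load = 800 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, 8760)
vg = np.clip(rng.normal(300, 150, 8760), 0, None)

# (1) Capacity value: mean VG output during the top 100 load hours.
top = np.argsort(load)[-100:]
cap_value = vg[top].mean()

# (2) Curtailment: VG in excess of load net of a must-run thermal floor.
must_run = 400.0
curtailed = np.maximum(vg - (load - must_run), 0.0)

# (3) Storage recovery: a bucket absorbs curtailment up to invented power
# and energy limits, and is drained by 50 MWh each hour to serve load.
power, e_max, soc, recovered = 100.0, 400.0, 0.0, 0.0
for c in curtailed:
    charge = min(c, power, e_max - soc)
    soc = max(soc + charge - 50.0, 0.0)
    recovered += charge

print(f"capacity value: {cap_value:.0f} MW")
print(f"curtailment: {curtailed.sum():.0f} MWh, absorbed by storage: {recovered:.0f} MWh")
```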
Sociality influences cultural complexity.
Muthukrishna, Michael; Shulman, Ben W; Vasilescu, Vlad; Henrich, Joseph
2014-01-07
Archaeological and ethnohistorical evidence suggests a link between a population's size and structure and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and the social interconnectedness of populations can contribute to the complexity of their cultural repertoires. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution.
Generative Teaching and Learning of Economic Concepts: A Sample Lesson.
ERIC Educational Resources Information Center
Laney, James D.
1990-01-01
Presents a scripted lesson plan for intermediate grades, based on M.C. Wittrock's model of generative teaching derived from brain lateralization research. Uses a shopping mall as the setting for hypothetical dilemmas. Offers a combination of verbal and imagined strategies that improve students' economic reasoning and teaches cost-benefit analysis.…
Amazon forest structure generates diurnal and seasonal variability in light utilization
Douglas C. Morton; Jeremy Rubio; Bruce D. Cook; Jean-Philippe Gastellu-Etchegorry; Marcos Longo; Hyeungu Choi; Maria Hunter; Michael Keller
2016-01-01
The complex three-dimensional (3-D) structure of tropical forests generates a diversity of light environments for canopy and understory trees. Understanding diurnal and seasonal changes in light availability is critical for interpreting measurements of net ecosystem exchange and improving ecosystem models. Here, we used the Discrete Anisotropic Radiative Transfer (DART...
Automatic 3d Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process, so a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is targeted. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.
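A toy version of point-based hierarchical classification is sketched below; the height and planarity attributes, the thresholds, and the class set are invented stand-ins for the rule parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-point attributes: height above ground (m) and local
# planarity in [0, 1]; thresholds are invented rule parameters.
n = 1000
height = rng.uniform(0, 25, n)
planarity = rng.uniform(0, 1, n)

labels = np.full(n, "unclassified", dtype=object)
labels[height < 0.3] = "ground"                                    # rule 1
mask = (labels == "unclassified") & (height > 2.5) & (planarity > 0.8)
labels[mask] = "building"                                          # rule 2
mask = (labels == "unclassified") & (height > 2.5)
labels[mask] = "vegetation"          # rule 3: elevated but non-planar
labels[labels == "unclassified"] = "low object"

for cls in ("ground", "building", "vegetation", "low object"):
    print(f"{cls}: {(labels == cls).sum()} points")
```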
Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy
ERIC Educational Resources Information Center
Fukaya, Tatsushi
2013-01-01
The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…
Polinski, Nicole K.; Volpicelli-Daley, Laura A.; Sortwell, Caryl E.; Luk, Kelvin C.; Cremades, Nunilo; Gottler, Lindsey M.; Froula, Jessica; Duffy, Megan F.; Lee, Virginia M.Y.; Martinez, Terina N.; Dave, Kuldip D.
2018-01-01
Parkinson's disease (PD) is the second most common neurodegenerative disease, affecting approximately one percent of the population over the age of sixty. Although many animal models have been developed to study this disease, each model presents its own advantages and caveats. A unique model has arisen to study the role of alpha-synuclein (aSyn) in the pathogenesis of PD. This model involves the conversion of recombinant monomeric aSyn protein to a fibrillar form--the aSyn pre-formed fibril (aSyn PFF)--which is then injected into the brain or introduced to the media in culture. Although many groups have successfully adopted and replicated the aSyn PFF model, issues with generating consistent pathology have been reported by investigators. To improve the replicability of this model and diminish these issues, The Michael J. Fox Foundation for Parkinson's Research (MJFF) has enlisted the help of field leaders who performed key experiments to establish the aSyn PFF model, to provide the research community with guidelines and practical tips for improving the robustness and success of this model. Specifically, we identify key pitfalls and suggestions for avoiding these mistakes as they relate to generating aSyn PFFs from monomeric protein, validating the formation of pathogenic aSyn PFFs, and using aSyn PFFs in vivo or in vitro to model PD. With this additional information, adoption and use of the aSyn PFF model should present fewer challenges, resulting in a robust and widely available model of PD. PMID:29400668
An epidemiological modeling and data integration framework.
Pfeifer, B; Wurz, M; Hanser, F; Seger, M; Netzer, M; Osl, M; Modre-Osprian, R; Schreier, G; Baumgartner, C
2010-01-01
In this work, a cellular automaton software package for simulating different infectious diseases, storing the simulation results in a data warehouse system, and analyzing the obtained results to generate prediction models as well as contingency plans is proposed. The Brisbane H3N2 flu virus, which spread during the 2009 winter season, was used for simulation in the federal state of Tyrol, Austria. The simulation-modeling framework consists of an underlying cellular automaton. The cellular automaton model is parameterized by known disease parameters, and geographical as well as demographic conditions are included in simulating the spread. The data generated by simulation are stored in the back room of the data warehouse using the Talend Open Studio software package, and subsequent statistical and data mining tasks are performed using the tool termed Knowledge Discovery in Database Designer (KD3). The obtained simulation results were used to generate prediction models for all nine federal states of Austria. The proposed framework provides a powerful and easy-to-handle interface for parameterizing and simulating different infectious diseases in order to generate prediction models and improve contingency plans for future events.
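A minimal cellular-automaton SIR sketch of the kind of simulation described; the grid, the four-cell neighbourhood, and all parameters are illustrative, not those of the Brisbane H3N2 study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Each grid cell is a person: 0 = susceptible, >0 = days infected,
# -1 = recovered. Infection spreads to the four neighbours.
N, beta, infectious_period = 100, 0.25, 5
state = np.zeros((N, N), int)
state[N // 2, N // 2] = 1                       # index case in the centre

for day in range(60):
    infected = state > 0
    exposure = (np.roll(infected, 1, 0) | np.roll(infected, -1, 0) |
                np.roll(infected, 1, 1) | np.roll(infected, -1, 1))
    new_inf = (state == 0) & exposure & (rng.random((N, N)) < beta)
    state[infected] += 1                        # advance infection age
    state[state > infectious_period] = -1       # recover
    state[new_inf] = 1

print(f"final attack rate: {(state == -1).mean():.1%}")
```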
Computational Model of Secondary Palate Fusion and Disruption
Morphogenetic events are driven by cell-generated physical forces and complex cellular dynamics. To improve our capacity to predict developmental effects from cellular alterations, we built a multi-cellular agent-based model in CompuCell3D that recapitulates the cellular networks...
NASA Astrophysics Data System (ADS)
Goderniaux, Pascal; BrouyèRe, Serge; Blenkinsop, Stephen; Burton, Aidan; Fowler, Hayley J.; Orban, Philippe; Dassargues, Alain
2011-12-01
Several studies have highlighted the potential negative impact of climate change on groundwater reserves, but additional work is required to help water managers plan for future changes. In particular, existing studies provide projections for a stationary climate representative of the end of the century, although information is demanded for the near future. Such time-slice experiments fail to account for the transient nature of climatic changes over the century. Moreover, uncertainty linked to natural climate variability is not explicitly considered in previous studies. In this study we substantially improve upon the state of the art by using a sophisticated transient weather generator in combination with an integrated surface-subsurface hydrological model (Geer basin, Belgium) developed with the finite element modeling software "HydroGeoSphere." This version of the weather generator enables the stochastic generation of large numbers of equiprobable climatic time series representing transient climate change, which are used to assess impacts in a probabilistic way. For the Geer basin, 30 equiprobable climate change scenarios from 2010 to 2085 have been generated for each of six different regional climate models (RCMs). Results show that although the 95% confidence intervals calculated around projected groundwater levels remain large, the climate change signal becomes stronger than that of natural climate variability by 2085. Additionally, the weather generator's ability to simulate transient climate change enabled assessment of the likely time scale of a specific impact and its associated uncertainty, providing managers with additional information when planning further investment. This methodology constitutes a real improvement in the field of groundwater projections under climate change conditions.
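The probabilistic framing can be sketched as follows: many equiprobable transient series share a trend but differ in natural variability, and ensemble percentiles give the confidence band plus an emergence date for the change signal. The trend, noise level, and normalized units here are invented, not taken from the Geer basin study.

```python
import numpy as np

rng = np.random.default_rng(6)

# 30 equiprobable transient series of a normalized recharge anomaly:
# a shared drying trend plus independent natural variability.
years = np.arange(2010, 2086)
trend = -1.0 * (years - 2010) / 75.0
ensemble = trend + rng.normal(0, 0.25, (30, years.size))

lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
below = hi < 0                        # band entirely below zero
emergence = years[np.argmax(below)] if below.any() else None
print(f"95% band at 2085: [{lo[-1]:+.2f}, {hi[-1]:+.2f}]")
print(f"change signal exceeds natural variability around {emergence}")
```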
Reverse engineering systems models of regulation: discovery, prediction and mechanisms.
Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S
2012-08-01
Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties.
Hydropower Modeling Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Andrade, Juan; Cohen, Stuart
Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involved in incorporating these constraints when modeling hydropower operations in production cost and capacity expansion models. Many of these challenges involve a lack of data to adequately represent the constraints, or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.
Infrared radiation scene generation of stars and planets in celestial background
NASA Astrophysics Data System (ADS)
Guo, Feng; Hong, Yaohui; Xu, Xiaojian
2014-10-01
An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2μm ~ 35μm. In the current model, the initial locations of stars are calculated from the Midcourse Space Experiment (MSX) IR astronomical catalogue (MSX-IRAC), while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
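The blackbody assumption for planets reduces to Planck's law; the sketch below integrates spectral radiance over the model's 1.2-35 μm band. The planet temperatures are illustrative values, and a real sensor model would also weight by the instrument's spectral response.

```python
import numpy as np

# Planck spectral radiance, sketching the blackbody assumption for
# planets; band-integrating over the sensor response would follow.
H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wavelength_m, temp_k):
    """Spectral radiance B_lambda in W / (m^2 sr m)."""
    a = 2 * H * C ** 2 / wavelength_m ** 5
    return a / (np.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

wavelengths = np.linspace(1.2e-6, 35e-6, 500)   # the model's 1.2-35 um band
for name, T in [("Mars-like", 210.0), ("Venus-like", 735.0)]:
    band = np.trapz(planck(wavelengths, T), wavelengths)
    print(f"{name} ({T} K): in-band radiance {band:.1f} W/m^2/sr")
```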
NASA Astrophysics Data System (ADS)
Hristov, Y.; Oxley, G.; Žagar, M.
2014-06-01
The Bolund measurement campaign, performed by the Danish Technical University (DTU) Wind Energy Department (also known as RISØ), provided significant insight into wind flow modeling over complex terrain. In the blind comparison study several modelling solutions were submitted, the vast majority being steady-state Computational Fluid Dynamics (CFD) approaches with two-equation k-epsilon turbulence closure. This approach yielded the most accurate results and was identified as the state-of-the-art tool for wind turbine generator (WTG) micro-siting. Based on the findings from Bolund, further comparison between CFD and field measurement data has been deemed essential in order to improve simulation accuracy for turbine load and long-term Annual Energy Production (AEP) estimations. Vestas Wind Systems A/S is a major WTG original equipment manufacturer (OEM) with an installed base of over 60 GW in over 70 countries, accounting for 19% of the global installed base. The Vestas Performance and Diagnostic Centre (VPDC) provides online live data for more than 47 GW of these turbines, allowing a comprehensive comparison between modelled and real-world energy production data. In previous studies, multiple sites have been simulated with a steady neutral CFD formulation for the atmospheric surface layer (ASL), and wind resource (RSF) files have been generated as a basis for long-term AEP predictions, showing significant improvement over predictions performed with the industry-standard linear WAsP tool. In this study, further improvements to wind resource file generation with CFD are examined using an unsteady diurnal-cycle approach with a full atmospheric boundary layer (ABL) formulation, with the unique stratifications throughout the cycle weighted according to mesoscale-simulated sector-wise stability frequencies.
Hinton, Devon E; Hofmann, Stefan G; Pitman, Roger K; Pollack, Mark H; Barlow, David H
2008-01-01
This article examines the ability of the panic attack-posttraumatic stress disorder (PTSD) model to predict how panic attacks are generated and how panic attacks worsen PTSD. The article does so by determining the validity of the panic attack-PTSD model with respect to one type of panic attack among traumatized Cambodian refugees: orthostatic panic (OP) attacks (i.e. panic attacks generated by moving from lying or sitting to standing). Among Cambodian refugees attending a psychiatric clinic, the authors conducted two studies to explore the validity of the panic attack-PTSD model as applied to OP patients (i.e. patients with at least one episode of OP in the previous month). In Study 1, the panic attack-PTSD model accurately indicated how OP is seemingly generated: among OP patients (N = 58), orthostasis-associated flashbacks and catastrophic cognitions predicted OP severity beyond a measure of anxious-depressive distress (Symptom Checklist-90-R subscales), and OP severity significantly mediated the effect of anxious-depressive distress on Clinician-Administered PTSD Scale severity. In Study 2, as predicted by the panic attack-PTSD model, OP had a mediational role with respect to the effect of treatment on PTSD severity: among Cambodian refugees with PTSD and comorbid OP who participated in a cognitive behavioural therapy study (N = 56), improvement in PTSD severity was partially mediated by improvement in OP severity.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... proposed approval, including specific comments on NDEP's modeling and cost analysis of the RGGS BART Determination for NOX. See Modeling for the Reid Gardner Generating Station: Visibility Impacts in Class I... independent modeling analysis to evaluate the incremental visibility improvement attributable to the NOX...
Antiviral Defense and Innate Immune Memory in the Oyster.
Green, Timothy J; Speck, Peter
2018-03-16
The Pacific oyster, Crassostrea gigas, is becoming a valuable model for investigating antiviral defense in the Lophotrochozoa superphylum. In the past five years, improvements to laboratory-based experimental infection protocols using Ostreid herpesvirus I (OsHV-1) from naturally infected C. gigas, combined with next-generation sequencing techniques, have revealed that oysters have a complex antiviral response involving the activation of all major innate immune pathways. Experimental evidence indicates C. gigas utilizes an interferon-like response to limit OsHV-1 replication and spread. Oysters injected with a viral mimic (polyI:C) develop resistance to OsHV-1. Improved survival following polyI:C injection was found later in life (within-generational immune priming) and in the next generation (multi-generational immune priming). These studies indicate that the oyster's antiviral defense system exhibits a form of innate immune memory. An important priority is to identify the molecular mechanisms responsible for this phenomenon. This knowledge will motivate the development of practical and cost-effective treatments for improving oyster health in aquaculture.
NASA Astrophysics Data System (ADS)
Kim, G. H.; Kim, A. R.; Kim, S.; Park, M.; Yu, I. K.; Seong, K. C.; Won, Y. J.
2011-11-01
A superconducting magnetic energy storage (SMES) system is a DC-current-driven device that can be utilized to improve power quality, particularly in connection with renewable energy sources, owing to its higher efficiency and faster response than other devices. This paper suggests a novel connection topology for SMES which can smooth the output power flow of a wind power generation system (WPGS). The structure of the proposed system is cost-effective because it eliminates one power converter in comparison with a conventional SMES application. A further advantage of SMES in the proposed system is improved low voltage ride through (LVRT) capability for the permanent magnet synchronous generator (PMSG) type WPGS. The proposed system, including the SMES, has been modeled and analyzed in PSCAD/EMTDC. The simulation results show the effectiveness of the novel SMES application strategy in not only smoothing the output power of the PMSG but also improving the LVRT capability of the PMSG-type WPGS.
Gao, Changwei; Liu, Xiaoming; Chen, Hai
2017-08-22
This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during transient processes. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. An adjustable parameter of the compensation section can be modified to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG during transients. To verify the effectiveness of the proposed control strategy for distributed energy resource inverters, a simulation model was set up on the MATLAB/Simulink platform and a physical experimental platform was established. Simulation and experimental results demonstrate the effectiveness of the proposed IVSG control strategy.
The generation of amplified spontaneous emission in high-power CPA laser systems.
Keppler, Sebastian; Sävert, Alexander; Körner, Jörg; Hornung, Marco; Liebetrau, Hartmut; Hein, Joachim; Kaluza, Malte Christoph
2016-03-01
An analytical model is presented describing the temporal intensity contrast determined by amplified spontaneous emission (ASE) in high-intensity laser systems based on the principle of chirped pulse amplification. The model describes both the generation and the amplification of the ASE for each type of laser amplifier. It is applied to different solid-state laser materials which can support the amplification of pulse durations ≤350 fs. The results are compared to intensity and fluence thresholds, e.g. those set by the damage threshold of a target material to be used in high-intensity applications. This makes it possible to determine whether additional means of contrast improvement, e.g. plasma mirrors, are required for a given type of laser system and application. Using this model, the requirements for an optimized high-contrast front-end design are derived in terms of the necessary contrast improvement and the amplified "clean" output energy for a desired focused peak intensity. Finally, the model is compared to measurements at three different high-intensity laser systems based on Ti:sapphire and Yb:glass. These measurements show excellent agreement with the model.
NASA Astrophysics Data System (ADS)
Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.
2017-12-01
Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" that extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use, which results in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, too, hydrological modelling benefited greatly from constraining parameters using such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrate citizen science and a land surface model. In data-scarce contexts, this framework can indeed help overcome the limitations of sparse observations. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.
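The calibration idea can be sketched as follows; a one-parameter toy runoff model and synthetic observations stand in (as assumptions for illustration) for JULES and the iMHEA streamflow records.

```python
# Sketch of streamflow-based calibration: choose the parameter value that
# maximizes a skill score (Nash-Sutcliffe efficiency, NSE) against
# observations. A linear reservoir stands in for the land surface model.
import numpy as np

def simulate(k, rain):
    """Toy linear-reservoir runoff model with recession constant k."""
    storage, flow = 0.0, []
    for r in rain:
        storage += r
        q = k * storage       # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.array(flow)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
rain = rng.exponential(2.0, size=365)
observed = simulate(0.3, rain) + rng.normal(scale=0.1, size=365)  # synthetic "gauge"

candidates = np.linspace(0.05, 0.95, 19)
best_k = max(candidates, key=lambda k: nse(simulate(k, rain), observed))
print("calibrated k =", round(best_k, 2),
      "NSE =", round(nse(simulate(best_k, rain), observed), 3))
```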
New and Improved GLDAS and NLDAS Data Sets and Data Services at HDISC/NASA
NASA Technical Reports Server (NTRS)
Rui, Hualan; Beaudoing, Hiroko Kato; Mocko, David M.; Rodell, Matthew; Teng, William L.; Vollmer, Bruce
2010-01-01
Terrestrial hydrological variables are important in global hydrology, climate, and carbon cycle studies. Generating global fields of these variables, however, is still a challenge. The goal of a land data assimilation system (LDAS)is to ingest satellite-and ground-based observational data products, using advanced land surface modeling and data assimilation techniques, in order to generate optimal fields of land surface states and fluxes data and, thereby, facilitate hydrology and climate modeling, research, and forecast.
NASA Astrophysics Data System (ADS)
Gould, C. A.; Shammas, N. Y. A.; Grainger, S.; Taylor, I.; Simpson, K.
2012-06-01
This paper documents the 3D modeling and simulation of a three couple thermoelectric module using the Synopsys Technology Computer Aided Design (TCAD) semiconductor simulation software. Simulation results are presented for thermoelectric power generation, cooling and heating, and successfully demonstrate the basic thermoelectric principles. The 3D TCAD simulation model of a three couple thermoelectric module can be used in the future to evaluate different thermoelectric materials, device structures, and improve the efficiency and performance of thermoelectric modules.
Power generation using sugar cane bagasse: A heat recovery analysis
NASA Astrophysics Data System (ADS)
Seguro, Jean Vittorio
The sugar industry is facing the need to improve its performance by increasing efficiency and developing profitable by-products. An important possibility is the production of electrical power for sale. Co-generation has been practiced in the sugar industry for a long time in a very inefficient way, with the main purpose of getting rid of the bagasse. The goal of this research was to develop a software tool that could be used to improve the way that bagasse is used to generate power. Special focus was given to the heat recovery components of the co-generation plant (economizer, air pre-heater and bagasse dryer) to determine if one, or a combination, of them led to a more efficient co-generation cycle. An extensive review of the state of the art of power generation in the sugar industry was conducted and is summarized in this dissertation. Based on this review, models were developed. After testing the models and comparing the results with the data collected from the literature, a software application that integrated all these models was developed to simulate the complete co-generation plant. Seven different cycles, three different pressures, and sixty-eight distributions of the flue gas through the heat recovery components can be simulated. The software includes an economic analysis tool that can help the designer determine the economic feasibility of different options. Results from running the simulation are presented that demonstrate its effectiveness in evaluating and comparing the different heat recovery components and power generation cycles. These results indicate that the economizer is the most beneficial option for heat recovery and that the use of waste heat in a bagasse dryer is the least desirable option. Quantitative comparisons of several possible cycle options with the widely used traditional back-pressure turbine cycle are given. These indicate that a double extraction condensing cycle is best for co-generation purposes. Power generation gains between 40 and 100% are predicted for some cycles with the addition of optimum heat recovery systems.
More than Anecdotes: Fishers' Ecological Knowledge Can Fill Gaps for Ecosystem Modeling.
Bevilacqua, Ana Helena V; Carvalho, Adriana R; Angelini, Ronaldo; Christensen, Villy
2016-01-01
Ecosystem modeling applied to fisheries remains hampered by a lack of local information. Fishers' knowledge could fill this gap, improving participation in and the management of fisheries. The same fishing area was modeled using two approaches: one based on fishers' knowledge and one based on scientific information. For the former, data were collected by interviews following the Delphi methodology; for the latter, data were gathered from the literature. Agreement between the attributes generated by the fishers' knowledge model and the scientific model is discussed and explored, aiming to improve data availability, the ecosystem model, and fisheries management. The ecosystem attributes produced by the fishers' knowledge model were consistent with those produced by the scientific model elaborated using only scientific data from the literature. This study provides evidence that fishers' knowledge may suitably complement scientific data, and may improve the modeling tools for the research and management of fisheries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; Baker, Benjamin Allen; Schunert, Sebastian
The INL is currently evolving the modeling and simulation (M&S) capability that will enable improved core operation as well as design and analysis of TREAT experiments. This M&S capability primarily uses MAMMOTH, a reactor physics application being developed under the Multi-physics Object Oriented Simulation Environment (MOOSE) framework. MAMMOTH allows the coupling of a number of other MOOSE-based applications. This second year of work has been devoted to the generation of a deterministic reference solution for the full core, the preparation of anisotropic diffusion coefficients, the testing of the SPH equivalence method, and the improvement of the control rod modeling. In addition, this report includes the progress made in the modeling of the M8 core configuration and experiment vehicle since January of this year.
Challenges and potential solutions for European coastal ocean modelling
NASA Astrophysics Data System (ADS)
She, Jun; Stanev, Emil
2017-04-01
Coastal operational oceanography is a science and technological platform to integrate and transform the outcomes of marine monitoring, new knowledge generation and innovative technologies into operational information products and services in the coastal ocean. It has been identified as one of the four research priorities by EuroGOOS (She et al. 2016). Coastal modelling plays a central role in such integration and transformation. A next-generation coastal ocean forecasting system should have the following features: i) fully exploit benefits from future observations; ii) generate meaningful products at finer scales, e.g. sub-mesoscale and in the estuary-coast-sea continuum; iii) use efficient parallel computing and model grid structures; iv) provide high-quality forecasts as forcing for NWP and coastal climate models; v) correctly resolve inter-basin and inter-sub-basin water exchange; vi) resolve synoptic variability and predictability in marine ecosystems, e.g. for algae blooms; and vii) address critical and relevant issues in coastal applications, e.g. marine spatial planning, maritime safety, marine pollution protection, disaster prevention, offshore wind energy, climate change adaptation and mitigation, ICZM (integrated coastal zone management), the WFD (Water Framework Directive), and the MSFD (Marine Strategy Framework Directive), especially the habitat, eutrophication, and hydrographic condition descriptors. This presentation will address the above challenges, identify the limits of current models and propose the corresponding research needed. The proposed roadmap will address an integrated monitoring-modelling approach and the development of Unified European Coastal Ocean Models. In the coming years, a few new developments in European sea observations can be expected: more near-real-time delivery of profile observations made by research vessels; more shallow-water Argo floats and bio-Argo floats deployed; much more high-resolution sea level data from SWOT and on-going altimetry missions, contributing to resolving (sub-)mesoscale eddies; more current measurements from ADCPs and HF radars; and geostationary data for suspended sediment and diurnal observations from satellite SST products. These developments will make it possible to generate new knowledge and build up new capacities for modelling and forecasting systems, e.g. improved current forecasts, improved water skin temperature and surface wind forecasts, improved modelling and forecasting of (sub-)mesoscale activities and drift, and new forecast capabilities for SPM (Suspended Particulate Matter) and algae blooms. There will be much more in-situ and satellite data available for assimilation. The assimilation of sea level, chl-a, ferrybox and profile observations will greatly improve the ocean-ice-ecosystem forecast quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zinaman, Owen
This presentation details the 21st Century Power Partnership's fellowship program accomplishments from 2016. This fellowship brought two fellows from South Africa's power utility, Eskom, to the U.S. Department of Energy's National Renewable Energy Laboratory. The fellows spent two weeks working to improve the fidelity of Eskom's PLEXOS long-term and short-term models, which are used in long-term generation planning exercises and capacity adequacy assessments. The fellows returned to Eskom equipped with a new suite of tools and skills to enhance Eskom's PLEXOS modeling capabilities.
A slow fashion design model for bluejeans using house of quality approach
NASA Astrophysics Data System (ADS)
Nergis, B.; Candan, C.; Sarısaltık, S.; Seneloglu, N.; Bozuk, R.; Amzayev, K.
2017-10-01
The purpose of this study was to develop a slow fashion design model using the house of quality (HOQ) approach to provide fashion designers with a tool to improve the overall sustainability of denim jeans for Generation Y consumers in the Turkish market. A survey was conducted to collect data on the design and performance expectations of the targeted consumer group, as well as its perception of slow fashion in the design process of denim jeans. The results showed that Generation Y consumers in this market attached the most importance to sustainable production techniques when identifying slow fashion.
Qin, Mohan; Ping, Qingyun; Lu, Yaobin; Abu-Reesh, Ibrahim M; He, Zhen
2015-11-01
Osmotic microbial fuel cells (OsMFCs) are a new type of MFC that integrates forward osmosis (FO). However, it is not well understood why electricity generation is improved in OsMFCs compared to regular MFCs. Herein, an approach integrating experimental investigation and mathematical modeling was adopted to address the question. Both an OsMFC and an MFC achieved similar organic removal efficiency, but the OsMFC generated higher current than the MFC with or without water flux, resulting from the lower resistance of the FO membrane. Combining NaCl and glucose as a catholyte demonstrated that the catholyte conductivity affected the electricity generation in the OsMFC. A mathematical model of OsMFCs was developed and validated with the experimental data. The model predicted the variation of internal resistance with increasing water flux, and confirmed the importance of membrane resistance. Increasing water flux with higher catholyte conductivity could decrease the membrane resistance. Copyright © 2015 Elsevier Ltd. All rights reserved.
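A toy calculation illustrates the resistance argument above; the voltage and resistance values are assumed for illustration only and are not measurements from the study.

```python
# Toy calculation of how a lower membrane resistance raises current at a
# fixed external load, consistent with the OsMFC observations above.
emf = 0.7           # open-circuit voltage, V (assumed)
r_external = 10.0   # external load, ohms (assumed)
r_other = 5.0       # electrode/electrolyte resistance, ohms (assumed)

for r_membrane in (8.0, 4.0, 2.0):  # decreasing as catholyte conductivity rises
    current = emf / (r_external + r_other + r_membrane)
    print(f"R_mem = {r_membrane:4.1f} ohm -> I = {1000 * current:.1f} mA")
```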
Critical maternal health knowledge gaps in low- and middle-income countries for the post-2015 era.
Kendall, Tamil; Langer, Ana
2015-06-05
Effective interventions to promote maternal health and address obstetric complications exist; however, 800 women die every day during pregnancy and childbirth from largely preventable causes, and more than 90% of these deaths occur in low- and middle-income countries (LMIC). In 2014, the Maternal Health Task Force consulted 26 global maternal health researchers to identify persistent and critical knowledge gaps to be filled to reduce maternal morbidity and mortality and improve maternal health. The vision of maternal health articulated was comprehensive, and priorities for knowledge generation encompassed improving the availability, accessibility, acceptability, and quality of institutional labor and delivery services and other effective interventions, such as contraception and safe abortion services. Respondents emphasized the need for health systems research to identify models that can deliver what is known to be effective to prevent and treat the main causes of maternal death at scale in different contexts and to sustain coverage and quality over time. Researchers also emphasized the development of tools to measure quality of care and promote ongoing quality improvement at the facility, district, and national level. Knowledge generation to improve distribution and retention of healthcare workers, facilitate task shifting, develop and evaluate training models to improve "hands-on" skills and promote evidence-based practice, and increase managerial capacity at different levels of the health system was also prioritized. Interviewees noted that attitudes, behavior, and power relationships between health professionals and within institutions must be transformed to achieve coverage of high-quality maternal health services in LMIC. The increasing burden of non-communicable diseases, urbanization, and the persistence of social and economic inequality were identified as emerging challenges that require knowledge generation to improve health system responses and evaluate progress. Respondents emphasized evaluating effectiveness, feasibility, and equity impacts of health system interventions. A prominent role for implementation science, evidence for policy advocacy, and interdisciplinary collaboration were identified as critical areas for knowledge generation to improve maternal health in the post-2015 era.
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
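The orthogonality property that makes conformal maps attractive for grid generation can be illustrated with a far simpler map than the Schwarz-Christoffel transformations used here; the Joukowski-type map below is an assumed stand-in, chosen only because it has a closed form (SC mappings require dedicated numerical toolboxes).

```python
# Illustration of the key property exploited above: an analytic (conformal)
# map sends an orthogonal grid to an orthogonal curvilinear grid.
import numpy as np

r = np.linspace(1.05, 2.0, 20)          # radial grid lines
theta = np.linspace(0, 2 * np.pi, 73)   # azimuthal grid lines
R, T = np.meshgrid(r, theta)
z = R * np.exp(1j * T)                  # annular orthogonal grid in the z-plane

w = z + 1.0 / z                         # conformal (Joukowski) map

# Orthogonality is preserved wherever the map is conformal (dw/dz != 0),
# so the image grid (w.real, w.imag) remains orthogonal curvilinear and
# could be consumed by a finite-difference ocean model.
print("grid shape:", w.shape, "sample node:", w[0, 0])
```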
Improving the Representation of Land in Climate Models by Application of EOS Observations
NASA Technical Reports Server (NTRS)
2004-01-01
The PI's current and previous IDS investigations have focused on the application of land data toward the improvement of climate models. The previous IDS research identified the key factors limiting the accuracy of climate models as the representation of albedos, land cover, fraction of landscape covered by vegetation, roughness lengths, surface skin temperature and canopy properties such as leaf area index (LAI) and average stomatal conductance. Therefore, we assembled a team uniquely situated to focus on these key variables and incorporate remotely sensed measures of them into the next generation of climate models.
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
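The convergence behaviour being modelled can be illustrated under a simple independent-rater assumption (an idealization; real propagated labels are correlated): majority-vote accuracy rises with the number of fused segmentations and saturates.

```python
# Sketch of why label fusion improves with the number of input
# segmentations: if each rater labels a voxel correctly with probability p,
# a majority of n independent raters is correct with probability
# P(Bin(n, p) > n/2), which saturates as n grows. This mirrors the
# convergence behaviour modelled above, not the authors' exact model.
from math import comb

def majority_accuracy(n, p):
    """Probability that a strict majority of n independent raters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9, 15, 29):           # odd counts avoid ties
    print(f"n = {n:2d} -> fused accuracy = {majority_accuracy(n, 0.8):.4f}")
```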
Progress and opportunities in EELS and EDS tomography.
Collins, Sean M; Midgley, Paul A
2017-09-01
Electron tomography using energy loss and X-ray spectroscopy in the electron microscope continues to develop in rapidly evolving and diverse directions, enabling new insight into the three-dimensional chemistry and physics of nanoscale volumes. Progress has been made recently in improving reconstructions from EELS and EDS signals in electron tomography by applying compressed sensing methods, characterizing new detector technologies in detail, deriving improved models of signal generation, and exploring machine learning approaches to signal processing. These disparate threads can be brought together in a cohesive framework in terms of a model-based approach to analytical electron tomography. Models incorporate information on signal generation and detection as well as prior knowledge of structures in the spectrum image data. Many recent examples illustrate the flexibility of this approach and its feasibility for addressing challenges in non-linear or limited signals in EELS and EDS tomography. Further work in combining multiple imaging and spectroscopy modalities, developing synergistic data acquisition, processing, and reconstruction approaches, and improving the precision of quantitative spectroscopic tomography will expand the frontiers of spatial resolution, dose limits, and maximal information recovery. Copyright © 2017 Elsevier B.V. All rights reserved.
Green Energy Options for Consumer-Owned Business
DOE Office of Scientific and Technical Information (OSTI.GOV)
Co-opPlus of Western Massachusetts
2006-05-01
The goal of this project was to define, test, and prototype a replicable business model for consumer-owned cooperatives. The result is a replicable consumer-owned cooperative business model for the generation, interconnection, and distribution of renewable energy that incorporates energy conservation and efficiency improvements.
Improving Perceptual Skills with 3-Dimensional Animations.
ERIC Educational Resources Information Center
Johns, Janet Faye; Brander, Julianne Marie
1998-01-01
Describes three-dimensional computer aided design (CAD) models for every component in a representative mechanical system; the CAD models made it easy to generate 3-D animations that are ideal for teaching perceptual skills in multimedia computer-based technical training. Fifteen illustrations are provided. (AEF)
Use of temperature to improve West Nile virus forecasts
Schneider, Zachary D.; Caillouet, Kevin A.; Campbell, Scott R.; Damian, Dan; Irwin, Patrick; Jones, Herff M. P.; Townsend, John
2018-01-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that on average increased absolute forecast accuracy 5%, 10%, 12%, and 6%, respectively, over the non-temperature forced baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings provide a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs. PMID:29522514
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...
2018-05-14
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
NASA Astrophysics Data System (ADS)
Jonny, Zagloed, Teuku Yuri M.
2017-11-01
This paper aims to present an integrated health care model for the Indonesian health care industry. Based on previous research, there are two health care models in the industry: disease-centered and patient-centered care models. The patient-centered care model is now widely applied due to its capability to reduce cost and improve quality simultaneously. However, there is still no comprehensive model resulting in cost reduction, quality improvement, patient satisfaction and hospital profitability simultaneously. Therefore, this research is intended to develop that model. In doing so, first, a conceptual model using Kano's Model, Quality Function Deployment (QFD) and the Balanced Scorecard (BSC) is developed to generate several important elements of the model as required by stakeholders. Then, a case study of an Indonesian hospital is presented to evaluate the validity of the model using correlation analysis. As a result, it can be concluded that the model is validated, implying several managerial insights among its elements: 1) leadership (r=0.85) and context of the organization (r=0.77) improve operations; 2) planning (r=0.96), support processes (r=0.87) and continual improvement (r=0.95) also improve operations; 3) operations improve customer satisfaction (r=0.89) and financial performance (r=0.93); and 4) customer satisfaction improves financial performance (r=0.98).
Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia
2018-05-17
The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited for what concerns the estimation of muscle and joint contact forces. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, muscle architecture needs to be estimated. Four methods to estimate muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. Reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated and these forces were not sensitive to the method used to compute muscle maximum isometric force. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
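A minimal sketch of the IVIM fitting task follows, using the conventional NLLS baseline on synthetic decay data; an NLME fit would additionally pool information across voxels and requires a mixed-effects solver. The b-values and parameter values are illustrative assumptions.

```python
# Sketch of a voxel-wise IVIM fit via non-linear least squares (the NLLS
# baseline described above), on synthetic diffusion-weighted signal data.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM model for the normalized signal S(b)/S0."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], float)
rng = np.random.default_rng(3)
signal = ivim(b_values, 0.1, 0.02, 0.001) \
         + rng.normal(scale=0.01, size=b_values.size)   # one noisy "voxel"

popt, _ = curve_fit(ivim, b_values, signal,
                    p0=[0.1, 0.01, 0.001],
                    bounds=([0, 1e-4, 1e-5], [0.5, 0.5, 0.01]))
print("f = %.3f, D* = %.4f mm^2/s, D = %.5f mm^2/s" % tuple(popt))
```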
ERIC Educational Resources Information Center
Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei
2017-01-01
Recent improvements in three-dimensional (3D) virtual modeling software allows anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…
Integrating satellite imagery with simulation modeling to improve burn severity mapping
Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon
2014-01-01
Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...
Social preferences toward energy generation with woody biomass from public forests in Montana, USA
Robert M. Campbell; Tyron J. Venn; Nathaniel M. Anderson
2016-01-01
In Montana, USA, there are substantial opportunities for mechanized thinning treatments on public forests to reduce the likelihood of severe and damaging wildfires and improve forest health. These treatments produce residues that can be used to generate renewable energy and displace fossil fuels. The choice modeling method is employed to examine the marginal...
ERIC Educational Resources Information Center
Matthews-Lopez, Joy L.; Hombo, Catherine M.
The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…
Efficient generation of connectivity in neuronal networks from simulator-independent descriptions
Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.
2014-01-01
Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
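The interface concept can be sketched generically: the connectivity library yields connections, and the simulator consumes them through an agreed-upon protocol. The sketch below mirrors the idea only; it is not the actual C++ interface, nor the NEST or PyNN API, and all names are hypothetical.

```python
# Generic sketch of a "connection generator" interface: the library side
# produces (source, target) pairs; the simulator side instantiates them
# without knowing how the pattern was generated.
import random

def fixed_probability_connections(sources, targets, p, seed=0):
    """Yield (source, target) pairs, each connected with probability p."""
    rng = random.Random(seed)
    for s in sources:
        for t in targets:
            if rng.random() < p:
                yield (s, t)

class ToySimulator:
    def __init__(self):
        self.connections = []
    def connect_from_generator(self, generator):
        # The simulator iterates once; the library controls the pattern.
        self.connections.extend(generator)

sim = ToySimulator()
sim.connect_from_generator(
    fixed_probability_connections(range(100), range(100), 0.1))
print(len(sim.connections), "connections instantiated")
```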
NASA Astrophysics Data System (ADS)
Song, S. Y.; Liu, Q. H.; Zhao, Y. N.; Liu, S. Y.
2016-08-01
With the rapid development of wind power generation, research on wind power control and integration issues has attracted much attention, and its focus is shifting away from the ideal power grid environment to the actual power grid environment. As the mainstream wind turbine generator, the doubly-fed induction generator (DFIG) is connected to the power grid directly through its stator, so it is particularly sensitive to power grid disturbances. This paper studies the improvement of DFIG control technology in a power grid harmonic environment. Based on a DFIG dynamic model that accounts for grid harmonics, the paper describes the shortcomings of the common DFIG control strategy and puts forward an enhanced method. Decoupled control of the system is realized by compensating the coupling between the rotor harmonic voltage and harmonic current, improving the control performance. In addition, simulation experiments on PSCAD/EMTDC are carried out to verify the correctness and effectiveness of the improved scheme.
Optimal Design of an Automotive Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fagehi, Hassan; Attar, Alaa; Lee, Hosung
2018-07-01
The consumption of energy continues to increase at an exponential rate, especially in terms of conventional automobiles. Approximately 40% of the applied fuel into a vehicle is lost as waste exhausted to the environment. The desire for improved fuel efficiency by recovering the exhaust waste heat in automobiles has become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity as long as it is improving fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in terms of power generation. The current study discusses the optimal design of the automotive exhaust TEG. An experimental study has been conducted to verify the model that used the ideal (standard) equations along with effective material properties. The model is reasonably verified by experimental work, mainly due to the utilization of the effective material properties. Hence, the thermoelectric module that was used in the experiment was optimized by using a developed optimal design theory (dimensionless analysis technique).
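For reference, the "ideal (standard) equations" for a thermoelectric module are commonly summarized as below, with α the module Seebeck coefficient, R its internal electrical resistance, Z the figure of merit, and T_h, T_c the hot- and cold-side temperatures. This is a textbook summary, not the paper's own derivation; effective material properties are typically substituted for the intrinsic ones.

```latex
% Maximum electrical power (matched load) and maximum conversion
% efficiency of an ideal thermoelectric generator.
\begin{align}
  P_{\max} &= \frac{(\alpha\,\Delta T)^2}{4R},
  \qquad \Delta T = T_h - T_c \\
  \eta_{\max} &= \frac{\Delta T}{T_h}\cdot
    \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},
  \qquad \bar{T} = \frac{T_h + T_c}{2}
\end{align}
```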
NASA Astrophysics Data System (ADS)
Kotulla, Ralf
2012-10-01
Over its lifespan Hubble has invested significant effort into detailed observations of galaxies both in the local and distant universe. Extracting the physical information from the observed (spectro-)photometry requires detailed and accurate models. Stellar population synthesis models are frequently used to obtain stellar masses, star formation rates, galaxy ages and star formation histories. Chemical evolution models offer another valuable and complementary approach to gain insight into many of the same aspects, yet these two methods have rarely been used in combination. Our proposed next generation of galaxy evolution models will help us improve our understanding of how galaxies form and evolve. Building on the GALEV evolutionary synthesis models, we incorporate state-of-the-art input physics for stellar evolution of binaries and rotating stars as well as new spectral libraries well matched to modern observational capabilities. Our improved chemical evolution model allows us to self-consistently trace abundances of individual elements, fully accounting for the increasing initial abundances of successive stellar generations. GALEV will support variable initial mass functions (IMF), enabling us to test recent observational findings of a non-universal IMF by predicting chemical properties and integrated spectra in an integrated and consistent manner. HST is the perfect instrument for testing this approach. Its wide wavelength coverage from UV to NIR enables precise SED fitting, and with its spatial resolution we can compare the inferred chemical evolution to studies of star clusters and resolved stellar populations in nearby galaxies.
Further experimentation on bubble generation during transformer overload
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oommen, T.V.
1992-03-01
This report covers additional work done during 1990 and 1991 on gas bubble generation under overload conditions. To improve visual bubble detection, a single disc coil was used. To further improve detection, a corona device was also used which signaled the onset of corona activity in the early stages of bubble formation. A total of fourteen model tests were conducted, half of which used the Inertaire system, and the remaining, a conservator (COPS). Moisture content of paper in the coil varied from 1.0% to 8.0%; gas (nitrogen) content varied from 1.0% to 8.8%. The results confirmed earlier observations that the mathematical bubble prediction model was not valid for high gas content models with relatively low moisture levels in the coil. An empirical relationship was formulated to accurately predict bubble evolution temperatures from known moisture and gas content values. For low moisture content models (below 2%), the simple Piper relationship was sufficient to predict bubble evolution temperatures, regardless of gas content. Moisture in the coil appears to be the key factor in bubble generation. Gas blanketed (Inertaire) systems do not appear to be prone to premature bubble generation from overloads as previously thought. The new bubble prediction model reveals that for a coil with 2% moisture, the bubble evolution temperature would be about 140 °C. Since old transformers in service may have as much as 2% moisture in paper, the 140 °C bubble evolution temperature may be taken as the lower limit of bubble evolution temperature under overload conditions for operating transformers. Drier insulation would raise the bubble evolution temperature.
Performance evaluation of an automotive thermoelectric generator
NASA Astrophysics Data System (ADS)
Dubitsky, Andrei O.
Around 40% of the total fuel energy in typical internal combustion engines (ICEs) is rejected to the environment in the form of exhaust gas waste heat. Efficient recovery of this waste heat in automobiles can promise a fuel economy improvement of 5%. The thermal energy can be harvested through thermoelectric generators (TEGs) utilizing the Seebeck effect. In the present work, a versatile test bench has been designed and built in order to simulate conditions found on test vehicles. This allows experimental performance evaluation and model validation of automotive thermoelectric generators. An electrically heated exhaust gas circuit and a circulator-based coolant loop enable integrated system testing of hot- and cold-side heat exchangers, thermoelectric modules (TEMs), and thermal interface materials at various scales. A transient thermal model of the coolant loop was created in order to design a system which can maintain constant coolant temperature under variable heat input. Additionally, as electrical heaters cannot match the transient response of an ICE, modelling was completed in order to design a relaxed exhaust flow and temperature history utilizing the system thermal lag. This profile reduced required heating power and gas flow rates by over 50%. The test bench was used to evaluate a DOE/GM initial prototype automotive TEG and validate analytical performance models. The maximum electrical power generation was found to be 54 W with a thermal conversion efficiency of 1.8%. It has been found that thermal interface management is critical for achieving maximum system performance, with novel designs being considered for further improvement.
Validated numerical simulation model of a dielectric elastomer generator
NASA Astrophysics Data System (ADS)
Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.
2013-04-01
Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, due to different internal and external influences, such as supports or the shape of a DEG, the deformation will be inhomogeneous and hence negatively affect the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. The simulation must include the real material parameters of the PDMS and the graphite electrodes. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples, while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as a coupled electro-mechanical simulation for the constant-voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The compared results show good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.
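For orientation, an idealized estimate of the electrical energy converted per constant-voltage cycle is often written as below, where C_max and C_min are the stretched and relaxed DEG capacitances and V the bias voltage. This is a textbook idealization, not the FEM result of the study.

```latex
% Idealized net electrical energy converted per constant-voltage
% harvesting cycle of an electrostatic (dielectric elastomer) generator.
\begin{equation}
  \Delta E \;=\; \tfrac{1}{2}\,\bigl(C_{\max} - C_{\min}\bigr)\,V^{2}
\end{equation}
```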
Flow quality studies of the NASA Lewis Research Center Icing Research Tunnel diffuser
NASA Technical Reports Server (NTRS)
Arrington, E. Allen; Pickett, Mark T.; Sheldon, David W.
1994-01-01
The purpose was to document the airflow characteristics in the diffuser of the NASA Lewis Research Center Icing Research Tunnel and to determine the effects of vortex generators on the flow quality in the diffuser. The results were used to determine how to improve the flow in this portion of the tunnel so that it can be more effectively used as an icing test section and such that overall tunnel efficiency can be improved. The demand for tunnel test time and the desire to test models that are too large for the test section were two of the drivers behind this diffuser study. For all vortex generator configurations tested, the flow quality was improved.
On the virtues of automated quantitative structure-activity relationship: the new kid on the block.
de Oliveira, Marcelo T; Katekawa, Edson
2018-02-01
Quantitative structure-activity relationship (QSAR) has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence of interest in QSAR. In this context, rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of improved or similar quality to those generated by practitioners in the field, but in just a fraction of the time. Despite the potential of the concept to benefit the community, the AutoQSAR opportunity has been largely undervalued.
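The automated workflow can be sketched as below; random forests and the split count are assumptions standing in for AutoQSAR's actual learners and settings, and all descriptors and activities are synthetic.

```python
# Sketch of the automated-QSAR idea: repeatedly draw random training/test
# splits, fit a machine-learning model, and keep the best-validating one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 50))                                # descriptors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=300)   # activity

best_score, best_model = -np.inf, None
for seed in range(20):                                        # random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    model = RandomForestRegressor(n_estimators=200,
                                  random_state=seed).fit(X_tr, y_tr)
    score = model.score(X_te, y_te)                           # held-out R^2
    if score > best_score:
        best_score, best_model = score, model

print(f"best held-out R^2 over 20 splits: {best_score:.3f}")
```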
Ecological Development through Service-Learning
ERIC Educational Resources Information Center
Baker, Daniel
2006-01-01
This article describes a successful model used in international service-learning projects that integrates economic development and ecological improvement. The principles of the model are discussed, including commitments to maintain partnerships over time, emphasize the transfer of knowledge from one generation of students to the next, start small,…
DOT National Transportation Integrated Search
2014-06-01
In June 2012, the Environmental Protection Agency (EPA) released the Operating Mode Distribution Generator (OMDG), a tool for developing an operating mode distribution as an input to the Motor Vehicle Emissions Simulator (MOVES) model. The t...
Vehicle Modeling for Future Generation Transportation Simulation
DOT National Transportation Integrated Search
2009-05-10
Recent developments in inter-vehicular wireless communication technologies have motivated many innovative applications aiming to significantly increase traffic throughput and improve highway safety. Powerful traffic simulation is an indispensable ...
NASA Astrophysics Data System (ADS)
Rendón, A.; Posada, J. A.; Salazar, J. F.; Mejia, J.; Villegas, J.
2016-12-01
Precipitation in the complex terrain of the tropical Andes of South America can be strongly reduced during El Niño events, with impacts on numerous societally relevant services, including hydropower generation, the main electricity source in Colombia. Simulating rainfall patterns and behavior in such areas of complex terrain has remained a challenge for regional climate models. Current data products such as ERA-Interim and other reanalysis and modelling products generally fail to correctly represent processes at the relevant scales. Here we assess the added value over ERA-Interim of dynamical downscaling using the WRF regional climate model, including a comparison of different cumulus parameterization schemes. We found that WRF improves the representation of precipitation during the dry season of El Niño (DJF) events over a 1996-2014 observation period. Further, we use this improved capability to simulate an extreme deforestation scenario under El Niño conditions for an area in the central Andes of Colombia, where a large proportion of the country's hydropower is generated. Our results suggest that forests dampen the effects of El Niño on precipitation. In synthesis, our results illustrate the utility of regional modelling to improve data sources, as well as its potential for predicting the local-to-regional effects of global-change-type processes in regions with limited data availability.
Woods, J
2001-01-01
The third generation cardiac institute will build on the successes of the past in structuring the service line, re-organizing to assimilate specialist interests, and re-positioning to expand cardiac services into cardiovascular services. To meet the challenges of an increasingly competitive marketplace and complex delivery system, the focus for this new model will shift away from improved structures, and toward improved processes. This shift will require a sound methodology for statistically measuring and sustaining process changes related to the optimization of cardiovascular care. In recent years, GE Medical Systems has successfully applied Six Sigma methodologies to enable cardiac centers to control key clinical and market development processes through its DMADV, DMAIC and Change Acceleration processes. Data indicates Six Sigma is having a positive impact within organizations across the United States, and when appropriately implemented, this approach can serve as a solid foundation for building the next generation cardiac institute.
Solar Dynamic Power System Stability Analysis and Control
NASA Technical Reports Server (NTRS)
Momoh, James A.; Wang, Yanchun
1996-01-01
The objective of this research is to conduct dynamic analysis, control design, and control performance testing of a solar power system. The solar power system consists of a generation system and a distribution network system. A benchmark system is used in this research, which includes a generator with an excitation system and governor, an ac/dc converter, six DDCUs, and forty-eight loads. A detailed model is used for the generator. The excitation system is represented by a third-order model, and each DDCU by a seventh-order system. The load is modeled as a combination of constant power and constant impedance. Eigen-analysis and eigen-sensitivity analysis are used for system dynamic analysis. The effects of the excitation system, governor, ac/dc converter control, and the type of load on system stability are discussed. In order to improve system transient stability, nonlinear ac/dc converter control is introduced. The direct linearization method is used for control design. The dynamic analysis results show that these controls affect system stability in different ways. Coordination of the controller parameters is recommended based on the dynamic analysis. It is concluded from the present studies that system stability is improved by the coordination of control parameters, and that the nonlinear ac/dc converter control efficiently stabilizes system oscillations caused by load changes and system faults.
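The eigen-analysis referred to above reduces to computing the eigenvalues of a linearized state matrix and checking their real parts and damping. A minimal sketch follows; the 2x2 swing-equation matrix and all parameter values are illustrative assumptions, not the benchmark system from the paper.

```python
import numpy as np

# Toy linearization of a generator swing model with states (delta, omega).
# H = inertia constant, D = damping, Ks = synchronizing torque coefficient.
H, D, Ks, omega_s = 3.5, 1.0, 1.2, 2 * np.pi * 60
A = np.array([[0.0, omega_s],
              [-Ks / (2 * H), -D / (2 * H)]])

for lam in np.linalg.eigvals(A):
    # Negative real parts indicate a stable mode; the damping ratio tells how
    # quickly oscillations caused by load changes or faults decay.
    zeta = -lam.real / abs(lam) if abs(lam) > 0 else 1.0
    print(f"mode {lam:.3f}, damping ratio {zeta:.3f}")
```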
An improved empirical model for diversity gain on Earth-space propagation paths
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1981-01-01
An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
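A separable empirical form is one common way to express such a diversity-gain model. The sketch below is a hedged illustration of that structure only: the function shape (saturation with separation distance, modulation by frequency, elevation, and baseline orientation) follows the abstract, but the coefficients a through g are placeholders, not Hodge's fitted values that achieve the quoted 0.73 dB RMS error.

```python
import numpy as np

# Illustrative separable diversity-gain model: gain saturates with site
# separation d (km) and is modulated by frequency f (GHz), elevation angle
# el (deg), and baseline-to-path azimuth angle psi (deg).
def diversity_gain(d_km, f_ghz, el_deg, psi_deg,
                   a=3.0, b=0.2, c=0.02, e=0.005, g=0.002):
    g_d = a * (1.0 - np.exp(-b * d_km))   # saturating gain with separation
    g_f = np.exp(-c * f_ghz)              # mild decrease with frequency
    g_el = 1.0 + e * el_deg               # mild increase with elevation
    g_psi = 1.0 + g * psi_deg             # weak baseline-orientation effect
    return g_d * g_f * g_el * g_psi       # diversity gain in dB

print(diversity_gain(d_km=10.0, f_ghz=20.0, el_deg=30.0, psi_deg=45.0))
```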
Kurtzman, Gary
2005-10-01
Venture capital has tended to shy away from diagnostics companies, whose products are not predicated on the blockbuster model of pharmaceuticals. But several new diagnostics companies are developing products that hold immense potential to improve healthcare delivery. Here's why venture investors should take another look at the diagnostics area.
Modelling the effect of structural QSAR parameters on skin penetration using genetic programming
NASA Astrophysics Data System (ADS)
Chung, K. K.; Do, D. Q.
2010-09-01
In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared with the traditional statistical approach. GP, whose primary advantage is generating explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors. The models predicted by GP agreed with the statistical results, and the most predictive GP models were significantly improved over the statistical models when compared using ANOVA. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With the capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
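A compact way to try this style of GP-based symbolic regression is via the gplearn library (an assumption here; the authors' own implementation is not described in code). The data below are synthetic stand-ins for skin-penetration measurements, and the descriptor names in the comment are hypothetical.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumes gplearn is installed

# Synthetic QSAR-like data: 3 molecular descriptors (e.g. logP, MW,
# H-bond count) mapped to an activity value with noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=100)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=('add', 'sub', 'mul', 'div'),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
# Printing the evolved program shows GP's main advantage over black-box
# models: an explicit mathematical equation.
print(gp._program)
```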
Improved Decadal Climate Prediction in the North Atlantic using EnOI-Assimilated Initial Condition
NASA Astrophysics Data System (ADS)
Li, Q.; Xin, X.; Wei, M.; Zhou, W.
2017-12-01
Decadal prediction experiments with version 1.1 of the Beijing Climate Center climate system model (BCC-CSM1.1), which participated in the Coupled Model Intercomparison Project Phase 5 (CMIP5), had poor skill in the extratropics of the North Atlantic; their initialization was done by relaxing modeled ocean temperature to the Simple Ocean Data Assimilation (SODA) reanalysis data. This study aims to improve the prediction skill of this model by using an assimilation technique in the initialization. New ocean data are first generated by assimilating the sea surface temperature (SST) of the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) dataset into the ocean model of BCC-CSM1.1 via Ensemble Optimum Interpolation (EnOI). Then a suite of decadal re-forecasts launched annually over the period 1961-2005 is carried out with simulated ocean temperature restored to the assimilated ocean data. Comparisons between the re-forecasts and previous CMIP5 forecasts show that the re-forecasts are more skillful for mid-to-high-latitude SST of the North Atlantic. Improved prediction skill is also found for the Atlantic Multidecadal Oscillation (AMO), which is consistent with the better skill for the Atlantic meridional overturning circulation (AMOC) predicted by the re-forecasts. We conclude that the EnOI assimilation generates better ocean data than the SODA reanalysis for initializing decadal climate predictions with the BCC-CSM1.1 model.
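The EnOI analysis step has a standard form: x_a = x_b + alpha * B H^T (H B H^T + R)^(-1) (y - H x_b), with B a static background covariance estimated from a historical ensemble. The sketch below illustrates that single update on toy dimensions; all sizes and values are assumptions, not the BCC-CSM1.1 configuration.

```python
import numpy as np

n, m, k = 50, 10, 30                  # state size, obs size, ensemble size
rng = np.random.default_rng(1)
ens = rng.normal(size=(n, k))         # historical model states
A = ens - ens.mean(axis=1, keepdims=True)
B = A @ A.T / (k - 1)                 # static background covariance

H = np.zeros((m, n))
H[np.arange(m), np.arange(m)] = 1.0   # observe the first m grid cells
R = 0.25 * np.eye(m)                  # observation-error covariance
x_b = rng.normal(size=n)              # background (model) state
y = H @ x_b + rng.normal(scale=0.5, size=m)  # synthetic SST observations

alpha = 0.7                           # covariance scaling factor
K = alpha * B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
x_a = x_b + K @ (y - H @ x_b)         # analysis used to start re-forecasts
```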
Multi-model analysis in hydrological prediction
NASA Astrophysics Data System (ADS)
Lanthier, M.; Arsenault, R.; Brissette, F.
2017-12-01
Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model; but all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model member or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes simply from adding a member or whether the multi-model member has added value in itself.
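The PIT-histogram diagnostic mentioned above is straightforward to compute: for each forecast date, the PIT value is the empirical cumulative fraction of ensemble members falling below the observation, and a flat histogram indicates a well-dispersed ensemble while a U-shape indicates under-dispersion. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_dates, n_members = 500, 20
ensemble = rng.normal(size=(n_dates, n_members))   # ensemble volumes
obs = rng.normal(scale=1.5, size=n_dates)          # deliberately wider: under-dispersed case

pit = (ensemble < obs[:, None]).mean(axis=1)       # empirical CDF at each observation
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(hist / hist.sum())   # U-shaped frequencies reveal under-dispersion
```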
Alfred, Michael; Chung, Christopher A
2012-12-01
This paper describes a second-generation Simulator for Engineering Ethics Education. Details of the first-generation activities of this overall effort are published in Chung and Alfred (Sci Eng Ethics 15:189-199, 2009). The second-generation research effort represents a major development in the interactive simulator educational approach. As with the first-generation effort, the simulator places students in first-person-perspective scenarios involving different types of ethical situations. Students must still gather data, assess the situation, and make decisions, and the approach still requires students to develop their own ability to identify and respond to ethical engineering situations. However, whereas the generation-one effort used a dogmatic model based on the National Society of Professional Engineers' Code of Ethics, the new generation-two model is based on a mathematical model of the actual experiences of engineers involved in ethical situations. This approach also allows the use of feedback in the form of decision effectiveness and professional career impact. Statistical comparisons indicate a 59 percent increase in overall knowledge and a 19 percent improvement in teaching effectiveness over an Internet-based engineering ethics resource approach.
NASA Astrophysics Data System (ADS)
Iváncsy, T.; Kiss, I.; Szücs, L.; Tamus, Z. Á.
2015-10-01
The lightning current generates a time-varying magnetic field near the down-conductor, and down-conductors are mounted on the walls of buildings where residential spaces might be situated. It is well known that rapidly changing magnetic fields can generate dangerous eddy currents in the human body. A magnetic field of higher duration and gradient can cause potentially life-threatening cardiac stimulation. The coupling mechanism between the electromagnetic field and the human body is based on well-known physical phenomena (e.g. Faraday's law of induction). However, the calculation of the induced current is very complicated because the shapes of the organs are complex and the determination of the material properties of living tissues is difficult as well. Our previous study revealed that cardiac stimulation is independent of the rise time of the lightning current and that only the peak of the current counts. In this study, the authors introduce an improved model of the interaction between the electromagnetic field of the lightning current near the down-conductor and the human body. Our previous models were based on quasi-stationary field calculations; the new improved model is a transient model. With it, the magnetic field around the down-conductor and in the human body can be determined more precisely, and therefore the dangerous currents in the body can be estimated more accurately.
"ELIP-MARC" Activities via TPS of Cooperative Learning to Improve Student's Mathematical Reasoning
ERIC Educational Resources Information Center
Ulya, Wisulah Titah; Purwanto; Parta, I. Nengah; Mulyati, Sri
2017-01-01
The purpose of this study is to describe and generate an interaction model of learning through the "Elip-Marc" activity via "TPS" cooperative learning in order to improve students' mathematical reasoning, meeting valid, practical and effective criteria. "Elip-Marc" is an acronym of eliciting, inserting, pressing,…
New Research Strengthens Home Visiting Field: The Pew Home Visiting Campaign
ERIC Educational Resources Information Center
Doggett, Libby
2013-01-01
Extensive research has shown that home visiting parental education programs improve child and family outcomes, and they save money for states and taxpayers. Now, the next generation of research is deepening understanding of those program elements that are essential to success, ways to improve existing models, and factors to consider in tailoring…
2011-12-30
improvements also significantly increase anomaly strength while sharpening the anomaly edges to create stronger and more pronounced tectonic structures. The...continental deformation and crustal thickening is occurring, the wave speeds are substantially slower. This Asian north-to-south, fast-to-slow wave speed
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous current-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
NASA Astrophysics Data System (ADS)
Rolfe, John; Windle, Jill
2011-12-01
Policymakers wanting to increase protection of the Great Barrier Reef from pollutants generated by agriculture need to identify when measures to improve water quality generate benefits to society that outweigh the costs involved. The research reported in this paper makes a contribution in several ways. First, it uses the improved science understanding about the links between management changes and reef health to bring together the analysis of costs and benefits of marginal changes, helping to demonstrate the appropriate way of addressing policy questions relating to reef protection. Second, it uses the scientific relationships to frame a choice experiment to value the benefits of improved reef health, with the results of mixed logit (random parameter) models linking improvements explicitly to changes in "water quality units." Third, the research demonstrates how protection values are consistent across a broader population, with some limited evidence of distance effects. Fourth, the information on marginal costs and benefits that are reported provide policymakers with information to help improve management decisions. The results indicate that while there is potential for water quality improvements to generate net benefits, high cost water quality improvements are generally uneconomic. A major policy implication is that cost thresholds for key pollutants should be set to avoid more expensive water quality proposals being selected.
A retrospective evaluation of traffic forecasting techniques.
DOT National Transportation Integrated Search
2016-08-01
Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...
NREL Model Car Competitions | NREL
skills in both math and science. The goals of the competition include: generating enthusiasm for science, technology, engineering, and math (STEM); improving students' understanding of scientific concepts and
tools, data, and models that are: improving air quality; helping communities become more resilient; reducing emissions of carbon and other pollutants; ushering in new generations of safer, more sustainable chemicals; advancing safe drinking water resources
Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors
Pagliari, Diana; Pinto, Livio
2015-01-01
In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries. PMID:26528979
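A depth-error calibration of the kind described can be expressed as a polynomial fit of error against sensor-to-object distance, which is then subtracted from new measurements. The sketch below uses synthetic data; the paper's fitted coefficients and error model differ.

```python
import numpy as np

# Synthetic calibration data: depth error grows with distance.
rng = np.random.default_rng(3)
distance = np.linspace(0.8, 4.0, 40)                 # metres
true_err = 0.002 + 0.004 * distance ** 2
measured_err = true_err + rng.normal(scale=5e-4, size=40)

coeffs = np.polyfit(distance, measured_err, deg=2)   # error model e(d)

def correct(d, z):
    # Calibrated depth: subtract the modeled error at distance d.
    return z - np.polyval(coeffs, d)

print(coeffs)
print(correct(2.0, 2.01))
```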
Upgrades of Two Computer Codes for Analysis of Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2005-01-01
Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are Swift, a code for three-dimensional (3D) multiblock analysis, and TCGRID, which generates the 3D grids used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by the addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include the addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, the addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and a modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and the addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation, and both were modified for ease of use in both UNIX and Windows operating systems.
Hinton, Devon E; Hofmann, Stefan G; Pollack, Mark H; Otto, Michael W
2009-01-01
Based on the results of a randomized controlled trial, we examined a model of the mechanisms of efficacy of culturally adapted cognitive-behavior therapy (CBT) for Cambodian refugees with pharmacology-resistant posttraumatic stress disorder (PTSD) and comorbid orthostatic panic attacks (PAs). Twelve patients were in the initial treatment condition, 12 in the delayed treatment condition. The patients randomized to CBT had much greater improvement than patients in the waitlist condition on all psychometric measures and on one physiological measure, the systolic blood pressure response to orthostasis (d = 1.31), as evaluated by repeated-measures MANOVA and planned contrasts. After receiving CBT, the delayed treatment group improved on all measures, including the systolic blood pressure response to orthostasis. The CBT treatment's reduction of PTSD severity was significantly mediated by improvement in orthostatic panic and emotion regulation ability. The current study supports our model of the generation of PTSD in the Cambodian population, and suggests a key role of decreased vagal tone in the generation of orthostatic panic and PTSD in this population. It also suggests that vagal tone is involved in emotion regulation, and that both vagal tone and emotion regulation improve across treatment.
J. X. Zhang; J. Q. Wu; K. Chang; W. J. Elliot; S. Dun
2009-01-01
The recent modification of the Water Erosion Prediction Project (WEPP) model has improved its applicability to hydrology and erosion modeling in forest watersheds. To generate reliable topographic and hydrologic inputs for the WEPP model, carefully selecting digital elevation models (DEMs) with appropriate resolution and accuracy is essential because topography is a...
Shirai, Hiroki; Ikeda, Kazuyoshi; Yamashita, Kazuo; Tsuchiya, Yuko; Sarmiento, Jamica; Liang, Shide; Morokata, Tatsuaki; Mizuguchi, Kenji; Higo, Junichi; Standley, Daron M; Nakamura, Haruki
2014-08-01
In the second antibody modeling assessment, we used a semiautomated template-based structure modeling approach for 11 blinded antibody variable region (Fv) targets. The structural modeling method involved several steps, including template selection for framework and canonical structures of complementarity-determining regions (CDRs), homology modeling, energy minimization, and expert inspection. The submitted models for Fv modeling in Stage 1 had the lowest average backbone root mean square deviation (RMSD) (1.06 Å). Comparison to crystal structures showed the most accurate Fv models were generated for 4 out of 11 targets. We found that the successful modeling in Stage 1 was mainly due to expert-guided template selection for CDRs, especially for CDR-H3, based on our previously proposed empirical method (H3-rules) and the use of position-specific scoring matrix-based scoring. Loop refinement using fragment assembly and multicanonical molecular dynamics (McMD) was applied to CDR-H3 loop modeling in Stage 2. Fragment assembly and McMD produced putative structural ensembles with low free energy values that were scored based on the OSCAR all-atom force field and conformation density in principal component analysis space, respectively, as well as the degree of consensus between the two sampling methods. The quality of 8 out of 10 targets improved as compared with Stage 1. For 4 out of 10 Stage-2 targets, our method generated top-scoring models with RMSD values of less than 1 Å. In this article, we discuss the strengths and weaknesses of our approach as well as possible directions for improvement to generate better predictions in the future. © 2014 Wiley Periodicals, Inc.
Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas
2015-01-01
Objectives: To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. Methods: 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. CTDIvol identical to the initial 4-slice MDCT was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and nucleus caudatus, with signal-to-noise ratios (SNR) subsequently calculated. Results: Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p < 0.0024, the Bonferroni-corrected 0.05 level). Subjective image quality improved with increasing IR levels. Conclusion: Combination of 3rd-generation DSCT spiral cCT with an advanced model-based IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels. PMID:26288186
Sustaining Fidelity Following the Nationwide PMTO™ Implementation in Norway
Forgatch, Marion S.; DeGarmo, David S.
2011-01-01
This report describes three studies from the nationwide Norwegian implementation of Parent Management Training – Oregon Model (PMTO™), an empirically supported treatment for families of children with behavior problems (Forgatch and Patterson 2010). Separate stages of the implementation were evaluated using a fidelity measure based on direct observation of intervention sessions. Study 1 assessed growth in fidelity observed early, mid, and late in the training of a group of practitioners. We hypothesized increased fidelity and decreased variability in practice. Study 2 evaluated method fidelity over the course of three generations of practitioners trained in PMTO. Generation 1 (G1) was trained by the PMTO developer/purveyors; Generation 2 (G2) was trained by selected G1 Norwegian trainers; and Generation 3 (G3) was trained by G1 and G2 trainers. We hypothesized decrease in fidelity with each generation. Study 3 tested the predictive validity of fidelity in a cross-cultural replication, hypothesizing that higher fidelity scores would correlate with improved parenting practices observed in parent-child interactions before and after treatment. In Study 1, trainees' performance improved and became more homogeneous as predicted. In Study 2, a small decline in fidelity followed the transfer from the purveyor trainers to Norwegian trainers in G2, but G3 scores were equivalent to those attained by G1. Thus, the hypothesis was not fully supported. Finally, the FIMP validity model replicated; PMTO fidelity significantly contributed to improvements in parenting practices from pre- to post-treatment. The data indicate that PMTO was transferred successfully to Norwegian implementation with sustained fidelity and cross-cultural generalization. PMID:21671090
The United States Environmental Protection Agency's (EPA) National Exposure Research Laboratory is developing improved methods for modeling the source through the air pathway to human exposure in significant microenvironments of exposure. As a part of this project, we develope...
Crop parameters for modeling sugarcane under rainfed conditions in Mexico
USDA-ARS?s Scientific Manuscript database
Crop models with well-tested parameters can improve sugarcane productivity for food and biofuel generation. This study aimed to (i) calibrate the light extinction coefficient (k) and other crop parameters for the sugarcane cultivar CP 72-2086, an early-maturing cultivar grown in Mexico and many oth...
Socialization of Young Children: Successful Principles and Models.
ERIC Educational Resources Information Center
Schieser, Hans A.
This discussion focuses on several principles and models of early childhood education which have been used to improve children's ability to live and work with others. Modern problems of socialization in early childhood are discussed in terms of the generation gap in industrial societies, the development of the "private" family with…
Baryon production from cluster hadronisation
NASA Astrophysics Data System (ADS)
Gieseke, Stefan; Kirchgaeßer, Patrick; Plätzer, Simon
2018-02-01
We present an extension to the colour reconnection model in the Monte Carlo event generator Herwig to account for the production of baryons and compare it to a series of observables for soft physics. The new model is able to improve the description of charged-particle multiplicities and hadron flavour observables in pp collisions.
Can dynamically downscaled climate model outputs improve projections of extreme precipitation events?
Many of the storms that generate damaging floods are caused by locally intense, sub-daily precipitation, yet the spatial and temporal resolution of the most widely available climate model outputs are both too coarse to simulate these events. Thus there is often a disconnect betwe...
Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling
ERIC Educational Resources Information Center
Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.
2018-01-01
The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…
Vertical Integration of Geographic Information Sciences: A Recruitment Model for GIS Education
ERIC Educational Resources Information Center
Yu, Jaehyung; Huynh, Niem Tu; McGehee, Thomas Lee
2011-01-01
An innovative vertical integration model for recruiting to GIS education was introduced and tested following four driving forces: curriculum development, GIS presentations, institutional collaboration, and faculty training. Curriculum development was a useful approach to recruitment, student credit hour generation, and retention-rate improvement.…
A step-by-step development of real-size chest model for simulation of thoracoscopic surgery.
Morikawa, Toshiaki; Yamashita, Makoto; Odaka, Makoto; Tsukamoto, Yo; Shibasaki, Takamasa; Mori, Shohei; Asano, Hisatoshi; Akiba, Tadashi
2017-08-01
For the purpose of simulating thoracoscopic surgery, we have conducted the stepwise development of a life-like chest model including the thorax and intrathoracic organs. First, CT data of the human chest were obtained. First-generation model: based on the CT data, each component of the chest was made with a 3D printer. A hard resin was used for the bony thorax and a rubber-like resin for the vessels and bronchi. Lung parenchyma, muscles and skin were not created. Second-generation model: in addition to the 3D printer, a cast moulding method was used. Each part was cast using a 3D-printed master and then assembled. The vasculature and bronchi were cast using silicone resin. The lung parenchyma and mediastinal organs were cast using urethane foam. The chest wall and bony thorax were also cast using a silicone resin. Third-generation model: foamed polyvinyl alcohol (PVA) was newly developed and cast onto the lung parenchyma. The vasculature and bronchi were developed using a soft resin. A PVA plate was made as the mediastinum, and all were combined. The first-generation model showed the real distribution of the vasculature and bronchi; it enabled an understanding of the anatomy within the lung. The second-generation model is a total-chest dry model, which enabled observation of the total anatomy of the organs and thorax. The third-generation model is a wet organ model. It allowed for realistic simulation of surgical procedures, such as cutting, suturing, stapling and energy device use. This single-use model achieved realistic simulation of thoracoscopic surgery. As the generation advances, the model provides a more realistic simulation of thoracoscopic surgery. Further improvement of the model is needed. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
Atmospheric turbulence simulation for Shuttle orbiter
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1979-01-01
An improved non-recursive model for atmospheric turbulence along the flight path of the Shuttle Orbiter is developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients are generated and stored on a series of magnetic tapes. Section 2 provides a description of the various technical considerations associated with the turbulence simulation model, including the digital filter simulation model, the von Karman spectra with finite upper limits, and the final non-recursive turbulence simulation model which was used to generate the time series. Section 3 provides a description of the time series as currently recorded on magnetic tape. Conclusions and recommendations are presented in Section 4.
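The core of a non-recursive gust simulator is shaping white noise so its power spectrum matches a target form. The sketch below filters white noise in the frequency domain with a von Karman-like longitudinal gust spectrum; the parameter values (length scale L, intensity sigma, airspeed V) and the exact spectral normalization are illustrative assumptions, not the Shuttle-specific values from the report.

```python
import numpy as np

n, dt = 4096, 0.05
sigma, L, V = 1.0, 300.0, 100.0           # gust intensity, length scale, airspeed
f = np.fft.rfftfreq(n, dt)
omega = 2 * np.pi * f / V                 # spatial frequency seen by the vehicle
# von Karman-like longitudinal spectrum (shape only; normalization approximate)
S = sigma ** 2 * (2 * L) / (1 + (1.339 * L * omega) ** 2) ** (5 / 6)

white = np.fft.rfft(np.random.default_rng(4).normal(size=n))
gust = np.fft.irfft(white * np.sqrt(S / dt), n=n)   # gust time series
print(gust.std())
```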
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested based on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
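The weight-search idea can be sketched as simulated annealing over convex combination weights that minimize an error measure such as MAPE. Everything below is synthetic and illustrative: the three forecast series merely stand in for the chaotic, ANN, and PLS-SVM models.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.uniform(80, 120, size=200)   # observed daily MSW tonnage (synthetic)
# Three imperfect component forecasts standing in for the three models.
preds = np.stack([y + rng.normal(0, s, 200) for s in (8, 10, 10)])

def mape(w):
    combo = w @ preds
    return np.mean(np.abs((combo - y) / y))

w = np.ones(3) / 3                   # start from equal weights
energy, T = mape(w), 1.0
while T > 1e-3:
    cand = np.abs(w + rng.normal(scale=0.05, size=3))
    cand /= cand.sum()               # keep weights non-negative, summing to 1
    e = mape(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if e < energy or rng.random() < np.exp((energy - e) / T):
        w, energy = cand, e
    T *= 0.995                       # cooling schedule
print(w, energy)
```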
Reconstruction of dynamical systems from resampled point processes produced by neuron models
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Pavlov, Alexey N.
2018-04-01
Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short amounts of data and show that this effect is observed for different spike-generation mechanisms.
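The reconstruction step itself is ordinary time-delay embedding applied to the ISI sequence. A minimal sketch, with toy ISIs in place of a neuron model's output:

```python
import numpy as np

def embed(isi, dim=3, lag=1):
    # Time-delay embedding: each row is one point of the reconstructed attractor.
    n = len(isi) - (dim - 1) * lag
    return np.column_stack([isi[i * lag:i * lag + n] for i in range(dim)])

# Toy interspike intervals standing in for a chaotically driven neuron model.
isi = np.abs(np.random.default_rng(6).normal(1.0, 0.2, size=500))
cloud = embed(isi, dim=3, lag=2)
print(cloud.shape)   # (496, 3): points in the reconstructed phase space
```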
Improved brain tumor segmentation by utilizing tumor growth model in longitudinal brain MRI
NASA Astrophysics Data System (ADS)
Pei, Linmin; Reza, Syed M. S.; Li, Wei; Davatzikos, Christos; Iftekharuddin, Khan M.
2017-03-01
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. To model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling yields the cell density distribution, which potentially indicates the predicted tissue locations in the brain over time. The density patterns are then considered as novel features, along with texture (such as fractal and multifractal Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 data set, validated by the ground truth. The results show significant improvement of complete tumor segmentation, using ANOVA analysis, for the five patients in longitudinal MR images.
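The underlying growth equation is du/dt = D * laplacian(u) + rho * u * (1 - u). The sketch below solves it with simple explicit finite differences rather than the Lattice-Boltzmann scheme the paper uses; D, rho, and the grid are illustrative assumptions.

```python
import numpy as np

n, steps, dx, dt = 64, 200, 1.0, 0.1
D, rho = 0.5, 0.2                       # diffusion and proliferation rates
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                 # seed cell density at the tumor site

for _ in range(steps):
    # 5-point Laplacian with periodic wrap (adequate for a sketch).
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
    u = u + dt * (D * lap + rho * u * (1 - u))   # logistic proliferation term

# u now holds a cell-density map usable as an extra feature for segmentation.
print(u.max(), u.sum())
```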
DOT2: Macromolecular Docking With Improved Biophysical Models
Roberts, Victoria A.; Thompson, Elaine E.; Pique, Michael E.; Perez, Martin S.; Eyck, Lynn Ten
2015-01-01
Computational docking is a useful tool for predicting macromolecular complexes, which are often difficult to determine experimentally. Here we present the DOT2 software suite, an updated version of the DOT intermolecular docking program. DOT2 provides straightforward, automated construction of improved biophysical models based on molecular coordinates, offering checkpoints that guide the user to include critical features. DOT has been updated to run more quickly, allow flexibility in grid size and spacing, and generate a complete list of favorable candidate configurations. Output can be filtered by experimental data and rescored by the sum of electrostatic and atomic desolvation energies. We show that this rescoring method improves the ranking of correct complexes for a wide range of macromolecular interactions, and demonstrate that biologically relevant models are essential for biologically relevant results. The flexibility and versatility of DOT2 accommodate realistic models of complex biological systems, improving the likelihood of a successful docking outcome. PMID:23695987
The evaluation and development of the Met Office Unified Model using surface and space borne radar.
NASA Astrophysics Data System (ADS)
Petch, J.
2012-12-01
The Met Office Unified Model is used for the prediction of weather and climate on time scales of hours through to centuries. Therefore, the parametrizations in that model need to work on both weather and climate timescales, and with grid lengths from hundreds of metres through to several hundred kilometres. Focusing on the development of the cloud and radiation schemes, I will discuss how we are using ground-based remote-sensing observations from Chilbolton (England) and a combination of CloudSat and CALIPSO data to evaluate and improve the performance of the model. I will show how the prediction of clouds has improved since the AR5 version of the model and how we have developed an improved cloud generator to represent the sub-grid variability of clouds for radiative transfer.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents based on coarse-scale observations. Application of this approach is likely in generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
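The nudging construction can be illustrated in one dimension: advance a fine-grid model while adding the term mu * (I_h(obs) - I_h(u)), where I_h interpolates from coarse samples. The toy "model" below is simple diffusion standing in for the vadose-zone physics; all values are assumptions.

```python
import numpy as np

n, dt, mu = 200, 0.01, 5.0
x = np.linspace(0, 1, n)
truth = np.sin(2 * np.pi * x)                 # unknown fine-scale soil moisture
coarse_idx = np.arange(0, n, 20)              # coarse-footprint sample points

def interp_coarse(field):
    # I_h: linear interpolant built only from the coarse samples.
    return np.interp(x, x[coarse_idx], field[coarse_idx])

state = np.zeros(n)
for _ in range(2000):
    lap = np.gradient(np.gradient(state, x), x)          # toy model dynamics
    nudge = mu * (interp_coarse(truth) - interp_coarse(state))
    state = state + dt * (0.01 * lap + nudge)            # CDA forcing term

# The coarse-scale mismatch shrinks; residual error is at the
# coarse-interpolation level, which the fine model fills in.
print(np.abs(state - truth).max())
```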
Lebersorger, S; Beigl, P
2011-01-01
Waste management planning requires reliable data concerning waste generation, influencing factors on waste generation and forecasts of waste quantities based on facts. This paper aims at identifying and quantifying differences between different municipalities' municipal solid waste (MSW) collection quantities based on data from waste management and on socio-economic indicators. A large set of 116 indicators from 542 municipalities in the Province of Styria was investigated. The resulting regression model included municipal tax revenue per capita, household size and the percentage of buildings with solid fuel heating systems. The model explains 74.3% of the MSW variation and the model assumptions are met. Other factors such as tourism, home composting or age distribution of the population did not significantly improve the model. According to the model, 21% of MSW collected in Styria was commercial waste and 18% of the generated MSW was burned in domestic heating systems. While the percentage of commercial waste is consistent with literature data, practically no literature data are available for the quantity of MSW burned, which seems to be overestimated by the model. The resulting regression model was used as basis for a waste prognosis model (Beigl and Lebersorger, in preparation). Copyright © 2011 Elsevier Ltd. All rights reserved.
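A regression of this shape is easy to reproduce in outline: MSW per capita explained by tax revenue per capita, household size, and the solid-fuel-heating share. The data and coefficients below are synthetic placeholders; the paper's 74.3% R-squared comes from the real Styrian dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 542                                           # one row per municipality
tax = rng.uniform(200, 900, n)                    # tax revenue per capita
hh = rng.uniform(1.8, 3.2, n)                     # mean household size
solid = rng.uniform(0.0, 0.6, n)                  # share of solid-fuel heating
msw = 120 + 0.15 * tax - 25 * hh - 80 * solid + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), tax, hh, solid])
beta, *_ = np.linalg.lstsq(X, msw, rcond=None)    # OLS fit
resid = msw - X @ beta
r2 = 1 - resid.var() / msw.var()
print(beta, r2)
```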
Rational Design of Mouse Models for Cancer Research.
Landgraf, Marietta; McGovern, Jacqui A; Friedl, Peter; Hutmacher, Dietmar W
2018-03-01
The laboratory mouse is widely considered as a valid and affordable model organism to study human disease. Attempts to improve the relevance of murine models for the investigation of human pathologies led to the development of various genetically engineered, xenograft and humanized mouse models. Nevertheless, most preclinical studies in mice suffer from insufficient predictive value when compared with cancer biology and therapy response of human patients. We propose an innovative strategy to improve the predictive power of preclinical cancer models. Combining (i) genomic, tissue engineering and regenerative medicine approaches for rational design of mouse models with (ii) rapid prototyping and computational benchmarking against human clinical data will enable fast and nonbiased validation of newly generated models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
A generative, probabilistic model of local protein structure.
Boomsma, Wouter; Mardia, Kanti V; Taylor, Charles C; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas
2008-07-01
Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.
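The central idea, continuous sampling of backbone dihedral angles from directional distributions, can be hinted at with a von Mises draw. This is a deliberately stripped-down sketch: the actual model couples angles sequentially through a hidden state, whereas here each residue is sampled independently around helix-like means (assumed values).

```python
import numpy as np

rng = np.random.default_rng(8)
# Helix-like mean dihedrals and a concentration parameter (illustrative).
mu_phi, mu_psi, kappa = np.radians(-63.0), np.radians(-43.0), 8.0

phi = rng.vonmises(mu_phi, kappa, size=20)   # phi angles of a 20-residue fragment
psi = rng.vonmises(mu_psi, kappa, size=20)   # psi angles
print(np.degrees(phi[:3]), np.degrees(psi[:3]))
```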
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. This developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated to provide 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that comparing to the single-algorithm models, the developed multi-model framework with deep feature selection procedure has improved the forecasting accuracy by up to 30%.
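The two-layer structure is essentially stacked generalization: first-layer learners each produce a forecast, and a second-layer blender combines them. A hedged sketch with scikit-learn follows; features and targets are synthetic, the learner choices are illustrative, and the paper's deep feature selection step (choosing inputs per first-layer model) is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Synthetic stand-in for lagged wind speeds and meteorological variables.
rng = np.random.default_rng(9)
X = rng.normal(size=(500, 6))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)

model = StackingRegressor(
    estimators=[('rf', RandomForestRegressor(n_estimators=100, random_state=0)),
                ('svr', SVR())],          # first layer: diverse learners
    final_estimator=Ridge())              # second layer: the blender
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))      # deterministic skill on held-out data
```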
Virtual Design of a Controller for a Hydraulic Cam Phasing System
NASA Astrophysics Data System (ADS)
Schneider, Markus; Ulbrich, Heinz
2010-09-01
Hydraulic vane cam phasing systems are nowadays widely used for improving the performance of combustion engines. In stationary operation, these systems should hold a constant phasing angle, which is, however, strongly disturbed by the alternating torque generated by the valve actuation. As the hydraulic system shows a non-linear characteristic over the full operating range and the inductivity of the hydraulic pipes generates a significant time delay, a fully model-based control design becomes very complex. Therefore a simple feed-forward controller is designed, bridging the time delay of the hydraulic system and improving the system behaviour significantly.
Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.
2016-01-01
Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.
Langhammer, Martina; Michaelis, Marten; Hoeflich, Andreas; Sobczak, Alexander; Schoen, Jennifer; Weitzel, Joachim M
2014-01-01
Animal models are valuable tools in fertility research. Worldwide, there are more than 400 transgenic or knockout mouse models available showing a reproductive phenotype; almost all of them exhibit an infertile or at least subfertile phenotype. By contrast, animal models revealing an improved fertility phenotype are barely described. This article summarizes data on two outbred mouse models exhibiting a 'high-fertility' phenotype. These mouse lines were generated via selection over a time period of more than 40 years and 161 generations. During this selection period, the number of offspring per litter and the total birth weight of the entire litter nearly doubled. Concomitantly with the increased fertility phenotype, several endocrine parameters (e.g. serum testosterone concentrations in male animals), physiological parameters (e.g. body weight, accelerated puberty, and life expectancy), and behavioral parameters (e.g. behavior in an open field and endurance fitness on a treadmill) were altered. We demonstrate that the two independently bred high-fertility mouse lines achieved their improved fertility phenotype using different molecular and physiological strategies. The fertility lines display female- as well as male-specific characteristics. These genetically heterogeneous mouse models provide new insights into molecular and cellular mechanisms that enhance fertility. In view of decreasing fertility in men, these models will therefore be a precious information source for human reproductive medicine. A German translation of the abstract is freely available at http://www.reproduction-online.org/content/147/4/427/suppl/DC1.
Williams, Alwyn; Jordan, Nicholas R; Smith, Richard G; Hunter, Mitchell C; Kammerer, Melanie; Kane, Daniel A; Koide, Roger T; Davis, Adam S
2018-05-31
Climate models predict increasing weather variability, with negative consequences for crop production. Conservation agriculture (CA) may enhance climate resilience by generating certain soil improvements. However, the rate at which these improvements accrue is unclear, and some evidence suggests CA can lower yields relative to conventional systems unless all three CA elements are implemented: reduced tillage, sustained soil cover, and crop rotational diversity. These cost-benefit issues are important considerations for potential adopters of CA. Given that CA can be implemented across a wide variety of regions and cropping systems, more detailed and mechanistic understanding is required of whether and how regionally-adapted CA can improve soil properties while minimizing potential negative crop yield impacts. Across four US states, we assessed short-term impacts of regionally-adapted CA systems on soil properties and explored linkages with maize and soybean yield stability. Structural equation modeling revealed that increases in soil organic matter generated by cover cropping increased soil cation exchange capacity, which improved soybean yield stability. Cover cropping also enhanced maize minimum yield potential. Our results demonstrate that individual CA elements can deliver rapid improvements in soil properties associated with crop yield stability, suggesting that regionally-adapted CA may play an important role in developing high-yielding, climate-resilient agricultural systems.
weather@home 2: validation of an improved global-regional climate modelling system
NASA Astrophysics Data System (ADS)
Guillod, Benoit P.; Jones, Richard G.; Bowery, Andy; Haustein, Karsten; Massey, Neil R.; Mitchell, Daniel M.; Otto, Friederike E. L.; Sparrow, Sarah N.; Uhe, Peter; Wallom, David C. H.; Wilson, Simon; Allen, Myles R.
2017-05-01
Extreme weather events can have large impacts on society and, in many regions, are expected to change in frequency and intensity with climate change. Owing to the relatively short observational record, climate models are useful tools as they allow for the generation of a larger sample of extreme events, for the attribution of recent events to anthropogenic climate change, and for the projection of changes in such events into the future. The modelling system known as weather@home, consisting of a global climate model (GCM) with a nested regional climate model (RCM) and driven by sea surface temperatures, allows one to generate a very large ensemble with the help of volunteer distributed computing. This is a key tool for understanding many aspects of extreme events. Here, a new version of the weather@home system (weather@home 2) with a higher-resolution RCM over Europe is documented and a broad validation of the climate is performed. The new model includes a more recent land-surface scheme in both GCM and RCM, where subgrid-scale land-surface heterogeneity is newly represented using tiles, and an increase in RCM resolution from 50 to 25 km. The GCM performs similarly to the previous version, with some improvements in the representation of mean climate. The European RCM temperature biases are overall reduced, in particular the warm bias over eastern Europe, but large biases remain. Precipitation is improved over the Alps in summer, with mixed changes in other regions and seasons. The model is shown to represent the main classes of regional extreme events reasonably well and shows a good sensitivity to its drivers. In particular, given the improvements in this version of the weather@home system, it is likely that more reliable statements can be made regarding impacts, especially at more localized scales.
Lestina, Jordan; Cook, Maxwell; Kumar, Sunil; Morisette, Jeffrey T.; Ode, Paul J.; Peirs, Frank
2016-01-01
Wheat stem sawfly (Cephus cinctus Norton, Hymenoptera: Cephidae) has long been a significant insect pest of spring, and more recently, winter wheat in the northern Great Plains. Wheat stem sawfly was first observed infesting winter wheat in Colorado in 2010 and, subsequently, has spread rapidly throughout wheat production regions of the state. Here, we used maximum entropy modeling (MaxEnt) to generate habitat suitability maps in order to predict the risk of crop damage as this species spreads throughout the winter wheat-growing regions of Colorado. We identified environmental variables that influence the current distribution of wheat stem sawfly in the state and evaluated whether remotely sensed variables improved model performance. We used presence localities of C. cinctus and climatic, topographic, soils, and normalized difference vegetation index and enhanced vegetation index data derived from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery as environmental variables. All models had high performance in that they were successful in predicting suitable habitat for C. cinctus in its current distribution in eastern Colorado. The enhanced vegetation index for the month of April improved model performance and was identified as a top contributor to the MaxEnt model. Soil clay percent at 0–5 cm, temperature seasonality, and precipitation seasonality were also associated with C. cinctus distribution in Colorado. The improved model performance resulting from integrating vegetation indices in our study demonstrates the ability of remote sensing technologies to enhance species distribution modeling. The risk maps generated can assist managers in planning control measures for current infestations and in assessing the future risk of C. cinctus establishment in currently uninfested regions.
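Presence-only habitat suitability modeling of this kind can be approximated compactly with penalized logistic regression on presence versus background points, a common stand-in for MaxEnt. The sketch below illustrates that idea with synthetic data; the predictors and values are placeholders, and the actual study used the MaxEnt software with climate, soil, and MODIS vegetation indices.

```python
# Sketch of a presence/background habitat-suitability model in the spirit of
# MaxEnt, using penalized logistic regression as a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical predictors per cell: April EVI, clay %, temp/precip seasonality.
X_presence = rng.normal(loc=0.5, size=(120, 4))    # cells with C. cinctus records
X_background = rng.normal(loc=0.0, size=(2000, 4)) # random background cells

X = np.vstack([X_presence, X_background])
y = np.concatenate([np.ones(len(X_presence)), np.zeros(len(X_background))])

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
model.fit(X, y)

# Relative habitat suitability for new cells (higher = more suitable).
suitability = model.predict_proba(rng.normal(size=(5, 4)))[:, 1]
print(suitability)
```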
A preliminary estimate of future communications traffic for the electric power system
NASA Technical Reports Server (NTRS)
Barnett, R. M.
1981-01-01
Diverse new generator technologies that use renewable energy and improve operational efficiency throughout existing electric power systems are presented. A model utility is described, and the information transfer requirements imposed by the incorporation of dispersed storage and generation technologies and by the implementation of more extensive energy management are estimated. An example of possible traffic for an assumed system is provided, along with an approach that can be applied to other systems, control configurations, or dispersed storage and generation penetrations.
Genomic prediction in a nuclear population of layers using single-step models.
Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning
2018-02-01
Single-step genomic prediction methods have been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP, and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
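For reference, single-step GBLUP is commonly implemented by replacing the pedigree relationship matrix A with a combined matrix H whose inverse has the standard form below, with genotyped animals indexed by the subscript 2. This is the textbook construction, not necessarily the exact variant used in the study.

```latex
% Standard inverse of the combined relationship matrix in single-step GBLUP,
% with A the pedigree relationship matrix, G the genomic relationship matrix,
% and subscript 2 denoting the genotyped animals.
\mathbf{H}^{-1} = \mathbf{A}^{-1} +
\begin{pmatrix}
\mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1}
\end{pmatrix}
```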
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determine optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for the primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever-increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
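The linear relationship reported above lends itself to a simple least-squares fit followed by an upward quantile shift. The sketch below illustrates the idea with synthetic data; the coefficients and the 90th-percentile adjustment are placeholders, not the published PubChem3D equation.

```python
# Fit RMSD accuracy as a linear function of non-hydrogen atom count ("size")
# and effective rotor count ("flexibility"). Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(2)
size = rng.integers(10, 60, size=500)          # non-hydrogen atom count
flexibility = rng.uniform(0, 15, size=500)     # effective rotor count
rmsd = 0.02 * size + 0.05 * flexibility + 0.3 + rng.normal(scale=0.1, size=500)

# Design matrix [1, size, flexibility]; ordinary least squares.
A = np.column_stack([np.ones_like(size, dtype=float), size, flexibility])
coef, *_ = np.linalg.lstsq(A, rmsd, rcond=None)

# Shift the fitted line upward so ~90% of cases fall below the threshold,
# mirroring the paper's minimum RMSD conformer sampling value.
residuals = rmsd - A @ coef
offset = np.quantile(residuals, 0.90)
print("coefficients:", coef, "90% offset:", offset)
```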
Kurtzman, Gary
2005-01-01
Venture capital has tended to shy away from diagnostics companies, whose products are not predicated on the blockbuster model of pharmaceuticals. But several new diagnostics companies are developing products that hold immense potential to improve healthcare delivery. Here’s why venture investors should take another look at the diagnostics area. PMID:23424311
ERIC Educational Resources Information Center
Ullman, Michael T.; Lovelett, Jarrett T.
2018-01-01
The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…
In Vitro Tissue-Engineered Skeletal Muscle Models for Studying Muscle Physiology and Disease.
Khodabukus, Alastair; Prabhu, Neel; Wang, Jason; Bursac, Nenad
2018-04-25
Healthy skeletal muscle possesses the extraordinary ability to regenerate in response to small-scale injuries; however, this self-repair capacity becomes overwhelmed with aging, genetic myopathies, and large muscle loss. The failure of small animal models to accurately replicate human muscle disease and injury and to predict clinically relevant drug responses has driven the development of high fidelity in vitro skeletal muscle models. Herein, the progress made and challenges ahead in engineering biomimetic human skeletal muscle tissues that can recapitulate muscle development, genetic diseases, regeneration, and drug response are discussed. Bioengineering approaches used to improve engineered muscle structure and function as well as the functionality of satellite cells to allow modeling muscle regeneration in vitro are also highlighted. Next, a historical overview on the generation of skeletal muscle cells and tissues from human pluripotent stem cells, and a discussion on the potential of these approaches to model and treat genetic diseases such as Duchenne muscular dystrophy, is provided. Finally, the need to integrate multiorgan microphysiological systems to generate improved drug discovery technologies with the potential to complement or supersede current preclinical animal models of muscle disease is described. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Smith, Lyndon N.; Smith, Melvyn L.
2000-10-01
Particulate materials undergo processing in many industries, and therefore there are significant commercial motivators for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders that are used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sizes and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.
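A minimal version of the described particle generator can be written directly: a central sphere decorated with randomly placed smaller spheres. The radii distributions below are assumptions for illustration, not values calibrated to the vision-system data used in the paper.

```python
# Generate one "nodular" model particle: a central sphere plus smaller spheres
# whose centres sit on the parent surface at random orientations.
import numpy as np

def random_nodular_particle(r_core=1.0, n_nodules=12, rng=None):
    rng = rng or np.random.default_rng()
    nodules = []
    for _ in range(n_nodules):
        r_n = rng.uniform(0.15, 0.35) * r_core      # assumed nodule size range
        # Uniform random direction on the unit sphere.
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        centre = v * r_core                          # nodule centre on core surface
        nodules.append((centre, r_n))
    return {"core_radius": r_core, "nodules": nodules}

particle = random_nodular_particle()
print(len(particle["nodules"]), "nodules on core of radius",
      particle["core_radius"])
```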
Systems, methods and apparatus for pattern matching in procedure development and verification
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, a formal specification is pattern-matched from scenarios, the formal specification is analyzed, and flaws in the formal specification are corrected. The systems, methods and apparatus may include pattern-matching an equivalent formal model from an informal specification. Such a model can be analyzed for contradictions, conflicts, use of resources before the resources are available, competition for resources, and so forth. From such a formal model, an implementation can be automatically generated in a variety of notations. The approach can improve the resulting implementation, which, in some embodiments, is provably equivalent to the procedures described at the outset, which in turn can improve confidence that the system reflects the requirements, and in turn reduces system development time and reduces the amount of testing required of a new system. Moreover, in some embodiments, two or more implementations can be "reversed" to appropriate formal models, the models can be combined, and the resulting combination checked for conflicts. Then, the combined, error-free model can be used to generate a new (single) implementation that combines the functionality of the original separate implementations, and may be more likely to be correct.
Reactive Power Compensation Method Considering Minimum Effective Reactive Power Reserve
NASA Astrophysics Data System (ADS)
Gong, Yiyu; Zhang, Kai; Pu, Zhang; Li, Xuenan; Zuo, Xianghong; Zhen, Jiao; Sudan, Teng
2017-05-01
Based on a calculation model of the minimum generator reactive power reserve that guarantees power system voltage stability, generator reactive power management is combined with reactive power compensation to form a multi-objective optimization problem, and a reactive power compensation optimization method that considers the minimum generator reactive power reserve is proposed. Through improvement of the objective function and constraint conditions, the method increases the reactive power reserve and solves the minimum generator reactive power compensation problem for cases where, as the system load grows, generator reactive power alone cannot meet the requirements of safe operation.
Bierer, S Beth; Dannefer, Elaine F; Tetzlaff, John E
2015-09-01
Remediation in the era of competency-based assessment demands a model that empowers students to improve performance. To examine a remediation model where students, rather than faculty, develop remedial plans to improve performance. Private medical school, 177 medical students. A promotion committee uses student-generated portfolios and faculty referrals to identify struggling students, and has them develop formal remediation plans with personal reflections, improvement strategies, and performance evidence. Students submit reports to document progress until formally released from remediation by the promotion committee. Participants included 177 students from six classes (2009-2014). Twenty-six were placed in remediation, with more referrals occurring during Years 1 or 2 (n = 20, 76 %). Unprofessional behavior represented the most common reason for referral in Years 3-5. Remedial students did not differ from classmates (n = 151) on baseline characteristics (age, gender, US citizenship, MCAT) or willingness to recommend their medical school to future students (p > 0.05). Two remedial students did not graduate and three did not pass USMLE licensure exams on the first attempt. Most remedial students (92 %) generated appropriate plans to address performance deficits. Students can successfully design remedial interventions. This learner-driven remediation model promotes greater autonomy and reinforces self-regulated learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oribe-Garcia, Iraia, E-mail: iraia.oribe@deusto.es; Kamara-Esteban, Oihane; Martin, Cristina
Highlights: • We have modelled household waste generation in Biscay municipalities. • We have identified relevant characteristics regarding household waste generation. • Factor models are used in order to identify the best subset of explicative variables. • Biscay’s municipalities are grouped by means of hierarchical clustering. - Abstract: The planning of waste management strategies needs tools to support decisions at all stages of the process. Accurate quantification of the waste to be generated is essential both for daily management (short-term) and for the proper design of facilities (long-term). Designing without rigorous knowledge may have serious economic and environmental consequences. The present work aims at identifying relevant socio-economic features of municipalities regarding Household Waste (HW) generation by means of factor models. Factor models face two main drawbacks: data collection and identifying relevant explanatory variables within a heterogeneous group. Grouping observations with similar characteristics may favour the deduction of more robust models. The methodology has been tested on Biscay Province because it stands out for having very different municipalities, ranging from very rural to urban ones. Two main models are developed, one for the overall province and a second one after clustering the municipalities. The results prove that relating municipalities with specific characteristics improves the results in a very heterogeneous situation. The methodology has identified urban morphology, tourism activity, level of education and economic situation as the most influential characteristics in HW generation.
More than Anecdotes: Fishers’ Ecological Knowledge Can Fill Gaps for Ecosystem Modeling
Bevilacqua, Ana Helena V.; Carvalho, Adriana R.; Angelini, Ronaldo; Christensen, Villy
2016-01-01
Background Ecosystem modeling applied to fisheries remains hampered by a lack of local information. Fishers’ knowledge could fill this gap, improving participation in and the management of fisheries. Methodology The same fishing area was modeled using two approaches: based on fishers’ knowledge and based on scientific information. For the former, the data was collected by interviews through the Delphi methodology, and for the latter, the data was gathered from the literature. Agreement between the attributes generated by the fishers’ knowledge model and scientific model is discussed and explored, aiming to improve data availability, the ecosystem model, and fisheries management. Principal Findings The ecosystem attributes produced from the fishers’ knowledge model were consistent with the ecosystem attributes produced by the scientific model, and elaborated using only the scientific data from literature. Conclusions/Significance This study provides evidence that fishers’ knowledge may suitably complement scientific data, and may improve the modeling tools for the research and management of fisheries. PMID:27196131
NASA Astrophysics Data System (ADS)
Zhang, J.; Chen, Z.; Cheng, C.; Wang, Y. X.
2017-10-01
A phase field crystal (PFC) model is employed to study morphology evolution in nanoheteroepitaxy and misfit dislocation generation under conditions of enhanced supercooling, lattice mismatch, and substrate vicinal angle. Misfit strain arising from lattice mismatch causes rough surfaces or misfit dislocations and deteriorates film properties; efforts to reveal the underlying microscopic mechanisms are therefore significant for improving film quality. In films below the critical thickness, uniform islands develop instead of misfit dislocations, serving as a means of strain relief through the surface mechanism. Misfit dislocations are generated when strain relief through the surface mechanism is insufficient at higher supercooling; multilayers of misfit dislocations then dominate, although the number of layers gradually decreases as the supercooling is further enhanced. Rough surfaces such as islands or cuspate pits develop, which is ascribed to lattice mismatch, and multilayers of misfit dislocations are generated as the lattice mismatch is further enhanced. At an enhanced substrate vicinal angle, layers of misfit dislocations are generated at a thickened position, and further enhancing the angle leads to sporadic generation of misfit dislocations.
Increasing power generation in horizontal axis wind turbines using optimized flow control
NASA Astrophysics Data System (ADS)
Cooney, John A., Jr.
In order to effectively realize future goals for wind energy, the efficiency of wind turbines must increase beyond existing technology. One direct method for achieving increased efficiency is by improving the individual power generation characteristics of horizontal axis wind turbines. The potential for additional improvement by traditional approaches is diminishing rapidly, however. As a result, a research program was undertaken to assess the potential of using distributed flow control to increase power generation. The overall objective was the development of validated aerodynamic simulations and flow control approaches to improve wind turbine power generation characteristics. BEM analysis was conducted for a general set of wind turbine models encompassing previous, current, and next-generation designs. This analysis indicated that rotor lift control applied in Region II of the turbine power curve would produce a notable increase in annual power generated. This was achieved by optimizing induction factors along the rotor blade for maximum power generation. In order to demonstrate this approach and other advanced concepts, the University of Notre Dame established the Laboratory for Enhanced Wind Energy Design (eWiND). This initiative includes a fully instrumented meteorological tower and two pitch-controlled wind turbines. The wind turbines are representative in design and operation of larger multi-megawatt turbines, but of a scale that allows rotors to be easily instrumented and replaced to explore new design concepts. Baseline data detailing typical site conditions and turbine operation are presented. To realize optimized performance, lift control systems were designed and evaluated in CFD simulations coupled with shape optimization tools. These were integrated into a systematic design methodology involving BEM simulations, CFD simulations and shape optimization, and selected experimental validation. To refine and illustrate the proposed design methodology, a complete design cycle was performed for the turbine model incorporated in the wind energy lab. Enhanced power generation was obtained through passive trailing edge shaping aimed at reaching lift and lift-to-drag goals predicted to optimize performance. These targets were determined by BEM analysis to improve power generation characteristics and annual energy production (AEP) for the wind turbine. A preliminary design was validated in wind tunnel experiments on a 2D rotor section in preparation for testing in the full atmospheric environment of the eWiND Laboratory. These tests were performed for the full-scale geometry and atmospheric conditions. Upon making additional improvements to the shape optimization tools, a series of trailing edge additions were designed to optimize power generation. The trailing edge additions were predicted to increase the AEP by up to 4.2% at the White Field site. The pieces were rapid-prototyped and installed on the wind turbine in March 2014. Field tests are ongoing.
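The Region II induction-factor optimization mentioned above rests on a standard actuator-disc result: the power coefficient Cp = 4a(1-a)^2 is maximized at axial induction a = 1/3 (the Betz limit, Cp = 16/27). The sketch below verifies this numerically; it is a textbook relation, not the dissertation's full BEM model.

```python
# Actuator-disc power coefficient Cp(a) = 4 a (1 - a)^2, maximized at a = 1/3.
import numpy as np

a = np.linspace(0.0, 0.5, 5001)
cp = 4.0 * a * (1.0 - a) ** 2
i = np.argmax(cp)
print(f"optimal induction a = {a[i]:.4f}, Cp = {cp[i]:.4f} (Betz: {16/27:.4f})")
```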
Next generation of weather generators on web service framework
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.
2016-12-01
A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather for the future. It generates several tens to hundreds of realizations stochastically based on statistical analysis. Realizations are essential inputs to crop models for simulating crop growth and yield. Moreover, they can contribute to analyses of weather uncertainty in crop development stages and to decision support systems on, e.g., water management and fertilizer management. Performing crop modeling requires multidisciplinary skills, which limits the usage of a weather generator to the research group that developed it and creates a barrier for newcomers. To improve the procedures for running weather generators, as well as the methodology for acquiring realizations in a standard way, we implemented a framework that provides weather generators as web services, which support service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. The hierarchical data preparation processes required for the weather generator are also implemented as web services and seamlessly wired. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications, reduce the workload of analysts on iterative data preparation, and handle legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
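As an illustration of the service interface, an SOS 2.0 GetObservation request can be issued with a plain key-value HTTP GET. The endpoint URL and the offering/property names below are hypothetical placeholders; only the SOS parameter names follow the OGC standard.

```python
# Request weather-generator realizations through a (hypothetical) SOS endpoint
# using the standard OGC SOS 2.0 key-value-pair binding.
import requests

SOS_ENDPOINT = "http://example.org/sos"  # placeholder, not the project's URL

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "WeatherGeneratorRealizations",   # hypothetical offering id
    "observedProperty": "DailyRainfall",          # hypothetical property
    "temporalFilter": "om:phenomenonTime,2030-01-01/2030-12-31",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # O&M XML document containing the realizations
```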
ERIC Educational Resources Information Center
Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji
2016-01-01
This study aims at: determining students' motivation before being given group guidance with a self-regulation technique, determining students' motivation after being given group counseling with a self-regulation technique, generating a model of group counseling with a self-regulation technique to improve motivation for learning, determining the…
Rethinking the Default Construction of Multimodel Climate Ensembles
Rauser, Florian; Gleckler, Peter; Marotzke, Jochem
2015-07-21
Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.
Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model
NASA Astrophysics Data System (ADS)
Kim, Sangjo; Kim, Kuisoon; Son, Changmin
2018-04-01
An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and the shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and the performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on performance test data in the sea-level static condition. In flight conditions at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of the low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Pointer, William David; Baglietto, Emilio
2016-05-01
Here, in the effort to reinvigorate innovation in the way we design, build, and operate the nuclear power generating stations of today and tomorrow, nothing can be taken for granted. Not even the seemingly familiar physics of boiling water. The Consortium for the Advanced Simulation of Light Water Reactors, or CASL, is focused on the deployment of advanced modeling and simulation capabilities to enable the nuclear industry to reduce uncertainties in the prediction of multi-physics phenomena and continue to improve the performance of today’s Light Water Reactors and their fuel. An important part of the CASL mission is the development of a next-generation thermal hydraulics simulation capability, integrating the history of engineering models based on experimental experience with the computing technology of the future.
Seismic modeling of Earth's 3D structure: Recent advancements
NASA Astrophysics Data System (ADS)
Ritsema, J.
2008-12-01
Global models of Earth's seismic structure continue to improve due to the growth of seismic data sets, implementation of advanced wave propagations theories, and increased computational power. In my presentation, I will summarize seismic tomography results from the past 5-10 years. I will compare the most recent P and S velocity models, discuss model resolution and model interpretation, and present an, admittedly biased, list of research directions required to develop the next generation 3D models.
NASA Astrophysics Data System (ADS)
Aksoy, A.; Lee, J. H.; Kitanidis, P. K.
2016-12-01
Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to obtain information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation designs, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first case, observed K values were evaluated using the random field generator alone as the forward model. In the second case, measured head values were incorporated into the evaluation in addition to the K values obtained through ERT, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE were used in combination to predict the spatial distribution of K.
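The optimization loop described above can be sketched compactly: a GA searches over candidate conductivity fields, scoring each by the squared error against observations. The toy below uses a hand-rolled GA over log-K values on a coarse grid; the forward model is a placeholder for the BIO2D-KE and FFT-based field-generation codes used in the study (PGAPack itself is a C library).

```python
# Toy GA estimating a spatial log-K field by minimizing squared error between
# "measured" and predicted values. The forward model is a placeholder.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 64                                  # coarse 8x8 grid, flattened
true_logk = rng.normal(size=n_cells)
measured = true_logk + rng.normal(scale=0.05, size=n_cells)  # ERT-like data

def fitness(logk):
    # Negative total squared error between measured and predicted field values.
    return -np.sum((measured - logk) ** 2)

pop = rng.normal(size=(100, n_cells))         # initial population
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-50:]]   # truncation selection
    # Uniform crossover + Gaussian mutation to refill the population.
    idx = rng.integers(0, 50, size=(100, 2))
    mask = rng.random((100, n_cells)) < 0.5
    pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    pop += rng.normal(scale=0.05, size=pop.shape)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("RMSE of best individual:", np.sqrt(np.mean((best - true_logk) ** 2)))
```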
Further experimentation on bubble generation during transformer overload. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oommen, T.V.
1992-03-01
This report covers additional work done during 1990 and 1991 on gas bubble generation under overload conditions. To improve visual bubble detection, a single disc coil was used. To further improve detection, a corona device was also used, which signaled the onset of corona activity in the early stages of bubble formation. A total of fourteen model tests were conducted, half of which used the Inertaire system, and the remaining, a conservator (COPS). Moisture content of paper in the coil varied from 1.0% to 8.0%; gas (nitrogen) content varied from 1.0% to 8.8%. The results confirmed earlier observations that the mathematical bubble prediction model was not valid for high gas content models with relatively low moisture levels in the coil. An empirical relationship was formulated to accurately predict bubble evolution temperatures from known moisture and gas content values. For low moisture content models (below 2%), the simple Piper relationship was sufficient to predict bubble evolution temperatures, regardless of gas content. Moisture in the coil appears to be the key factor in bubble generation. Gas blanketed (Inertaire) systems do not appear to be prone to premature bubble generation from overloads as previously thought. The new bubble prediction model reveals that for a coil with 2% moisture, the bubble evolution temperature would be about 140°C. Since old transformers in service may have as much as 2% moisture in paper, the 140°C bubble evolution temperature may be taken as the lower limit of bubble evolution temperature under overload conditions for operating transformers. Drier insulation would raise the bubble evolution temperature.
Enhanced backgrounds in scene rendering with GTSIMS
NASA Astrophysics Data System (ADS)
Prussing, Keith F.; Pierson, Oliver; Cordell, Chris; Stewart, John; Nielson, Kevin
2018-05-01
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise or clutter-to-noise ratios to determine if a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
Development of a conceptual model of cancer caregiver health literacy.
Yuen, E Y N; Dodson, S; Batterham, R W; Knight, T; Chirgwin, J; Livingston, P M
2016-03-01
Caregivers play a vital role in caring for people diagnosed with cancer. However, little is understood about caregivers' capacity to find, understand, appraise and use information to improve health outcomes. The study aimed to develop a conceptual model that describes the elements of cancer caregiver health literacy. Six concept mapping workshops were conducted with 13 caregivers, 13 people with cancer and 11 healthcare providers/policymakers. An iterative, mixed methods approach was used to analyse and synthesise workshop data and to generate the conceptual model. Six major themes and 17 subthemes were identified from 279 statements generated by participants during concept mapping workshops. Major themes included: access to information, understanding of information, relationship with healthcare providers, relationship with the care recipient, managing challenges of caregiving and support systems. The study extends conceptualisations of health literacy by identifying factors specific to caregiving within the cancer context. The findings demonstrate that caregiver health literacy is multidimensional, includes a broad range of individual and interpersonal elements, and is influenced by broader healthcare system and community factors. These results provide guidance for the development of: caregiver health literacy measurement tools; strategies for improving health service delivery, and; interventions to improve caregiver health literacy. © 2015 John Wiley & Sons Ltd.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
NASA Astrophysics Data System (ADS)
Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.
2016-06-01
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
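The rate-based model enlargement algorithm at the core of RMG can be summarized in a few lines: edge species whose formation flux exceeds a tolerance times the characteristic core flux are promoted into the model. The following is a schematic sketch of that loop, not RMG's actual API; `compute_fluxes` stands in for the reactor simulation.

```python
# Schematic rate-based enlargement loop: promote "edge" species into the model
# core when their formation flux exceeds tol * characteristic core flux.
def enlarge_model(core, edge, compute_fluxes, tol=0.1):
    while True:
        core_flux, edge_flux = compute_fluxes(core, edge)  # user-supplied solver
        threshold = tol * core_flux                        # characteristic flux
        candidates = {s: f for s, f in edge_flux.items() if f > threshold}
        if not candidates:
            break                                          # model converged
        # Move the highest-flux edge species (and its reactions) into the core.
        best = max(candidates, key=candidates.get)
        core.add(best)
        edge.discard(best)
    return core
```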
Simulation of demand management and grid balancing with electric vehicles
NASA Astrophysics Data System (ADS)
Druitt, James; Früh, Wolf-Gerrit
2012-10-01
This study investigates the potential role of electric vehicles in an electricity network with a high contribution from variable generation such as wind power. Electric vehicles are modelled to provide demand management through flexible charging requirements and energy balancing for the network. Balancing applications include both demand balancing and vehicle-to-grid discharging. The study is configured to represent the UK grid, with balancing requirements derived from wind generation calculated from weather station wind speeds on the supply side and from National Grid data on the demand side. The simulation models 1000 individual vehicle entities to represent the behaviour of larger numbers of vehicles. A stochastic trip generation profile is used to generate realistic journey characteristics, whilst a market pricing model allows charging and balancing decisions to be based on realistic market price conditions. The simulation has been tested with wind generation capacities representing up to 30% of UK consumption. Results show significant improvements to load following conditions with the introduction of electric vehicles, suggesting that they could substantially facilitate the uptake of intermittent renewable generation. Electric vehicle owners would benefit from flexible charging and selling tariffs, with the majority of revenue derived from vehicle-to-grid participation in balancing markets.
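The price-responsive behaviour in such a simulation reduces to a simple per-vehicle decision rule at each time step. Below is a minimal sketch; the price series, battery parameters, availability model, and threshold rule are illustrative assumptions, not the paper's market model.

```python
# Minimal price-responsive charging rule for a fleet of simulated EVs:
# charge when the market price is below a vehicle-specific threshold,
# discharge to the grid (V2G) when the price is unusually high.
import numpy as np

rng = np.random.default_rng(4)
n_vehicles, n_hours = 1000, 24
price = (50 + 20 * np.sin(np.linspace(0, 2 * np.pi, n_hours))
         + rng.normal(scale=5, size=n_hours))               # GBP/MWh, synthetic
soc = rng.uniform(0.3, 0.9, size=n_vehicles)                # state of charge
charge_below = rng.uniform(40, 55, size=n_vehicles)         # per-vehicle threshold
v2g_above = 75.0

for t in range(n_hours):
    plugged = rng.random(n_vehicles) < 0.7                  # crude availability
    charging = plugged & (price[t] < charge_below) & (soc < 1.0)
    discharging = plugged & (price[t] > v2g_above) & (soc > 0.4)
    soc = np.clip(soc + 0.05 * charging - 0.05 * discharging, 0.0, 1.0)

print("mean end-of-day state of charge:", soc.mean().round(3))
```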
Nucleic acid reactivity: challenges for next-generation semiempirical quantum models.
Huang, Ming; Giese, Timothy J; York, Darrin M
2015-07-05
Semiempirical quantum models are routinely used to study mechanisms of RNA catalysis and phosphoryl transfer reactions using combined quantum mechanical (QM)/molecular mechanical methods. Herein, we provide a broad assessment of the performance of existing semiempirical quantum models to describe nucleic acid structure and reactivity to quantify their limitations and guide the development of next-generation quantum models with improved accuracy. Neglect of diatomic differential overlap and self-consistent density-functional tight-binding semiempirical models are evaluated against high-level QM benchmark calculations for seven biologically important datasets. The datasets include: proton affinities, polarizabilities, nucleobase dimer interactions, dimethyl phosphate anion, nucleoside sugar and glycosidic torsion conformations, and RNA phosphoryl transfer model reactions. As an additional baseline, comparisons are made with several commonly used density-functional models, including M062X and B3LYP (in some cases with dispersion corrections). The results show that, among the semiempirical models examined, the AM1/d-PhoT model is the most robust at predicting proton affinities. AM1/d-PhoT and DFTB3-3ob/OPhyd reproduce the MP2 potential energy surfaces of 6 associative RNA phosphoryl transfer model reactions reasonably well. Further, a recently developed linear-scaling "modified divide-and-conquer" model exhibits the most accurate results for binding energies of both hydrogen bonded and stacked nucleobase dimers. The semiempirical models considered here are shown to underestimate the isotropic polarizabilities of neutral molecules by approximately 30%. The semiempirical models also fail to adequately describe torsion profiles for the dimethyl phosphate anion, the nucleoside sugar ring puckers, and the rotations about the nucleoside glycosidic bond. The modeling of pentavalent phosphorus, particularly with thio substitutions often used experimentally as mechanistic probes, was problematic for all of the models considered. Analysis of the strengths and weaknesses of the models suggests that the creation of robust next-generation models should emphasize the improvement of relative conformational energies and barriers, and nonbonded interactions. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Engwirda, Darren
2017-06-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
Process improvement as an investment: Measuring its worth
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Jeletic, Kellyann
1993-01-01
This paper discusses return on investment (ROI) generated from software process improvement programs. It details the steps needed to compute ROI and compares these steps from the perspective of two process improvement approaches: the widely known Software Engineering Institute's capability maturity model and the approach employed by NASA's Software Engineering Laboratory (SEL). The paper then describes the specific investments made in the SEL over the past 18 years and discusses the improvements gained from this investment by the production organization in the SEL.
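In its simplest form, the ROI computation referred to above is the standard ratio of net benefit to cost; the SEL's actual accounting of benefits and costs is more detailed than this generic definition.

```latex
% Generic return-on-investment definition, where B is the (monetized) benefit
% attributable to the improvement program and C is its total cost.
\mathrm{ROI} = \frac{B - C}{C}
```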
NASA Astrophysics Data System (ADS)
Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi
2017-05-01
Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is strongly influenced by the temperature differences, temperature uniformity, and topological structure of TEGs. When dimpled surfaces are adopted in heat exchangers, heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction, which has adverse effects on power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity, and pressure loss. The optimal Latin hypercube method was used to determine the experimental points as a design-of-experiments approach for analyzing the sensitivity of the design variables. A Kriging surrogate model was built and verified against the database resulting from the CFD simulation. A multi-island genetic algorithm was used to optimize the structure of the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height, h2. With an optimal design of the channel structure, the temperature uniformity was greatly improved compared with the initial exchanger, although the additional pressure loss also increased.
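The surrogate-based loop described above (sample with a Latin hypercube, fit a Kriging model, optimize on the surrogate) can be sketched with standard tools. Here scikit-learn's Gaussian process regression stands in for the Kriging model, and the CFD objective is replaced by a placeholder function; the variable names follow the paper only loosely.

```python
# Surrogate-assisted optimization sketch: Latin hypercube sampling,
# a Gaussian process (Kriging) surrogate, then search on the surrogate.
# The objective is a placeholder, not the paper's heat-exchanger model.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cfd_objective(x):
    # Placeholder for the CFD-evaluated combination of average temperature,
    # temperature uniformity, and pressure loss.
    depth, diameter, height = x.T
    return (depth - 0.4) ** 2 + 0.5 * (diameter - 0.6) ** 2 + (height - 0.3) ** 2

sampler = qmc.LatinHypercube(d=3, seed=5)
X = sampler.random(40)                       # design points in [0, 1]^3
y = cfd_objective(X)

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X, y)

# Cheap global search on the surrogate (a multi-island GA was used in the paper).
candidates = qmc.LatinHypercube(d=3, seed=6).random(20000)
pred = surrogate.predict(candidates)
best = candidates[np.argmin(pred)]
print("surrogate optimum (depth, diameter, height):", best.round(3))
```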
NASA Astrophysics Data System (ADS)
Farkas, C. M.; Moeller, M.; Carlton, A. G.
2013-12-01
Photochemical transport models routinely under-predict peak air quality events. This deficiency may be due, in part, to inadequate temporalization of emissions from the electric generating sector. The National Emissions Inventory (NEI) reports emissions from Electric Generating Units (EGUs) either by Continuous Emission Monitors (CEMs) that report hourly values or as an annual total. The Sparse Matrix Operator Kernel Emissions preprocessor (SMOKE), used to prepare emissions data for modeling with the CMAQ air quality model, allocates annual emission totals throughout the year using specific monthly, weekly, and hourly weights according to standard classification code (SCC) and location. This approach represents average diurnal and seasonal patterns of electricity generation but does not capture spikes in emissions due to episodic use, as with peaking units, or due to extreme weather events. In this project we use a combination of state air quality permits, CEM data, and EPA emission factors to more accurately temporalize emissions of NOx, SO2 and particulate matter (PM) during the extensive heat wave of July and August 2006. Two CMAQ simulations are conducted; the first with the base NEI emissions and the second with improved temporalization, more representative of actual emissions during the heat wave. Predictions from both simulations are evaluated with O3 and PM measurement data from EPA's National Air Monitoring Stations (NAMS) and State and Local Air Monitoring Stations (SLAMS) during the heat wave, for which ambient concentrations of criteria pollutants were often above the NAAQS. During periods of increased photochemistry and high pollutant concentrations, it is critical that emissions be represented as accurately as possible in air quality models.
Generation and Evolution of Internal Waves in Luzon Strait
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Generation and Evolution of Internal Waves in Luzon...inertial waves, nonlinear internal waves (NLIWs), and turbulence mixing––in the ocean and thereby help develop improved parameterizations of mixing for...ocean models. Mixing within the stratified ocean is a particular focus as the complex interplay of internal waves from a variety of sources and
Generation and Evolution of Internal Waves in Luzon Strait
2016-03-01
DISTRIBUTION STATEMENT A: Distribution approved for public release; distribution is unlimited. Generation and Evolution of Internal Waves in...internal tides, inertial waves, nonlinear internal waves (NLIWs), and turbulence mixing––in the ocean and thereby help develop improved parameterizations of...mixing for ocean models. Mixing within the stratified ocean is a particular focus as the complex interplay of internal waves from a variety of
Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.
ERIC Educational Resources Information Center
Thomas, Neal; Gan, Nianci
1997-01-01
Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…
AgMIP: Next Generation Models and Assessments
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2014-12-01
Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6 that involves the key modeling groups from around the world including North America, Europe, South America, Sub-Saharan Africa, South Asia, East Asia, and Australia and Oceania. This community process will lead to mutually agreed protocols for coordinated global and regional assessments.
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models are rarely identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
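A rough sketch of the weight-free selection idea follows, under heavy assumptions: the "QSP model" is a toy steady-state expression and the clinical target distribution is an invented lognormal; only the accept/reject selection step mirrors the spirit of the abstract.

```python
# Illustrative sketch (not the authors' algorithm): draw "virtual patients" by
# sampling plausible parameters, then *select* (rather than weight) a subset
# whose simulated baseline characteristic matches an observed distribution.
import numpy as np

rng = np.random.default_rng(1)

def simulate_baseline(params):
    """Toy stand-in for a QSP model output, e.g. a baseline biomarker."""
    k_in, k_out = params
    return k_in / k_out  # steady state of dX/dt = k_in - k_out * X

# 1. Generate virtual patients by sampling physiological parameters broadly.
patients = np.column_stack([rng.uniform(0.5, 2.0, 20000),   # k_in
                            rng.uniform(0.05, 0.5, 20000)]) # k_out
outputs = np.array([simulate_baseline(p) for p in patients])

# 2. Accept each patient with probability proportional to the target density
#    (a hypothetical lognormal "clinical" distribution) over the proposal
#    density, estimated with a histogram -- crude but weight-free.
target = lambda x: np.exp(-(np.log(x) - np.log(6.0))**2 / 0.08) / x
hist, edges = np.histogram(outputs, bins=50, density=True)
proposal = hist[np.clip(np.digitize(outputs, edges) - 1, 0, 49)]
accept_prob = target(outputs) / np.maximum(proposal, 1e-12)
accept_prob /= accept_prob.max()
virtual_population = patients[rng.random(20000) < accept_prob]
print(len(virtual_population), "patients selected")
```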
NASA Astrophysics Data System (ADS)
Morales, Y.; Olivares, M. A.; Vargas, X.
2015-12-01
This research aims to improve the representation of stochastic water inflows to hydropower plants used in a grid-wide power production scheduling model in central Chile. The model prescribes the operation of every plant in the system, including hydropower plants located in several basins, and uses stochastic dual dynamic programming (SDDP) with possible inflow scenarios defined from historical records. Each year of record is treated as a sample of weekly inflows to power plants, assuming this intrinsically incorporates spatial and temporal correlations, without any further autocorrelation analysis of the hydrological time series. However, standard good practice suggests the use of synthetic flows instead of raw historical records. The proposed approach generates synthetic inflow scenarios based on hydrological modeling of a few basins in the system and transposition of flows to other basins within so-called homogeneous zones. Hydrologic models use precipitation and temperature as inputs, and therefore this approach requires producing samples of those variables. Development and calibration of these models demand more time than the purely statistical approach to synthetic flows, and require consideration of the main water uses in the basins: agriculture and hydroelectricity. Moreover, a geostatistical analysis is performed to generate a map relating the points where the hydrological information is generated to other points of interest within the power system. Consideration of homogeneous zones reduces the effort required to generate information compared with hydrological modeling of every point of interest. It is important to emphasize that future scenarios are derived through a probabilistic approach that incorporates the features of the hydrological year type (dry, normal or wet), covering the different possibilities in terms of availability of water resources. We present the results for the Maule basin in Chile's Central Interconnected System (SIC).
Firoozi, Ali Akbar; Taha, Mohd Raihan; Mir Moammad Hosseini, S M; Firoozi, Ali Asghar
2014-01-01
Deformation of quay walls is one of the main sources of damage to port facilities, while liquefaction of the backfill and base soil of the wall is among the main reasons for failures of quay walls. During earthquakes, the materials most susceptible to liquefaction in seashore regions are loose saturated sands. In this study, the effects of increasing the wall width and of soil improvement on the behavior of gravity quay walls are examined in order to determine the optimum improved region. The FLAC 2D software was used to analyze and model the soil and loading under different conditions. The behavior of liquefiable soil is simulated using the "Finn" constitutive model, which is specifically designed to capture liquefaction phenomena and excess pore pressure generation.
Optimizing Chemotherapy Dose and Schedule by Norton-Simon Mathematical Modeling
Traina, Tiffany A.; Dugan, Ute; Higgins, Brian; Kolinsky, Kenneth; Theodoulou, Maria; Hudis, Clifford A.; Norton, Larry
2011-01-01
Background: To hasten and improve anticancer drug development, we created a novel approach to generating and analyzing preclinical dose-scheduling data so as to optimize benefit-to-toxicity ratios. Methods: We applied mathematical methods based upon Norton-Simon growth kinetic modeling to tumor-volume data from breast cancer xenografts treated with capecitabine (Xeloda®, Roche) at the conventional schedule of 14 days of treatment followed by a 7-day rest (14-7). Results: The model predicted that 7 days of treatment followed by a 7-day rest (7-7) would be superior. Subsequent preclinical studies demonstrated that this biweekly capecitabine schedule allowed for safe delivery of higher daily doses, improved tumor response, and prolonged animal survival. Conclusions: We demonstrated that the application of Norton-Simon modeling to the design and analysis of preclinical data predicts an improved capecitabine dosing schedule in xenograft models. This method warrants further investigation and application in clinical drug development. PMID:20519801
Building alternate protein structures using the elastic network model.
Yang, Qingyi; Sharp, Kim A
2009-02-15
We describe a method for efficiently generating ensembles of alternate, all-atom protein structures that (a) differ significantly from the starting structure, (b) have good stereochemistry (bonded geometry), and (c) have good steric properties (absence of atomic overlap). The method uses reconstruction from a series of backbone framework structures that are obtained from a modified elastic network model (ENM) by perturbation along low-frequency normal modes. To ensure good quality backbone frameworks, the single-force-parameter ENM is modified by introducing two more force parameters to characterize the interaction between consecutive carbon alphas and those within the same secondary structure domain. The relative stiffness of the three parameters is parameterized to reproduce B-factors, while maintaining good bonded geometry. After parameterization, violations of experimental Cα-Cα distances and Cα-Cα-Cα pseudo-angles along the backbone are reduced to less than 1%. Simultaneously, the average B-factor correlation coefficient improves to R = 0.77. Two applications illustrate the potential of the approach. (1) 102,051 protein backbones spanning a conformational space of 15 Å root mean square deviation were generated from 148 nonredundant proteins in the PDB database, and all-atom models with minimal bonded and nonbonded violations were produced from this ensemble of backbone structures using the SCWRL side chain building program. (2) Improved backbone templates for homology modeling. Fifteen query sequences were each modeled on two targets. For each of the 30 target frameworks, dozens of improved templates could be produced. In all cases, improved full-atom homology models resulted, of which 50% could be identified blind using the D-Fire statistical potential. (c) 2008 Wiley-Liss, Inc.
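The following minimal sketch shows the single-parameter anisotropic-network construction that the paper's three-parameter variant extends: a Hessian is assembled from pairwise Cα contacts, the six rigid-body modes are discarded, and the backbone is perturbed along the softest remaining mode. The cutoff, stiffness and random-walk "backbone" are placeholders; the paper's per-contact stiffness classes would simply scale gamma per pair.

```python
# Single-parameter ANM sketch: Hessian from C-alpha contacts, low-frequency
# modes, and a perturbed backbone. Values are illustrative placeholders.
import numpy as np

def enm_modes(coords, cutoff=12.0, gamma=1.0, n_modes=5):
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2  # 3x3 super-element
            hess[3*i:3*i+3, 3*j:3*j+3] += block
            hess[3*j:3*j+3, 3*i:3*i+3] += block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    vals, vecs = np.linalg.eigh(hess)
    # Skip the six zero modes (rigid-body translation and rotation).
    return vals[6:6 + n_modes], vecs[:, 6:6 + n_modes]

# A random-walk stand-in for a 60-residue C-alpha trace.
coords = np.cumsum(np.random.default_rng(2).normal(0, 1.5, (60, 3)), axis=0)
vals, vecs = enm_modes(coords)
perturbed = coords + 2.0 * vecs[:, 0].reshape(-1, 3)  # step along softest mode
print("lowest nonzero eigenvalues:", vals.round(4))
```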
Improvements on NYMTC Data Products
DOT National Transportation Integrated Search
2009-11-11
Just like any other scientific research field, the value of data quality is undisputed in the field of transportation. From policy planning to performance evaluation, from model development to impact studies, good quality data is essential to generat...
Scaling depth-induced wave-breaking in two-dimensional spectral wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.
2015-03-01
Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Abul Ehsan Bhuiyan, Md; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Quintana-Seguí, Pere; Barella-Ortiz, Anaïs
2018-02-01
This study investigates the use of a nonparametric, tree-based model, quantile regression forests (QRF), for combining multiple global precipitation datasets and characterizing the uncertainty of the combined product. We used the Iberian Peninsula as the study area, with a study period spanning 11 years (2000-2010). Inputs to the QRF model included three satellite precipitation products, CMORPH, PERSIANN, and 3B42 (V7); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. We calibrated the QRF model for two seasons and two terrain elevation categories and used it to generate precipitation ensembles for these conditions. Evaluation of the combined product was based on a high-resolution, ground-reference precipitation dataset (SAFRAN) available at 5 km, 1 h resolution. Furthermore, to evaluate relative improvements and the overall impact of the combined product on hydrological response, we used the generated ensemble to force a distributed hydrological model (the SURFEX land surface model and the RAPID river routing scheme) and compared its streamflow simulation results with the corresponding simulations from the individual global precipitation and reference datasets. We concluded that the proposed technique could generate realizations that successfully encapsulate the reference precipitation and provide significant improvement in streamflow simulations, with reductions in systematic and random error on the order of 20-99% and 44-88%, respectively, when considering the ensemble mean.
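A hedged sketch of the QRF mechanics (Meinshausen-style) follows, built on a stock scikit-learn forest by pooling per-leaf training targets to read off conditional quantiles. The feature set only mirrors the abstract's input list in spirit; all data here are synthetic.

```python
# Rough quantile-regression-forest sketch on top of scikit-learn: fit a
# standard random forest, then pool the training targets that share a leaf
# with the query point in each tree and take empirical quantiles.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 6))          # e.g. 3 satellite products, T, SM, z
y = X[:, :3].mean(axis=1) + rng.gamma(2.0, 0.1, 2000)  # synthetic "rain"

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=10,
                           random_state=0).fit(X, y)
train_leaves = rf.apply(X)               # (n_samples, n_trees) leaf ids

def predict_quantiles(x, q=(0.1, 0.5, 0.9)):
    leaves = rf.apply(x.reshape(1, -1)).ravel()
    pooled = np.concatenate([y[train_leaves[:, t] == leaf]
                             for t, leaf in enumerate(leaves)])
    return np.quantile(pooled, q)

print(predict_quantiles(X[0]))           # ensemble spread for one input
```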
Initialization and Setup of the Coastal Model Test Bed: STWAVE
2017-01-01
Laboratory (CHL) Field Research Facility (FRF) in Duck, NC. The improved evaluation methodology will promote rapid enhancement of model capability and focus...Blanton 2008) study. This regional digital elevation model (DEM), with a cell size of 10 m, was generated from numerous datasets collected at different...INFORMATION: For additional information, contact Spicer Bak, Coastal Observation and Analysis Branch, Coastal and Hydraulics Laboratory, 1261 Duck Road
Baby boomers nearing retirement: the healthiest generation?
Rice, Neil E; Lang, Iain A; Henley, William; Melzer, David
2010-02-01
The baby-boom generation is entering retirement. Having experienced unprecedented prosperity and improved medical technology, they should be the healthiest generation ever. We compared prevalence of disease and risk factors at ages 50-61 years in baby boomers with the preceding generation and attributed differences to period or cohort effects. Data were from the Health Survey for England (HSE) from 1994 to 2007 (n = 48,563). Logistic regression models compared health status between birth cohorts. Age-period-cohort models identified cohort and period effects separately. Compared to the wartime generation, the baby-boomer group was heavier (3.02 kg; 95% confidence interval [CI], 2.42-3.63; p < 0.001) and reported more diagnoses of hypertension (odds ratio [OR] = 1.48; CI, 1.27-1.72; p < 0.001), diabetes (OR = 1.71; CI, 1.37-2.12; p < 0.001), and mental illness (OR = 1.90; CI, 1.54-2.53; p < 0.001). Baby boomers reported fewer heart attacks (OR = 0.61; CI, 0.47-0.79; p < 0.001) and had lower measured blood pressures (systolic, -9.51 mmHg; CI, -8.7 to -10.31; p < 0.001; diastolic, -2.5 mmHg; CI, -1.99 to -3.01; p < 0.001). Higher diagnosed mental disorder prevalence was attributable to a cohort effect (prevalence for 1935-1939 cohort, 2.5%, vs. 1950-1954 cohort, 4.7%), whereas changes in diagnoses of diabetes and hypertension and measured body mass index were primarily period effects. English baby boomers are moving toward retirement with improved cardiovascular health. However, the baby-boomer cohort has a higher prevalence of mental illness diagnoses and shows no improvement in self-rated health compared to the wartime birth cohort. There remains substantial scope to reduce health risks and future disability.
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while requiring only inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%); average rainfall rates of 2, 6, and 12 mm/day; and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay-based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to the actual data. Copyright © 2015 Elsevier Ltd. All rights reserved.
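For orientation, the first-order decay backbone that CLEEN builds on (LandGEM-style) can be written in a few lines; the k, L0 and waste figures below are arbitrary placeholders, not the published regression equations.

```python
# LandGEM-style first-order decay sketch: each year's waste-in-place decays
# exponentially, Q(t) = sum_i k * L0 * W_i * exp(-k * age_i). CLEEN's
# contribution per the abstract is regressions giving k from composition,
# rainfall and temperature; the k used here is a placeholder.
import numpy as np

def methane_generation(waste_by_year, k=0.05, L0=100.0, years=40):
    """Annual CH4 (m^3/yr) from yearly waste acceptance (Mg/yr)."""
    q = np.zeros(years)
    for t_dep, w in enumerate(waste_by_year):
        age = np.arange(years) - t_dep
        q += np.where(age >= 0, k * L0 * w * np.exp(-k * age), 0.0)
    return q

waste = [50_000] * 10          # 50,000 Mg/yr accepted for 10 years (invented)
q = methane_generation(waste)
print(f"peak generation {q.max():,.0f} m^3/yr in year {q.argmax()}")
```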
Measurement and Modeling of Blocking Contacts for Cadmium Telluride Gamma Ray Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, Patrick R.
2010-01-07
Gamma ray detectors are important in national security applications, medicine, and astronomy. Semiconductor materials with high density and atomic number, such as Cadmium Telluride (CdTe), offer a small device footprint, but their performance is limited by noise at room temperature; however, improved device design can decrease detector noise by reducing leakage current. This thesis characterizes and models two unique Schottky devices: one with an argon ion sputter etch before Schottky contact deposition and one without. Analysis of current versus voltage characteristics shows that thermionic emission alone does not describe these devices. This analysis points to reverse bias generation current or leakage through an inhomogeneous barrier. Modeling the devices in reverse bias with thermionic field emission and a leaky Schottky barrier yields good agreement with measurements. Also numerical modeling with a finite-element physics-based simulator suggests that reverse bias current is a combination of thermionic emission and generation. This thesis proposes further experiments to determine the correct model for reverse bias conduction. Understanding conduction mechanisms in these devices will help develop more reproducible contacts, reduce leakage current, and ultimately improve detector performance.
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
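The kind of model such predictions rest on is a semi-mechanistic transit-compartment myelosuppression model in the style of Friberg et al. (a proliferating pool, three transit compartments, circulating neutrophils, and a feedback term (Circ0/Circ)^gamma). The sketch below integrates that structure with scipy; the parameter values and the drug-effect time course are invented, not the published docetaxel model.

```python
# Friberg-style myelosuppression sketch: feedback-regulated proliferation,
# three transit compartments, circulating ANC. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

circ0, mtt, gamma = 5.0, 90.0, 0.16        # 10^9/L, hours, feedback exponent
ktr = 4.0 / mtt                            # 3 transit compartments -> 4 rates

def edrug(t):
    return 0.6 * np.exp(-t / 24.0)         # hypothetical decaying drug effect

def rhs(t, x):
    prol, t1, t2, t3, circ = x
    fb = (circ0 / max(circ, 1e-9)) ** gamma
    return [ktr * prol * ((1 - edrug(t)) * fb - 1),
            ktr * (prol - t1),
            ktr * (t1 - t2),
            ktr * (t2 - t3),
            ktr * (t3 - circ)]

sol = solve_ivp(rhs, (0, 21 * 24), [circ0] * 5, max_step=1.0)  # one 21-day cycle
anc = sol.y[4]
print(f"nadir {anc.min():.2f} x10^9/L on day {sol.t[anc.argmin()] / 24:.1f}")
```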
Thermal Analysis of LED Phosphor Layer
NASA Astrophysics Data System (ADS)
Perera, Ukwatte Lokuliyanage Indika Upendra
Solid-state lighting technology has progressed to a level where light-emitting diode (LED) products are either on par with or better than their traditional lighting technology counterparts with respect to efficacy and lifetime. At present, the most common method to create "white" light from LEDs for illumination applications is by using the LED primary radiation and wavelength-converting materials. In this method, the re-emission from the wavelength-converting materials excited by the LED primary radiation is combined with the LED primary radiation to create the "white" light. During this conversion process, heat is generated as a result of conversion inefficiencies and other loss mechanisms in the LED and the wavelength-converting materials. This generated heat, if not properly dissipated, increases the operating temperature, thereby increasing the light output degradation of the system over both the short and long term. The heat generation of the LED and thermal management of the LED have been studied extensively. Methods to effectively dissipate heat from the LEDs and maintain lower LED operating temperature are well understood. However, investigation of the factors driving heat generation, the resulting temperature distribution in the phosphor layer, and the influence of the phosphor layer temperature on LED performance and reliability have not received the same focus. The goal of this dissertation was to understand the main factors driving heat and light generation and the transport of light and heat in the wavelength-converting layer of an LED system. Another goal was to understand the interaction between heat and light in the system and to develop and analyze a solution to reduce the wavelength-converting layer operating temperature, thereby improving light output and reliability. Even though past studies have explored generation and transfer separately for light and heat, to the best of the author's knowledge, this is the first study that has analyzed both factors simultaneously to optimize the performance of a phosphor-converted LED system, thus contributing new knowledge to the field. In this dissertation, a theoretical model was developed that captured both light propagation and heat transfer in the wavelength-converting layer, to identify the factors influencing heat generation. This theoretical model included temperature-dependent phosphor efficiency and light absorption in the phosphor layer geometry. Experimental studies were used to validate the developed model, which showed good agreement with the experimental results. The developed theoretical model was then used to model experimental studies, and these experimental results were compared with the model-predicted results for the total radiant power output of LED systems and the phosphor layer surface temperature. These comparisons illustrated the effectiveness of a dedicated heat dissipation method in reducing the operating temperature of the wavelength-converting layer, and the contributions of different heat dissipation mechanisms were quantified using the developed numerical model. In addition to these short-term studies, an experiment was conducted to validate the effectiveness of the dedicated wavelength-converting heat sink design in improving system lifetime by reducing the phosphor layer operating temperature. The proposed heat sink design decreased the operating temperature of the phosphor layer by ~10°C, doubling the lifetime.
Finally, this dissertation investigated the potential of the developed theoretical model to be used as a tool for prioritizing research tasks and as a design tool during the material selection and system configuration phases.
Effective visibility analysis method in virtual geographic environment
NASA Astrophysics Data System (ADS)
Li, Yi; Zhu, Qing; Gong, Jianhua
2008-10-01
Visibility analysis in virtual geographic environments has broad applications in social life, but in practical use its efficiency and accuracy need to be improved, and the restrictions of human vision must be taken into account. The paper first introduces a highly efficient 3D data modeling method, which generates and organizes 3D data models using R-tree and LOD techniques. It then presents a new visibility algorithm that enables real-time viewshed calculation while accounting for occlusion by the DEM and 3D building models, as well as the restrictions the human eye places on viewshed generation. Finally, an experiment demonstrates that the visibility analysis is fast and accurate enough to meet the demands of digital city applications.
Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B
2007-03-01
The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, make systematic appraisals, and identify areas of improvement in a business process. Unified Modelling Language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.
2014-06-01
systems. It can model systems including both conventional, diesel-powered generators and renewable power sources such as photovoltaic arrays and wind...conducted an experiment where he assessed the capabilities of the HOMER model in forecasting the power output of a solar panel at NPS [32]. In his ex...energy efficiency in expeditionary operations, the HOMER micropower optimization model provides potential to serve as a powerful tool for improving
Empowering Oncology Nurses to Lead Change Through a Shared Governance Project.
Gordon, Jeanine N
2016-11-01
Nurses at the bed- or chairside are knowledgeable about clinical and operational concerns that need improvement and, consequently, are in the best position to generate and evaluate practical options and potential solutions to improve efficacy and care processes. Implementation of a shared governance model is effective in engaging staff nurses to make meaningful and sustainable change in patient care processes.
Bone Repair and Military Readiness
2014-08-01
based resin superior to polymethyl methacrylate (PMMA) with many improved properties such as significantly less polymerization stress without an...in animal models. By addressing the shortcomings of current PMMA bone cement, the development of the novel silorane bone cement will result in a...and heat generation. We have developed a silorane-based resin superior to polymethyl methacrylate (PMMA) with many improved properties such as
Method of locating underground mines fires
Laage, Linneas; Pomroy, William
1992-01-01
An improved method of locating an underground mine fire by comparing the pattern of measured combustion product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match measured arrival time patterns to the simulated patterns.
Irimia, Andrei; Goh, S.-Y. Matthew; Torgerson, Carinna M.; Stein, Nathan R.; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.
2013-01-01
Objective: To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Methods: Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. Results: We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Conclusion: Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. PMID:24011495
Research on fuzzy PID control to electronic speed regulator
NASA Astrophysics Data System (ADS)
Xu, Xiao-gang; Chen, Xue-hui; Zheng, Sheng-guo
2007-12-01
As an important part of a diesel engine, the speed regulator plays an important role in stabilizing speed and improving engine performance. Because traditional PID control involves many diesel-engine model parameters, and these parameters exhibit non-linear characteristics, traditional PID is not the best way to regulate engine speed, especially for diesel-engine generator sets. In this paper, a fuzzy PID control strategy is proposed and the problems in its application to electronic speed regulators are discussed. A mathematical model of the electric control system for a diesel-engine generator set is established, and the way the PID parameters in the model affect system behavior is analyzed. It is then argued that the derivative coefficient must be included in the control design to reduce the system's dynamic deviation and settling time. Based on control theory, a scheme combining fuzzy tuning of the PID parameters with the PID calculation is implemented, and a simulation experiment on the electronic speed regulator system is conducted using Matlab/Simulink and the Fuzzy Toolbox. Compared with the traditional PID algorithm, the simulation results show obvious improvements in the instantaneous and steady-state speed governing rates of the diesel-engine generator set when the fuzzy logic control strategy is used.
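A toy sketch of the fuzzy-plus-PID idea follows: crude saturating memberships on the speed error and its rate nudge the PID gains at every step of a first-order engine-speed stand-in. The membership breakpoints, gain increments, and plant are all invented, not the paper's diesel-generator model.

```python
# Fuzzy gain scheduling around a PID loop, on a toy first-order plant.
def sat(x):
    """Clamp to [0, 1]."""
    return min(max(x, 0.0), 1.0)

def fuzzy_gains(e, de, base=(1.0, 0.1, 0.05)):
    small = sat(1 - abs(e) / 5)       # membership: "error is small"
    large = sat((abs(e) - 3) / 7)     # membership: "error is large"
    fast = sat((abs(de) - 2) / 6)     # membership: "error changing fast"
    kp = base[0] * (1 + 0.8 * large - 0.2 * small)  # push harder when far off
    ki = base[1] * (1 + 0.5 * small)                # integrate near setpoint
    kd = base[2] * (1 + 0.6 * fast)                 # damp fast swings
    return kp, ki, kd

# First-order stand-in for engine speed dynamics: dw/dt = (u - w) / tau.
setpoint, w, integ, dt, tau = 8.0, 0.0, 0.0, 0.01, 0.5
e_prev = setpoint - w
for _ in range(5000):
    e = setpoint - w
    de = (e - e_prev) / dt
    kp, ki, kd = fuzzy_gains(e, de)
    integ += e * dt
    u = kp * e + ki * integ + kd * de
    w += dt * (u - w) / tau
    e_prev = e
print(f"speed settled at {w:.3f} (setpoint {setpoint})")
```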
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is defined from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving good optimal scheduling of cloud computing tasks.
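A hedged sketch of differential evolution with a generation-dependent mutation factor follows, applied to a toy task-to-VM assignment; the makespan model, encoding, and decay schedule for F are placeholders rather than the paper's CloudSim setup.

```python
# Differential evolution with a mutation factor that decays over generations,
# plus greedy selection, on an invented task-scheduling makespan objective.
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_vms, pop_size, gens = 40, 8, 30, 200
length = rng.uniform(1, 10, n_tasks)          # task lengths
speed = rng.uniform(1, 4, n_vms)              # VM speeds

def makespan(assign):
    loads = np.zeros(n_vms)
    np.add.at(loads, assign, length)          # sum task lengths per VM
    return (loads / speed).max()

pop = rng.uniform(0, n_vms - 1e-9, (pop_size, n_tasks))  # continuous encoding
fit = np.array([makespan(ind.astype(int)) for ind in pop])

for g in range(gens):
    f = 0.9 - 0.5 * g / gens                  # dynamic (decaying) mutation factor
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        trial = np.where(rng.random(n_tasks) < 0.7, a + f * (b - c), pop[i])
        trial = np.clip(trial, 0, n_vms - 1e-9)
        tf = makespan(trial.astype(int))
        if tf < fit[i]:                       # greedy selection
            pop[i], fit[i] = trial, tf
print(f"best makespan: {fit.min():.3f}")
```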
Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio
1997-01-01
In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data were collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.
The contribution of mouse models to the understanding of constitutional thrombocytopenia.
Léon, Catherine; Dupuis, Arnaud; Gachet, Christian; Lanza, François
2016-08-01
Constitutional thrombocytopenias result from platelet production abnormalities of hereditary origin. Long misdiagnosed and poorly studied, these rare diseases have become considerably better understood over the last twenty years, owing to improved technology for the identification of mutations and to improvements in obtaining megakaryocyte cultures from patient hematopoietic stem cells. Simultaneously, the manipulation of mouse genes (transgenesis; total or conditional inactivation; introduction of point mutations; random chemical mutagenesis) has helped to generate disease models that have contributed greatly to deciphering patients' clinical and laboratory features. Most of the thrombocytopenias for which the mutated genes have been identified now have a murine model counterpart. This review focuses on the contribution that these mouse models have brought to the understanding of hereditary thrombocytopenias with respect to what was known in humans. Animal models have either i) provided novel information on the molecular and cellular pathways that were missing from the patient studies; ii) improved our understanding of the mechanisms of thrombocytopoiesis; iii) been instrumental in structure-function studies of the mutated gene products; or iv) been an invaluable tool as preclinical models to test new drugs or develop gene therapies. At present, the genetic determinants of thrombocytopenia remain unknown in almost half of all cases. Currently available high-speed sequencing techniques will identify new candidate genes, which will in turn allow the generation of murine models to confirm and further study the abnormal phenotype. In a complementary manner, programs of random mutagenesis in mice should also identify new candidate genes involved in thrombocytopenia. Copyright © Ferrata Storti Foundation.
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available, along with global weather and geographic datasets, the streamflow of almost any river in the world can be easily modeled. If a reasonable amount of observed data from that river is available, simulations of high accuracy can sometimes be achieved by calibrating the model parameters against those observations through inverse modeling. Although such calibrated models can simulate the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, an obviously important task given that floods and droughts are closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as input to a machine learning model to better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model, and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were reduced from 4967 cusecs to 1294 cusecs for the Ganges, from 5695 cusecs to 2115 cusecs for the Brahmaputra, and from 689 cusecs to 321 cusecs for the Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved, given that a decent amount of observed data for that variable is available.
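The offline coupling can be sketched generically: a neural network learns a correction from the process model's simulated flow (plus simple context features) to the observed flow, which tends to sharpen the extremes. Everything below is synthetic stand-in data, not SWAT output or GBM gauge records.

```python
# Post-processing a biased "process model" flow series with an ANN corrector.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
t = np.arange(3650)
obs = 1000 + 900 * np.sin(2 * np.pi * t / 365) ** 4 + rng.gamma(2, 60, t.size)
sim = 0.8 * obs + 250 + rng.normal(0, 80, t.size)    # biased "SWAT" flow

X = np.column_stack([sim,                            # simulated flow
                     np.roll(sim, 1),                # yesterday's simulation
                     np.sin(2 * np.pi * t / 365),    # seasonality
                     np.cos(2 * np.pi * t / 365)])
train = t < 2920                                     # first 8 years to train
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                 random_state=0)).fit(X[train], obs[train])
corrected = ann.predict(X[~train])

for name, series in [("raw", sim[~train]), ("ANN-corrected", corrected)]:
    err = np.abs(series.max() - obs[~train].max())
    print(f"{name}: error in annual-max flow = {err:.0f}")
```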
Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J
2006-11-01
The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.
NASA Astrophysics Data System (ADS)
Sun, H. Y.; Lu, B. X.; Wang, M.; Guo, Q. F.; Feng, Q. K.
2017-10-01
The swarm parameters of negative corona discharge are improved in order to calculate the discharge model under different environmental conditions. The effects of temperature, humidity, and air pressure are studied using a conventional needle-to-plane configuration in air. The electron density, electric field, electron generation rate, and photoelectron generation rate are discussed in this paper. The role of photoionization under these conditions is also studied by numerical simulation. The photoelectrons generated in the weak ionization region are shown to be dominant.
Use of bioreactors for culturing human retinal organoids improves photoreceptor yields.
Ovando-Roche, Patrick; West, Emma L; Branch, Matthew J; Sampson, Robert D; Fernando, Milan; Munro, Peter; Georgiadis, Anastasios; Rizzi, Matteo; Kloc, Magdalena; Naeem, Arifa; Ribeiro, Joana; Smith, Alexander J; Gonzalez-Cordero, Anai; Ali, Robin R
2018-06-13
The use of human pluripotent stem cell-derived retinal cells for cell therapy strategies and disease modelling relies on the ability to obtain healthy and organised retinal tissue in sufficient quantities. Generating such tissue is a lengthy process, often taking over 6 months of cell culture, and current approaches do not always generate large quantities of the major retinal cell types required. We adapted our previously described differentiation protocol to investigate the use of stirred-tank bioreactors. We used immunohistochemistry, flow cytometry and electron microscopy to characterise retinal organoids grown in standard and bioreactor culture conditions. Our analysis revealed that the use of bioreactors results in improved laminar stratification as well as an increase in the yield of photoreceptor cells bearing cilia and nascent outer-segment-like structures. Bioreactors represent a promising platform for scaling up the manufacture of retinal cells for use in disease modelling, drug screening and cell transplantation studies.
Improving inflow forecasting into hydropower reservoirs through a complementary modelling framework
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.
2014-10-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead-time is considered within the day-ahead (Elspot) market of the Nordic exchange market. We present here a new approach for issuing hourly reservoir inflow forecasts that aims to improve on existing forecasting models that are in place operationally, without needing to modify the pre-existing approach, but instead formulating an additive or complementary model that is independent and captures the structure the existing model may be missing. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in the decision-making processes of hydropower systems operation. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, the models being demonstrated with reference to the 207 km² Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead-times up to 17 h. Season-based evaluations indicated that the improvement in inflow forecasts varies across seasons; inflow forecasts in autumn and spring are less successful, with the 95% prediction interval bracketing less than 95% of the observations for lead-times beyond 17 h.
Generation capacity expansion planning in deregulated electricity markets
NASA Astrophysics Data System (ADS)
Sharma, Deepak
With the increasing demand for electric power in the context of deregulated electricity markets, good strategic planning for the growth of the power system is critical for our tomorrow. There is a need to build new resources in the form of generation plants and transmission lines while considering the effects of these new resources on power system operations, market economics and the long-term dynamics of the economy. In deregulation, the exercise of generation planning has undergone a paradigm shift. The first stage of generation planning is now undertaken by individual investors. These investors see investment in generation capacity as a growing business opportunity because of increasing market prices. Therefore, the main objective of such a planning exercise, carried out by individual investors, is typically that of long-term profit maximization. This thesis presents modeling frameworks for generation capacity expansion planning applicable to independent investor firms in the context of power industry deregulation. These modeling frameworks address various technical and financing issues within the process of power system planning. The proposed frameworks consider the long-term decision-making process of investor firms and the discrete nature of generation capacity addition, and incorporate transmission network modeling. Studies have been carried out to examine the impact of the optimal investment plans on transmission network loadings in the long run by integrating the generation capacity expansion planning framework within a modified IEEE 30-bus transmission system network. The work assesses the importance of arriving at an optimal IRR at which the firm's profit-maximization objective attains an extremum. The mathematical model is further improved to incorporate binary variables representing discrete unit sizes, and subsequently to include a detailed transmission network representation. The proposed models are novel in that the planning horizon is split into plan sub-periods so as to minimize the overall risks associated with long-term planning models, particularly in the context of deregulation.
Wildlife corridors based on the spatial modeling of the human pressure: A Portuguese case study
Lara Nunes; Ana Luisa Gomes; Alexandra Fonseca
2015-01-01
In times of economic crisis, rewilding can be a less costly conservation management approach, able to generate economic value from wild lands and for rural communities. Simultaneously, improvement of connectivity between protected areas has been identified as a global priority for conservation. Allying the rewilding concept with the connectivity concern, a model for...
Improving Emergency Management by Modeling Ant Colonies
2015-03-01
...managerial level tasking. The Oklahoma City bombing has generally been viewed as a success for the ICS model; however, there were numerous occurrences...developed. The youngest generation of ant... (Bert Holldobler and Edward O. Wilson, The Ants)
Budget constraints and policies that limit primary data collection have fueled a practice of transferring estimates (or models to generate estimates) of ecological endpoints from sites where primary data exists to sites where little to no primary data were collected. Whereas bene...
USDA-ARS?s Scientific Manuscript database
The scale mismatch between remotely sensed observations and the state variables simulated by crop growth models decreases the reliability of crop yield estimates. To overcome this problem, we used a two-step data assimilation approach: first we generated a complete leaf area index (LAI) time series by combin...
Possible Improvements to MCNP6 and its CEM/LAQGSM Event-Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashnik, Stepan Georgievich
2015-08-04
This report is intended for the MCNP6 developers and sponsors of MCNP6. It presents a set of suggested possible future improvements to MCNP6 and to its CEM03.03 and LAQGSM03.03 event-generators. A few suggested modifications of MCNP6 are quite simple, aimed at avoiding possible problems with running MCNP6 on various computers; these changes are not expected to change or improve any results, but should make the use of MCNP6 easier, and are expected to require limited man-power resources. On the other hand, several other suggested improvements require serious further development of nuclear reaction models and are expected to improve significantly the predictive power of MCNP6 for a number of nuclear reactions; such developments require several years of work by real experts on nuclear reactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Compo, Gilbert P
As an important step toward a coupled data assimilation system for generating reanalysis fields needed to assess climate model projections, the Ocean Atmosphere Coupled Reanalysis for Climate Applications (OARCA) project assesses and improves the longest reanalyses currently available of the atmosphere and ocean: the 20th Century Reanalysis Project (20CR) and the Simple Ocean Data Assimilation with sparse observational input (SODAsi) system, respectively. In this project, we make off-line but coordinated improvements in the 20CR and SODAsi datasets, with improvements in one feeding into improvements of the other through an iterative generation of new versions. These datasets now span from the 19th to the 21st century. We then study the extreme weather and variability from days to decades of the resulting datasets. A total of 24 publications have been produced in this project.
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. At last, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, this paper uses the fundamental matrix generated by the eight-point algorithm as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
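A minimal sketch of this kind of pipeline is shown below, using OpenCV's stock SIFT matcher and fundamental-matrix RANSAC; the paper's block-based sampling and SPRT refinements are not part of stock OpenCV, and the input file names are hypothetical.

import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Rough matching on SIFT descriptors with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Refine with RANSAC on the fundamental matrix (eight-point model)
# rather than a homography, as the paper proposes.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inliers = [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
print(f"{len(inliers)} inlier matches of {len(good)}")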
Explosively Generated Plasmas: Measurement and Models of Shock Generation and Material Interactions
NASA Astrophysics Data System (ADS)
Emery, Samuel; Elert, Mark; Giannuzzi, Paul; Le, Ryan; McCarthy, Daniel; Schweigert, Igor
2017-06-01
Explosively generated plasmas (EGPs) are created by the focusing of a shock produced from an explosive driver via a conical waveguide. In the waveguide, the gases from the explosive along with the trapped air are accelerated and compressed (via Mach stemming) to such extent that plasma is produced. These EGPs have been measured in controlled experiments to achieve temperatures on the order of 1 eV and velocities as high as 25 km/s. We have conducted a combined modeling and measurement effort to increase the understanding for design purposes of the shock generation of EGPs and the interaction of EGP with explosive materials. Such efforts have led to improved measures of pressure and temperature, spatial structure of the plasma, and the decomposition/deflagration behavior of RDX upon exposure to an EGP. Funding provided by the Environmental Security Technology Certification Program (ESTCP) Munitions Response program area.
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing on the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, namely that the generation of infill samples can become a difficult optimization problem in its own right, and also allows the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
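To make the infill step concrete, the sketch below (a hedged illustration, not the article's implementation) computes the expected-improvement criterion that EGO maximizes when placing new samples; mu and sigma would come from the fitted kriging model, and the values given here are placeholders.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization; mu, sigma are the GP mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: score a few candidates from a fitted kriging model (values assumed).
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.3, 0.05, 0.6])
print(expected_improvement(mu, sigma, f_best=1.0))  # pick the largest EI as infill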
Op den Akker, Harm; Cabrita, Miriam; Op den Akker, Rieks; Jones, Valerie M; Hermens, Hermie J
2015-06-01
This paper presents a comprehensive and practical framework for automatic generation of real-time tailored messages in behavior change applications. Basic aspects of motivational messages are time, intention, content and presentation. Tailoring of messages to the individual user may involve all aspects of communication. A linear modular system is presented for generating such messages. It is explained how properties of the user and context are taken into account in each of the modules of the system and how they affect the linguistic presentation of the generated messages. The model of motivational messages presented is based on an analysis of existing literature as well as the analysis of a corpus of motivational messages used in previous studies. The model extends existing 'ontology-based' approaches to message generation for real-time coaching systems found in the literature. Practical examples are given of how simple tailoring rules can be implemented throughout the various stages of the framework. Such examples can guide further research by clarifying what it means to use, e.g., user targeting to tailor a message. As a primary example we look at the issue of promoting daily physical activity. Future work is outlined on applying the present model and framework, defining efficient ways of evaluating individual tailoring components, and improving effectiveness through the creation of accurate and complete user and context models. Copyright © 2015 Elsevier Inc. All rights reserved.
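As a loose illustration of such a linear modular pipeline (the rules, fields and thresholds below are invented stand-ins, not the paper's actual tailoring rules), each module refines the message in turn:

from dataclasses import dataclass

@dataclass
class Context:
    steps_today: int
    goal: int
    time_of_day: str    # "morning" | "afternoon" | "evening"
    prefers_formal: bool

def intention(ctx):            # what the message should achieve
    return "encourage" if ctx.steps_today < ctx.goal else "reinforce"

def content(ctx, intent):      # what to say
    gap = max(ctx.goal - ctx.steps_today, 0)
    return (f"a short walk of about {gap} steps would reach your goal"
            if intent == "encourage" else "you have already reached your goal")

def presentation(ctx, text):   # how to say it (user targeting)
    opener = "Good evening." if ctx.time_of_day == "evening" else "Hello!"
    return f"{opener} {text.capitalize()}." if ctx.prefers_formal else f"{text}!"

ctx = Context(steps_today=6200, goal=8000, time_of_day="evening", prefers_formal=True)
print(presentation(ctx, content(ctx, intention(ctx))))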
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masi, K; Ditman, M; Marsh, R
Purpose: There is potentially wide variation in plan quality for a given disease site, even among clinics located in the same hospital system. We have used a prostate-specific knowledge-based planning (KBP) model as a quality control tool to investigate the variation in prostate treatment planning across a network of affiliated radiation oncology departments. Methods: A previously created KBP model was applied to 10 patients each from 4 community-based clinics (Clinics A, B, C, and D). The KBP model was developed using RapidPlan (Eclipse v13.5, Varian Medical Systems) from 60 prostate/prostate bed IMRT plans that were originally planned using an in-house treatment planning system at the central institution of the community-based clinics. The dosimetric plan quality (target coverage and normal-tissue sparing) of each model-generated plan was compared to the respective clinically used plan. Each community-based clinic utilized the same planning goals to develop the clinically used plans as were used at the main institution. Results: Across all 4 clinics, the model-generated plans decreased the mean dose to the rectum by varying amounts (on average, 12.5, 2.6, 4.5, and 2.7 Gy for Clinics A, B, C, and D, respectively). The mean dose to the bladder also decreased with the model-generated plans (5.4, 2.3, 3.0, and 4.1 Gy, respectively). The KBP model also identified that target coverage (D95%) improvements were possible for Clinics A, B, and D (0.12, 1.65, and 2.75%), while target coverage decreased by 0.72% for Clinic C, demonstrating potentially different trade-offs made in clinical plans at different institutions. Conclusion: Quality control of dosimetric plan quality across a system of radiation oncology practices is possible with knowledge-based planning. By using a quality KBP model, smaller community-based clinics can potentially identify the areas of their treatment plans that may be improved, whether in normal-tissue sparing or in target coverage. M. Matuszak has research funding for KBP from Varian Medical Systems.
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the unit operation cost increment and the curtailed wind power of the grid as dual modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for its solution. The proposed rolling scheduling model further improves the system operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies key transmission-section power flows in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model noticeably guides the direction of antibody evolution, significantly speeds up convergence to the final solution, and enhances local search capability.
GenePRIMP: A Gene Prediction Improvement Pipeline For Prokaryotic Genomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyrpides, Nikos C.; Ivanova, Natalia N.; Pati, Amrita
2010-07-08
GenePRIMP (Gene Prediction Improvement Pipeline, http://geneprimp.jgi-psf.org) is a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies including inconsistent start sites, missing genes, and split genes. We show that manual curation of gene models using the anomaly reports generated by GenePRIMP improves their quality, and we demonstrate the applicability of GenePRIMP in improving finishing quality and in comparing different genome sequencing and annotation technologies. Keywords in context: gene model, quality control, translation start sites, automatic correction. Hardware requirements: PC, Mac; Operating system: UNIX/Linux; Compiler/version: Perl 5.8.5 or higher; Special requirements: NCBI BLAST and nr installation; File types: source code, executable module(s), sample problem input data, installation instructions, programmer documentation. Location/transmission: http://geneprimp.jgi-psf.org/gp.tar.gz
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
A statistical shape model of the human second cervical vertebra.
Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon
2015-07-01
Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented, anonymized patient computerized tomography (CT) scans. The different datasets are aligned by Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which captures the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open-source software package for image analysis and scientific visualization), with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
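The PCA step at the core of such a model can be sketched as follows (a simplified illustration that assumes correspondence and alignment have already been established; random arrays stand in for the 92 segmented scans):

import numpy as np

def build_ssm(shapes):
    """shapes: (n_shapes, 3*n_points) array of aligned, corresponding vertices."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the data matrix yields the principal modes and their variances.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (shapes.shape[0] - 1)
    return mean, vt, variances

def sample_shape(mean, modes, variances, coeffs):
    """Generate a shape from standardized mode coefficients (e.g. +/-2 SD)."""
    return mean + (coeffs * np.sqrt(variances)) @ modes

shapes = np.random.rand(92, 3 * 500)   # stand-in for 92 aligned C2 meshes
mean, modes, var = build_ssm(shapes)
new_shape = sample_shape(mean, modes, var,
                         np.array([2.0] + [0.0] * (len(var) - 1)))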
Autonomous learning in humanoid robotics through mental imagery.
Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo
2013-05-01
In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Solar thermoelectric generator
Toberer, Eric S.; Baranowski, Lauryn L.; Warren, Emily L.
2016-05-03
Solar thermoelectric generators (STEGs) are solid-state heat engines that generate electricity from concentrated sunlight. A novel detailed balance model for STEGs is provided and applied to both state-of-the-art and idealized materials. STEGs can produce electricity by using sunlight to heat one side of a thermoelectric generator. While concentrated sunlight can be used to achieve extremely high temperatures (and thus improved generator efficiency), the solar absorber also emits a significant amount of blackbody radiation. This emitted light is the dominant loss mechanism in these generators. In this invention, we propose a solution to this problem that eliminates virtually all of the emitted blackbody radiation. This enables solar thermoelectric generators to operate at higher efficiency and to achieve that efficiency with lower levels of optical concentration. The solution is suitable for both single- and dual-axis solar thermoelectric generators.
NASA Astrophysics Data System (ADS)
Piggott, Alfred J., III
With increased public interest in protecting the environment, scientists and engineers aim to improve energy conversion efficiency. Thermoelectrics offer many advantages as a thermal management technology. When compared to vapor compression refrigeration above approximately 200 to 600 watts, neither cost in dollars per watt nor COP is advantageous for thermoelectrics. The goal of this work was to determine whether optimized pulse supercooling operation could improve the cooling capacity or efficiency of a thermoelectric device. The basis of this research is a thermal-electrical analogy based modeling study using SPICE. Two models were developed: the first, a standalone thermocouple with no attached mass to be cooled; the second, a system comprising a module attached to a heat-generating mass. In the thermocouple study, a new approach of generating response surfaces with characteristic parameters was applied. The current pulse height and pulse on-time were identified for maximizing Net Transient Advantage, a newly defined metric. The corresponding pulse height and pulse on-time were used in the system model. Along with the traditional steady-state starting current Imax, Iopt was employed. The pulse shape was an isosceles triangle. For the system model, metrics new to pulse cooling were Qc, power consumption and COP. The effects of optimized current pulses were studied by changing system variables. Further studies explored the time spacing between pulses and the temperature distribution in the thermoelement. It was found that net Qc over an entire pulse event can be improved over steady Imax operation but not over steady Iopt operation. Qc can be improved over Iopt operation, but only during the early part of the pulse event. COP is reduced in transient pulse operation due to the different time constants of Qc and Pin. In some cases lower-performance interface materials allow more Qc and better COP during transient operation than higher-performance interface materials. Important future work might look at developing innovative ways of biasing Joule heat toward Th.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets in CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on clustering of the server models aided by a community detection method applied to a server-model network. Restraining energy terms generated from the selected templates, together with physical and statistical energy terms, were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server-model screening: the average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function, along with our optimization method, guided us to generate models of better quality than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
Oh, Hong-Choon; Toh, Hong-Guan; Giap Cheong, Eddy Seng
2011-11-01
Using the classical process improvement framework of Plan-Do-Study-Act (PDSA), the diagnostic radiology department of a tertiary hospital identified several patient cycle time reduction strategies. Experimenting with these strategies (which included procurement of new machines, hiring of new staff, redesign of the queue system, etc.) through pilot-scale implementation was impractical because it might incur substantial expenditure or be operationally disruptive. With this in mind, simulation modeling was used to test these strategies via "what if" analyses. Using the output generated by the simulation model, the team was able to identify a cost-free cycle time reduction strategy, which subsequently led to a reduction of patient cycle time and achievement of a management-defined performance target. As healthcare professionals work continually to improve healthcare operational efficiency in response to rising healthcare costs and patient expectations, simulation modeling offers an effective scientific framework that can complement established process improvement frameworks like PDSA to realize healthcare process enhancement. © 2011 National Association for Healthcare Quality.
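As an illustration of this kind of "what if" analysis (a toy sketch, not the department's actual model; all parameter values are invented), a discrete-event simulation can compare mean cycle times under alternative resource levels without disrupting real operations:

import random
import simpy

def patient(env, scanners, times):
    arrival = env.now
    with scanners.request() as req:         # queue for a scanner
        yield req
        yield env.timeout(random.expovariate(1 / 15.0))  # ~15 min exam
    times.append(env.now - arrival)         # total cycle time

def run(n_scanners, mean_interarrival, horizon=8 * 60):
    random.seed(0)
    env = simpy.Environment()
    scanners = simpy.Resource(env, capacity=n_scanners)
    times = []

    def arrivals():
        while True:
            yield env.timeout(random.expovariate(1 / mean_interarrival))
            env.process(patient(env, scanners, times))

    env.process(arrivals())
    env.run(until=horizon)
    return sum(times) / len(times)

for n in (2, 3):   # what-if: add one scanner
    print(n, "scanners -> mean cycle time", round(run(n, 10.0), 1), "min")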
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next-generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting the electrochemical performance and the thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification (including under aging), and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Many Molecular Properties from One Kernel in Chemical Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Corresponding molecular reference properties provided, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112,000 organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.
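The idea of reusing one kernel for many properties can be sketched with kernel ridge regression: the kernel matrix is factorized once and the solve is repeated per property column. Descriptors and property values below are random placeholders, not the paper's molecular representations.

import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))    # stand-in molecular descriptors
Y_train = rng.normal(size=(200, 5))     # 5 properties per molecule

K = gaussian_kernel(X_train, X_train, sigma=10.0)
K[np.diag_indices_from(K)] += 1e-8      # regularization
L = np.linalg.cholesky(K)
# One factorization serves all properties: solve K alpha_p = y_p per column.
alphas = np.linalg.solve(L.T, np.linalg.solve(L, Y_train))

X_test = rng.normal(size=(10, 30))
predictions = gaussian_kernel(X_test, X_train, sigma=10.0) @ alphas  # (10, 5)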
CheckMATE 2: From the model to the limit
NASA Astrophysics Data System (ADS)
Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten
2017-12-01
We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models are rarely identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
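A minimal sketch of selection without weighting (illustrative only; the paper's models and statistics are far richer than this 1-D example) uses acceptance-rejection so the retained virtual patients match an observed distribution:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Plausible virtual patients: model output summarized by one biomarker.
candidates = rng.uniform(low=0.0, high=10.0, size=50_000)

target = norm(loc=5.0, scale=1.0)       # observed clinical distribution
density = target.pdf(candidates)
accept_prob = density / density.max()   # scale so the max probability is 1
selected = candidates[rng.uniform(size=candidates.size) < accept_prob]

print(selected.size, selected.mean(), selected.std())  # approx N(5, 1)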
Temporal Subtraction of Digital Breast Tomosynthesis Images for Improved Mass Detection
2009-11-01
imaging using two distinct methods [7-15]: mathematically based models defined by geometric primitives and voxelized models derived from real human... trees to complete them. We also plan to add further detail by defining the Cooper's ligaments using geometrical NURBS surfaces. Realistic... "generated model for the coronary arterial tree based on multislice CT and morphometric data," Medical Imaging 2006: Physics of Medical Imaging 6142.
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.
NASA Astrophysics Data System (ADS)
Hasan, T.; Kang, Y.-S.; Kim, Y.-J.; Park, S.-J.; Jang, S.-Y.; Hu, K.-Y.; Koop, E. J.; Hinnen, P. C.; Voncken, M. M. A. J.
2016-03-01
Advancement of next-generation technology nodes and emerging memory devices demands tighter lithographic focus control. Although the leveling performance of the latest-generation scanners is state of the art, challenges remain at the wafer edge due to large process variations. There are several customer-configurable leveling control options available in ASML scanners, some of which are application-specific in their scope of leveling improvement. In this paper, we assess the usability of leveling non-correctable error models to identify yield-limiting edge dies. We introduce a novel dies-in-spec based holistic methodology for leveling optimization to guide tool users in selecting an optimal configuration of leveling options. Significant focus gain, and consequently yield gain, can be achieved with this integrated approach. The Samsung site in Hwaseong observed improved edge focus performance in production of a mid-end memory product layer running on an ASML NXT 1960 system. A 50% improvement in focus and a 1.5%p gain in edge yield were measured with the optimized configurations.
Perspective: Markov models for long-timescale biomolecular dynamics.
Schwantes, C R; McGibbon, R T; Pande, V S
2014-09-07
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
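The core estimation step behind such Markov models can be sketched in a few lines: discretize the trajectory into conformational states, count transitions at a chosen lag time, and row-normalize (real pipelines, e.g. in the MSMBuilder/PyEMMA style, add clustering, lag-time selection, and reversibility constraints on top of this). The toy trajectory below assumes every state has at least one outgoing transition.

import numpy as np

def estimate_msm(state_traj, n_states, lag):
    counts = np.zeros((n_states, n_states))
    for i, j in zip(state_traj[:-lag], state_traj[lag:]):
        counts[i, j] += 1
    # Row-normalize counts into transition probabilities.
    return counts / counts.sum(axis=1, keepdims=True)

traj = np.array([0, 0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 0])
T = estimate_msm(traj, n_states=3, lag=1)
print(T)                                   # each row sums to 1
# Implied timescales follow from the eigenvalues of T.
print(np.sort(np.linalg.eigvals(T).real)[::-1])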
Lip-reading enhancement for law enforcement
NASA Astrophysics Data System (ADS)
Theobald, Barry J.; Harvey, Richard; Cox, Stephen J.; Lewis, Colin; Owen, Gari P.
2006-09-01
Accurate lip-reading techniques would be of enormous benefit for agencies involved in counter-terrorism and other law-enforcement areas. Unfortunately, there are very few skilled lip-readers, and it is apparently a difficult skill to transmit, so the area is under-resourced. In this paper we investigate the possibility of making the lip-reading task more amenable to a wider range of operators by enhancing lip movements in video sequences using active appearance models. These are generative, parametric models commonly used to track faces in images and video sequences. The parametric nature of the model allows a face in an image to be encoded in terms of a few tens of parameters, while the generative nature allows faces to be re-synthesised from those parameters. The aim of this study is to determine whether exaggerating lip motions in video sequences by amplifying the parameters of the model improves lip-reading ability. We also present results of lip-reading tests undertaken by experienced (but non-expert) adult subjects who claim to use lip-reading in their speech recognition process. The results, which compare word error rates on unprocessed and processed video, are mixed. We find that there appears to be potential to improve the word error rate, but for the method to improve intelligibility there is a need for more sophisticated tracking and visual modelling. Our technique can also act as an expression or visual gesture amplifier and so has applications to animation and the presentation of information via avatars or synthetic humans.
An Update on Improvements to NiCE Support for PROTEUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay
2015-09-01
The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application, PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel, finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report will detail the work done to improve existing PROTEUS workflow support in NiCE. We will demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.
A quasi-static model of global atmospheric electricity. I - The lower atmosphere
NASA Technical Reports Server (NTRS)
Hays, P. B.; Roble, R. G.
1979-01-01
A quasi-steady model of global lower atmospheric electricity is presented. The model considers thunderstorms as dipole electric generators that can be randomly distributed in various regions and that are the only source of atmospheric electricity and includes the effects of orography and electrical coupling along geomagnetic field lines in the ionosphere and magnetosphere. The model is used to calculate the global distribution of electric potential and current for model conductivities and assumed spatial distributions of thunderstorms. Results indicate that large positive electric potentials are generated over thunderstorms and penetrate to ionospheric heights and into the conjugate hemisphere along magnetic field lines. The perturbation of the calculated electric potential and current distributions during solar flares and subsequent Forbush decreases is discussed, and future measurements of atmospheric electrical parameters and modifications of the model which would improve the agreement between calculations and measurements are suggested.
Band, Leah R.; Fozard, John A.; Godin, Christophe; Jensen, Oliver E.; Pridmore, Tony; Bennett, Malcolm J.; King, John R.
2012-01-01
Over recent decades, we have gained detailed knowledge of many processes involved in root growth and development. However, with this knowledge come increasing complexity and an increasing need for mechanistic modeling to understand how those individual processes interact. One major challenge is in relating genotypes to phenotypes, requiring us to move beyond the network and cellular scales, to use multiscale modeling to predict emergent dynamics at the tissue and organ levels. In this review, we highlight recent developments in multiscale modeling, illustrating how these are generating new mechanistic insights into the regulation of root growth and development. We consider how these models are motivating new biological data analysis and explore directions for future research. This modeling progress will be crucial as we move from a qualitative to an increasingly quantitative understanding of root biology, generating predictive tools that accelerate the development of improved crop varieties. PMID:23110897
Distributed generation capabilities of the national energy modeling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaCommare, Kristina Hamachi; Edwards, Jennifer L.; Marnay, Chris
2003-01-01
This report describes Berkeley Lab's exploration of how the National Energy Modeling System (NEMS) models distributed generation (DG) and presents possible approaches for improving how DG is modeled. The on-site electric generation capability has been available since the AEO2000 version of NEMS. Berkeley Lab has previously completed research on distributed energy resources (DER) adoption at individual sites and has developed a DER Customer Adoption Model called DER-CAM. Given interest in this area, Berkeley Lab set out to understand how NEMS models small-scale on-site generation, to assess how adequately DG is treated in NEMS, and to propose improvements or alternatives. The goal is to determine how well NEMS models the factors influencing DG adoption and to consider alternatives to the current approach. Most small-scale DG adoption takes place in the residential and commercial modules of NEMS. Investment in DG ultimately offsets purchases of electricity, which also eliminates the losses associated with transmission and distribution (T&D). If the DG technology that is chosen is photovoltaics (PV), NEMS assumes renewable energy consumption replaces the energy input to electric generators. If the DG technology is fuel-consuming, consumption of fuel in the electric utility sector is replaced by residential or commercial fuel consumption. The waste heat generated from thermal technologies can be used to offset water heating and space heating energy uses, but there is no thermally activated cooling capability. This study consists of a review of model documentation and a paper by EIA staff, a series of sensitivity runs performed by Berkeley Lab that exercise selected DG parameters in the AEO2002 version of NEMS, and a scoping effort of possible enhancements and alternatives to NEMS's current DG capabilities. In general, the treatment of DG in NEMS is rudimentary. The penetration of DG is determined by an economic cash-flow analysis that determines adoption based on the number of years to a positive cash flow. Some important technologies, e.g. thermally activated cooling, are absent, and ceilings on DG adoption are determined by somewhat arbitrary caps on the number of buildings that can adopt DG. These caps are particularly severe for existing buildings, where the maximum penetration for any one technology is 0.25 percent. On the other hand, competition among technologies is not fully considered, and this may result in double-counting for certain applications. A series of sensitivity runs shows greater penetration with net metering enhancements and aggressive tax credits, and a more limited response to lowered DG technology costs. Discussion of alternatives to the current code is presented in Section 4. Alternatives or improvements to how DG is modeled in NEMS cover three basic areas: expanding the existing total market for DG, both by changing existing parameters in NEMS and by adding new capabilities, such as for missing technologies; enhancing the cash-flow analysis by incorporating aspects of DG economics that are not currently represented, e.g. complex tariffs; and using an external geographic information system (GIS) driven analysis that can better and more intuitively identify niche markets.
NASA Astrophysics Data System (ADS)
Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier
2018-02-01
This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the methodology used to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid-specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field whose spatial variability is in better agreement with observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters the downscaling can induce better overall performance compared to the standard version on both the high-resolution grid and the native grid. Foreseen applications of this new model feature include improved ice sheet model coupling and high-resolution land surface models.
Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J
2013-08-01
Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model with regional-scale applicability, based on the River Invertebrate Prediction and Classification System approach. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully built and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
Kao, Shih-Chieh; Sale, Michael J.; Ashfaq, Moetasim; ...
2014-12-18
Federal hydropower plants account for approximately half of installed US conventional hydropower capacity, and are an important part of the national renewable energy portfolio. Utilizing the strong linear relationship between US Geological Survey WaterWatch runoff and annual hydropower generation, a runoff-based assessment approach is introduced in this study to project changes in annual and regional hydropower generation in multiple power marketing areas. Future climate scenarios are developed with a series of global and regional climate models, and the model output is bias-corrected to be consistent with observed data for the recent past. Using this approach, the median decrease in annual generation at federal projects is projected to be less than -2 TWh, with an estimated ensemble uncertainty of ±9 TWh. Although these estimates are similar to the recently observed variability in annual hydropower generation, and may therefore appear to be manageable, significant seasonal runoff changes are projected and may pose significant challenges in water systems with higher limits on reservoir storage and operational flexibility. Lastly, future assessments will be improved by incorporating next-generation climate models, by closer examination of extreme events and longer-term change, and by addressing the interactions among hydropower and other water uses.
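In its simplest form, the runoff-based approach reduces to a linear fit of annual generation against annual runoff, applied to bias-corrected climate-model runoff; the sketch below uses synthetic placeholder numbers, not WaterWatch or plant data.

import numpy as np

# Historical annual runoff (mm) and generation (TWh) for one marketing area.
runoff_hist = np.array([310., 275., 340., 290., 360., 300., 330.])
gen_hist = np.array([28.5, 25.1, 31.0, 26.4, 32.8, 27.3, 30.2])

slope, intercept = np.polyfit(runoff_hist, gen_hist, deg=1)

# Bias-corrected future runoff from an ensemble of climate models.
runoff_future = np.array([295., 285., 305., 270.])
gen_future = slope * runoff_future + intercept

delta = np.median(gen_future) - gen_hist.mean()
print(f"projected median change in annual generation: {delta:+.1f} TWh")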
Oribe-Garcia, Iraia; Kamara-Esteban, Oihane; Martin, Cristina; Macarulla-Arenaza, Ana M; Alonso-Vicario, Ainhoa
2015-05-01
The planning of waste management strategies needs tools to support decisions at all stages of the process. Accurate quantification of the waste to be generated is essential both for daily management (short-term) and for the proper design of facilities (long-term). Designing without rigorous knowledge may have serious economic and environmental consequences. The present work aims at identifying relevant socio-economic features of municipalities regarding Household Waste (HW) generation by means of factor models. Factor models face two main drawbacks: data collection and identifying relevant explanatory variables within a heterogeneous group. Grouping observations with similar characteristics may favour the deduction of more robust models. The methodology has been tested on the Biscay Province because it stands out for having very different municipalities, ranging from very rural to urban ones. Two main models are developed, one for the overall province and a second one after clustering the municipalities. The results prove that relating municipalities with specific characteristics improves the results in a very heterogeneous situation. The methodology has identified urban morphology, tourism activity, level of education and economic situation as the most influential characteristics in HW generation. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
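The cluster-then-model strategy can be sketched as follows (feature names, coefficients and values are invented placeholders, not the Biscay dataset):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Columns: urban-morphology index, tourism activity, education level, income.
X = rng.normal(size=(112, 4))
waste = 50 + X @ np.array([8.0, 5.0, -3.0, 4.0]) + rng.normal(0, 2, 112)

# Group similar municipalities first, then fit a factor model per cluster.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for c in range(3):
    mask = clusters == c
    model = LinearRegression().fit(X[mask], waste[mask])
    print(f"cluster {c}: n={mask.sum()}, "
          f"R^2={model.score(X[mask], waste[mask]):.2f}")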
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience, where attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it with published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains state-of-the-art performance.
Lambert-Girard, Simon; Allard, Martin; Piché, Michel; Babin, François
2015-04-01
The development of a novel broadband and tunable optical parametric generator (OPG) is presented. The OPG properties are studied numerically and experimentally in order to optimize the generator's use in a broadband spectroscopic LIDAR operating in the short and mid-infrared. This paper discusses trade-offs to be made on the properties of the pump, crystal, and seeding signal in order to optimize the pulse spectral density and divergence while enabling energy scaling. A seed with a large spectral bandwidth is shown to enhance the pulse-to-pulse stability and optimize the pulse spectral density. A numerical model shows excellent agreement with output power measurements; the model predicts that a pump having a large number of longitudinal modes improves conversion efficiency and pulse stability.
NASA Astrophysics Data System (ADS)
Zhang, Xia; Niu, Guo-Yue; Elshall, Ahmed S.; Ye, Ming; Barron-Gafford, Greg A.; Pavao-Zuckerman, Mitch
2014-09-01
Soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect") are poorly understood. We developed and assessed five evolving microbial enzyme models against field measurements from a semiarid savannah characterized by pulsed precipitation to understand the mechanisms to generate the Birch pulses. The five models evolve from an existing four-carbon (C) pool model to models with additional C pools and explicit representations of soil moisture controls on C degradation and microbial uptake rates. Assessing the models using techniques of model selection and model averaging suggests that models with additional C pools for accumulation of degraded C in the dry zone of the soil pore space result in a higher probability of reproducing the observed Birch pulses. Degraded C accumulated in dry soil pores during dry periods becomes immediately accessible to microbes in response to rainstorms, providing a major mechanism to generate respiration pulses. Explicitly representing the transition of degraded C and enzymes between dry and wet soil pores in response to soil moisture changes and soil moisture controls on C degradation and microbial uptake rates improve the models' efficiency and robustness in simulating the Birch effect. Assuming that enzymes in the dry soil pores facilitate degradation of complex C during dry periods (though at a lower rate) results in a greater accumulation of degraded C and thus further improves the models' performance. However, the actual mechanism inducing the greater accumulation of labile C needs further experimental studies.
Wellness in the healing ministry.
Burke, B K
1993-09-01
Wellness has gained a foothold in most healthcare delivery systems because of its focus: Keeping people well is the ultimate goal of a healthcare system. Three generations of wellness models have evolved over the past 16 years. First-generation wellness efforts focus on reducing health risks. Hospitals have developed programs and services to improve customers' and employees' health status. And corporations are lowering health risks by offering employees worksite fitness centers, cholesterol and other screenings, and smoking-cessation programs. Second-generation wellness efforts link wellness to benefits. Hospitals and corporations have implemented health incentive programs, structured to reward people for maintaining low ranges in their controllable health risk factors. Third-generation wellness efforts show that connectedness can improve health. Such efforts emphasize the importance of spiritual and emotional well-being as an inextricable part of physical health and healing. Today prayer and support groups, guided imagery, and prayerful meditation are becoming more mainstream. Such wellness approaches encourage persons to think and care for themselves more holistically.
NASA Astrophysics Data System (ADS)
Liu, Tongjun; Wang, Tongcai; Luan, Weiling; Cao, Qimin
2017-05-01
Waste heat recovery through thermoelectric generators is a promising way to improve energy conversion efficiency. This paper proposes a heat pipe assisted thermoelectric generator (HP-TEG) system. The expandable evaporator and condenser surfaces of the heat pipe facilitate the dense assembly of thermoelectric (TE) modules into a compact device. Compared with a conventional layered-structure thermoelectric generator, this system allows the installation of more TE couples, thus increasing power output. To investigate the performance of the HP-TEG and the optimal number of TE couples, a theoretical model was presented and verified by experimental results. Further theoretical analysis showed that the performance of the HP-TEG could be improved by optimizing parameters including the inlet air temperature, the thermal resistance of the heating section, and the thermal resistance of the cooling structure. Moreover, applying a proper number of TE couples is important to acquire the best power output performance.
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
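Error breeding, the diagnostic named above, can be sketched on a toy compartmental model: run a control and a slightly perturbed simulation side by side, and periodically rescale their difference so it converges toward the fastest-growing error direction. The sketch below uses a plain SIR model with invented parameters, not the authors' influenza forecast model:

```python
import numpy as np

def sir_step(x, beta=0.6, gamma=0.25, dt=0.1):
    """One Euler step of a simple SIR model, state x = (S, I)."""
    s, i = x
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    return np.array([s + dt * ds, i + dt * di])

def breed(x0, n_cycles=50, steps_per_cycle=10, delta0=1e-4, rng=None):
    """Breeding: evolve a control and a perturbed run, rescaling their
    difference each cycle so it aligns with the fastest-growing errors."""
    rng = rng or np.random.default_rng(0)
    ctrl = x0.copy()
    pert = x0 + delta0 * rng.standard_normal(2)
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):
            ctrl, pert = sir_step(ctrl), sir_step(pert)
        diff = pert - ctrl
        pert = ctrl + delta0 * diff / np.linalg.norm(diff)  # rescale
    return diff / np.linalg.norm(diff)  # bred (structural error) direction

bv = breed(np.array([0.95, 0.05]))
print("bred vector (S, I components):", bv.round(3))
```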
AFQT Score Forecasting Models for Regional Estimation of Qualified Military Available
1990-06-01
century with the work of the British scholar Sir Francis Galton. Galton believed that genetics determined mental ability and, in his well-known book... entitled Hereditary Genius (1869), he concluded that success ran in families because great intelligence was passed from generation to generation through... for developing such a testing system was in contrast to that of Galton, for it suggested that the mental ability of subnormal children could improve
Feasibility survey of thermoelectric conversion technology using semiconductors
NASA Astrophysics Data System (ADS)
1993-03-01
This paper focuses on thermoelectric conversion technology using semiconductors and surveys it across a wide range from high to low temperatures to assess its feasibility. Among Bi-Te alloy elements applicable to a temperature range of around 200 °C, some exceed 3.5×10⁻³ K⁻¹ in performance index, and element performance can be practically improved in the near future. A thermoelectric power generation system using waste heat from a fuel cell power plant, at 5-6% conversion efficiency, can generate more than 100 kW of output and is expected to improve overall plant efficiency by approximately 1%. The construction cost, however, is around 1.6-1.9 million yen/kW. A thermoelectric power generation plant modeled on the No. 2 generator of the Hatchobaru geothermal power plant can generate 10-12.5 MW of electric output, which is smaller than that of conventional geothermal power generation. The construction cost is around 3.2-4.1 million yen/kW. Even when the system's running-cost advantage is considered, attractive systematization seems difficult.
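For context, the quoted performance index can be converted to an ideal conversion efficiency with the standard figure-of-merit formula eta = (dT/Th) * (sqrt(1 + Z*Tm) - 1) / (sqrt(1 + Z*Tm) + Tc/Th), evaluated at the mean temperature Tm. The sketch below assumes hot- and cold-side temperatures for a roughly 200 °C application; it ignores geometry and contact losses:

```python
import math

def te_max_efficiency(t_hot, t_cold, z):
    """Ideal maximum thermoelectric conversion efficiency for figure of
    merit z (1/K), evaluated at the mean temperature (standard formula)."""
    t_mean = 0.5 * (t_hot + t_cold)
    m = math.sqrt(1.0 + z * t_mean)
    carnot = (t_hot - t_cold) / t_hot
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Bi-Te element near the 200 degC range quoted in the survey (temperatures
# in kelvin are assumed, not taken from the paper):
eta = te_max_efficiency(t_hot=473.0, t_cold=300.0, z=3.5e-3)
print(f"ideal efficiency: {eta:.1%}")   # about 9% for these assumptions
```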
Object Detection from MMS Imagery Using Deep Learning for Generation of Road Orthophotos
NASA Astrophysics Data System (ADS)
Li, Y.; Sakamoto, M.; Shinohara, T.; Satoh, T.
2018-05-01
In recent years, extensive research has been conducted to automatically generate high-accuracy and high-precision road orthophotos using images and laser point cloud data acquired from a mobile mapping system (MMS). However, it is necessary to mask out non-road objects such as vehicles, bicycles, pedestrians and their shadows in MMS images in order to eliminate erroneous textures from the road orthophoto. Hence, we proposed a novel vehicle-and-shadow detection model based on Faster R-CNN for automatically and accurately detecting the regions of vehicles and their shadows in MMS images. The experimental results show that the maximum recall of the proposed model was high (0.963 at intersection-over-union > 0.7) and that the model could identify the regions of vehicles and their shadows accurately and robustly in MMS images, even those containing varied vehicles, different shadow directions, and partial occlusions. Furthermore, it was confirmed that the quality of road orthophotos generated using vehicle-and-shadow masks was significantly improved compared to those generated using no masks or vehicle masks only.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...
2016-02-24
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity, and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
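The two data structures named above can be shown schematically. The sketch below is not RMG's actual API; it only illustrates the idea of a molecule as a labelled graph and group-additivity data as a tree searched from general to specific nodes (all names and values are hypothetical):

```python
# Illustrative only -- not RMG's actual data structures or API.
# A molecule as a graph: atoms are nodes, bonds are labelled edges.
ethanol = {
    "atoms": {1: "C", 2: "C", 3: "O"},            # heavy atoms only
    "bonds": {(1, 2): "single", (2, 3): "single"},
}

# Group-additivity data as a tree: matching descends from general to
# specific nodes, returning the most specific value found.
group_tree = {
    "C": {"value": -5.0,                          # hypothetical kcal/mol
          "children": {"C-OH": {"value": -41.0, "children": {}}}},
}

def lookup(tree, path):
    """Walk the tree along `path`, returning the deepest matching value."""
    node, value = tree, None
    for label in path:
        if label not in node:
            break
        value = node[label]["value"]
        node = node[label]["children"]
    return value

print(lookup(group_tree, ["C", "C-OH"]))  # -> -41.0
```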
PconsFold: improved contact predictions improve protein models.
Michel, Mirco; Hayat, Sikander; Skwark, Marcin J; Sander, Chris; Marks, Debora S; Elofsson, Arne
2014-09-01
Recently, it has been shown that the quality of protein contact prediction from evolutionary information can be improved significantly if direct and indirect information is separated. Given sufficiently large protein families, the contact predictions contain sufficient information to predict the structure of many protein families. However, since the first studies, contact prediction methods have improved. Here, we ask how much the final models are improved if improved contact predictions are used. In a small benchmark of 15 proteins, we show that the TM-scores of top-ranked models are improved by 33% on average using PconsFold compared with the original version of EVfold. In a larger benchmark, we find that the quality is improved by 15-30% when using PconsC in comparison with earlier contact prediction methods. Further, using Rosetta instead of CNS does not significantly improve global model accuracy, but the chemistry of models generated with Rosetta is improved. PconsFold is a fully automated pipeline for ab initio protein structure prediction based on evolutionary information. PconsFold is based on PconsC contact prediction and uses the Rosetta folding protocol. Due to its modularity, the contact prediction tool can be easily exchanged. The source code of PconsFold is available on GitHub at https://www.github.com/ElofssonLab/pcons-fold under the MIT license. PconsC is available from http://c.pcons.net/. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Blanc, Elisabeth; Le Pichon, Alexis; Ceranna, Lars; Pilger, Christoph; Charlton Perez, Andrew; Smets, Pieter
2016-04-01
The International Monitoring System (IMS) developed for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) provides a unique global description of atmospheric disturbances generating infrasound, such as extreme events (e.g. meteors, volcanoes, earthquakes, and severe weather) or human activity (e.g. explosions and supersonic airplanes). The analysis of the detected signals, recorded at global scales and over nearly 15 years at some stations, demonstrates that large-scale atmospheric disturbances strongly affect infrasound propagation. Their time scales vary from several tens of minutes to hours and days. Their effects are on average well resolved by current model predictions; however, accurate spatial and temporal description is lacking in both weather and climate models. This study reviews recent results using the infrasound technology to characterize these large-scale disturbances, including (i) wind fluctuations induced by gravity waves, generating infrasound partial reflections and modifications of the infrasound waveguide, (ii) convection from thunderstorms and mountain waves generating gravity waves, (iii) stratospheric warming events which yield wind inversions in the stratosphere, and (iv) planetary waves which control the global atmospheric circulation. Improved knowledge of these disturbances and their assimilation in future models is an important objective of the ARISE (Atmospheric dynamics Research InfraStructure in Europe) project. This is essential in the context of the future verification of the CTBT, as enhanced atmospheric models are necessary to assess the IMS network performance at higher resolution, reduce source location errors, and improve characterization methods.
The method for detecting small lesions in medical image based on sliding window
NASA Astrophysics Data System (ADS)
Han, Guilai; Jiao, Yuan
2016-10-01
At present, research on computer-aided diagnosis includes sample image segmentation, extraction of visual features, generation of a classification model by learning, and classification and judgment of inspected images according to the generated model. However, this approach is computationally expensive and slow. Moreover, because medical images usually have low contrast, traditional image segmentation methods often fail completely when applied to them. To find regions of interest as early as possible and improve detection speed, this work introduces the currently popular visual attention model into small lesion detection. However, the Itti model is designed mainly for natural images, and its effect is not ideal when applied to medical images, which are usually grayscale. Especially in the early stages of some cancers, the lesion is not the most salient region of the whole image and is sometimes very difficult to find, even though such lesions are prominent within local areas. This paper proposes a visual attention mechanism based on a sliding window, using the window to calculate the significance of each local area. Combined with the characteristics of lesions, the features of gray level, entropy, corners and edges are selected to generate a saliency map. The significant regions are then segmented and distinguished. This method reduces the difficulty of image segmentation, improves the detection accuracy of small lesions, and is of great significance for early discovery, diagnosis and treatment of cancers.
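One channel of such a sliding-window saliency map, local intensity entropy, can be sketched as follows. The window size, stride, and synthetic "lesion" are assumptions for illustration, and the paper additionally combines gray-level, corner, and edge features:

```python
import numpy as np

def entropy_saliency(img, win=16, stride=8):
    """Slide a window over a grey-level image and score each location by
    local intensity entropy; lesion-like regions tend to score high
    against homogeneous background tissue."""
    h, w = img.shape
    rows = (h - win) // stride + 1
    cols = (w - win) // stride + 1
    sal = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = img[r*stride:r*stride+win, c*stride:c*stride+win]
            hist, _ = np.histogram(patch, bins=32, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            sal[r, c] = -(p * np.log2(p)).sum()   # Shannon entropy
    return sal

# Synthetic demo: a faint, noisy bright blob on a dark background.
rng = np.random.default_rng(1)
img = rng.normal(40, 5, (128, 128))
img[60:76, 60:76] += rng.normal(30, 15, (16, 16))   # "lesion"
sal = entropy_saliency(img.clip(0, 255))
print("most salient window (row, col):",
      np.unravel_index(sal.argmax(), sal.shape))
```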
NASA Astrophysics Data System (ADS)
Klein, R.; Adler, A.; Beanlands, R. S.; de Kemp, R. A.
2007-02-01
A rubidium-82 (82Rb) elution system is described for use with positron emission tomography. Due to the short half-life of 82Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a 82Sr/82Rb generator and a bypass line to achieve a constant-activity elution of 82Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The 82Rb elution system produces accurate and reproducible constant-activity elution profiles of 82Rb activity, independent of parent 82Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using 82Rb.
On the Stator Slot Geometry of a Cable Wound Generator for Hydrokinetic Energy Conversion
Grabbe, Mårten; Leijon, Mats
2015-01-01
The stator slot geometry of a cable wound permanent magnet synchronous generator for hydrokinetic energy conversion is evaluated. Practical experience from winding two cable wound generators is used to propose optimized dimensions of the different parts of the stator slot geometry. A thorough investigation is performed through simulations of how small geometrical changes alter generator performance. The finite element method (FEM) is used to model the generator, and the simulations show that small changes in the geometry can have a large effect on the performance of the generator. Furthermore, it is concluded that the load angle is especially sensitive to small geometrical changes. A new generator design is proposed which shows improved efficiency, reduced weight, and the possibility of decreasing the expensive permanent magnet material by almost one-fifth. PMID:25879072
OIDDE Learning Model: Improving Higher Order Thinking Skills of Biology Teacher Candidates
ERIC Educational Resources Information Center
Husamah; Fatmawati, Diani; Setyawan, Dwi
2018-01-01
Amid the massive advancements of the 21st century, the role of education is to prepare generations to master the skills they need to face the challenges arising in their era. OIDDE is the abbreviation for Orientation, Identify, Discussion, Decision, and Engage in behaviour. The learning model designed by Hudha et al. (2016) is expected to be able to…
Specific loss power in superparamagnetic hyperthermia: nanofluid versus composite
NASA Astrophysics Data System (ADS)
Osaci, M.; Cacciola, M.
2017-01-01
Currently, magnetic hyperthermia induced by nanoparticles is of great interest for biomedical applications. The literature offers many models of magnetic hyperthermia, but many of them overlook a significant detail: the geometry of the nanoparticle positions in the system. Usually, a nanofluid is treated by considering random positions of the nanoparticles, a geometry that is actually characteristic of composite nanoparticles. To assess the error that is frequently made, in this paper we propose a comparative analysis between the specific loss power (SLP) of a nanofluid and the SLP of a composite with magnetic nanoparticles. We use a superparamagnetic hyperthermia model based on an improved model for calculating the Néel relaxation time in a magnetic field oblique to the nanoparticle magnetic anisotropy axes, and on an improved linear response theory (LRT) model for SLP. To generate the nanoparticle geometry in the system, we apply a Monte Carlo method to the nanofluid, minimising the interaction potentials in the liquid medium, and, for the composite, a method for generating random positions of the nanoparticles in a given volume.
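The composite case, random nanoparticle positions in a given volume, can be sketched with simple rejection sampling; this omits the potential-minimisation step the authors use for the nanofluid, and the box size and minimum separation are hypothetical:

```python
import numpy as np

def place_particles(n, box=500.0, d_min=20.0, rng=None, max_tries=10000):
    """Random sequential placement of nanoparticle centres in a cubic box
    (nm), rejecting candidates closer than d_min to any accepted centre --
    the 'random positions' geometry associated with composite samples."""
    rng = rng or np.random.default_rng(0)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        cand = rng.uniform(0.0, box, size=3)
        if all(np.linalg.norm(cand - p) >= d_min for p in pts):
            pts.append(cand)
        tries += 1
    return np.array(pts)

positions = place_particles(200)
print(positions.shape)  # (200, 3) if placement succeeded
```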
New techniques for positron emission tomography in the study of human neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1992-07-01
The general goals of the physics and kinetic modeling projects are to: (1) improve the quantitative information extractable from PET images, and (2) develop, implement and optimize tracer kinetic models for new PET neurotransmitter/receptor ligands aided by computer simulations. Work towards improving PET quantification has included projects evaluating: (1) iterative reconstruction algorithms using supplemental boundary information, (2) automated registration of dynamic PET emission and transmission data using sinogram edge detection, and (3) automated registration of multiple subjects to a common coordinate system, including the use of non-linear warping methods. Simulation routines have been developed providing more accurate representation of data generated from neurotransmitter/receptor studies. Routines consider data generated from complex compartmental models, high or low specific activity administrations, non-specific binding, pre- or post-injection of cold or competing ligands, temporal resolution of the data, and radiolabeled metabolites. Computer simulations and human PET studies have been performed to optimize kinetic models for four new neurotransmitter/receptor ligands: [11C]TRB (muscarinic), [11C]flumazenil (benzodiazepine), [18F]GBR12909 (dopamine), and [11C]NMPB (muscarinic).
A probabilistic drought forecasting framework: A combined dynamical and statistical approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Hongxiang; Moradkhani, Hamid; Zarekarizi, Mahkameh
In order to improve drought forecasting skill, this study develops a probabilistic drought forecasting framework comprised of dynamical and statistical modeling components. The novelty of this study is the use of data assimilation to quantify initial condition uncertainty with Monte Carlo ensemble members, rather than relying entirely on the hydrologic model or land surface model to generate a single deterministic initial condition, as currently implemented in operational drought forecasting systems. Next, the initial condition uncertainty quantified through data assimilation is coupled with a newly developed probabilistic drought forecasting model using a copula function. The initial conditions at each forecast start date are sampled from the data assimilation ensembles for forecast initialization. Finally, seasonal drought forecasting products are generated with the updated initial conditions. This study introduces the theory behind the proposed drought forecasting system, with an application in the Columbia River Basin, Pacific Northwest, United States. Results from both synthetic and real case studies suggest that the proposed system significantly improves seasonal drought forecasting skill and can facilitate state drought preparation and declaration at least three months before the official state drought declaration.
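A minimal version of the copula step can be sketched with a Gaussian copula: transform historical initial-condition and future drought indices to normal scores, estimate their correlation, and sample the future index conditional on the current (assimilation-derived) state. The data below are synthetic, and the paper's copula family and variables may differ:

```python
import numpy as np
from scipy import stats

def copula_forecast(x_hist, y_hist, x_now, n_samples=1000, rng=None):
    """Minimal Gaussian-copula conditional forecast: given historical pairs
    (initial-condition index x, future drought index y), sample y | x_now.
    Marginals are handled empirically via rank (probability) transforms."""
    rng = rng or np.random.default_rng(0)
    # Probability-integral transform via ranks, then map to normal scores.
    u = stats.rankdata(x_hist) / (len(x_hist) + 1)
    v = stats.rankdata(y_hist) / (len(y_hist) + 1)
    zx, zy = stats.norm.ppf(u), stats.norm.ppf(v)
    rho = np.corrcoef(zx, zy)[0, 1]
    # Conditional law in Gaussian space: zy | zx ~ N(rho*zx, 1 - rho^2).
    zx_now = stats.norm.ppf(stats.percentileofscore(x_hist, x_now) / 100.0)
    zy_samp = rho * zx_now + np.sqrt(1 - rho**2) * rng.standard_normal(n_samples)
    # Back-transform through the empirical quantile function of y.
    return np.quantile(y_hist, stats.norm.cdf(zy_samp))

rng = np.random.default_rng(2)
x = rng.normal(size=300)                   # e.g. assimilated soil-moisture index
y = 0.7 * x + 0.7 * rng.normal(size=300)   # future drought index (synthetic)
fc = copula_forecast(x, y, x_now=-1.5)
print("P(drought persists), i.e. P(y < -0.8):", (fc < -0.8).mean())
```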
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent, first-principles-based sub-models that work in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of the rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
Power Management and Distribution (PMAD) Model Development: Final Report
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990s to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early-1990s component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa-2005 components. The models are about ten years old now, and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post-2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.; ...
2017-03-23
We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for a possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7, in the case of CEM, and A ≤ 12, in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have an improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly in large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally utilizes the correlations among topics, which are seldom considered in conventional feature encodings such as the Fisher vector, but do exist in scene images. It is expected that incorporating correlations can increase the discriminative capability of the learned generative model and consequently improve recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential for processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.
iMODS: internal coordinates normal mode analysis server.
López-Blanco, José Ramón; Aliaga, José I; Quintana-Ortí, Enrique S; Chacón, Pablo
2014-07-01
Normal mode analysis (NMA) in internal (dihedral) coordinates naturally reproduces the collective functional motions of biological macromolecules. iMODS facilitates the exploration of such modes and generates feasible transition pathways between two homologous structures, even with large macromolecules. The distinctive internal coordinate formulation improves the efficiency of NMA and extends its applicability while implicitly maintaining stereochemistry. Vibrational analysis, motion animations and morphing trajectories can be easily carried out at different resolution scales almost interactively. The server is versatile; non-specialists can rapidly characterize potential conformational changes, whereas advanced users can customize the model resolution with multiple coarse-grained atomic representations and elastic network potentials. iMODS supports advanced visualization capabilities for illustrating collective motions, including an improved affine-model-based arrow representation of domain dynamics. The generated all-heavy-atoms conformations can be used to introduce flexibility for more advanced modeling or sampling strategies. The server is free and open to all users with no login requirement at http://imods.chaconlab.org.
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin-layer scanning technology of spiral CT has helped to improve the diagnosis of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the workload of physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, namely the geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to those of manual segmentation, with an average volume overlap ratio of 91.48%.
The Benefits of Internalizing Air Quality and Greenhouse Gas Externalities in the US Energy System
NASA Astrophysics Data System (ADS)
Brown, Kristen E.
The emission of pollutants from energy use has effects on both local air quality and the global climate, but the price of energy does not reflect these externalities. This study aims to analyze the effect that internalizing these externalities in the cost of energy would have on the US energy system, emissions, and human health. In this study, we model different policy scenarios in which fees are added to emissions related to the generation and use of energy. The fees are based on damage values estimated in the literature and are applied to upstream and combustion emissions related to electricity generation, industrial energy use, transportation energy use, residential energy use, and commercial energy use. The energy sources and emissions are modeled through 2055 in five-year time steps. The emissions in 2045 are incorporated into a continental-scale atmospheric chemistry and transport model, CMAQ, to determine the change in air quality due to different emissions reduction scenarios. A benefit analysis tool, BenMAP, is used with the air quality results to determine the monetary benefit of emissions reductions related to the improved air quality. We apply fees to emissions associated with health impacts, climate change, and a combination of both. We find that the fees we consider lead to reductions in targeted emissions as well as co-reductions of non-targeted emissions. For fees on the electric sector alone, health-impacting pollutant (HIP) emissions reductions are achieved mainly through control devices, while greenhouse gas (GHG) fees are addressed through changes in generation technologies. When sector-specific fees are added, reductions come mainly from the industrial and electricity generation sectors and are achieved through a mix of energy efficiency, increased use of renewables, and control devices. Air quality is improved in almost all areas of the country under the fees, including when only GHG fees are applied. Air quality tends to improve more in regions with larger emissions reductions, especially for PM2.5.
Research notes : improving freight data collection methods.
DOT National Transportation Integrated Search
2004-07-01
The overall goal of this study was to identify data collection methods capable of generating information at a level of detail that would better fill ODOT's modeling and freight planning needs at the metropolitan level. After a review of other r...
Improved Air Combat Awareness; with AESA and Next-Generation Signal Processing
2002-09-01
[Briefing-slide residue; recoverable content: competence-network topics (building techniques, software development environment, communication, computer architecture, modeling, real-time programming, radar) and signal-processor notes (direct memory access with skewed load and store at 3.2 GB/s bandwidth; performance of 400 MFLOPS; runtime environment with custom runtime routines, driver routines, and hardware).]
Development of a Next Generation Air Quality Modeling System
In the presentation we will describe our modifications to MPAS to improve its suitability for retrospective air quality applications and show evaluations of global and regional meteorological simulations. Our modifications include addition of physics schemes that we developed for...
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in the early diagnosis of skin cancer, or even to monitor skin lesions. However, improving classifiers for the diagnosis of such skin lesions remains a challenge. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied to a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model, which generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model, with very promising results.
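A minimal sketch of an ensemble built on feature-subset diversity with majority voting follows. Since the optimum-path forest classifier is not part of common libraries, logistic regression stands in for it here, and the data and feature grouping are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for dermoscopic features: columns 0-3 "shape",
# 4-7 "colour", 8-11 "texture" (the grouping is hypothetical).
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
groups = {"shape": range(0, 4), "colour": range(4, 8), "texture": range(8, 12)}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One base classifier per feature subset; logistic regression stands in
# for the optimum-path forest classifier used in the paper.
members = {}
for name, cols in groups.items():
    cols = list(cols)
    members[name] = (cols,
                     LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr))

# Majority vote across the subset-specific members.
votes = np.array([clf.predict(X_te[:, cols]) for cols, clf in members.values()])
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_hat == y_te).mean().round(3))
```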
On the Potential of a New Generation of Magnetometers for MEG: A Beamformer Simulation Study
Boto, Elena; Bowtell, Richard; Krüger, Peter; Fromhold, T. Mark; Morris, Peter G.; Meyer, Sofie S.; Barnes, Gareth R.; Brookes, Matthew J.
2016-01-01
Magnetoencephalography (MEG) is a sophisticated tool which yields rich information on the spatial, spectral and temporal signatures of human brain function. Despite unique potential, MEG is limited by a low signal-to-noise ratio (SNR) which is caused by both the inherently small magnetic fields generated by the brain, and the scalp-to-sensor distance. The latter is limited in current systems due to a requirement for pickup coils to be cryogenically cooled. Recent work suggests that optically-pumped magnetometers (OPMs) might be a viable alternative to superconducting detectors for MEG measurement. They have the advantage that sensors can be brought to within ~4 mm of the scalp, thus offering increased sensitivity. Here, using simulations, we quantify the advantages of hypothetical OPM systems in terms of sensitivity, reconstruction accuracy and spatial resolution. Our results show that a multi-channel whole-head OPM system offers (on average) a fivefold improvement in sensitivity for an adult brain, as well as clear improvements in reconstruction accuracy and spatial resolution. However, we also show that such improvements depend critically on accurate forward models; indeed, the reconstruction accuracy of our simulated OPM system only outperformed that of a simulated superconducting system in cases where forward field error was less than 5%. Overall, our results imply that the realisation of a viable whole-head multi-channel OPM system could generate a step change in the utility of MEG as a means to assess brain electrophysiological activity in health and disease. However in practice, this will require both improved hardware and modelling algorithms. PMID:27564416
The CAFE Experiment: A Joint Seismic and MT Investigation of the Cascadia Subduction System
2013-02-01
In this thesis we present results from inversion of data using dense arrays of collocated seismic and magnetotelluric stations located in the Cascadia... implicit in the standard MT inversion provides tools that enable us to generate a more accurate MT model. This final MT model clearly demonstrates... references within, Hacker, 2008) have given us the tools to better interpret geophysical evidence. Improvements in the thermal modeling of subduction zones
Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments
NASA Technical Reports Server (NTRS)
Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.
2008-01-01
In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also discussed is planned work to incorporate dielectrophoretic forces, van der Waals forces, and advanced mechanical forces into the software.
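A screened Coulombic pair force of the kind added to the DEM software can be sketched with a Yukawa-type potential; the charge values and screening length below are hypothetical placeholders, not the study's measured quantities:

```python
import numpy as np

def screened_coulomb_force(q1, q2, r_vec, screening_length=1e-3):
    """Screened (Yukawa-type) Coulomb force of particle 2 on particle 1,
    the kind of term added to the DEM contact forces. SI units; r_vec is
    the position of particle 1 minus that of particle 2."""
    k_e = 8.9875517923e9          # Coulomb constant, N m^2 / C^2
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    # Yukawa potential U = k q1 q2 exp(-r/L) / r  =>  F = -dU/dr * r_hat
    mag = k_e * q1 * q2 * np.exp(-r / screening_length) \
          * (1.0 / r**2 + 1.0 / (r * screening_length))
    return mag * r_hat            # repulsive (positive) for like charges

# Two like-charged glass spheres 0.5 mm apart (charges are assumed values):
f = screened_coulomb_force(2e-12, 2e-12, np.array([5e-4, 0.0, 0.0]))
print(f)  # force on particle 1, directed away from particle 2
```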
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The mechanics of a rocket motor's internal flow field are governed by a system of nonlinear partial differential equations that can be solved numerically. The accuracy and convergence of the solution of this system depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
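The solution-adaptive idea, refining the grid where the evolving solution has sharp gradients, can be sketched in one dimension; the tanh profile below merely stands in for a steep nozzle flow gradient, and the tolerance and pass count are assumptions:

```python
import numpy as np

def refine_grid(x, u, tol=0.05, max_pts=2000):
    """One pass of solution-adaptive refinement: insert a midpoint wherever
    the jump in u between neighbours exceeds tol, so sharp gradients
    (e.g. near a nozzle throat) get resolved."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if abs(u[i + 1] - u[i]) > tol and len(new_x) < max_pts:
            new_x.append(0.5 * (x[i] + x[i + 1]))   # refine this cell
        new_x.append(x[i + 1])
    return np.array(sorted(new_x))

# Demo on a steep tanh profile standing in for a sharp flow gradient.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh((x - 0.5) / 0.02)
for _ in range(3):                      # refine, re-evaluate, repeat
    x = refine_grid(x, u)
    u = np.tanh((x - 0.5) / 0.02)
print("points after refinement:", len(x))
```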
Earnest, G S; Mickelsen, R L; McCammon, J B; O'Brien, D M
1997-11-01
This study modeled the time required for a gasoline-powered, 5 horsepower (hp), 4-cycle engine to generate carbon monoxide (CO) concentrations exceeding the National Institute for Occupational Safety and Health 200-ppm ceiling and the 1200-ppm immediately dangerous to life and health concentration for various room sizes and ventilation rates. The model permitted the ambiguous term "well-ventilated area" to be defined. The model was compared with field data collected at a site where two workers were poisoned while operating a 5-hp concrete saw in a bathroom with open doors and an operating ventilation system. The modeled and field-generated data agree, indicating that hazardous CO concentrations can develop within minutes. Comparison of field and modeling data showed the measured CO generation rate at approximately one-half of the value used in the model, which may be partly because the engine used in the field was not under load during data collection. The generation rate and room size from the actual poisoning were then used in the model. The model determined that ventilation rates of nearly 5000 ft3/min (120 air changes per hour) would be required to prevent the CO concentration from exceeding the 200-ppm ceiling for short periods. The results suggest that small gasoline-powered engines should not be operated inside buildings or in semienclosed spaces and that manufacturers of such tools should improve their warnings and develop engineering control options for better user protection.
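The underlying model is presumably a well-mixed mass balance, V dC/dt = G - Q C, whose solution C(t) = (G/Q)(1 - exp(-Qt/V)) gives the time to reach a given CO concentration. The sketch below uses hypothetical generation and ventilation values, not the study's measured rates:

```python
import math

def time_to_ceiling(c_ppm, g_co=0.6, q_vent=2000.0, vol=30000.0):
    """Well-mixed room model: V dC/dt = G - Q*C.
    Time (min) for CO to reach c_ppm given CO generation g_co (L/min),
    ventilation q_vent (L/min) and room volume vol (L). The generation
    rate is a hypothetical placeholder, not the paper's measured value."""
    c = c_ppm * 1e-6                       # ppm -> volume fraction
    c_steady = g_co / q_vent               # steady-state concentration
    if c >= c_steady:
        return math.inf                    # ceiling never reached
    return -(vol / q_vent) * math.log(1.0 - c / c_steady)

# 30 m^3 room (30,000 L):
print("minutes to 200 ppm at 2000 L/min vent :",
      round(time_to_ceiling(200.0), 1))
print("minutes to 1200 ppm at 400 L/min vent :",
      round(time_to_ceiling(1200.0, q_vent=400.0), 1))
```

The exponential approach to G/Q also shows why enormous ventilation rates are needed: keeping the steady state below 200 ppm requires Q > G / 200e-6, i.e. five thousand times the CO generation rate.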
Monthly mean simulation experiments with a coarse-mesh global atmospheric model
NASA Technical Reports Server (NTRS)
Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.
1978-01-01
Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial-state perturbation experiment, was found to be generally low, but with a noisier response to initial-state errors in high latitudes than in the tropics.
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.
2015-08-01
Accurate reservoir inflow forecasts are instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange. A complementary modelling framework offers an approach for improving real-time forecasting without modifying the pre-existing forecasting model; instead, an independent additive or complementary model is formulated that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure comprises an error model added on top of an unalterable constant-parameter conceptual model, and is applied in the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in decision-making processes in hydropower system operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95% confidence interval indicated that the degree of success in containing 95% of the observations varies across seasons and hydrologic years.
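A minimal complementary error model can be sketched as an AR(1) correction added to the unaltered base forecast; this captures bias and persistence only (the paper additionally treats heteroscedasticity), and all values below are synthetic:

```python
import numpy as np

def complementary_forecast(base_fc, residuals, lead=17):
    """Additive error-model sketch: fit an AR(1) to the conceptual model's
    recent residuals and propagate the last observed error over the lead
    time, correcting the base forecast without modifying the base model."""
    r = np.asarray(residuals)
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])  # AR(1) coefficient
    corrections = r[-1] * phi ** np.arange(1, lead + 1)   # decaying persistence
    return np.asarray(base_fc)[:lead] + corrections

# Synthetic demo: persistent, biased residuals from an operational model.
rng = np.random.default_rng(3)
res = [0.0]
for _ in range(200):
    res.append(0.8 * res[-1] + rng.normal(0, 0.1) + 0.02)
base = np.full(17, 5.0)                   # base hourly inflow forecast, m^3/s
print(complementary_forecast(base, res).round(2))
```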
A Generalized Framework for Modeling Next Generation 911 Implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelic, Andjelka; Aamir, Munaf Syed
This document summarizes the current state of Sandia 911 modeling capabilities and then addresses key aspects of Next Generation 911 (NG911) architectures for expansion of existing models. Analysis of three NG911 implementations was used to inform heuristics, associated key data requirements, and assumptions needed to capture NG911 architectures in the existing models. Modeling of NG911 necessitates careful consideration of its complexity and the diversity of implementations. Draft heuristics for constructing NG911 models are presented based on the analysis, along with a summary of current challenges and ways to improve future NG911 modeling efforts. We found that NG911 relies on Enhanced 911 (E911) assets such as 911 selective routers to route calls originating from traditional telephony service, which are a majority of 911 calls. We also found that the diversity and transitional nature of NG911 implementations necessitates significant and frequent data collection to ensure that adequate models are available for crisis action support.
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) approach for solving the Optimal Power Flow (OPF) problem, which is a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions: a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed with a sine component. These three cost curves represent generator fuel cost functions using a simplified model and more accurate models of a combined-cycle generating unit and a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are compared. The simulation results indicate that IEP requires less computing time than PEP and finds better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
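The flavour of EP with an RCGA-style arithmetic crossover can be sketched on a single generator with a valve-point cost curve; a real OPF additionally enforces power balance and network constraints across many units, and all coefficients below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(p):
    """Generator fuel cost with valve-point loading: quadratic term plus a
    rectified sine ripple (coefficients are illustrative, not the IEEE
    30-bus data)."""
    a, b, c, e, f, p_min = 0.002, 10.0, 100.0, 50.0, 0.063, 100.0
    return a * p**2 + b * p + c + np.abs(e * np.sin(f * (p_min - p)))

def evolve(pop_size=40, gens=200, p_min=100.0, p_max=500.0):
    pop = rng.uniform(p_min, p_max, pop_size)
    for _ in range(gens):
        # EP mutation: Gaussian perturbation of every parent.
        children = np.clip(pop + rng.normal(0, 10.0, pop_size), p_min, p_max)
        # Arithmetic (RCGA-style) crossover between random parent pairs,
        # the kind of operator IEP adds on top of plain EP.
        i, j = rng.integers(0, pop_size, (2, pop_size))
        w = rng.uniform(0, 1, pop_size)
        crossed = np.clip(w * pop[i] + (1 - w) * pop[j], p_min, p_max)
        # Keep the best pop_size individuals from parents + offspring.
        union = np.concatenate([pop, children, crossed])
        pop = union[np.argsort(cost(union))[:pop_size]]
    return pop[0], cost(pop[0])

best_p, best_c = evolve()
print(f"best output: {best_p:.1f} MW, cost: {best_c:.1f} $/h")
```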
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to obtain the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television since it avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
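The colour-SSD depth cue mentioned above can be sketched as block matching: for each pixel, the disparity minimising a windowed sum of squared colour differences is selected, and depth follows from the disparity. This simplified sketch ignores the multiple-baseline averaging and uses assumed window and search sizes:

```python
import numpy as np

def ssd_disparity(left, right, win=5, max_disp=16):
    """Block matching with a colour sum-of-squared-differences (SSD) cost:
    for each pixel, pick the horizontal shift that minimises the SSD over
    a small window, yielding a disparity (hence depth) estimate."""
    h, w, _ = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = []
            for d in range(max_disp):
                patch_r = right[y-half:y+half+1,
                                x-d-half:x-d+half+1].astype(float)
                costs.append(((patch_l - patch_r) ** 2).sum())  # colour SSD
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic demo: the "right" view is the left view shifted by 4 pixels.
left = np.random.default_rng(0).integers(0, 255, (32, 48, 3))
right = np.roll(left, -4, axis=1)
d = ssd_disparity(left, right)
print("median disparity:", int(np.median(d[5:-5, 21:-5])))  # expect 4
# Depth is inversely proportional to disparity: z = f * b / d,
# with f the focal length and b the baseline between adjacent views.
```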
Kye, Bongoh; Mare, Robert D.
2014-01-01
This study examines the intergenerational effects of changes in women's education in South Korea. We define intergenerational effects as changes in the distribution of educational attainment in an offspring generation associated with the changes in a parental generation. Departing from the previous approach in research on social mobility that has focused on intergenerational association, we examine the changes in the distribution of educational attainment across generations. Using a simulation method based on Mare and Maralani's recursive population renewal model, we examine how intergenerational transmission, assortative mating, and differential fertility influence intergenerational effects. The results point to the following conclusions. First, we find a positive intergenerational effect: improvement in women's education leads to improvement in daughter's education. Second, we find that the magnitude of intergenerational effects substantially depends on assortative marriage and differential fertility: assortative mating amplifies and differential fertility dampens the intergenerational effects. Third, intergenerational effects become bigger for the less educated and smaller for the better educated over time, which is a consequence of educational expansion. We compare our results with Mare and Maralani's original Indonesian study to illustrate how the model of intergenerational effects works in different socioeconomic circumstances. PMID:23017970
INTEGRATED CHEMICAL INFORMATION TECHNOLOGIES ...
A central regulatory mandate of the Environmental Protection Agency, spanning many Program Offices and issues, is to assess the potential health and environmental risks of large numbers of chemicals released into the environment, often in the absence of relevant test data. Models for predicting potential adverse effects of chemicals based primarily on chemical structure play a central role in prioritization and screening strategies yet are highly dependent and conditional upon the data used for developing such models. Hence, limits on data quantity, quality, and availability are considered by many to be the largest hurdles to improving prediction models in diverse areas of toxicology. Generation of new toxicity data for additional chemicals and endpoints, development of new high-throughput, mechanistically relevant bioassays, and increased generation of genomics and proteomics data that can clarify relevant mechanisms will all play important roles in improving future SAR prediction models. The potential for much greater immediate gains, across large domains of chemical and toxicity space, comes from maximizing the ability to mine and model useful information from existing toxicity data, data that represent huge past investment in research and testing expenditures. In addition, the ability to place newer “omics” data, data that potentially span many possible domains of toxicological effects, in the broader context of historical data is the means for opti
Sullivan, Cris M
2018-01-01
Domestic violence (DV) victim service programs have been increasingly expected by legislators and funders to demonstrate that they are making a significant difference in the lives of those using their services. Alongside this expectation, they are being asked to describe the Theory of Change guiding how they believe their practices lead to positive results for survivors and their children. Having a widely accepted conceptual model is not just potentially useful to funders and policy makers as they help shape policy and practice; it can also help programs continually reflect upon and improve their work. This paper describes the iterative and collaborative process undertaken to generate a conceptual model describing how DV victim services are expected to improve survivors' lives. The Social and Emotional Well-Being Framework guiding the model is an ideal structure to use to describe the goals and practices of DV programs because this framework: (1) accurately represents DV programs' goal of helping survivors and their children thrive; and (2) recognizes the importance of community, social, and societal context in influencing individuals' social and emotional well-being. The model was designed to guide practice and to generate new questions for research and evaluation that address individual, community, and systems factors that promote or hinder survivor safety and well-being.
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e., the generation of medical code vectors is independent of prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require large numbers of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpora for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in the predictive capability of the generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
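The contrast between unsupervised and task-guided code vectors can be illustrated with a minimal sketch. The PyTorch example below is not PTGHRA itself; it only shows the general idea of training code embeddings jointly with a downstream prediction head, and the vocabulary size, dimensions, and readmission label are hypothetical.

```python
import torch
import torch.nn as nn

# Sketch of task-guided (supervised) learning of medical-code vectors: code
# embeddings are trained jointly with a downstream prediction task, so the
# learned vectors are shaped by that task rather than by an unsupervised
# objective. This shows the general idea only, not the PTGHRA method.
NUM_CODES, DIM = 5000, 64

class CodePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(NUM_CODES, DIM, mode="mean")
        self.head = nn.Linear(DIM, 1)   # e.g. a readmission-risk logit

    def forward(self, codes, offsets):
        return self.head(self.embed(codes, offsets)).squeeze(-1)

model = CodePredictor()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy batch: two records as flattened code indices plus record offsets.
codes = torch.tensor([3, 17, 42, 7, 99])
offsets = torch.tensor([0, 3])        # record 1: codes[0:3], record 2: codes[3:]
labels = torch.tensor([1.0, 0.0])

loss = loss_fn(model(codes, offsets), labels)
loss.backward()
opt.step()
# model.embed.weight now holds task-guided code vectors.
```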
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both through the introduction of new particles and through the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze droplets, cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. One example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an example of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations not only point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth models.
Memory availability and referential access
Johns, Clinton L.; Gordon, Peter C.; Long, Debra L.; Swaab, Tamara Y.
2013-01-01
Most theories of coreference specify linguistic factors that modulate antecedent accessibility in memory; however, whether non-linguistic factors also affect coreferential access is unknown. Here we examined the impact of a non-linguistic generation task (letter transposition) on the repeated-name penalty, a processing difficulty observed when coreferential repeated names refer to syntactically prominent (and thus more accessible) antecedents. In Experiment 1, generation improved online (event-related potentials) and offline (recognition memory) accessibility of names in word lists. In Experiment 2, we manipulated generation and syntactic prominence of antecedent names in sentences; both improved online and offline accessibility, but only syntactic prominence elicited a repeated-name penalty. Our results have three important implications: first, the form of a referential expression interacts with an antecedent’s status in the discourse model during coreference; second, availability in memory and referential accessibility are separable; and finally, theories of coreference must better integrate known properties of the human memory system. PMID:24443621
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
Evidenced by the three rounds of G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains an eminent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
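A minimal sketch of the restraint-extraction step is given below. It assumes pre-aligned CA coordinates and uses illustrative distance and conservation cutoffs; the published work used standard homology modeling software rather than this toy code.

```python
import numpy as np

# Sketch: find inter-residue distances that are conserved across several
# template structures (after pairwise alignment to the target) and convert
# them into harmonic distance restraints for homology modeling.
# Inputs are simplified: coords[t][i] is the CA position of aligned residue i
# in template t; residues not aligned in a template would be np.nan.

def conserved_restraints(coords, max_dist=8.0, tol=0.5):
    """Return (i, j, mean_distance) for contacts consistent in all templates."""
    n_templates, n_res, _ = coords.shape
    restraints = []
    for i in range(n_res):
        for j in range(i + 4, n_res):           # skip near-sequential pairs
            d = np.linalg.norm(coords[:, i] - coords[:, j], axis=1)
            if np.isnan(d).any():
                continue
            # conserved contact: short, and nearly identical in every template
            if d.max() < max_dist and d.max() - d.min() < tol:
                restraints.append((i, j, float(d.mean())))
    return restraints

# Toy data: 2 templates, 20 aligned residues on a noisy random-walk backbone.
rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(0, 1.5, size=(20, 3)), axis=0)
coords = np.stack([base + rng.normal(0, 0.1, base.shape) for _ in range(2)])
for i, j, d in conserved_restraints(coords)[:5]:
    print(f"restraint: CA {i} - CA {j}, target {d:.2f} A (harmonic)")
```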
Ding, Zhikun; Yi, Guizhen; Tam, Vivian W Y; Huang, Tengyue
2016-05-01
A huge amount of construction waste is generated by the increasing number of construction activities, with significant negative impacts on the environment if it is not properly managed. Therefore, effective construction waste management is of primary importance for future sustainable development. Based on the theory of planned behavior, this paper develops a system dynamics model of construction waste reduction management at the construction phase to simulate the environmental benefits of construction waste reduction management. The application of the proposed model is shown using a case study in Shenzhen, China. Vensim is applied to simulate and analyze the model. The simulation results indicate that source reduction is an effective waste reduction measure which can reduce 27.05% of the total waste generation. Sorting behaviors are a premise for improving the construction waste recycling and reuse rates, which account for 15.49% of the total waste generated. The environmental benefits of source reduction outweigh those of sorting behaviors. Therefore, to achieve better environmental performance of construction waste reduction management, attention should be paid to source reduction measures such as low-waste technologies and on-site management performance. In the meantime, encouragement of sorting behaviors, such as improving stakeholders' waste awareness, refining regulations, strengthening government supervision, and controlling illegal dumping, should be emphasized. Copyright © 2016 Elsevier Ltd. All rights reserved.
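The stock-flow logic of such a model can be sketched in a few lines. The rates below are illustrative placeholders (loosely echoing the reported source-reduction and recycling fractions), not the calibrated Vensim model from the Shenzhen case study.

```python
# Minimal stock-flow sketch of construction-waste reduction dynamics, in the
# spirit of a Vensim system-dynamics model. All rates are illustrative
# placeholders, not the calibrated Shenzhen case-study values.
DT, WEEKS = 1.0, 52
waste_gen_rate = 100.0      # tonnes/week generated on site
source_reduction = 0.27     # fraction avoided at the source
sorting_rate = 0.60         # fraction of remaining waste sorted on site
recycle_rate = 0.26         # fraction of sorted waste recycled or reused

landfilled = recycled = avoided = 0.0
for _ in range(int(WEEKS / DT)):
    generated = waste_gen_rate * DT
    avoided += generated * source_reduction
    remaining = generated * (1 - source_reduction)
    sorted_w = remaining * sorting_rate
    recycled += sorted_w * recycle_rate
    landfilled += remaining - sorted_w * recycle_rate

total = avoided + recycled + landfilled
print(f"avoided {avoided/total:.1%}, recycled {recycled/total:.1%}, "
      f"landfilled {landfilled/total:.1%}")
```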
Cullinane Thomas, Catherine; Huber, Christopher C.; Koontz, Lynne
2014-01-01
This 2012 analysis marks a major revision to the NPS visitor spending effects analyses, with the development of a new visitor spending effects model (VSE model) that replaces the former Money Generation Model (MGM2). Many of the hallmarks and processes of the MGM2 model are preserved in the new VSE model, but the new model makes significant strides in improving the accuracy and transparency of the analysis. Because of this change from the MGM2 model to the VSE model, estimates from this year’s analysis are not directly comparable to previous analyses.
NASA Astrophysics Data System (ADS)
Gu, Shengfeng; Shi, Chuang; Lou, Yidong; Liu, Jingnan
2015-05-01
Zero-difference (ZD) ambiguity resolution (AR) reveals the potential to further improve the performance of precise point positioning (PPP). Traditionally, PPP AR is achieved with the Melbourne-Wübbena and ionosphere-free combinations, in which the ionosphere effects are removed. To exploit the ionosphere characteristics, PPP AR with L1 and L2 raw observables has also been developed recently. In this study, we apply this new approach in uncalibrated phase delay (UPD) generation and ZD AR and compare it with the traditional model. The raw observable processing strategy treats each ionosphere delay as an unknown parameter. In this manner, both an a priori ionosphere correction model and its spatio-temporal correlation can be employed as constraints to improve the ambiguity resolution. However, theoretical analysis indicates that for the wide-lane (WL) UPD retrieved from L1/L2 ambiguities to benefit from this raw observable approach, high-precision ionosphere corrections of better than 0.7 total electron content units (TECU) are essential. This conclusion is then confirmed with over one year of data collected at about 360 stations. First, both global and regional ionosphere models were generated and evaluated; the results demonstrated that, for large-scale ionosphere modeling, only an accuracy of 3.9 TECU can be achieved on average for the vertical delays, and this accuracy can be improved to about 0.64 TECU when a dense network is involved. Based on these ionosphere products, WL/narrow-lane (NL) UPDs are then extracted with the raw observable model. The NL ambiguity reveals better stability and consistency compared to the traditional approach. Nonetheless, the WL ambiguity can hardly be improved even when constrained with the high spatio-temporal resolution ionospheric corrections. Applying both approaches in PPP-RTK, it is interesting to find that the traditional model is more efficient in AR, as evidenced by the shorter time to first fix, while the three-dimensional positioning accuracy of the RAW model outperforms that of the combination model. This reveals that, with the current ionosphere models, there is actually no single optimal strategy for dual-frequency ZD ambiguity resolution; the combination approach and the raw approach each have merits and demerits.
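For readers unfamiliar with the traditional combination approach, the sketch below computes the Melbourne-Wübbena wide-lane combination on a synthetic epoch. The frequencies are the GPS L1/L2 values, and the observation values are fabricated so the recovered wide-lane ambiguity is known in advance.

```python
import numpy as np

# Sketch of the Melbourne-Wubbena (MW) combination used in traditional
# PPP-AR: it cancels geometry, clocks, and (to first order) the ionosphere,
# leaving the wide-lane ambiguity plus noise and biases.
C = 299_792_458.0
F1, F2 = 1_575.42e6, 1_227.60e6        # GPS L1/L2 carrier frequencies (Hz)
LAM_WL = C / (F1 - F2)                 # wide-lane wavelength, ~0.86 m

def mw_widelane(L1, L2, P1, P2):
    """L1, L2 carrier phases and P1, P2 pseudoranges, all in meters.
    Returns the float wide-lane ambiguity in cycles."""
    phase_wl = (F1 * L1 - F2 * L2) / (F1 - F2)   # wide-lane phase (m)
    code_nl = (F1 * P1 + F2 * P2) / (F1 + F2)    # narrow-lane code (m)
    return (phase_wl - code_nl) / LAM_WL

# Synthetic epoch: a geometric range plus integer ambiguities whose
# difference (the wide-lane ambiguity) is exactly 7 cycles.
rho, n1, n2 = 22_000_000.0, 12_345_007, 12_345_000
L1 = rho + n1 * C / F1
L2 = rho + n2 * C / F2
print(round(mw_widelane(L1, L2, rho, rho), 3))   # -> 7.0
```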
Research on Mechanism and Model of Centralized Bidding for Pumped Storage Power in Shanghai
NASA Astrophysics Data System (ADS)
Hua, Zhong; Ying, Zhiwei; Lv, Zhengyu; Jianlin, Yang; Huang, Yupeng; Li, Dong
2017-05-01
China is now in a transition stage toward a power market, and in some specific areas a market approach has already been adopted to improve overall efficiency. In this paper, the bidding and trading modes of pumped storage energy in various regions of China are analysed. Based on the constraints of bidding price and electricity, as well as the system power flow, a trading model is established to collect the capacity cost of pumped storage energy in Shanghai. Under the proposed trading model, generators that actively undertake the capacity cost of pumped storage energy and bid sufficient electricity at lower prices are rewarded, while those that attempt to collude and manipulate the market are penalized. Finally, market operation is simulated using seven generators in Shanghai as examples, and the effectiveness of the proposed model is verified.
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Favre, Anne-Catherine; Hingray, Benoit
2018-02-01
We present a multi-site stochastic model for the generation of average daily temperature, which includes a flexible parametric distribution and a multivariate autoregressive process. Different versions of this model are applied to a set of 26 stations located in Switzerland. The importance of specific statistical characteristics of the model (seasonality, marginal distributions of standardized temperature, spatial and temporal dependence) is discussed. In particular, the proposed marginal distribution is shown to improve the reproduction of extreme temperatures (minima and maxima). We also demonstrate that the frequency and duration of cold spells and heat waves are dramatically underestimated when the autocorrelation of temperature is not taken into account in the model. An adequate representation of these characteristics can be crucial depending on the field of application, and we discuss potential implications in different contexts (agriculture, forestry, hydrology, human health).
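A minimal sketch of the core generator follows: a seasonal cycle plus a multivariate AR(1) process on standardized anomalies with spatially correlated shocks. The Gaussian marginals and all parameter values are simplifying assumptions; the paper's flexible parametric distribution for extremes is not reproduced here.

```python
import numpy as np

# Sketch of a multi-site daily temperature generator: seasonal cycle plus a
# multivariate AR(1) on standardized anomalies. For brevity the marginals are
# Gaussian; the paper uses a more flexible parametric distribution to better
# capture extreme minima and maxima.
rng = np.random.default_rng(1)
n_sites, n_days = 3, 365

doy = np.arange(n_days)
seasonal_mean = 10 + 9 * np.sin(2 * np.pi * (doy - 110) / 365)   # deg C
seasonal_std = 3.0

A = np.diag([0.75, 0.72, 0.78])              # temporal persistence per site
S = np.array([[1.0, 0.8, 0.6],               # spatial correlation of shocks
              [0.8, 1.0, 0.7],
              [0.6, 0.7, 1.0]])
chol = np.linalg.cholesky(S)

z = np.zeros((n_sites, n_days))
for t in range(1, n_days):
    shock = chol @ rng.standard_normal(n_sites)
    # scale shocks so each site's anomaly keeps roughly unit variance
    z[:, t] = A @ z[:, t - 1] + np.sqrt(1 - np.diag(A) ** 2) * shock

temps = seasonal_mean + seasonal_std * z     # shape (n_sites, n_days)
print(temps[:, :5].round(1))
```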
2012-09-30
improving forecast performance over cloudy regions using the Ozone Monitoring Instrument (OMI) Aerosol Index; and 2) preparing for the post-MODIS...meteorological fields, the International Geosphere-Biosphere Programme (IGBP) SW and LW surface characteristics, and an ozone climatology are used as...The primary impact of CALIOP assimilation on the model is the redistribution of mass toward the boundary layer from the free troposphere. For high
Action research and millennials: Improving pedagogical approaches to encourage critical thinking.
Erlam, Gwen; Smythe, Liz; Wright-St Clair, Valerie
2018-02-01
This article examines the effects of intergenerational diversity on pedagogical practice in nursing education. While generational cohorts are not entirely homogenous, certain generational features do emerge. These features may require alternative approaches in educational design in order to maximize learning for millennial students. Action research is employed with undergraduate millennial nursing students (n=161), who are co-researchers in that they are asked to suggest changes to current simulation environments that will improve their learning in the areas of knowledge acquisition, skill development, critical thinking, and communication. These changes are put into place and a re-evaluation of the effectiveness of simulation progresses through three action cycles. Millennials, due to a tendency for risk aversion, may gravitate towards more supportive learning environments which allow for free access to educators. This tendency is mitigated by the educator modeling expected behaviors, followed by the opportunity for students to repeat the behavior. Millennials tend to prefer to work in teams, see tangible improvement, and employ strategies to improve inter-professional communication. This research highlights the need for nurse educators working in simulation to engage in critical discourse regarding the adequacy and effectiveness of current pedagogy informing simulation design. Pedagogical approaches which maximize repetition, modeling, immersive feedback, and effective communication tend to be favored by millennial students. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
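The evaluation loop described above is easy to sketch. The example below uses hand-picked means plus Gaussian noise, K-means, and an optimal cluster-to-class matching to count misclustered points; it is a toy instance of the toolbox's procedure, not the toolbox itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

# Sketch of the model-based evaluation loop: sample points from known
# processes (mean plus independent noise), cluster them, and count the
# points clustered incorrectly under the best cluster-to-class matching.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # hand-picked means
sigma, n_per = 0.8, 50

X = np.vstack([m + sigma * rng.standard_normal((n_per, 2)) for m in means])
truth = np.repeat(np.arange(3), n_per)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Confusion matrix, then optimal matching of cluster labels to classes.
conf = np.zeros((3, 3), dtype=int)
for t, l in zip(truth, labels):
    conf[t, l] += 1
rows, cols = linear_sum_assignment(-conf)       # maximize matched counts
errors = len(truth) - conf[rows, cols].sum()
print(f"clustering error: {errors}/{len(truth)} points")
```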
Electric Grid Expansion Planning with High Levels of Variable Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W.; You, Shutang; Shankar, Mallikarjun
2016-02-01
Renewables are taking a large proportion of generation capacity in U.S. power grids. As their randomness has increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the US Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly after considering the diversity of wind and load across EI regions. The improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare. This study shows that modelling load and wind variations and diversities across balancing regions will produce significantly different expansion results compared with former studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios will slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios will reveal that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power over a long distance in the EI system. Incorporating more details of renewables in expansion planning will inevitably increase the computational burden. Therefore, high performance computing (HPC) techniques are urgently needed for power system operation and planning optimization. As a scoping study task, this project tested some preliminary parallel computation techniques, such as breaking down the simulation task into several sub-tasks based on chronology splitting or sample splitting and then assigning these sub-tasks to different cores. Testing results show significant time reduction when a simulation task is split into several sub-tasks for parallel execution.
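A drastically simplified co-optimization in the same spirit is sketched below: two regions, three load/wind blocks, and integer build decisions, solved with the PuLP MIP library. All costs, block data, and unit sizes are invented for illustration and bear no relation to the EI study's inputs.

```python
import pulp

# Toy generation/transmission expansion co-optimization: gas in region A,
# wind in region B, and a tie line between them, over three load/wind
# "blocks". All numbers are illustrative, far simpler than the EI model.
blocks = [  # (hours/yr, load_MW region A, load_MW region B, wind CF in B)
    (2000, 900, 400, 0.45),
    (4760, 700, 350, 0.30),
    (2000, 500, 300, 0.10),
]
prob = pulp.LpProblem("toy_expansion", pulp.LpMinimize)
gas_A = pulp.LpVariable("gas_units_A", lowBound=0, cat="Integer")    # 100 MW units
wind_B = pulp.LpVariable("wind_units_B", lowBound=0, cat="Integer")  # 100 MW units
line = pulp.LpVariable("line_units_AB", lowBound=0, cat="Integer")   # 100 MW lines

flow = {b: pulp.LpVariable(f"flow_BtoA_{b}") for b in range(len(blocks))}
gen_gas = {b: pulp.LpVariable(f"gas_gen_{b}", lowBound=0) for b in range(len(blocks))}

capex = 90e3 * gas_A + 140e3 * wind_B + 20e3 * line          # $/yr per unit
opex = pulp.lpSum(h * 35 * gen_gas[b] for b, (h, *_) in enumerate(blocks))
prob += capex + opex                                         # objective

for b, (h, loadA, loadB, cf) in enumerate(blocks):
    wind_out = 100 * wind_B * cf
    prob += gen_gas[b] <= 100 * gas_A            # gas capacity limit
    prob += flow[b] <= 100 * line                # tie-line limit, B -> A
    prob += -flow[b] <= 100 * line               # tie-line limit, A -> B
    prob += gen_gas[b] + flow[b] >= loadA        # energy balance in A
    prob += wind_out - flow[b] >= loadB          # energy balance in B

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({v.name: v.value() for v in (gas_A, wind_B, line)})
```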
Tu, Xiongbing; Li, Zhihong; Wang, Jie; Huang, Xunbing; Yang, Jiwen; Fan, Chunbin; Wu, Huihui; Wang, Qinglei; Zhang, Zehua
2014-01-01
The degree-day (DD) model is an important tool for forecasting pest phenology and voltinism. Unfortunately, the DD model is inaccurate, as is the case for the Oriental migratory locust. To improve the existing DD model for this pest, we first studied locust development in seven growth chambers, each of which simulated the complete growing-season climate of a specific region in China (Baiquan, Chengde, Tumotezuoqi, Wenan, Rongan, Qiongzhong, or Qiongshan). In these seven treatments, locusts completed 0.95, 1, 1.1, 2.2, 2.95, 3.95, and 4.95 generations, respectively. Hence, in the Baiquan (700), Rongan (2400), Qiongzhong (3200), and Qiongshan (2400) treatments, the final generation was unable to lay eggs. In a second experiment, we reared locusts for a full generation in growth chambers at different constant temperatures. This experiment provided two important findings. First, temperatures between 32 and 42°C did not influence locust development rate. Hence, the additional heat provided by temperatures above 32°C did not add to the total heat units acquired by the insects, according to the traditional DD model. Instead, temperatures above 32°C represent overflow heat and cannot be included when calculating total heat acquired during development. We also noted that females raised at a constant 21°C failed to oviposit. Hence, temperatures lower than 21°C should be deducted when calculating total heat acquired during adult development. Using our experimental findings, we next mimicked the 24-h temperature curve and constructed a new DD model based on a 24-h temperature integral calculation. We then compared our new model with the traditional DD model; the results showed that the DD deviation was 166 heat units in Langfang during 2011. Finally, we recalculated the heat units with our new DD model, which better predicted the results from our first growth chamber experiment. PMID:24599091
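The modified accumulation rule implied by these findings can be sketched as follows. The sinusoidal 24-h curve and the use of 21°C as the base threshold are illustrative assumptions for the adult stage; the paper's full model, including the deduction rule, is more detailed.

```python
import numpy as np

# Sketch of the modified degree-day (DD) accumulation described above:
# integrate hourly temperatures with an upper developmental cap at 32 C
# (heat above it is "overflow" and is not counted) and a lower threshold
# at 21 C, illustrating the adult stage. The sinusoidal daily curve and
# the base-threshold choice are assumptions for this sketch.

def hourly_temps(t_min, t_max):
    """Mimic a 24-h temperature curve with a sine between the daily min/max."""
    hours = np.arange(24)
    return (t_min + t_max) / 2 + (t_max - t_min) / 2 * np.sin(
        2 * np.pi * (hours - 9) / 24)

def daily_dd(t_min, t_max, base=21.0, cap=32.0):
    """Degree-days for one day: cap overflow heat, ignore hours below base."""
    t = np.clip(hourly_temps(t_min, t_max), None, cap)  # overflow heat capped
    return np.sum(np.maximum(t - base, 0.0)) / 24.0     # 24-h integral

print(round(daily_dd(18, 38), 2))   # hot day: hours above 32 C count as 32 C
```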
Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price
NASA Astrophysics Data System (ADS)
Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.
2018-02-01
Limited and insufficient information about the dynamic interrelations among mobility, utility, and land price is the main reason to conduct this research. Several studies, with several approaches and several variables, have been conducted so far in order to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate different models which calculate and/or compare the inter-relational changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model, which has been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables. The model can be implemented in Salatiga city, the case study location, in order to support better land use planning and mitigate uncontrolled urban growth.
NASA Astrophysics Data System (ADS)
Faria, J. M.; Mahomad, S.; Silva, N.
2009-05-01
The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, however, along with new challenges. This paper presents the results of a research project where we tried to extend current V&V methodologies to be applied on UML/SysML models, aiming to answer demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as shown for model checking, or appear not fully matured, as shown for robustness test case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors help (inevitably) to improve the XMI standard interoperability situation. The robustness test case extraction methodology produced interesting results in its early form but needs further systematisation and consolidation to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvement and innovation projects were immediately apparent for both investigated approaches: circumventing current limitations in XMI interoperability on the one hand, and bringing test case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation on the other.
DOT National Transportation Integrated Search
1981-09-01
Measurement of wheel/rail characteristics generates information for improvement of design tools such as model validation, establishment of load spectra and vehicle/track system interaction. Existing and new designs are assessed from evaluation of veh...
Jastram, John D.; Moyer, Douglas; Hyer, Kenneth
2009-01-01
Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity.
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
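The surrogate-regression comparison at the heart of this approach can be sketched with synthetic data: fit log-log models of SSC on turbidity and on streamflow and compare explained variance. Operational USGS models also apply retransformation bias corrections that are not shown in this toy version.

```python
import numpy as np

# Sketch of the surrogate-regression comparison: fit log-log models of SSC
# on turbidity and on streamflow and compare explained variance. The data
# are synthetic, constructed so turbidity tracks SSC more tightly than
# streamflow does, mirroring the qualitative finding above.
rng = np.random.default_rng(7)
n = 200
turbidity = rng.lognormal(2.0, 1.0, n)                     # FNU
ssc = 2.5 * turbidity ** 0.95 * rng.lognormal(0, 0.15, n)  # mg/L, tight link
flow = ssc ** 0.6 * rng.lognormal(0, 0.6, n)               # looser link

def loglog_r2(x, y):
    """R^2 of an ordinary least-squares fit of log(y) on log(x)."""
    X = np.column_stack([np.ones_like(x), np.log(x)])
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    resid = np.log(y) - X @ beta
    return 1 - resid.var() / np.log(y).var()

print(f"turbidity model R^2: {loglog_r2(turbidity, ssc):.2f}")
print(f"streamflow model R^2: {loglog_r2(flow, ssc):.2f}")
```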
Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley J.; Frew, Bethany A.; Mai, Trieu T.
Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops on VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.
Development of a High Resolution 3D Infant Stomach Model for Surgical Planning
NASA Astrophysics Data System (ADS)
Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.
Medical surgical procedures have not changed much during the past century, due in part to the lack of an accurate, low-cost workbench for testing any new improvement. Increasingly cheap and powerful computer technologies have made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model, which aims to improve the surgical procedure that treats infant pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model based on in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is cheap, but with low spatial resolution. To improve the model accuracy, we use the high resolution Visible Human Project (VHP) in model building. Next, we add soft muscle material properties to make the 3D model deformable. Then we use virtual reality techniques such as haptic devices to make the 3D stomach model deform upon touching force. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure, and ultimately, to reduce the risk associated with infant GERD surgery.
NASA Astrophysics Data System (ADS)
Lee, Soon Hwan; Kim, Ji Sun; Lee, Kang Yeol; Shon, Keon Tae
2017-04-01
Air quality in Korea is worsening due to increasing particulate matter (PM). At present, the PM forecast is announced based on the PM concentration predicted by the air quality numerical prediction model. However, forecast accuracy is not as high as expected due to various uncertainties in PM physical and chemical characteristics. The purpose of this study was to develop a numerical-statistical ensemble model to improve the accuracy of PM10 concentration prediction. The numerical models used in this study are the three-dimensional atmospheric Weather Research and Forecasting (WRF) model and the Community Multiscale Air Quality (CMAQ) model. The target areas for the PM forecast are the Seoul, Busan, Daegu, and Daejeon metropolitan areas in Korea. The data used in the model development are PM concentrations and CMAQ predictions, and the data period is 3 months (March 1 - May 31, 2014). A dynamic-statistical technique for reducing the systematic error of the CMAQ predictions was applied, using a dynamic linear model (DLM) based on the Bayesian Kalman filter. Applying the corrections generated by the dynamic linear model improved the accuracy of the PM concentration forecasts. In particular, excellent improvement is shown at high PM concentrations, where the damage is relatively large.
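A minimal sketch of the Kalman-filter bias correction follows: a local-level dynamic linear model tracks the time-varying systematic error of the raw forecasts and subtracts it from the next day's prediction. The noise variances and synthetic data are illustrative, not the study's configuration.

```python
import numpy as np

# Sketch of a local-level dynamic linear model (Kalman filter) that tracks
# the time-varying systematic error of raw model PM10 forecasts and removes
# it from the next forecast. All variances and data are illustrative.
rng = np.random.default_rng(3)
days = 90
true_pm = 40 + 15 * np.sin(np.arange(days) / 9) + rng.normal(0, 4, days)
cmaq = true_pm + 12 + 0.05 * np.arange(days) + rng.normal(0, 6, days)  # biased

q, r = 0.5, 25.0            # process / observation noise variances
bias, p = 0.0, 10.0         # state estimate (bias) and its variance
corrected = np.empty(days)
for t in range(days):
    corrected[t] = cmaq[t] - bias          # corrected forecast for day t
    obs = cmaq[t] - true_pm[t]             # observed error (known afterwards)
    p += q                                 # predict step
    k = p / (p + r)                        # Kalman gain
    bias += k * (obs - bias)               # update step
    p *= (1 - k)

raw_rmse = np.sqrt(np.mean((cmaq - true_pm) ** 2))
cor_rmse = np.sqrt(np.mean((corrected - true_pm) ** 2))
print(f"RMSE raw {raw_rmse:.1f} -> corrected {cor_rmse:.1f}")
```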
Generation of High Resolution Land Surface Parameters in the Community Land Model
NASA Astrophysics Data System (ADS)
Ke, Y.; Coleman, A. M.; Wigmosta, M. S.; Leung, L.; Huang, M.; Li, H.
2010-12-01
The Community Land Model (CLM) is the land surface model used for the Community Atmosphere Model (CAM) and the Community Climate System Model (CCSM). It examines the physical, chemical, and biological processes across a variety of spatial and temporal scales. Currently, efforts are being made to improve the spatial resolution of the CLM, in part, to represent finer scale hydrologic characteristics. Current land surface parameters of CLM4.0, in particular plant functional types (PFT) and leaf area index (LAI), are generated from MODIS and calculated at a 0.05 degree resolution. These MODIS-derived land surface parameters have also been aggregated to coarser resolutions (e.g., 0.5, 1.0 degrees). To evaluate the response of CLM across various spatial scales, higher spatial resolution land surface parameters need to be generated. In this study we examine the use of Landsat TM/ETM+ imagery and data fusion techniques for generating land surface parameters at a 1km resolution within the Pacific Northwest United States. Land cover types and PFTs are classified based on Landsat multi-season spectral information, DEM, National Land Cover Database (NLCD) and the USDA-NASS Crop Data Layer (CDL). For each PFT, relationships between MOD15A2 high quality LAI values, Landsat-based vegetation indices, climate variables, terrain, and laser-altimeter derived vegetation height are used to generate monthly LAI values at a 30m resolution. The high-resolution PFT and LAI data are aggregated to create a 1km model grid resolution. An evaluation and comparison of CLM land surface response at both fine and moderate scale is presented.
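The aggregation step can be sketched as block counting: each coarse cell's PFT fractions are the class frequencies of the fine pixels it contains. The 32x32 block size below is chosen purely so the toy array divides evenly; a true 1km cell spans roughly 33x33 Landsat 30m pixels.

```python
import numpy as np

# Sketch of aggregating a fine-resolution PFT classification to fractional
# cover on a coarser model grid: count class occurrences inside each block.
rng = np.random.default_rng(5)
n_pft, block = 4, 32
fine = rng.integers(0, n_pft, size=(320, 320))      # toy 30 m PFT map

coarse_ny, coarse_nx = fine.shape[0] // block, fine.shape[1] // block
blocks = fine.reshape(coarse_ny, block, coarse_nx, block)

# fractions[p, i, j]: fraction of PFT p within coarse cell (i, j)
fractions = np.stack([
    (blocks == p).mean(axis=(1, 3)) for p in range(n_pft)
])
assert np.allclose(fractions.sum(axis=0), 1.0)      # fractions sum to one
print(fractions[:, 0, 0])
```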
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of a distribution system. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V), and PI type models for power flow. Considering that the placement of WTGU- and PV-based DGs in a distribution system is geographically restricted, the optimal area and the DG capacity limits of each bus in the designated area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed method.
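The QPSO position update at the core of the method is sketched below on a stand-in objective. The paper's actual objective evaluates power-flow losses and voltage stability on the 33-bus feeder, which is not reproduced here; the contraction coefficient and bounds are illustrative.

```python
import numpy as np

# Sketch of the quantum-behaved PSO (QPSO) update for DG sizing: each
# particle is drawn around an attractor p using
#     x = p +/- beta * |mbest - x| * ln(1/u).
# The quadratic objective is a stand-in for a power-flow loss evaluation.
rng = np.random.default_rng(2)

def objective(x):                  # stand-in for loss/voltage evaluation
    return np.sum((x - np.array([1.2, 0.8])) ** 2)

n_part, dim, iters, beta = 20, 2, 100, 0.75
x = rng.uniform(0, 3, (n_part, dim))            # DG sizes (MW), bounded
pbest = x.copy()
pbest_f = np.array([objective(p) for p in pbest])

for _ in range(iters):
    gbest = pbest[pbest_f.argmin()]
    mbest = pbest.mean(axis=0)                  # mean of personal bests
    phi = rng.uniform(size=(n_part, dim))
    p = phi * pbest + (1 - phi) * gbest         # local attractors
    u = rng.uniform(size=(n_part, dim))
    sign = np.where(rng.uniform(size=(n_part, dim)) < 0.5, 1, -1)
    x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1 / u), 0, 3)
    f = np.array([objective(xi) for xi in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]

print(pbest[pbest_f.argmin()].round(3))         # converges near [1.2, 0.8]
```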
Dynamical aspects of behavior generation under constraints
Harter, Derek; Achunala, Srinivas
2007-01-01
Dynamic adaptation is a key feature of brains helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally we speculate on mechanisms how chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514
Modelling the impacts of pests and diseases on agricultural systems.
Donatelli, M; Magarey, R D; Bregaglio, S; Willocquet, L; Whish, J P M; Savary, S
2017-07-01
The improvement and application of pest and disease models to analyse and predict yield losses, including those due to climate change, is still a challenge for the scientific community. Applied modelling of crop diseases and pests has mostly targeted the development of support capabilities to schedule scouting or pesticide applications. There is a need for research to both broaden the scope and evaluate the capabilities of pest and disease models. Key research questions not only involve the assessment of the potential effects of climate change on known pathosystems, but also on new pathogens which could alter the (still incompletely documented) impacts of pests and diseases on agricultural systems. Yield loss data collected in various current environments may no longer represent an adequate reference to develop tactical, decision-oriented models for plant diseases and pests and their impacts, because of the ongoing changes in climate patterns. Process-based agricultural simulation modelling, on the other hand, appears to represent a viable methodology to estimate the impacts of these potential effects. A new generation of tools based on state-of-the-art knowledge and technologies is needed to allow systems analysis including key processes and their dynamics over an appropriate range of environmental variables. This paper offers a brief overview of the current state of development in coupling pest and disease models to crop models, and discusses technical and scientific challenges. We propose a five-stage roadmap to improve the simulation of the impacts caused by plant diseases and pests: i) improve the quality and availability of data for model inputs; ii) improve the quality and availability of data for model evaluation; iii) improve the integration with crop models; iv) improve the processes for model evaluation; and v) develop a community of plant pest and disease modellers.
More-Realistic Digital Modeling of a Human Body
NASA Technical Reports Server (NTRS)
Rogge, Renee
2010-01-01
A MATLAB computer program has been written to enable improved modeling (relative to an older program) of a human body for purposes of designing space suits and other hardware with which an astronaut must interact. The older program implements a kinematic model based on traditional anthropometric measurements that do provide important volume and surface information. The present program generates a three-dimensional (3D) whole-body model from 3D body-scan data. The program utilizes thin-plate spline theory to reposition the model without need for additional scans.
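The repositioning idea can be sketched with SciPy's thin-plate-spline interpolator (available as RBFInterpolator in SciPy 1.7+): move a few control landmarks and warp every scan point with the interpolated displacement field. All points and displacements below are synthetic, not body-scan data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of thin-plate-spline repositioning: given a few control landmarks
# moved to new positions (e.g., raising an arm), warp every body-scan point
# smoothly with the interpolated displacement field.
rng = np.random.default_rng(4)
scan = rng.uniform(-1, 1, size=(2000, 3))          # stand-in body-scan points

landmarks_src = rng.uniform(-1, 1, size=(12, 3))   # control points on body
displacement = np.zeros_like(landmarks_src)
displacement[landmarks_src[:, 2] > 0.5] = [0.0, 0.3, 0.1]  # move upper points
landmarks_dst = landmarks_src + displacement

# Fit the displacement field with a thin-plate-spline kernel and apply it
# to all scan points.
warp = RBFInterpolator(landmarks_src, landmarks_dst - landmarks_src,
                       kernel="thin_plate_spline")
scan_posed = scan + warp(scan)
print(scan_posed.shape)
```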
SU-F-T-447: The Impact of Treatment Planning Methods On RapidPlan Modeling for Rectum Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S; Peng, J; Li, K
2016-06-15
Purpose: To investigate the dose volume histogram (DVH) prediction variations based on intensity modulated radiotherapy (IMRT) plan or volumetric modulated arc therapy (VMAT) plan models in RapidPlan. Methods: Two DVH prediction models were generated in this study: an IMRT model trained from 83 IMRT rectum plans and a VMAT model trained from 60 VMAT rectum plans. In the internal validation, 20 plans from each training database were selected to verify the clinical feasibility of the model. Then, 10 IMRT plans generated from the IMRT model (PIMRT-by-IMRT-model) and 10 IMRT plans generated from the VMAT model (PIMRT-by-VMAT-model) were compared on the dose to organs at risk (OAR), which included the bladder and the left and right femoral heads. A similar comparison was performed on the VMAT plans generated from the IMRT model (PVMAT-by-IMRT-model) and from the VMAT model (PVMAT-by-VMAT-model). Results: In the internal validation, all plans from the IMRT or VMAT model showed significant improvement in OAR sparing compared with the corresponding clinical ones. Compared to the PIMRT-by-VMAT-model, the PIMRT-by-IMRT-model shows a reduction of 6.90±3.87% (p<0.001) in V40, 6.63±3.62% (p<0.001) in V45, and 4.74±2.26% (p<0.001) in V50 for the bladder, and a mean dose reduction of 2.12±1.75 Gy (p=0.004) and 2.84±1.53 Gy (p<0.001) in the right and left femoral heads, respectively. There was no significant difference in OAR sparing between PVMAT-by-IMRT-model and PVMAT-by-VMAT-model. Conclusion: The IMRT model for rectal cancer in RapidPlan can be applied to VMAT planning. However, the VMAT model is not recommended for IMRT planning. Caution should be taken, as a planning model based on one technique may not be feasible for other planning techniques.
Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.
Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod
2017-07-15
There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks, three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal flexibility among the three networks. Our proposed methods provide a novel and powerful generative model for investigating dynamic brain connectivity. Copyright © 2017 Elsevier Inc. All rights reserved.
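As a rough, non-Bayesian proxy for the BSFA pipeline, one can reduce the region time series to latent factors and fit a Gaussian HMM on them. The sketch below assumes the hmmlearn package is available and fixes the factor and state counts by hand, whereas BSFA estimates everything jointly in one Bayesian model and selects the number of states automatically.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from hmmlearn.hmm import GaussianHMM

# Rough two-stage proxy for BSFA: (1) factor analysis compresses the
# multi-region fMRI time series to latent factors; (2) a Gaussian HMM on the
# factors recovers discrete brain states and transition probabilities.
# BSFA does both steps jointly and selects the number of states itself.
rng = np.random.default_rng(0)
T, n_regions = 600, 20
X = rng.standard_normal((T, n_regions))          # stand-in BOLD time series

factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(X)
hmm = GaussianHMM(n_components=3, covariance_type="full",
                  n_iter=100, random_state=0).fit(factors)

states = hmm.predict(factors)                    # state sequence over time
print("transition matrix:\n", hmm.transmat_.round(2))
print("state occupancy:", np.bincount(states, minlength=3) / T)
```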
Sharma, Divya; Singh, Mahendra Pratap; Vimal, Divya; Kumar, Saurabh; Jha, Rakesh Roshan; Chowdhuri, D Kar
2018-06-01
Adaptive behaviour of an organism has relevance for developing better resistance in subsequent generations following xenobiotic exposures. Using a genetically tractable and functional insect model, Drosophila melanogaster, we aimed to examine the resistance of the organism against repeated exposures to benzene, an industrial and environmental chemical and a class I human carcinogen. While 100 mM benzene exposure of one-day-old flies for seven days caused ∼95% mortality (F0), its exposure to subsequent generations of flies led to a significant decrease in mortality, with maximum survival (∼85%) evident at the F28 generation. While the burden of benzene and its toxic metabolites was higher in initial generations, in later generations (F24-F28) the concentrations of less toxic metabolites were higher. In parallel, improved metabolism, less oxidative stress, less induction of hsp60 and hsp70, and higher induction of hsp26 and hsp27, along with an increased gene dose ratio of three genes (cyp6g1, mrp1, and cyp12d1), were observed in later generations of benzene-exposed flies, with the maximum benefit accrued in the F28 generation. The resistance developed in flies of the F28 generation had a negative impact on reproduction, which might be due to a cost of selection. The study demonstrates the development of benzene resistance in Drosophila with permanent genetic changes. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurwitz, M; Williams, C; Dhou, S
Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients' chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206.
Modeling of forest canopy BRDF using DIRSIG
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan; Schott, John R.
2016-05-01
The characterization and temporal analysis of multispectral and hyperspectral data to extract the biophysical information of the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). Advances in remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations both on the data collection techniques and on the range of illumination angles that can be collected in the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Image and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. The field-collected spectra from the Harvard forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto
2012-05-25
In modern day drug discovery campaigns, computational chemists have to be concerned not only with improving the potency of molecules but also with reducing any off-target ADMET activity. There is a plethora of antitargets that computational chemists may have to consider. Fortunately, many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted binding orientation for our collection of molecules.
Analysis of the three-dimensional structure of a bubble wake using PIV and Galilean decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Y.A.; Schmidl, W.D.; Ortiz-Villafuerte, J.
1999-07-01
Bubbly flow plays a key role in a variety of natural and industrial processes. An accurate and complete description of the phase interactions in two-phase bubbly flow is not available at this time. These phase interactions are, in general, three-dimensional and unsteady. Therefore, measurement techniques utilized to obtain qualitative and quantitative data from two-phase flow should be able to acquire transient and three-dimensional data, in order to provide information to test theoretical models and numerical simulations. Even for dilute bubble flows, in which bubble interaction is at a minimum, the turbulent motion of the liquid generated by the bubble is yet to be completely understood. For many years, the design of systems with bubbly flows was based primarily on empiricism. Dilute bubbly flows are an extension of single bubble dynamics, and therefore improvements in the description and modeling of single bubble motion, the flow field around the bubble, and the dynamical interactions between the bubble and the flow will consequently improve bubbly flow modeling. The improved understanding of the physical phenomena will have far-reaching benefits in upgrading the operation and efficiency of current processes and in supporting the development of new and innovative approaches. A stereoscopic particle image velocimetry measurement of the flow generated by the passage of a single air bubble rising in stagnant water in a circular pipe is presented. Three-dimensional velocity fields within the measurement zone were obtained. Ensemble-averaged instantaneous velocities for a specific bubble path were calculated and interpolated to obtain mean three-dimensional velocity fields. A Galilean velocity decomposition is used to study the vorticity generated in the flow.
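The decomposition itself is a one-line frame change: subtract a uniform convection velocity (here taken as the bubble rise velocity) from every vector so that wake vortices become visible. The field below is a synthetic vortex pair, not PIV data.

```python
import numpy as np

# Sketch of a Galilean decomposition of a PIV velocity field: subtracting a
# uniform convection velocity (the bubble rise velocity) from every vector
# reveals vortical structures that move with the bubble.
ny, nx = 64, 64
y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                   indexing="ij")

U_rise = 0.25                                   # bubble rise velocity (m/s)
r2 = x**2 + (y - 0.2)**2 + 1e-3
u = -(y - 0.2) / r2 * 0.05                      # horizontal velocity (m/s)
v = U_rise + x / r2 * 0.05                      # vertical velocity (m/s)

# Galilean decomposition: velocities in the frame moving with the bubble.
u_rel, v_rel = u, v - U_rise

vorticity = np.gradient(v_rel, axis=1) - np.gradient(u_rel, axis=0)
print(f"peak wake vorticity (relative frame): {np.abs(vorticity).max():.3f}")
```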
Capalbo, Susan M; Antle, John M; Seavert, Clark
2017-07-01
Research on next generation agricultural systems models shows that the most important current limitation is data, both for on-farm decision support and for research investment and policy decision making. One of the greatest data challenges is to obtain reliable data on farm management decision making, both for current conditions and under scenarios of changed bio-physical and socio-economic conditions. This paper presents a framework for the use of farm-level and landscape-scale models and data to provide analysis that could be used in NextGen knowledge products, such as mobile applications or personal computer data analysis and visualization software. We describe two analytical tools - AgBiz Logic and TOA-MD - that demonstrate the current capability of farm-level and landscape-scale models. The use of these tools is explored with a case study of an oilseed crop, Camelina sativa, which could be used to produce jet aviation fuel. We conclude with a discussion of innovations needed to facilitate the use of farm and policy-level models to generate data and analysis for improved knowledge products.
Eichhorn, Stefan; Spindler, Johannes; Polski, Marcin; Mendoza, Alejandro; Schreiber, Ulrich; Heller, Michael; Deutsch, Marcus Andre; Braun, Christian; Lange, Rüdiger; Krane, Markus
2017-05-01
Investigations of compressive frequency, duty cycle, or waveform during CPR are typically rooted in animal research or computer simulations. Our goal was to generate a mechanical model incorporating alternate stiffness settings and an integrated blood flow system, enabling defined, reproducible comparisons of CPR efficacy. Based on thoracic stiffness data measured in human cadavers, such a model was constructed using valve-controlled pneumatic pistons and an artificial heart. This model offers two realistic levels of chest elasticity, with a blood flow apparatus that reflects compressive depth and waveform changes. We conducted CPR at opposing levels of physiologic stiffness, using a LUCAS device, a motor-driven plunger, and a group of volunteers. In high-stiffness mode, blood flow generated by volunteers was significantly lower after just 2 min of CPR, whereas flow generated by the LUCAS device was superior by comparison. Optimal blood flow was obtained via the motor-driven plunger with a trapezoidal waveform. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Ultrafast third-harmonic generation from textured aluminum nitride-sapphire interfaces
NASA Astrophysics Data System (ADS)
Stoker, D. S.; Baek, J.; Wang, W.; Kovar, D.; Becker, M. F.; Keto, J. W.
2006-05-01
We measured and modeled third-harmonic generation (THG) from an AlN thin film on sapphire using a time-domain approach appropriate for ultrafast lasers. Second-harmonic measurements indicated that polycrystalline AlN contains long-range crystal texture. An interface model for third-harmonic generation enabled an analytical representation of scanning THG (z-scan) experiments. Using it and accounting for Fresnel reflections, we measured the AlN-sapphire susceptibility ratio and estimated the susceptibility for aluminum nitride, χ⁽³⁾ₓₓₓₓ(3ω; ω, ω, ω) = (1.52 ± 0.25)×10⁻¹³ esu. The third-harmonic (TH) spectrum strongly depended on the laser focus position and sample thickness. The amplitude and phase of the frequency-domain interference were fit to the Fourier transform of the calculated time-domain field to improve the accuracy of several experimental parameters. We verified that the model works well for explaining TH signal amplitudes and spectral phase. Some anomalous features in the TH spectrum were observed, which we attributed to nonparaxial effects.
Mine Burial Expert System for Change of MIW Doctrine
2011-09-01
allowed the mine to move vertically and horizontally, as well as rotate about the y axis. The first of these second generation impact models was... bearing strength and use multilayered sediments. Although they improve the knowledge of mine movement in two dimensions and rotation in one direction... conditional independence. Bayesian networks were originally developed to handle uncertainty in a quantitative manner. They are statistical models
Modeling Spin Creation and Mass Generation in the Electron Motivated by an Angle Doubler Mechanism
2017-11-01
electricity. The second author and project manager (W. Liu) suggested the use of an angle doubler mechanism to improve the length of the movements... to present an attempt to explain this effect using geometry. The model is extended by projective geometry, which provides a deeper understanding of...
Acquisition Community Team Dynamics: The Tuckman Model vs. the DAU Model
2007-04-30
courses. These student teams are used to enable the generation of more complex products and to prepare the students for the... requirement for stage discreteness was met, I developed a stage-separation test that, when applied to the data representing the experience of a... test the reliability, and validate an improved questionnaire instrument that redefines "Storming" with new storming questions...
Next-Generation NATO Reference Mobility Model (NG-NRMM)
2016-05-11
facilitate comparisons between vehicle design candidates and to assess the mobility of existing vehicles under specific scenarios. Although NRMM has... of different deployed platforms in different areas of operation and routes... improved flexibility as a design and procurement support tool through...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, B; Peter, D; Covellone, B
2009-07-02
Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.
Use of 3D models of vascular rings and slings to improve resident education.
Jones, Trahern W; Seckeler, Michael D
2017-09-01
Three-dimensional (3D) printing is a manufacturing method by which an object is created in an additive process, and it can be used with medical imaging data to generate accurate physical reproductions of organs and tissues for a variety of applications. We hypothesized that using 3D printed models of congenital cardiovascular lesions to supplement an educational lecture would improve learners' scores on a board-style examination. Patients with normal and abnormal aortic arches were selected and anonymized to generate 3D printed models. A cohort of pediatric and combined pediatric/emergency medicine residents was then randomized to intervention and control groups. Each participant was given a subjective survey and an objective board-style pretest. Each group received the same 20-minute lecture on vascular rings and slings. During the intervention group's lecture, 3D printed physical models of each lesion were distributed for inspection. After each lecture, both groups completed the same subjective survey and objective board-style test to assess their comfort with and postlecture knowledge of vascular rings. There were no differences in the basic demographics of the two groups. After the lectures, both groups' subjective comfort levels increased. Both groups' scores on the objective test improved, but the intervention group scored higher on the posttest. This study demonstrated a measurable gain in knowledge about vascular rings and pulmonary artery slings with the addition of 3D printed models of the defects. Future applications of this teaching modality could extend to other congenital cardiac lesions and different learners. © 2017 Wiley Periodicals, Inc.
Protein (multi-)location prediction: utilizing interdependencies via a generative model.
Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit
2015-06-15
Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins; however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it, which we call MDLoc, that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier, MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as upon results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.
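MDLoc couples Bayesian networks with a mixture model; as a far simpler illustration of why capturing inter-dependencies helps, the sketch below re-scores the outputs of independent per-location classifiers with pairwise co-occurrence statistics learned from the training labels. Everything here (the data, the four locations, the scoring rule) is a synthetic placeholder, not the published algorithm:

    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                # protein feature vectors (toy)
    Y = (rng.random((200, 4)) < 0.3).astype(int)  # 4 locations, multi-label

    # Independent per-location classifiers (the "simpler" baseline)
    clfs = [LogisticRegression().fit(X, Y[:, j]) for j in range(4)]

    # Smoothed pairwise co-occurrence frequencies from the training labels
    co = (Y.T @ Y + 1.0) / (len(Y) + 1.0)
    marg = Y.mean(axis=0)

    def score(x, labels):
        """Independent log-likelihood plus a pairwise dependency bonus."""
        s = sum(np.log(clfs[j].predict_proba(x[None])[0, 1]) for j in labels)
        for a, b in combinations(labels, 2):
            s += np.log(co[a, b] / (marg[a] * marg[b]))  # PMI-style term
        return s

    # Re-rank a few candidate location sets for one protein
    for cand in [(0,), (0, 1), (0, 2), (1, 3)]:
        print(cand, round(score(X[0], cand), 2))

The dependency term raises the score of location sets that co-occur more often than independence would predict, which is the intuition behind modeling locations jointly rather than separately.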
Temporal Subtraction of Digital Breast Tomosynthesis Images for Improved Mass Detection
2008-10-01
K. Fishman and B. M. W. Tsui, "Development of a computer-generated model for the coronary arterial tree based on multislice CT and morphometric data"... mathematical models based on geometric primitives [8-22]. Bakic et al. created synthetic x-ray mammograms using a 3D simulated breast tissue model consisting of... utilized a combination of voxel matrices and geometric primitives to create a breast phantom that includes the breast surface, the duct system, and
Modeling multilayer x-ray reflectivity using genetic algorithms
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.
2000-06-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying operators (selection, crossover, mutation, etc.) to the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
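A minimal sketch of the genetic-algorithm loop described above, in Python/NumPy. The reflectivity function is a toy stand-in for a real multilayer reflectivity simulation, and the population size, operator rates, and parameter bounds are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(1)
    theta = np.linspace(0.1, 2.0, 100)
    true_p = np.array([40.0, 4.0, 0.3])   # e.g. thickness, ratio, roughness

    def reflectivity(p):                  # placeholder forward model
        d, g, s = p
        return np.exp(-s * theta) * (1 + 0.5 * np.cos(2 * np.pi * theta * d / g))

    data = reflectivity(true_p)           # "experimental" curve

    def fitness(p):                       # higher is better
        return -np.sum((reflectivity(p) - data) ** 2)

    lo, hi = np.array([10.0, 1.0, 0.0]), np.array([80.0, 8.0, 1.0])
    pop = rng.uniform(lo, hi, size=(60, 3))   # each row is one individual

    for gen in range(200):
        f = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(f)[-30:]]    # selection: keep the best half
        i, j = rng.integers(0, 30, (2, 60))
        a = rng.random((60, 1))
        pop = a * parents[i] + (1 - a) * parents[j]          # blend crossover
        pop += rng.normal(0.0, 0.02, pop.shape) * (hi - lo)  # mutation
        pop = np.clip(pop, lo, hi)
        pop[0] = parents[-1]                  # elitism: keep the best

    print(pop[np.argmax([fitness(p) for p in pop])])

The global, derivative-free character of the search is what removes the need for a good starting guess, at the cost of many forward-model evaluations.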
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically from the virtual model of the object. To improve the efficiency of the dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that VRBAM is more flexible than and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
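The Point-Kernel step described above sums, over all source points, an attenuated inverse-square contribution scaled by a buildup factor. A rough Python sketch, where the buildup model is a simple linear stand-in rather than the Geometric-Progression fit used by VRBAM, and all physical constants are placeholders:

    import numpy as np

    def dose_rate(kernels, activities, detector, mu, k):
        """Point-Kernel estimate: each kernel contributes
        S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).
        Flux-to-dose conversion factors are omitted."""
        r = np.linalg.norm(kernels - detector, axis=1)
        mur = mu * r
        buildup = 1.0 + k * mur   # placeholder; VRBAM uses a
                                  # Geometric-Progression fit
        return np.sum(activities * buildup * np.exp(-mur)
                      / (4.0 * np.pi * r ** 2))

    rng = np.random.default_rng(0)
    kernels = rng.uniform(-0.5, 0.5, (1000, 3))  # simplified source, m
    activities = np.full(1000, 1e6 / 1000)       # Bq split across kernels
    mu = 9.0                                     # attenuation coeff., 1/m (assumed)
    print(dose_rate(kernels, activities, np.array([2.0, 0.0, 0.0]), mu, 1.2))

Adaptive simplification then reduces the number of kernels for distant detectors, since the inverse-square sum becomes insensitive to fine source detail at distance.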
Iterative refinement of implicit boundary models for improved geological feature reproduction
NASA Astrophysics Data System (ADS)
Martin, Ryan; Boisvert, Jeff B.
2017-12-01
Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits the introduction of locally varying orientations and magnitudes of anisotropy into boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
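As a rough illustration of RBF-based implicit boundary modeling, the sketch below fits a signed indicator to inside/outside sample points with a global cubic kernel; the zero level set of the interpolant is the modeled boundary. The sample data are synthetic, and the paper's domain decomposition and locally varying anisotropy are omitted:

    import numpy as np

    rng = np.random.default_rng(2)
    pts = rng.uniform(-1, 1, (60, 2))  # drillhole-like sample locations
    vals = np.where(pts[:, 0] ** 2 + 2 * pts[:, 1] ** 2 < 0.5, 1.0, -1.0)

    def kernel(a, b):                  # cubic RBF: |x - y|^3
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2) ** 3

    # Solve for weights so the interpolant honors the sample values
    w = np.linalg.solve(kernel(pts, pts) + 1e-9 * np.eye(len(pts)), vals)

    def implicit(x):
        """Signed value; the zero isosurface is the modeled boundary."""
        return kernel(np.atleast_2d(x), pts) @ w

    print(implicit([0.0, 0.0]), implicit([0.9, 0.9]))  # inside vs outside

Local anisotropy of the kind developed in the paper could be introduced by rotating and scaling coordinates inside the distance computation, with the transformation varying across decomposed subdomains.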
Next Generation NASA Initiative for Space Geodesy
NASA Technical Reports Server (NTRS)
Merkowitz, S. M.; Desai, S.; Gross, R. S.; Hilliard, L.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry, J. F.; Murphy, D.; Noll, C. E.;
2012-01-01
Space geodesy measurement requirements have become increasingly stringent as our understanding of the physical processes and our modeling techniques have improved. In addition, current and future spacecraft will have ever-increasing measurement capability and will lead to increasingly sophisticated models of changes in the Earth system. Ground-based space geodesy networks with enhanced measurement capability will be essential to meeting these oncoming requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. These requirements have been articulated by the Global Geodetic Observing System (GGOS). The NASA Space Geodesy Project (SGP) is developing a prototype core site as the basis for a next generation Space Geodetic Network (SGN) that would be NASA's contribution to a global network designed to produce the higher quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth Observing spacecraft. Each of the sites in the SGN would include co-located, state-of-the-art systems from all four space geodetic observing techniques (GNSS, SLR, VLBI, and DORIS). The prototype core site is being developed at NASA's Geophysical and Astronomical Observatory at Goddard Space Flight Center. The project commenced in 2011 and is scheduled for completion in late 2013. In January 2012, two multiconstellation GNSS receivers, GODS and GODN, were established at the prototype site as part of the local geodetic network. Development and testing are also underway on the next generation SLR and VLBI systems, along with a modern DORIS station. An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being performed to define the appropriate number and distribution of these next generation space geodetic core sites required to achieve the driving ITRF requirements. We present the status of this prototype next generation space geodetic core site, results from the analysis of data from the established geodetic stations, and results from the ongoing network design studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, Cathy
2014-04-30
This report contains the results from research aimed at improving short-range (0-6 hour) hub-height wind forecasts in the NOAA weather forecast models through additional data assimilation and model physics improvements for use in wind energy forecasting. Additional meteorological observing platforms, including wind profilers, sodars, and surface stations, were deployed for this study by NOAA and DOE, and additional meteorological data at or near wind turbine hub height were provided by South Dakota State University and WindLogics/NextEra Energy Resources over a large geographical area in the U.S. Northern Plains for assimilation into NOAA research weather forecast models. The resulting improvements in wind energy forecasts based on the research weather forecast models (with the additional data assimilation and model physics improvements) were examined in many different ways and compared with wind energy forecasts based on the current operational weather forecast models to quantify the forecast improvements important to power grid system operators and wind plant owners/operators participating in energy markets. Two operational weather forecast models (OP_RUC, OP_RAP) and two research weather forecast models (ESRL_RAP, HRRR) were used as the base wind forecasts for generating several different wind power forecasts for the NextEra Energy wind plants in the study area. Power forecasts were generated from the wind forecasts in a variety of ways, from very simple to quite sophisticated, as they might be used by a wide range of both general users and commercial wind energy forecast vendors. The error characteristics of each of these types of forecasts were examined and quantified using bulk error statistics for both the local wind plant and the system aggregate forecasts. The wind power forecast accuracy was also evaluated separately for high-impact wind energy ramp events. The overall bulk error statistics, calculated over the first six hours of the forecasts over the one-year study period, showed that the research weather model-based power forecasts (all types) had lower overall error rates than the current operational weather model-based power forecasts, both at the individual wind plant level and at the system aggregate level. The bulk error statistics of the various model-based power forecasts were also calculated by season and by model runtime/forecast hour, as power system operations are more sensitive to wind energy forecast errors during certain times of year and certain times of day. The results showed significant differences in seasonal forecast errors between the various model-based power forecasts. The analysis of forecast errors by model runtime and forecast hour showed that errors were largest during the times of day that have increased significance to power system operators (the overnight hours and the morning/evening boundary layer transition periods), but the research weather model-based power forecasts showed improvement over the operational weather model-based power forecasts at these times.
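A sketch of the kind of bulk error statistics described (bias, MAE, RMSE by model, season, and forecast hour), using pandas with hypothetical column names and a tiny inline sample:

    import numpy as np
    import pandas as pd

    # Hypothetical records: one row per (model, valid time, forecast hour)
    df = pd.DataFrame({
        "model": np.repeat(["OP_RAP", "HRRR"], 4),
        "valid_time": pd.to_datetime(["2013-01-15 06:00"] * 4
                                     + ["2013-07-15 18:00"] * 4),
        "fcst_hour": [1, 2, 3, 4] * 2,
        "forecast_mw": [95, 92, 88, 90, 60, 64, 58, 61],
        "observed_mw": [100, 90, 85, 95, 55, 65, 60, 66],
    })
    df["error"] = df["forecast_mw"] - df["observed_mw"]
    df["season"] = df["valid_time"].dt.month.map(
        lambda m: "DJF" if m in (12, 1, 2) else
                  "MAM" if m in (3, 4, 5) else
                  "JJA" if m in (6, 7, 8) else "SON")

    stats = (df.groupby(["model", "season", "fcst_hour"])["error"]
               .agg(bias="mean",
                    mae=lambda e: e.abs().mean(),
                    rmse=lambda e: np.sqrt((e ** 2).mean())))
    print(stats)

Stratifying by forecast hour is what exposes the overnight and boundary-layer-transition weaknesses noted in the report.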
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with the complexity of the model and the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. The approach is illustrated with examples of practical implementation using the KbRS scheduling system and the Enterprise Dynamics simulation system.
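The data mapping and transformation step can be pictured as converting a scheduling system's operation records into a simulation model description. The record layouts below are hypothetical (the KbRS and Enterprise Dynamics formats are not reproduced here); the sketch only shows the shape of such a semi-automatic mapping:

    import json

    # Hypothetical schedule export: one record per operation
    schedule = [
        {"job": "J1", "op": 10, "machine": "M1", "start": 0, "dur": 5},
        {"job": "J1", "op": 20, "machine": "M2", "start": 5, "dur": 3},
        {"job": "J2", "op": 10, "machine": "M1", "start": 5, "dur": 4},
    ]

    def to_simulation_model(schedule):
        """Map scheduling records onto a simulation description:
        one server per machine, one ordered routing per job."""
        machines = sorted({r["machine"] for r in schedule})
        routings = {}
        for r in sorted(schedule, key=lambda r: (r["job"], r["op"])):
            routings.setdefault(r["job"], []).append(
                {"server": r["machine"], "service_time": r["dur"]})
        return {"servers": [{"name": m} for m in machines],
                "routings": routings}

    print(json.dumps(to_simulation_model(schedule), indent=2))

Automating this mapping is what removes the labour-intensive manual model-building step that the paper targets.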
Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi
2016-08-24
Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE (http://bioinformatics.mdanderson.org/main/MuSE), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing.
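MuSE models allelic evolution with a Markov substitution model; as a much-reduced illustration of the sample-specific error idea alone, the toy sketch below estimates a sequencing error rate from the matched normal and tests whether the tumor's variant allele count exceeds it. This is not the MuSE algorithm, and the thresholds are invented:

    from scipy.stats import binomtest

    def call_variant(normal_alt, normal_depth, tumor_alt, tumor_depth,
                     min_error=1e-3, alpha=1e-6):
        """Toy subclonal caller: reject the 'sequencing error only'
        hypothesis when the tumor alt count is improbably high given
        the error rate estimated from the matched normal sample."""
        err = max(normal_alt / normal_depth, min_error)
        result = binomtest(tumor_alt, tumor_depth, err,
                           alternative="greater")
        return result.pvalue < alpha, result.pvalue

    # A 5% variant-allele-fraction site at 1000x, with a clean normal
    print(call_variant(normal_alt=1, normal_depth=800,
                       tumor_alt=50, tumor_depth=1000))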
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Gobrogge, Eric; Fu, Li
Here we show that the addition of chemical interactions of soluble polysaccharides with a surfactant monolayer improves agreement of modeled sea spray chemistry with observed marine aerosol chemistry. In particular, the fraction of hydroxyl functional groups in modeled sea spray organic matter is increased, improving agreement with FTIR observations of marine aerosol composition. The overall organic fraction of submicron sea spray also increases, allowing organic mass fractions in the range 0.5-0.7 for submicron sea spray particles over highly active phytoplankton blooms. We show results from Sum Frequency Generation (SFG) experiments that support the modeling approach, by demonstrating that soluble polysaccharides can strongly adsorb to a lipid monolayer via coulombic interactions under appropriate conditions.
Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.
2008-01-01
The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of the volcanic models play in the generation of the LP signal and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are studied in detail. We focus our study on an LP event observed at Kilauea volcano, Hawaii, in 2001 May. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and the presence of low velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating subhorizontal cracks. On the other hand, the source time function and its duration are significantly different for different models. These results support a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results. © 2008 The Authors. Journal compilation © 2008 RAS.
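The frequency-domain inversion for a moment tensor plus a single force is linear at each frequency: the observed spectra are a weighted sum of Green's-function spectra for the six moment-tensor components and three force components. A schematic Python sketch with synthetic placeholder arrays standing in for real Green's functions and data:

    import numpy as np

    rng = np.random.default_rng(3)
    n_sta, n_freq = 12, 64

    # Green's function spectra: (frequency, station, 6 MT + 3 force terms)
    G = (rng.normal(size=(n_freq, n_sta, 9))
         + 1j * rng.normal(size=(n_freq, n_sta, 9)))

    # Synthetic "observed" spectra from a known source, plus noise
    m_true = np.zeros((n_freq, 9), dtype=complex)
    m_true[:, 2] = 1.0                    # e.g. a dominant Mzz component
    d = np.einsum("fsk,fk->fs", G, m_true)
    d += 0.05 * (rng.normal(size=d.shape) + 1j * rng.normal(size=d.shape))

    # Frequency-by-frequency complex least squares for the 9 source terms
    m_est = np.stack([np.linalg.lstsq(G[f], d[f], rcond=None)[0]
                      for f in range(n_freq)])
    print(abs(m_est[:, 2]).mean())        # recovers ~1 for the Mzz term

An inverse FFT of each recovered column would give the time-dependent moment-tensor and force terms whose durations the study compares across crustal models.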
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFCstandard model performance is modest-to-good but should be improved, and it requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
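The evaluation metrics named above can be reproduced in a few lines of scikit-learn. The sketch below uses synthetic features in place of dose-volume covariates; the calibration slope is computed, as is conventional, by regressing outcomes on the log-odds of the predicted probabilities (a slope of 1 indicates good calibration):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=400, n_features=12, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    p = np.clip(clf.predict_proba(Xte)[:, 1], 1e-6, 1 - 1e-6)

    auc = roc_auc_score(yte, p)
    # Calibration slope: logistic fit of outcomes on prediction log-odds
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    slope = LogisticRegression().fit(logit, yte).coef_[0, 0]
    print(f"AUC={auc:.2f}, calibration slope={slope:.2f}")

A mean calibration slope of 3.9, as reported for the RFCstandard model, indicates predicted risks that vary too little around the event rate and would need recalibration before clinical use.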
Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.
NASA Astrophysics Data System (ADS)
Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin
1998-11-01
Numerous numerical models have been developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. To evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the measured puff evolution fairly well in terms of shape and arrival time, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.
Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diakov, Victor; Cole, Wesley; Sullivan, Patrick
2015-11-01
Capacity expansion models (CEM) provide a high-level long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on short-term (daily, hourly and sub-hourly) scales. Production-cost models (PCM) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data by minimizing production costs and following reliability requirements. When based on CEM predictions about generating unit retirements and buildup, PCM provide more detailed simulation of short-term system operation and, consequently, may confirm the validity of capacity expansion predictions. Further, production cost model simulations of a system based on a capacity expansion model solution are evolutionarily sound: the generator mix is the result of a logical sequence of unit retirements and buildup resulting from policy and incentives. The above has motivated us to bridge CEM with PCM by building a capacity-expansion-to-production-cost-model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and the production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.
Energy Economics of Farm Biogas in Cold Climates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pillay, Pragasen; Grimberg, Stefan; Powers, Susan E
Anaerobic digestion of farm and dairy waste has been shown to be capital intensive. One way to improve digester economics is to co-digest high-energy substrates together with the dairy manure. Cheese whey, for example, represents a high-energy substrate that is generated during cheese manufacture. There are currently no quantitative tools available that predict the performance of co-digestion farm systems. The goal of this project was to develop a mathematical tool that would (1) predict the impact of co-digestion and (2) determine the best use of the generated biogas for a cheese manufacturing plant. Two models were developed that separately could be used to meet both goals of the project. Given current pricing structures, the most economical use of the generated biogas at the cheese manufacturing plant was as a replacement for fuel oil to generate heat. The developed digester model accurately predicted the performance of 26 farm digesters operating in the northeastern U.S.
Hydrodynamic aspects of thrust generation in gymnotiform swimming
NASA Astrophysics Data System (ADS)
Shirgaonkar, Anup A.; Curet, Oscar M.; Patankar, Neelesh A.; Maciver, Malcolm A.
2008-11-01
The primary propulsor in gymnotiform swimmers is a fin running along most of the ventral midline of the fish. The fish propagates traveling waves along this ribbon fin to generate thrust. This unique mode of thrust generation gives these weakly electric fish great maneuverability in cluttered spaces. To understand the mechanical basis of gymnotiform propulsion, we investigated the hydrodynamics of a model ribbon fin of an adult black ghost knifefish using high-resolution numerical experiments. We found that the principal mechanism of thrust generation is a central jet imparting momentum to the fluid, with associated vortex rings near the free edge of the fin. The high-fidelity simulations also reveal secondary vortex rings potentially useful in rapid sideways maneuvers. We obtained the scaling of thrust with respect to the traveling wave kinematic parameters. Using a fin-plate model for a fish, we also discuss improvements to Lighthill's inviscid theory for gymnotiform and balistiform modes in terms of thrust magnitude, viscous drag on the body, and momentum enhancement.
New and Improved GLDAS Data Sets and Data Services at NASA GES DISC
NASA Technical Reports Server (NTRS)
Rui, Hualan; Beaudoing, Hiroko; Teng, William; Vollmer, Bruce; Rodell, Matthew; Lei, Guang-Dih
2012-01-01
The goal of a Land Data Assimilation System (LDAS) is to ingest satellite- and ground-based observational data products, using advanced land surface modeling and data assimilation techniques, in order to generate optimal fields of land surface states and fluxes and, thereby, facilitate hydrology and climate modeling, research, and forecasting. With the motivation of creating more climatologically consistent data sets, NASA GSFC's Hydrological Sciences Laboratory has generated more than 60 years (Jan. 1948 - Dec. 2008) of Global LDAS Version 2 (GLDAS-2) data, using the Princeton Forcing Data Set and upgraded versions of Land Surface Models (LSMs). GLDAS data and data services are provided at the NASA GES DISC Hydrology Data and Information Services Center (HDISC), in collaboration with HSL and LDAS.
Microburst vertical wind estimation from horizontal wind measurements
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1994-01-01
The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.
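One simple vertical wind model of the kind evaluated here follows from mass continuity: the downdraft is the vertical integral of the measured horizontal divergence. A schematic NumPy sketch on a synthetic microburst-like outflow (grid spacing, decay scale, and amplitudes are illustrative only):

    import numpy as np

    def vertical_wind(u, v, dx, dy, dz):
        """Estimate w from horizontal winds via incompressible mass
        continuity, dw/dz = -(du/dx + dv/dy), integrating upward
        from w = 0 at the ground. u, v have shape (nz, ny, nx)."""
        div = np.gradient(u, dx, axis=2) + np.gradient(v, dy, axis=1)
        return -np.cumsum(div, axis=0) * dz

    # Toy axisymmetric outflow strengthening toward the ground
    nz, ny, nx, d = 20, 40, 40, 100.0          # 100 m grid spacing
    x = (np.arange(nx) - nx / 2) * d
    y = (np.arange(ny) - ny / 2) * d
    z = np.arange(nz) * d
    X, Y = np.meshgrid(x, y)
    decay = np.exp(-z / 600.0)[:, None, None]  # outflow decays with height
    u = decay * 1e-3 * X[None]
    v = decay * 1e-3 * Y[None]

    w = vertical_wind(u, v, d, d, d)
    print(w[3, ny // 2, nx // 2])              # negative: downdraft core

A Doppler sensor measures only the along-beam component, so a model of this kind supplies the missing vertical term when estimating the total wind shear hazard.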
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
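The replacement scheme can be illustrated as a textual substitution pass that runs before code generation. The sketch below is hypothetical and much simpler than the described generator; it swaps two derivative terms for explicit finite-difference stencils:

    # Replacement scheme: substitute partial-derivative terms in a model
    # equation with their numerical-solution expressions (explicit
    # stencils here) before generating program code.
    STENCILS = {
        "d2V/dx2": "(V[i-1] - 2*V[i] + V[i+1]) / (dx*dx)",
        "dV/dt": "(V_new[i] - V[i]) / dt",
    }

    def discretize(equation):
        """Return the equation with every PDE term replaced by its
        discrete stencil; the generator then performs dependency
        analysis on such instances."""
        for term, stencil in STENCILS.items():
            equation = equation.replace(term, stencil)
        return equation

    # A cable-equation-like model line, as a generator might hold it
    print(discretize("dV/dt = D * d2V/dx2 + I_ion(V[i])"))

In the actual system the same idea extends to boundary-condition equations and implicit schemes, with dependency analysis ordering the generated statements.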
Controlled generation of large volumes of atmospheric clouds in a ground-based environmental chamber
NASA Technical Reports Server (NTRS)
Hettel, H. J.; Depena, R. G.; Pena, J. A.
1975-01-01
Atmospheric clouds were generated in a 23,000 cubic meter environmental chamber as the first step in a two-part study on the effects of contaminants on cloud formation. The generation procedure was modeled on the terrestrial generation mechanism so that naturally occurring microphysics mechanisms were operative in the cloud generation process. Temperature, altitude, liquid water content, and convective updraft velocity could be selected independently over the range of terrestrially realizable clouds. To provide cloud stability, a cotton muslin cylinder 29.3 meters in diameter and 24.2 meters high was erected within the chamber and continuously wetted with water at precisely the same temperature as the cloud. The improved instrumentation, which permitted fast, precise, and continual measurements of cloud temperature and liquid water content, is described.
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators to the regenerated condensate fields. We show that errors due to both the assumption of maximum-random overlap and that of homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
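A subcolumn generator of the kind discussed turns a profile of layer cloud fractions into an ensemble of binary cloud/clear columns. One standard formulation of maximum-random overlap (adjacent cloudy layers maximally overlapped, layers separated by clear air randomly overlapped) can be sketched as follows; the cloud-fraction profile is illustrative:

    import numpy as np

    def subcolumns_maxrand(cf, n_sub, rng):
        """Generate binary subcolumns from layer cloud fractions cf
        (ordered top to bottom) under maximum-random overlap. A
        subcolumn is cloudy in layer k where its rank x[:, k] <= cf[k]."""
        n_lev = len(cf)
        x = np.empty((n_sub, n_lev))
        x[:, 0] = rng.random(n_sub)
        for k in range(1, n_lev):
            prev = x[:, k - 1]
            cloudy_above = prev <= cf[k - 1]
            # Cloudy above: reuse the rank (maximum overlap). Clear
            # above: redraw uniformly on the clear part of the interval
            # (random overlap across gaps), preserving each layer's
            # cloud fraction.
            redraw = cf[k - 1] + rng.random(n_sub) * (1.0 - cf[k - 1])
            x[:, k] = np.where(cloudy_above, prev, redraw)
        return x <= cf[None, :]

    rng = np.random.default_rng(0)
    cf = np.array([0.2, 0.5, 0.0, 0.3])  # layer cloud fractions
    sub = subcolumns_maxrand(cf, 10000, rng)
    print(sub.mean(axis=0))              # reproduces cf per layer

The generalized-overlap improvement mentioned above replaces the hard cloudy/clear switch with a decorrelation-length-weighted blend of maximum and random overlap, and condensate heterogeneity is added by drawing condensate amounts from a distribution instead of using the layer mean.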