Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs
2014-06-01
comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree...
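The graph measurements named in the abstract above (vertex and edge count, average vertex degree) can be sketched on a toy union of traceroute paths; the hop names and paths below are invented for illustration:

```python
def union_graph(paths):
    """Build the union of traceroute paths as an undirected graph
    (node set plus set of undirected edges)."""
    nodes, edges = set(), set()
    for path in paths:
        nodes.update(path)
        for u, v in zip(path, path[1:]):
            edges.add(frozenset((u, v)))  # undirected: {u, v} == {v, u}
    return nodes, edges

# Two probing runs over the same (invented) topology
runs = [["src", "r1", "r2", "dst"], ["src", "r1", "r3", "dst"]]
nodes, edges = union_graph(runs)
avg_degree = 2 * len(edges) / len(nodes)  # each edge contributes 2 to total degree
print(len(nodes), len(edges), avg_degree)
```

Comparing two probing methodologies then reduces to comparing these summary numbers for each methodology's union graph.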
Updated Model of the Solar Energetic Proton Environment in Space
NASA Astrophysics Data System (ADS)
Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami
2018-05-01
The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).
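The "1-in-x-year SPE" framing above can be illustrated with an empirical exceedance calculation; this is a generic sketch, not SAPPHIRE's actual fitting procedure, and the sample fluences and event rate are invented:

```python
def one_in_x_year_level(event_samples, events_per_year, x_years):
    """Empirical level exceeded on average once per x years.
    With N events per year, a single event exceeds the 1-in-x-year
    level with probability 1 / (N * x_years)."""
    exceed_prob = 1.0 / (events_per_year * x_years)
    s = sorted(event_samples)
    # index of the (1 - exceed_prob) empirical quantile
    idx = min(len(s) - 1, int((1.0 - exceed_prob) * len(s)))
    return s[idx]

# Invented per-event fluences, assuming 5 SPEs/year on average
fluences = list(range(1, 101))
print(one_in_x_year_level(fluences, events_per_year=5, x_years=10))
```

A fitted distribution (as in the model) would replace the empirical quantile with an analytic one, but the exceedance logic is the same.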
NASA Astrophysics Data System (ADS)
Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald
2017-04-01
The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using the post-processing methods to implement a new approach to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison
Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi
2004-04-01
The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has received increasing attention. Based on many years of research and guided by energy and material flow theory, this paper puts forward an ecological management model, with a hill as the basic cell, according to the natural, social and economic characteristics of the Houshan dryland farming area inside the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on field tests showed that, compared with the traditional model, the ecological management model could increase solar use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the models, the artificial grassland model had the lowest solar use efficiency, energy output and energy conversion efficiency, while the ecological management model had the highest outputs and benefits and was the best model with high economic effect, increasing economic benefits by 16.1% compared with the traditional model.
NASA Astrophysics Data System (ADS)
Taghavi, F.; Owlad, E.; Ackerman, S. A.
2017-03-01
South-west Asia, including the Middle East, is one of the regions most prone to dust storm events. In recent years there has been an increase in the occurrence of these environmental and meteorological phenomena. Remote sensing can serve as a practical method to detect and also characterise these events. In this study, two dust enhancement algorithms were used to investigate the behaviour of dust events using satellite data, compare it with numerical model output and other satellite products, and finally validate it with in-situ measurements. The results show that the thermal infrared algorithm enhances dust more accurately. The aerosol optical depth from MODIS and the output of the Dust Regional Atmospheric Model (DREAM8b) are used to compare the results. Ground-based observations from synoptic stations and sun photometers are used to validate the satellite products. To find the transport direction, the locations of the dust sources, and the synoptic situations during these events, model outputs (HYSPLIT and NCEP/NCAR) are presented. Comparing the results with synoptic maps and the model outputs showed that the enhancement algorithms detect dust more reliably than other MODIS products or model outputs.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data not only include large repositories of data from scientific platforms like satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through relevant metadata, but larger collections of runs can now also be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.
Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters
NASA Technical Reports Server (NTRS)
Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.
1989-01-01
The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.
A Model of Medical Countermeasures for Organophosphates
2015-10-01
[Front-matter residue: figure list includes "Model Output for AChE Activity and Free/Stimulated Receptor Fraction with No OP Exposure" (Figure 4-3) and "Sarin Model Output Compared to Individual AChE Activity in Acute Phase Following Tokyo Sarin Attack" (Figure 6-1); Section 6.2.1 covers verifying AChE activity against animal data.]
Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, William Monford
2018-02-07
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
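Multi-criteria comparison with Pareto optimality, as mentioned above, can be sketched as a non-dominated filter over per-criterion error scores; the scores below are invented for illustration:

```python
def pareto_front(points):
    """Keep points not dominated by any other point.
    q dominates p when q is <= p on every criterion and q != p
    (criteria are errors to be minimized; assumes no duplicate points)."""
    return [p for p in points
            if not any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                       for q in points)]

# Error of each model parameterization on two assessment criteria (invented)
errors = [(0.1, 0.5), (0.2, 0.2), (0.4, 0.1), (0.3, 0.3)]
print(pareto_front(errors))
```

The front contains the parameterizations representing genuine trade-offs between criteria; dominated ones, here (0.3, 0.3), can be discarded.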
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
NASA Astrophysics Data System (ADS)
Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu
2016-10-01
Biophysical individual-based models (IBMs) have been used to study aspects of early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we hope to rectify this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g. distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful to determine the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on details of the distributions and whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply and some more complex.
We also give recommendations for a multistep process to validate spatial output from IBMs.
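The overlap-style metrics named above can be illustrated on flattened gridded fields. This is one plausible formulation of an overlap coefficient and a per-cell normalized difference, not necessarily the exact definitions used in the study, and the field values are invented:

```python
model = [0.0, 2.0, 1.0, 0.0]  # modeled abundance per grid cell (invented)
obs   = [0.0, 1.0, 2.0, 1.0]  # observed abundance per grid cell (invented)

# Overlap coefficient: shared mass of the two normalized distributions,
# 1.0 for identical fields, 0.0 for fully disjoint ones
sm, so = sum(model), sum(obs)
oc = sum(min(m / sm, o / so) for m, o in zip(model, obs))

# Normalized difference per cell, in [-1, 1]; 0 where both cells are empty
ndi = [(m - o) / (m + o) if m + o else 0.0 for m, o in zip(model, obs)]

print(round(oc, 4), [round(v, 4) for v in ndi])
```

As the abstract notes, both metrics depend on how the fields are binned into cells, so the discretization choice matters as much as the formula.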
Metamodels for Ozone: Comparison of Three Estimation Techniques
A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techn...
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
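The compare-up-to-tolerance step described above can be sketched as a small harness; the exact-arithmetic reference standing in for the formal model here is an invented example, not the PVS models themselves:

```python
from fractions import Fraction

def outputs_agree(reference, implementation, inputs, tol=1e-12):
    """Check that the implementation output matches the reference model
    output on every test input, up to a tolerance."""
    return all(abs(reference(x) - implementation(x)) <= tol for x in inputs)

def reference(x):
    # Exact rational arithmetic stands in for the formal model
    return float(Fraction(x) * Fraction(3, 10))

def implementation(x):
    # Floating-point "software" version of the same computation
    return x * 0.3

print(outputs_agree(reference, implementation, [0.0, 1.0, 2.5, 10.0]))
```

Randomly generated inputs (as with PVSio in the paper) would replace the fixed suite, but the pass/fail criterion is the same tolerance check.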
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
NASA Astrophysics Data System (ADS)
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent outputs from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
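Aggregating binary flood-extent maps by model agreement and scoring them against observations can be sketched as follows; the grids, the k-of-n voting rule, and the critical-success-index score are illustrative choices, not necessarily those of the intercomparison:

```python
def aggregate(extents, k):
    """Flag a cell as flooded when at least k of the models flag it."""
    return [int(sum(cells) >= k) for cells in zip(*extents)]

def csi(pred, obs):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hits         = sum(1 for p, o in zip(pred, obs) if p and o)
    misses       = sum(1 for p, o in zip(pred, obs) if not p and o)
    false_alarms = sum(1 for p, o in zip(pred, obs) if p and not o)
    return hits / (hits + misses + false_alarms)

# Three invented model extents over a 4-cell grid, plus observed extent
models = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 0]]
observed = [1, 1, 0, 0]
for k in (1, 2, 3):
    agg = aggregate(models, k)
    print(k, agg, round(csi(agg, observed), 3))
```

Sweeping k shows how stricter agreement thresholds trade false alarms for misses, which is the kind of question the aggregated dataset lets one ask.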
Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.
Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H
2000-06-01
Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream to a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration.
This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.
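The mean ± SD percent differences reported above come from a simple calculation that can be sketched directly; the flow values below are invented:

```python
from statistics import mean, stdev

def percent_diff(recorded, reference):
    """Percent difference of each recorded flow from its reference flow."""
    return [100.0 * (r - f) / f for r, f in zip(recorded, reference)]

recorded  = [102.4, 98.0, 105.0]   # PT-recorded PEF, invented (L/min)
reference = [100.0, 100.0, 100.0]  # pump reference PEF, invented (L/min)
d = percent_diff(recorded, reference)
print(round(mean(d), 2), round(stdev(d), 2))
```

Swapping the reference between the pump's input profile and the model-predicted output profile reproduces the two comparisons the study reports.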
Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.
Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z
Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
Ensemble modelling and structured decision-making to support Emergency Disease Management.
Webb, Colleen T; Ferrari, Matthew; Lindström, Tom; Carpenter, Tim; Dürr, Salome; Garner, Graeme; Jewell, Chris; Stevenson, Mark; Ward, Michael P; Werkman, Marleen; Backer, Jantien; Tildesley, Michael
2017-03-01
Epidemiological models in animal health are commonly used as decision-support tools to understand the impact of various control actions on infection spread in susceptible populations. Different models contain different assumptions and parameterizations, and policy decisions might be improved by considering outputs from multiple models. However, a transparent decision-support framework to integrate outputs from multiple models is nascent in epidemiology. Ensemble modelling and structured decision-making integrate the outputs of multiple models, compare policy actions and support policy decision-making. We briefly review the epidemiological application of ensemble modelling and structured decision-making and illustrate the potential of these methods using foot and mouth disease (FMD) models. In case study one, we apply structured decision-making to compare five possible control actions across three FMD models and show which control actions and outbreak costs are robustly supported and which are impacted by model uncertainty. In case study two, we develop a methodology for weighting the outputs of different models and show how different weighting schemes may impact the choice of control action. Using these case studies, we broadly illustrate the potential of ensemble modelling and structured decision-making in epidemiology to provide better information for decision-making and outline necessary development of these methods for their further application. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
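The model-weighting idea in case study two can be sketched with invented per-model outbreak costs for two control actions; the simple weighted average shown is one of many possible weighting schemes, not the paper's specific methodology:

```python
def expected_costs(costs_by_model, weights):
    """Weighted-average outbreak cost of each control action across models."""
    actions = costs_by_model[0].keys()
    return {a: sum(w * c[a] for w, c in zip(weights, costs_by_model))
            for a in actions}

# Invented costs from three hypothetical FMD models (arbitrary units)
costs = [{"cull": 10.0, "vaccinate": 12.0},
         {"cull": 14.0, "vaccinate": 9.0},
         {"cull": 13.0, "vaccinate": 11.0}]

equal  = expected_costs(costs, [1/3, 1/3, 1/3])   # equal weighting
skewed = expected_costs(costs, [0.8, 0.1, 0.1])   # trust model 1 most
print(min(equal, key=equal.get), min(skewed, key=skewed.get))
```

Note that the recommended action flips between the two weighting schemes, which is exactly the sensitivity the case study examines.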
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years eco-efficiency, which considers the effect of the production process on the environment in determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: the economically desirable and the environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excess in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable, as the effects of uncertain data will not be considered. In this case, the interval data approach is suitable for accounting for data uncertainty, as it is much simpler to model and needs less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were then compared to results obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases.
It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. This is consistent with the hypothesis, since farmers in the optimistic scenario operate in the best production situation, while those in the pessimistic scenario operate in the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
Real-time implementation of biofidelic SA1 model for tactile feedback.
Russell, A F; Armiger, R S; Vogelstein, R J; Bensmaia, S J; Etienne-Cummings, R
2009-01-01
In order for the functionality of an upper-limb prosthesis to approach that of a real limb, it must be able to convey sensory feedback to the limb user accurately and intuitively. This paper presents results of the real-time implementation of a 'biofidelic' model that describes mechanotransduction in Slowly Adapting Type 1 (SA1) afferent fibers. The model accurately predicts the timing of action potentials for arbitrary force or displacement stimuli, and its output can be used as stimulation times for peripheral nerve stimulation by a neuroprosthetic device. The model's performance was verified by comparing its predicted action potential (or spike) outputs against measured spike outputs for different vibratory stimuli. Furthermore, experiments were conducted to show that, like real SA1 fibers, the model's spike rate varies according to input pressure and that a periodic 'tapping' stimulus evokes periodic spike outputs.
NASA Astrophysics Data System (ADS)
Van Den Broeke, Matthew S.; Kalin, Andrew; Alavez, Jose Abraham Torres; Oglesby, Robert; Hu, Qi
2017-11-01
In climate modeling studies, there is a need to choose a suitable land surface model (LSM) while adhering to available resources. In this study, the viability of three LSM options (Community Land Model version 4.0 [CLM4.0], Noah-MP, and the five-layer thermal diffusion [Bucket] scheme) in the Weather Research and Forecasting model version 3.6 (WRF3.6) was examined for the warm season in a domain centered on the central USA. Model output was compared to Parameter-elevation Relationships on Independent Slopes Model (PRISM) data, a gridded observational dataset including mean monthly temperature and total monthly precipitation. Modeled temperature, precipitation, latent heat (LH) flux, sensible heat (SH) flux, and soil water content (SWC) were compared to observations from sites in the Central and Southern Great Plains region. An overall warm bias was found in CLM4.0 and Noah-MP, with a cool bias of larger magnitude in the Bucket model. These three LSMs produced similar patterns of wet and dry biases. Model output of SWC and LH/SH fluxes was compared to observations, and did not show a consistent bias. Both sophisticated LSMs appear to be viable options for simulating the effects of land use change in the central USA.
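The warm/cool bias comparison against gridded observations described above reduces to a mean-difference calculation, sketched here with invented monthly temperatures:

```python
def mean_bias(model, obs):
    """Mean model-minus-observation difference; positive = warm bias."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

model_t = [25.1, 27.3, 29.0]  # modeled monthly mean temps, invented (deg C)
prism_t = [24.5, 26.8, 28.2]  # gridded observed temps, invented (deg C)
print(round(mean_bias(model_t, prism_t), 2))
```

Applied per grid cell and per LSM, the sign and magnitude of this quantity give the warm/cool bias maps the study compares.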
Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.
Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun
2017-10-01
To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and to compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple-comparison correction. The dual-input two-compartment model assuming that venous flow equals arterial flow plus portal venous flow, with no bile duct output, better described the liver tissue enhancement, with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function with pharmacokinetic modeling under proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of hepatic function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
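The Akaike information criterion used above to rank output assumptions can, for least-squares fits, be sketched as AIC = n·ln(RSS/n) + 2k: a richer compartment model only wins if its fit improves enough to offset the +2 penalty per extra parameter. The "fits" below are synthetic stand-ins, not DCE-MRI data.

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
    n = len(residuals)
    rss = float(np.sum(np.asarray(residuals) ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Two hypothetical enhancement-curve fits to the same data: the 6-parameter
# model fits only marginally better, so its AIC is worse than the
# 2-parameter model's.
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 60)
data = np.sin(x) + 0.05 * rng.standard_normal(60)
res_simple = data - np.sin(x)        # residuals of a 2-parameter model
res_complex = res_simple * 0.999     # near-identical fit, 6 parameters
print(aic_least_squares(res_simple, 2), aic_least_squares(res_complex, 6))
```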
Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares
NASA Technical Reports Server (NTRS)
Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel
2014-01-01
The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution) and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and to gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the interpolated RCM output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
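The bias-correction step at the heart of BCSD is commonly an empirical quantile mapping: each future model value is replaced by the observed value at the same quantile of the historical model climatology. A minimal sketch, with synthetic temperatures and an assumed constant 2-degree model bias:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each future model value to the
    observed value at its quantile within the historical model distribution."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.sort(np.asarray(obs_hist, dtype=float))
    q = np.searchsorted(model_hist, model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

# A model that runs 2 degrees too warm historically: the mapping removes
# (approximately) that bias from the future projection as well.
rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, 5000)      # observed historical temperatures
mod = obs + 2.0                        # biased model climatology
future = rng.normal(18.0, 3.0, 1000)   # biased future projection
corrected = quantile_map(mod, obs, future)
print(round(float(np.mean(future) - np.mean(corrected)), 2))
```

Real BCSD applies this per month and per grid cell, and disaggregates the corrected coarse fields spatially afterwards.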
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
London, Michael; Larkum, Matthew E; Häusser, Michael
2008-11-01
Synaptic information efficacy (SIE) is a statistical measure of the efficacy of a synapse. It measures how much information is gained, on average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current, indicating that in this context some spikes may be more important than others. Lastly, we demonstrate another way in which mutual information can help evaluate the quality of a model: by measuring the mutual information between the model's output and the neuron's output. The SIE could thus be a useful tool for assessing how well single-neuron models preserve the input-output relationship, a property that becomes crucial when such reduced models are connected to construct complex, realistic neuronal networks.
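SIE rests on mutual information between spike trains. A minimal plug-in estimator over binned spike counts (synthetic Poisson trains, not the recorded data) shows why an input-driven output scores higher than an output driven purely by background activity:

```python
import numpy as np

def mutual_information_bits(x, y, bins=8):
    """Plug-in estimate of mutual information (bits) between two discretized
    signals, e.g. spike counts of input and output trains in small windows."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Windowed spike counts: one output inherits the input's count, one is
# an independent "background-driven" train with the same mean rate.
rng = np.random.default_rng(2)
inp = rng.poisson(5, 20000).astype(float)
driven = inp + rng.poisson(1, 20000)              # input-driven output
background = rng.poisson(6, 20000).astype(float)  # independent output
print(mutual_information_bits(inp, driven), mutual_information_bits(inp, background))
```

The plug-in estimator is positively biased for finite data; the papers in this area use more careful estimators, but the ordering shown here is the point.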
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
NASA Astrophysics Data System (ADS)
Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.
2009-04-01
Climate model output has been applied in several studies of glacier mass balance calculation. To date, mass balance has mostly been computed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps over the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution, while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. While summer melt measured at stakes on several glaciers is well reproduced by the model, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated with regional biases in the meteorological input fields.
We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.
Control design methods for floating wind turbines for optimal disturbance rejection
NASA Astrophysics Data System (ADS)
Lemmer, Frank; Schlipf, David; Cheng, Po Wen
2016-09-01
An analysis of the floating wind turbine as a multi-input, multi-output system is presented, investigating the effect of the control inputs on the system outputs. These effects are compared to those of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies at which the outputs are most affected, owing to the limited effect of the controlled variables, are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulty of damping responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model relative to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.
Predicting High-Power Performance in Professional Cyclists.
Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K
2017-03-01
To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
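The APR power-duration relationship described above can be sketched as an exponential decay from sprint peak power toward maximal aerobic power; the rider values and decay constant below are illustrative, not the study's fitted parameters.

```python
import math

def apr_power(t_s, map_w, sprint_peak_w, k=0.026):
    """Anaerobic power reserve model: predicted maximal mean power (W) for an
    all-out effort of duration t_s (s), decaying exponentially from sprint
    peak power toward maximal aerobic power. k is illustrative."""
    return map_w + (sprint_peak_w - map_w) * math.exp(-k * t_s)

# Illustrative rider: MAP 430 W, sprint peak power 1200 W.
for t in (5, 30, 120, 300):
    print(t, round(apr_power(t, 430, 1200)))
```

At t = 0 the model returns the sprint peak, and for long durations it asymptotes to MAP, which is the single-exponential behaviour the abstract describes.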
Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro
2014-01-01
Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prostheses and finger exoskeletons. However, when estimating kinematics with many degrees of freedom (DOFs) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers the inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANNs), which are used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to a multi-output ANN and even to dedicated or independent regression models.
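A common way to encode coupling between output DOFs in a multi-output GP is the intrinsic coregionalization model, whose joint kernel is the Kronecker product of an output-coupling matrix B with an input kernel. This is a generic sketch of that idea, not the paper's convolved-GP construction; B and the toy "joint angle" data are assumed.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def icm_gp_predict(x, Y, x_star, B, noise=1e-2, ls=1.0):
    """Multi-output GP posterior mean under an intrinsic coregionalization
    model: joint kernel kron(B, k_rbf), B encoding coupling between outputs
    (e.g. correlated finger joints). x: (n,), Y: (n, d), B: (d, d)."""
    n, d = Y.shape
    K = np.kron(B, rbf(x, x, ls)) + noise * np.eye(n * d)
    Ks = np.kron(B, rbf(x_star, x, ls))
    alpha = np.linalg.solve(K, Y.T.reshape(-1))  # outputs stacked blockwise
    return (Ks @ alpha).reshape(d, -1).T         # (m, d)

# Two coupled "joints" tracing scaled copies of the same trajectory.
x = np.linspace(0, 6, 30)
Y = np.column_stack([np.sin(x), 0.8 * np.sin(x)])
B = np.array([[1.0, 0.8], [0.8, 1.0]])  # assumed output coupling
x_star = np.array([1.0, 2.5, 4.0])
pred = icm_gp_predict(x, Y, x_star, B)
```

In practice B is learned from data along with the kernel hyperparameters rather than fixed by hand.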
NASA Astrophysics Data System (ADS)
Fang, W.; Quan, S. H.; Xie, C. J.; Ran, B.; Li, X. L.; Wang, L.; Jiao, Y. T.; Xu, T. W.
2017-05-01
The majority of the thermal energy released in an automotive internal combustion cycle is exhausted as waste heat through the tail pipe. This paper describes an automobile exhaust thermoelectric generator (AETEG) designed to recycle automobile waste heat. A model of the output characteristics of each thermoelectric device was established by measuring its open-circuit voltage and internal resistance. To better describe the relationships, the physical model was transformed into a topological model, in which a connection matrix describes the relationship between any two thermoelectric devices. Different topological structures produce different power outputs; the output power was maximised by using an iterative algorithm to optimize the series-parallel electrical topology. The experimental results showed that the output power of the optimal topology increases by 18.18% and 29.35% versus that of a purely series or purely parallel topology, respectively, and by 10.08% versus a manually defined structure (based on user experience). The thermoelectric conversion device increased energy efficiency by 40% when compared with a traditional car.
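Why topology matters can be seen from the Thevenin equivalents of the two extreme connections of mismatched modules; the module voltages and resistances below are hypothetical, and the full optimization searches mixed series-parallel structures between these extremes.

```python
def equivalent_source(modules, series=True):
    """Thevenin equivalent (V_oc, R_int) of thermoelectric modules connected
    all in series or all in parallel; modules = [(v_oc, r_int), ...]."""
    if series:
        return sum(v for v, _ in modules), sum(r for _, r in modules)
    g = sum(1.0 / r for _, r in modules)          # total conductance
    v = sum(v / r for v, r in modules) / g        # current-weighted voltage
    return v, 1.0 / g

def load_power(v_oc, r_int, r_load):
    """Power delivered to a resistive load by a Thevenin source."""
    i = v_oc / (r_int + r_load)
    return i * i * r_load

# Mismatched modules (e.g. cooler modules far from the exhaust inlet).
mods = [(4.0, 1.0), (3.0, 1.2), (1.5, 1.5)]
for topo in (True, False):
    v, r = equivalent_source(mods, series=topo)
    print("series" if topo else "parallel", round(load_power(v, r, 3.0), 3))
```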
Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model
NASA Astrophysics Data System (ADS)
Fu, Li Fang; Meng, Jun; Liu, Ying
2015-12-01
Performance evaluation of a supply chain (SC) is a vital topic in SC management and an inherently complex problem, involving multilayered internal linkages and the activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which open the “black box” of conventional DEA, have been developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models, which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the Slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, with the presence of undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insight for decision makers into the sources of inefficiency, as well as guidance to improve SC performance.
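The U-NSBM model itself requires the SBM fractional program and its linearization; as a baseline, the CCR model it is benchmarked against can be solved as a small linear program (input-oriented envelopment form). This sketch requires SciPy, and the three supply-chain units are made up.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of unit j0 (envelopment form):
    min theta s.t. X@lam <= theta*X[:, j0], Y@lam >= Y[:, j0], lam >= 0.
    X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
    A_in = np.hstack([-X[:, [j0]], X])           # X lam - theta x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Three supply chains, one input, one output; unit 2 is half as productive.
X = np.array([[2.0, 3.0, 4.0]])
Y = np.array([[2.0, 3.0, 2.0]])
eff = [ccr_efficiency(X, Y, j) for j in range(3)]
print([round(e, 3) for e in eff])
```

A radial model like this scales all inputs by one factor theta; the SBM family instead penalizes individual input/output slacks, which is what makes it nonradial.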
NASA Astrophysics Data System (ADS)
Wang, Xianxun; Mei, Yadong
2017-04-01
Coordinated operation of hydro, wind, and photovoltaic power is a way to mitigate the conflict between power generation and the output fluctuation of new energy sources, and to overcome bottlenecks in new energy development. Research on the coordination mechanism has been hampered by deficiencies in characterizing output fluctuation, representing grid constraints, and handling power curtailment. In this paper, a multi-objective, multi-hierarchy model of coordinated hydro-wind-photovoltaic operation is built, with the objectives of maximizing power generation and minimizing output fluctuation, subject to constraints on the topology of the power grid and the balanced allocation of curtailed power. In the case study, separate and coordinated operation are compared in terms of power generation, curtailment, and output fluctuation to examine the coordination mechanism. Compared with running each source alone, coordinated hydro-wind-photovoltaic operation gains compensation benefits: peak-shifting operation, through the compensating regulation of hydropower, significantly reduces curtailment and maximizes resource utilization. The Pareto frontier of power generation versus output fluctuation, obtained through multi-objective optimization, clarifies the trade-off between these two objectives: under coordinated operation, output fluctuation can be markedly reduced at the cost of a slight decline in power generation, and curtailment also drops sharply compared with separate operation.
Available pressure amplitude of linear compressor based on phasor triangle model
NASA Astrophysics Data System (ADS)
Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.
2017-12-01
Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration and compact structure. It is important to study the match mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. Existing matching methods are simplified and mainly focus on compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with experimental measurements. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the match mechanism with a faster computational process. The model can also explain the experimentally observed proportional relationship between the output pressure amplitude and the piston displacement; by further model analysis, this phenomenon is shown to be an expression of an unmatched compressor design. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.
A. Morani; D. Nowak; S. Hirabayashi; G. Guidolotti; M. Medori; V. Muzzini; S. Fares; G. Scarascia Mugnozza; C. Calfapietra
2014-01-01
Ozone flux estimates from the i-Tree model were compared with ozone flux measurements using the Eddy Covariance technique in a periurban Mediterranean forest near Rome (Castelporziano). For the first time i-Tree model outputs were compared with field measurements in relation to dry deposition estimates. Results showed generally a...
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8-degree spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
A spectral method for spatial downscaling
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch
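The scale-separated comparison can be sketched with Fourier band-passing: correlate model and observations band by band, so shared large-scale structure and uncorrelated small-scale structure show up separately. The series below are synthetic stand-ins, not CMAQ output or ozone monitors, and the example is one-dimensional for simplicity.

```python
import numpy as np

def bandpass_fft(x, low, high):
    """Keep only Fourier components with frequency in [low, high) cycles/sample."""
    f = np.fft.rfftfreq(len(x))
    X = np.fft.rfft(x)
    X[(f < low) | (f >= high)] = 0
    return np.fft.irfft(X, n=len(x))

# Synthetic "model vs monitor": a shared large-scale trend plus
# independent small-scale variability in each series.
rng = np.random.default_rng(4)
n = 4096
trend = bandpass_fft(np.cumsum(rng.standard_normal(n)), 0.0, 0.01)
obs = trend + rng.standard_normal(n)
model = trend + rng.standard_normal(n)
for lo, hi in [(0.0, 0.01), (0.1, 0.5)]:
    a, b = bandpass_fft(obs, lo, hi), bandpass_fft(model, lo, hi)
    print((lo, hi), round(float(np.corrcoef(a, b)[0, 1]), 2))
```

High correlation in the low band and near-zero correlation in the high band is exactly the pattern the paper reports for CMAQ: trustworthy at large scales only.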
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals under coherent demodulation. According to the theory of SR, a nonlinear receiver model is established and used to receive 2DPSK signals at low signal-to-noise ratios (SNRs, between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model, falling by 86.15% at an input SNR of -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
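The nonlinear element in SR receivers is typically the overdamped bistable system dx/dt = ax − bx³ + s(t). A minimal Euler–Maruyama sketch (parameters illustrative, not the paper's 2DPSK receiver) shows a subthreshold drive plus noise making the state hop between the two wells:

```python
import numpy as np

def bistable_sr(drive, dt, a=1.0, b=1.0):
    """Euler-Maruyama integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t), the classic stochastic-resonance element."""
    x = np.zeros(len(drive))
    for i in range(1, len(drive)):
        xp = x[i - 1]
        x[i] = xp + dt * (a * xp - b * xp ** 3 + drive[i - 1])
    return x

rng = np.random.default_rng(3)
dt, n = 0.01, 20000
t = dt * np.arange(n)
weak = 0.3 * np.cos(2 * np.pi * 0.01 * t)           # subthreshold periodic drive
noise = 0.8 * rng.standard_normal(n) / np.sqrt(dt)  # white noise, D = 0.32
out = bistable_sr(weak + noise, dt)
```

At the right noise intensity the inter-well hopping synchronizes with the drive, which is what boosts the spectral peak at the signal frequency relative to a linear receiver.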
Life and reliability models for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Knorr, R. J.; Coy, J. J.
1982-01-01
Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are co-axial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load-versus-life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
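The series-system construction from component Weibull reliabilities can be sketched directly: the system survives only if every component does, so reliabilities multiply. The component characteristic lives and Weibull slopes below are hypothetical, not the transmission's values.

```python
import math

def weibull_reliability(t, theta, beta):
    """Two-parameter Weibull reliability: R(t) = exp(-(t/theta)**beta)."""
    return math.exp(-((t / theta) ** beta))

def system_reliability(t, components):
    """Series-system reliability: the product of component reliabilities
    (sun gear, planets, bearings, ...), as in the transmission model."""
    r = 1.0
    for theta, beta in components:
        r *= weibull_reliability(t, theta, beta)
    return r

# Hypothetical components: (characteristic life in hours, Weibull slope).
parts = [(9000, 1.2), (12000, 1.5), (15000, 2.0)]
print(round(system_reliability(3000, parts), 3))
```

Because the exponents add inside the product, a series system of Weibull components with a common slope is itself Weibull, which is the property the system model exploits.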
Method and apparatus for loss of control inhibitor systems
NASA Technical Reports Server (NTRS)
A'Harrah, Ralph C. (Inventor)
2007-01-01
Active and adaptive systems and methods to prevent loss-of-control incidents by providing tactile feedback to a vehicle operator are disclosed. According to the present invention, an operator gives a control input to an inceptor. An inceptor sensor measures an inceptor input value of the control input. The inceptor input is used as an input to a Steady-State Inceptor Input/Effector Output Model that models the vehicle control system design. A desired effector output is generated from the model. The desired effector output is compared to an actual effector output to obtain a distortion metric. A feedback force is generated as a function of the distortion metric and used as an input to a feedback force generator, which generates a loss of control inhibitor system (LOCIS) force back to the inceptor. The LOCIS force is felt by the operator through the inceptor.
NASA Astrophysics Data System (ADS)
Li, Ping; Gao, Shiqiao; Cong, Binglong
2018-03-01
In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation and experimental test. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current and power are derived. The performance of the harvester is then simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy storage circuit. The results show that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester enhances the output power and harvesting efficiency. Furthermore, under harmonic excitation the output power increases linearly with acceleration amplitude, while under random excitation it increases with acceleration spectral density. In addition, the stronger the coupling, the higher the output power, and there is an optimal load resistance at which the harvester outputs maximal power.
Application of Wavelet Filters in an Evaluation of ...
Air quality model evaluation can be enhanced with time-scale-specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time-scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
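One simple way to get the time-scale separation described above uses an iterated moving-average (Kolmogorov-Zurbenko) filter, a common stand-in for wavelet-style decompositions in air-quality work; KZ(13, 5) on hourly data passes periods longer than roughly one day. The "ozone" series below is synthetic.

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko low-pass filter: an iterated centered moving
    average; e.g. KZ(13, 5) on hourly data keeps synoptic-and-longer scales."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

# Synthetic hourly "ozone": a 14-day synoptic wave plus a diurnal cycle.
t = np.arange(24 * 60)  # 60 days of hourly samples
obs = 40 + 10 * np.sin(2 * np.pi * t / (24 * 14)) + 8 * np.sin(2 * np.pi * t / 24)
baseline = kz_filter(obs, 13, 5)  # synoptic-and-longer component
diurnal = obs - baseline          # diurnal/intra-day component
```

Amplitude and phase at each scale can then be compared between observed and modeled series separately, which is the evaluation the abstract describes; a symmetric filter like this introduces no phase shift of its own.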
Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz
2014-01-01
Introduction: A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of the health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and management of the factors influencing its performance, this study explores different perspectives on the components of such a system comparatively. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of a national health information system in three parts: input, process and output. Information was gathered through library resources and internet searches, and the data were analyzed using comparative tables and qualitative methods. Results: The findings show three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model of the World Health Organization (2008), and Gattini's model (2009). In the input (resources and structure) section, all three models require components for management and leadership, planning and program design, staffing, and software and hardware facilities and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system, and in the output section, the two models other than Lippeveld's consider information products and the use and distribution of information as components of the national health information system.
Conclusion: The results show that all three models discuss the components of health information in the input section only briefly, while the Lippeveld model overlooks the components of a national health information system in the process and output sections. The Health Metrics Network model therefore appears to offer the most comprehensive presentation of the components of a health information system across all three sections: input, process and output. PMID:24825937
Perrichon, Prescilla; Grosell, Martin; Burggren, Warren W.
2017-01-01
Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart and modeling this structure with a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, and no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the “standard” prolate spheroid model as well as a cylinder and a cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by just a simple cylinder shape were 30–50% higher compared to the conventional prolate spheroid. However, when these stroke volumes were multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with the lowest variability in stroke volume and cardiac output. However, assessment of heart function—especially if stroke volume is the focus of the study—should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for best estimation of cardiac output. PMID:28725199
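The geometric effect reported follows directly from the three volume formulas; for the same length and diameter, a plain cylinder is exactly 50% larger than the prolate spheroid, the upper end of the 30–50% range the study reports. The dimensions and the cone/cylinder length split below are assumptions for illustration, not measured larval values:

```python
import math

# Assumed larval ventricle dimensions (mm); illustrative, not measured values
L, D = 0.30, 0.20   # major-axis length and diameter

# Prolate spheroid (the conventional model): V = (pi/6) * L * D^2
v_spheroid = math.pi / 6 * L * D**2
# Plain cylinder: V = pi * (D/2)^2 * L
v_cylinder = math.pi * (D / 2)**2 * L
# Cone tip + cylinder, with the cone assumed to occupy one third of the length
v_cone_cyl = math.pi * (D/2)**2 * (2*L/3) + math.pi * (D/2)**2 * (L/3) / 3

ratio = v_cylinder / v_spheroid
print(f"spheroid {v_spheroid:.5f}, cone+cyl {v_cone_cyl:.5f}, cylinder {v_cylinder:.5f} mm^3")
print(f"cylinder / spheroid = {ratio:.2f}")
```

Stroke volume is then the end-diastolic minus end-systolic volume under the chosen shape model, and cardiac output is stroke volume times heart rate.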
Low Boom Configuration Analysis with FUN3D Adjoint Simulation Framework
NASA Technical Reports Server (NTRS)
Park, Michael A.
2011-01-01
Off-body pressure, forces, and moments for the Gulfstream Low Boom Model are computed with a Reynolds Averaged Navier Stokes solver coupled with the Spalart-Allmaras (SA) turbulence model. This is the first application of viscous output-based adaptation to reduce estimated discretization errors in off-body pressure for a wing body configuration. The output adaptation approach is compared to an a priori grid adaptation technique designed to resolve the signature on the centerline by stretching and aligning the grid to the freestream Mach angle. The output-based approach produced good predictions of centerline and off-centerline measurements. Eddy viscosity predicted by the SA turbulence model increased significantly with grid adaptation. Computed lift as a function of drag compares well with wind tunnel measurements for positive lift, but predicted lift, drag, and pitching moment as a function of angle of attack has significant differences from the measured data. The sensitivity of longitudinal forces and moment to grid refinement is much smaller than the differences between the computed and measured data.
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use.
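The model-selection loop that AutoVAR automates can be sketched in a few lines: fit VAR(p) by least squares for each lag order in the search space and keep the model with the lowest AIC. The bivariate series below is simulated (real EMA data would also need the trend, seasonality and outlier handling AutoVAR performs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a bivariate VAR(1) process, e.g. "mood" and "activity" EMA scores
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.4]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A_true @ y[t-1] + rng.normal(0, 1, 2)

def fit_var(y, p):
    """Least-squares VAR(p) fit; returns coefficient matrix and AIC."""
    n, k = y.shape
    rows = n - p
    X = np.hstack([y[p-i-1:n-i-1] for i in range(p)])   # lagged regressors
    X = np.column_stack([np.ones(rows), X])             # intercept column
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    E = Y - X @ B                                       # residuals
    sigma = E.T @ E / rows                              # residual covariance
    aic = rows * np.log(np.linalg.det(sigma)) + 2 * B.size
    return B, aic

# Search the lag-order space and keep the AIC-best model
best_p = min(range(1, 5), key=lambda p: fit_var(y, p)[1])
print("selected lag order:", best_p)
```

Granger-causality tests on the selected model (restricting the cross-lag coefficients and comparing fits) then give the temporal-order information used for tailoring interventions.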
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments, allowing quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate the posterior probability that each member best represents the physical system it seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive and observations from NASA's Atmospheric Infrared Sounder.
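The posterior computation described can be sketched generically: estimate the sampling distribution of a chosen summary statistic from each model's realizations, evaluate the likelihood of the observed statistic under each model, and apply Bayes' rule with equal priors. Everything below is simulated stand-in data, not CMIP output or AIRS observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical "climate models" producing realizations of a field;
# model B is closer to the (simulated) truth in its mean state.
model_runs = {"A": rng.normal(0.0, 1.0, (500, 100)),
              "B": rng.normal(0.4, 1.0, (500, 100))}
observation = rng.normal(0.5, 1.0, 100)      # simulated satellite record

stat = lambda x: x.mean(axis=-1)             # summary statistic: spatial mean
s_obs = stat(observation)

prior = {m: 0.5 for m in model_runs}
like = {}
for m, runs in model_runs.items():
    s = stat(runs)                           # statistic over model realizations
    mu, sd = s.mean(), s.std()
    # Gaussian approximation to the statistic's sampling distribution
    like[m] = np.exp(-0.5 * ((s_obs - mu) / sd) ** 2) / (sd * np.sqrt(2*np.pi))

z = sum(prior[m] * like[m] for m in model_runs)
post = {m: prior[m] * like[m] / z for m in model_runs}
print(post)
```

Swapping in statistics that capture spatial or temporal dependence (variograms, autocorrelations) instead of the mean is what lets the comparison go "beyond simple moments", as the abstract proposes.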
MIMO system identification using frequency response data
NASA Technical Reports Server (NTRS)
Medina, Enrique A.; Irwin, R. D.; Mitchell, Jerrel R.; Bukley, Angelia P.
1992-01-01
A solution to the problem of obtaining a multi-input, multi-output state-space model of a system from its individual input/output frequency responses is presented. The Residue Identification Algorithm (RID) identifies the system poles from a transfer function model of the determinant of the frequency response data matrix. Next, the residue matrices of the modes are computed, guaranteeing that each input/output frequency response is fitted in the least squares sense. Finally, a realization of the system is computed. Results of the application of RID to experimental frequency responses of a large space structure ground test facility are presented and compared to those obtained via the Eigensystem Realization Algorithm.
NASA Astrophysics Data System (ADS)
Elsayed, Ayman; Shabaan Khalil, Nabil
2017-10-01
Competition among maritime ports is increasing continuously, and the main goal for Safaga port is to become the best option for companies carrying out trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze the factors that may significantly affect the efficiency of Safaga port in Egypt, particularly its infrastructural capacity. Assessing this efficiency must play an important role in the management of Safaga port in order to improve its prospects for development and commercial success. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a way of assessing the comparative efficiency of Safaga port over the study period 2004-2013. Previous research on port efficiency measurement has usually used radial DEA models (DEA-CCR, DEA-BCC) rather than non-radial DEA models. This research applies the radial output-oriented DEA-CCR and DEA-BCC models and the non-radial DEA-SBM model with ten inputs and four outputs. The results were obtained from the analysis of the input and output variables under the DEA-CCR, DEA-BCC and SBM models using the software MaxDEA Pro 6.3. DP World Sokhna port showed higher efficiency than Safaga port for all outputs. DP World Sokhna lies just below the southern entrance to the Suez Canal on the Red Sea, Egypt, making it strategically located to handle cargo transiting one of the world's busiest commercial waterways.
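A DEA-CCR score solves a small linear program per decision-making unit (DMU). A minimal input-oriented sketch with made-up single-input, single-output data follows (the study itself used MaxDEA with ten inputs, four outputs and output orientation; scipy's `linprog` is assumed available):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 3 DMUs, 1 input, 1 output (illustrative, not the port data)
X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per DMU
Y = np.array([[2.0], [2.0], [3.0]])   # outputs

def ccr_input_efficiency(j):
    """theta for DMU j: min theta s.t. X'lam <= theta*x_j, Y'lam >= y_j, lam >= 0."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lam_1 .. lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([
        np.column_stack([-X[j], X.T]),        # sum_i lam_i x_i - theta x_j <= 0
        np.column_stack([np.zeros(s), -Y.T])  # -sum_i lam_i y_i <= -y_j
    ])
    b_ub = np.r_[np.zeros(m), -Y[j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [ccr_input_efficiency(j) for j in range(3)]
print([round(t, 3) for t in scores])
```

DMUs 0 and 2 operate on the constant-returns frontier (output/input ratio 1) and score 1.0, while DMU 1 scores 0.5; BCC adds a convexity constraint on lambda, and SBM replaces the radial contraction with slacks.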
Modeling nonlinearities in MEMS oscillators.
Agrawal, Deepak K; Woodhouse, Jim; Seshia, Ashwin A
2013-08-01
We present a mathematical model of a microelectromechanical system (MEMS) oscillator that integrates the nonlinearities of the MEMS resonator and the oscillator circuitry in a single numerical modeling environment. This is achieved by transforming the conventional nonlinear mechanical model into the electrical domain while simultaneously considering the prominent nonlinearities of the resonator. The proposed nonlinear electrical model is validated by comparing the simulated amplitude-frequency response with measurements on an open-loop electrically addressed flexural silicon MEMS resonator driven to large motional amplitudes. Next, the essential nonlinearities in the oscillator circuit are investigated and a mathematical model of a MEMS oscillator is proposed that integrates the nonlinearities of the resonator. The concept is illustrated for MEMS transimpedance-amplifier-based square-wave and sine-wave oscillators. Closed-form expressions of steady-state output power and output frequency are derived for both oscillator models and compared with experimental and simulation results, with a good match in the predicted trends in all three cases.
Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2015-01-01
In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model was used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size was determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ -44 ÷ -26%) than for D (≈ -16 ÷ -2%). Our FE model could be useful for generating standard test images and for designing realistic physical phantoms of LAA images for assessing the accuracy of descriptors for quantifying emphysema in CT imaging.
Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M
2013-04-01
Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
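The second step described, a decision-analytic model turning intervention-specific disease progression into QALYs, can be sketched as a small Markov cohort model. The states, transition probabilities, utilities and 10-year horizon below are hypothetical values for illustration, not from the paper:

```python
import numpy as np

states = ["well", "sick", "dead"]
utility = np.array([1.0, 0.6, 0.0])          # QALY weight per state per year

# Annual transition matrices (rows sum to 1); hypothetical values
P = {"comparator":   np.array([[0.85, 0.10, 0.05],
                               [0.00, 0.80, 0.20],
                               [0.00, 0.00, 1.00]]),
     "intervention": np.array([[0.90, 0.07, 0.03],
                               [0.05, 0.80, 0.15],
                               [0.00, 0.00, 1.00]])}

def qalys(trans, years=10, discount=0.03):
    """Discounted QALYs accumulated by a cohort starting in 'well'."""
    dist = np.array([1.0, 0.0, 0.0])         # state occupancy of the cohort
    total = 0.0
    for t in range(years):
        total += dist @ utility / (1 + discount) ** t
        dist = dist @ trans                  # advance the cohort one year
    return total

out = {name: qalys(M) for name, M in P.items()}
print(out)
```

Feeding each intervention's synthesized risks and benefits (step one) into the transition probabilities links the clinical evidence to QALYs, which is the quantitative comparison the framework proposes; cost is deliberately excluded.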
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared to modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequate knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation in which the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
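The first of the five PROMs, a Taylor series built from finite-difference sensitivities, can be illustrated on a scalar surrogate of a model output, comparing its first two statistical moments against brute-force propagation. The "model" below is an arbitrary smooth function standing in for the high-fidelity simulation, not the Brake-Reuss beam:

```python
import numpy as np

def model(p):
    """Stand-in high-fidelity output, e.g. a natural frequency vs. parameter p."""
    return 100.0 + 8.0 * p + 1.5 * p**2

p0, dp = 1.0, 1e-4
f0 = model(p0)
# First-order sensitivity by central finite difference
dfdp = (model(p0 + dp) - model(p0 - dp)) / (2 * dp)
taylor = lambda p: f0 + dfdp * (p - p0)      # first-order reduced order model

rng = np.random.default_rng(3)
samples = rng.normal(p0, 0.1, 100000)        # uncertain input parameter
truth = model(samples)                       # "parameter sweep" propagation
approx = taylor(samples)                     # PROM propagation

# Compare the leading statistical moments, as in the report
print("mean:", truth.mean(), approx.mean())
print("std :", truth.std(), approx.std())
```

The same comparison extended to skewness and kurtosis, and evaluated on SROM or Latin hypercube sample sets instead of full Monte Carlo, mirrors the report's accuracy-versus-cost trade-off study.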
LEO high voltage solar array arcing response model, continuation 5
NASA Technical Reports Server (NTRS)
Metz, Roger N.
1989-01-01
The modeling of the Debye Approximation electron sheaths in the edge and strip geometries was completed. Electrostatic potentials in these sheaths were compared to NASCAP/LEO solutions for similar geometries. Velocity fields, charge densities and particle fluxes to the biased surfaces were calculated for all cases. The major conclusion to be drawn from the comparisons of our Debye Approximation calculations with NASCAP-LEO output is that, where comparable biased structures can be defined and sufficient resolution obtained, these results are in general agreement. Numerical models for the Child-Langmuir, high-voltage electron sheaths in the edge and strip geometries were constructed. Electrostatic potentials were calculated for several cases in each of both geometries. Velocity fields and particle fluxes were calculated. The self-consistent solution process was carried through one cycle and output electrostatic potentials compared to NASCAP-type input potentials.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. Inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
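The core step, projecting the inverse-sensitivity identification equation onto principal components of analytically generated outputs before solving, can be sketched for a linear toy system (the paper applies this to truss and frame responses with iterative model updating; all matrices below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n_out, n_par = 50, 3
S = rng.normal(size=(n_out, n_par))          # sensitivity matrix of the known model

# "Analytical" output samples generated from the known model, used only
# to construct the principal-component basis of the system output
samples = (S @ rng.normal(size=(n_par, 200))).T
centered = samples - samples.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:3].T                             # leading principal directions (n_out x 3)

true_dp = np.array([0.02, -0.05, 0.01])      # unknown parameter perturbation
residual = S @ true_dp + rng.normal(0, 1e-3, n_out)  # observed minus analytical output

# Project the identification equation into the PC subspace and solve
dp_hat, *_ = np.linalg.lstsq(basis.T @ S, basis.T @ residual, rcond=None)
print(dp_hat)
```

For a nonlinear structure the solve is wrapped in the iterative updating loop the paper describes: re-linearize S at the updated parameters and repeat until the projected residual converges.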
NASA Astrophysics Data System (ADS)
Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.
2018-03-01
A simulation model that examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then treated as random variables in the finite element model to explore the effects of uncertainty on the quality of the model outputs, i.e. the natural frequencies. The accuracy of the model predictions is compared with experimental results; to this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies than material parameters, even though the measured material uncertainties are about two times higher than the geometrical uncertainties. This gives valuable insights for improving the finite element model, given the various parameter ranges required in a modeling process involving uncertainty.
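The dominance of geometric over material uncertainty is easy to reproduce for a simple beam, since the first bending frequency of an Euler-Bernoulli cantilever scales as f1 ∝ (h/L²)·√(E/ρ): thickness enters linearly but Young's modulus only under a square root. A Monte Carlo sketch with assumed nominal steel properties (not the measured samples):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100000

# Assumed nominal beam properties (steel); illustrative only
E0, rho0 = 210e9, 7850.0        # Young's modulus (Pa), density (kg/m^3)
L0, h0, b0 = 0.30, 0.005, 0.02  # length, thickness, width (m)

def f1(E, rho, L, h, b):
    """First bending natural frequency of an Euler-Bernoulli cantilever (Hz)."""
    I, A = b * h**3 / 12, b * h
    return (1.875**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

# 1% relative scatter applied to one quantity at a time
cv = 0.01
f_geom = f1(E0, rho0, L0, h0 * (1 + cv * rng.normal(size=n)), b0)
f_mat  = f1(E0 * (1 + cv * rng.normal(size=n)), rho0, L0, h0, b0)

cov_geom = f_geom.std() / f_geom.mean()
cov_mat  = f_mat.std() / f_mat.mean()
print(cov_geom, cov_mat)  # thickness scatter propagates ~2x more strongly than E scatter
```

Equal relative scatter in thickness thus produces roughly twice the frequency scatter of the same relative scatter in E, consistent with geometry dominating even when material uncertainties are numerically larger.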
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-02-27
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well at large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. First, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. The nonlinear mapping model over T, u and Δω is thus established, and Δω can be calculated automatically from T and u to compensate OFOG errors. The experimental results show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further improved by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of direct modeling of the gyro angular velocity.
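The RBF mapping (u, T) → Δω at the heart of the compensation scheme can be sketched with plain numpy: Gaussian kernels centered on a subset of training points and a least-squares output layer. The training data below are synthetic (a made-up smooth map standing in for the OFOG error), not gyro measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic training samples: voltage u, temperature T, and the angular
# velocity error they induce (an assumed smooth nonlinear surrogate)
u = rng.uniform(-1, 1, 200)
T = rng.uniform(10, 40, 200)
d_omega = 0.05 * u**2 + 0.002 * (T - 25) + 0.01 * u * np.sin(T / 5)

Xtr = np.column_stack([u, (T - 25) / 15])    # scale both inputs to ~[-1, 1]

def rbf_design(X, centers, width=0.5):
    """Gaussian RBF design matrix between sample points and centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

centers = Xtr[::5]                           # 40 centers drawn from the data
G = rbf_design(Xtr, centers)
w, *_ = np.linalg.lstsq(G, d_omega, rcond=None)  # output-layer weights

pred = G @ w
rms = np.sqrt(np.mean((pred - d_omega) ** 2))
print("training RMS error:", rms)
```

In operation, each new (u, T) pair is pushed through the trained network to predict Δω, which is then subtracted from the gyro output.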
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
A model for a continuous-wave iodine laser
NASA Technical Reports Server (NTRS)
Hwang, In H.; Tabibi, Bagher M.
1990-01-01
A model for a continuous-wave (CW) iodine laser has been developed and compared with the experimental results obtained from a solar-simulator-pumped CW iodine laser. The agreement between the calculated laser power output and the experimental results is generally good for various laser parameters even when the model includes only prominent rate coefficients. The flow velocity dependence of the output power shows that the CW iodine laser cannot be achieved with a flow velocity below 1 m/s for the present solar-simulator-pumped CW iodine laser system.
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven
2014-01-01
The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks. The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output. 
Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging needed for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output. Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics that perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of analysis can be used to give product users an indication of the applicability of the results to their particular problem. Since results for historical conditions indicate that model outputs can have significant biases associated with them, examining the range in future projections in terms of change relative to historical conditions for each individual GCM may be more appropriate.
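The distributional comparison above can be sketched with SciPy's two-sample KS test. The synthetic stand-ins for the GSD- and ARRM-based series, their means and spreads, and the 30-day averaging window are assumptions for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
gsd = rng.normal(20.0, 5.0, 1200)    # stand-in for GSD-forced daily output
arrm = rng.normal(19.0, 5.5, 1200)   # stand-in for ARRM-forced daily output

# Daily-scale similarity test at one hypothetical HRU.
stat_daily, p_daily = ks_2samp(gsd, arrm)

# Average to 30-day windows before re-testing, as done for precipitation
# and streamflow in the study.
gsd30 = gsd.reshape(-1, 30).mean(axis=1)
arrm30 = arrm.reshape(-1, 30).mean(axis=1)
stat_30d, p_30d = ks_2samp(gsd30, arrm30)
```

A large p-value is a failure to reject the null hypothesis that the two series come from the same distribution, i.e. "similar" at that time scale.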
From Single-Cell Dynamics to Scaling Laws in Oncology
NASA Astrophysics Data System (ADS)
Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo
We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.
Energy: Economic activity and energy demand; link to energy flow. Example: France
NASA Astrophysics Data System (ADS)
1980-10-01
The data derived from the EXPLOR and EPOM (Energy Flow Optimization Model) models are described. The core of the EXPLOR model is a circular system of relations involving consumers' demand, producers' outputs, and market prices. The solution of this system of relations is obtained by successive iterations; the final output is a coherent system of economic accounts. The computer program for this transition is described. The work conducted by comparing different energy demand models is summarized. The procedure is illustrated by a numerical projection to 1980 and 1985 using the existing version of the EXPLOR France model.
Projecting climate change impacts on hydrology: the potential role of daily GCM output
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Hidalgo, H. G.; Das, T.; Dettinger, M. D.; Cayan, D.
2008-12-01
A primary challenge facing resource managers in accommodating climate change is determining the range and uncertainty in regional and local climate projections. This is especially important for assessing changes in extreme events, which will drive many of the more severe impacts of a changed climate. Since global climate models (GCMs) produce output at a spatial scale incompatible with local impact assessment, different techniques have evolved to downscale GCM output so locally important climate features are expressed in the projections. We compared skill and hydrologic projections using two statistical downscaling methods and a distributed hydrology model. The downscaling methods are the constructed analogues (CA) and the bias correction and spatial downscaling (BCSD). CA uses daily GCM output, and can thus capture GCM projections for changing extreme event occurrence, while BCSD uses monthly output and statistically generates historical daily sequences. We evaluate the hydrologic impacts projected using downscaled climate (from the NCEP/NCAR reanalysis as a surrogate GCM) for the late 20th century with both methods, comparing skill in projecting soil moisture, snow pack, and streamflow at key locations in the Western United States. We include an assessment of a new method for correcting for GCM biases in a hybrid method combining the most important characteristics of both methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas
Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values, to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities using criteria involving both the medians and standard deviations for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy–style and stereotactic body radiation therapy–style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy–style output factors, stereotactic body radiation therapy–style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Boerboom, L E; Kinney, T E; Olinger, G N; Hoffmann, R G
1993-10-01
Evaluation of patients with acute tricuspid insufficiency may include assessment of cardiac output by the thermodilution method. The accuracy of estimates of thermodilution-derived cardiac output in the presence of tricuspid insufficiency has been questioned. This study was designed to determine the validity of the thermodilution technique in a canine model of acute reversible tricuspid insufficiency. Cardiac output as measured by thermodilution and electromagnetic flowmeter was compared at two grades of regurgitation. The relationship between these two methods (thermodilution/electromagnetic) changed significantly from a regression slope of 1.01 +/- 0.18 (mean +/- standard deviation) during control conditions to a slope of 0.86 +/- 0.23 (p < 0.02) during severe regurgitation. No significant change was observed between control and mild regurgitation or between the initial control value and a control measurement repeated after tricuspid insufficiency was reversed at the termination of the study. This study shows that in a canine model of severe acute tricuspid regurgitation the thermodilution method underestimates cardiac output by an amount that is proportional to the level of cardiac output and to the grade of regurgitation.
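The regression-slope comparison underlying this result can be illustrated with synthetic paired measurements. The sample size and noise level below are invented; the slopes mirror the reported control and severe-regurgitation values:

```python
import numpy as np

rng = np.random.default_rng(3)
em = rng.uniform(1.0, 5.0, 40)   # electromagnetic-flowmeter cardiac output (L/min)

# Thermodilution readings: slope ~1.0 at control, ~0.86 under severe regurgitation
# (i.e. underestimation proportional to the true output), plus measurement noise.
td_control = 1.00 * em + 0.1 * rng.normal(size=em.size)
td_severe = 0.86 * em + 0.1 * rng.normal(size=em.size)

slope_control = np.polyfit(em, td_control, 1)[0]
slope_severe = np.polyfit(em, td_severe, 1)[0]
underestimation = slope_control - slope_severe   # grows with cardiac output
```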
Human Activity Recognition by Combining a Small Number of Classifiers.
Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin
2016-09-01
We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier-combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifier-combination models that do not consider soft outputs and a Markovian structure of the human activities.
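The two ingredients named above can be sketched under assumed numbers: a product-style soft-output fusion of per-sensor class posteriors, followed by forward filtering with a first-order homogeneous Markov chain over activities. The sensor posteriors, transition matrix, and activity count are all made up:

```python
import numpy as np

def combine_soft(posteriors):
    """Product-of-experts fusion of per-sensor class posteriors (renormalised)."""
    p = np.prod(posteriors, axis=0)
    return p / p.sum(axis=-1, keepdims=True)

def forward_decode(emissions, trans, prior):
    """Forward filtering with a first-order homogeneous Markov chain."""
    alpha = prior * emissions[0]
    alpha /= alpha.sum()
    states = [alpha.argmax()]
    for e in emissions[1:]:
        alpha = e * (trans.T @ alpha)
        alpha /= alpha.sum()
        states.append(alpha.argmax())
    return states

# Two sensors, three activities, four time steps (made-up soft outputs).
s1 = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.8, 0.1]])
s2 = np.array([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.3, 0.6, 0.1], [0.2, 0.7, 0.1]])
fused = combine_soft(np.stack([s1, s2]))

trans = np.full((3, 3), 0.1) + 0.7 * np.eye(3)   # "sticky" activities
prior = np.ones(3) / 3
decoded = forward_decode(fused, trans, prior)
```

The Markov prior smooths the fused per-step decisions, which is what gives robustness when one sensor's posterior is momentarily wrong.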
NASA Technical Reports Server (NTRS)
Johnson, P. R.; Bardusch, R. E.
1974-01-01
A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low-frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on the minimum integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.
Innovative use of self-organising maps (SOMs) in model validation.
NASA Astrophysics Data System (ADS)
Jolly, Ben; McDonald, Adrian; Coggins, Jack
2016-04-01
We present an innovative combination of techniques for validation of numerical weather prediction (NWP) output against both observations and reanalyses using two classification schemes, demonstrated by a validation of the operational NWP 'AMPS' (the Antarctic Mesoscale Prediction System). Historically, model validation techniques have centred on case studies or statistics at various time scales (yearly/seasonal/monthly). Within the past decade the latter technique has been expanded by the addition of classification schemes in place of time scales, allowing more precise analysis. Classifications are typically generated for either the model or the observations, then used to create composites for both which are compared. Our method creates and trains a single self-organising map (SOM) on both the model output and observations, which is then used to classify both datasets using the same class definitions. In addition to the standard statistics on class composites, we compare the classifications themselves between the model and the observations. To add further context to the area studied, we use the same techniques to compare the SOM classifications with regimes developed for another study to great effect. The AMPS validation study compares model output against surface observations from SNOWWEB and existing University of Wisconsin-Madison Antarctic Automatic Weather Stations (AWS) during two months over the austral summer of 2014-15. Twelve SOM classes were defined in a '4 x 3' pattern, trained on both model output and observations of 2 m wind components, then used to classify both training datasets. Simple statistics (correlation, bias and normalised root-mean-square-difference) computed for SOM class composites showed that AMPS performed well during extreme weather events, but less well during lighter winds and poorly during the more changeable conditions between either extreme. 
Comparison of the classification time-series showed that, while correlations were lower during lighter wind periods, AMPS actually forecast the existence of those periods well suggesting that the correlations may be unfairly low. Further investigation showed poor temporal alignment during more changeable conditions, highlighting problems AMPS has around the exact timing of events. There was also a tendency for AMPS to over-predict certain wind flow patterns at the expense of others. In order to gain a larger scale perspective, we compared our mesoscale SOM classification time-series with synoptic scale regimes developed by another study using ERA-Interim reanalysis output and k-means clustering. There was good alignment between the regimes and the observations classifications (observations/regimes), highlighting the effect of synoptic scale forcing on the area. However, comparing the alignment between observations/regimes and AMPS/regimes showed that AMPS may have problems accurately resolving the strength and location of cyclones in the Ross Sea to the north of the target area.
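The shared-map idea above, training a single SOM on pooled model output and observations and then classifying both with the same class definitions, can be sketched as follows. The 4 x 3 grid matches the study, but the wind samples, decay schedules and iteration count are invented:

```python
import numpy as np

def train_som(data, rows, cols, iters=2000, lr0=0.5, sigma0=None, seed=0):
    """Minimal rectangular SOM trained on pooled model + observation samples."""
    rng = np.random.default_rng(seed)
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                             # linear decay
        sigma = sigma0 * (1.0 - frac) + 1e-3
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))                # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)
    return weights

def classify(data, weights):
    d2 = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(1)
obs = rng.normal([3.0, 0.0], 1.0, size=(300, 2))      # toy observed 2 m wind (u, v)
model = rng.normal([3.2, 0.1], 1.1, size=(300, 2))    # toy model output, slight bias

w = train_som(np.vstack([obs, model]), 4, 3)          # one map, both datasets
obs_cls, model_cls = classify(obs, w), classify(model, w)
agreement = (obs_cls == model_cls).mean()             # same-class fraction at matched steps
```

Because both datasets share one set of class definitions, the class time series themselves can be compared directly, not just the per-class composites.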
1987-06-26
Annotated computer output for mixture-model clustering: an introduction to the use of mixture models in clustering, and a comparison of the mixture method with two comparable methods from SAS (Cornell University Biometrics Unit Technical Reports BU-920-M and BU-921-M, Mathematical Sciences Institute).
Method and system for monitoring and displaying engine performance parameters
NASA Technical Reports Server (NTRS)
Abbott, Terence S. (Inventor); Person, Jr., Lee H. (Inventor)
1991-01-01
The invention is a method and system for monitoring and directly displaying the actual thrust produced by a jet aircraft engine under determined operating conditions and the available thrust and predicted (commanded) thrust of a functional model of an ideal engine under the same determined operating conditions. A first set of actual value output signals representative of a plurality of actual performance parameters of the engine under the determined operating conditions is generated and compared with a second set of predicted value output signals representative of the predicted value of corresponding performance parameters of a functional model of the engine under the determined operating conditions to produce a third set of difference value output signals within a range of normal, caution, or warning limit values. A thrust indicator displays when any one of the actual value output signals is in the warning range while shaping function means shape each of the respective difference output signals as each approaches the limit of the respective normal, caution, and warning range limits.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2017-09-01
To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the model output (considered as the reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared to those calculated on the model output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for the assessment of the accuracy of indexes for the radiologic quantitation of emphysema.
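One of the two severity measures, the relative area (RA) of low-attenuation areas, is straightforward to illustrate on a synthetic image. The Hounsfield-unit values below are invented, and the -950 HU threshold is the conventional LAA cutoff, assumed here rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
hu = rng.normal(-870.0, 40.0, size=(64, 64))   # synthetic "lung" HU values
hu[10:20, 10:30] = -980.0                      # one emphysematous LAA cluster

threshold = -950.0                             # conventional LAA threshold (HU)
laa_mask = hu < threshold
ra = 100.0 * laa_mask.mean()                   # relative area (RA), percent
```

The exponent D would then be fitted to the cumulative size distribution of connected clusters in `laa_mask`.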
NASA Astrophysics Data System (ADS)
Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong
2017-11-01
Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual LSM would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with the model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.
NASA Astrophysics Data System (ADS)
Perez, Marc J. R.
With extraordinary recent growth of the solar photovoltaic industry, it is paramount to address the biggest barrier to its high penetration across global electrical grids: the inherent variability of the solar resource. This resource variability arises from largely unpredictable meteorological phenomena and from the predictable rotation of the earth around the sun and about its own axis. To achieve very high photovoltaic penetration, the imbalance between the variable supply of sunlight and demand must be alleviated. The research detailed herein consists of the development of a computational model which seeks to optimize the combination of three supply-side solutions to solar variability that minimizes the aggregate cost of electricity generated therefrom: storage (where excess solar generation is stored when it exceeds demand for utilization when it does not meet demand), interconnection (where solar generation is spread across a large geographic area and electrically interconnected to smooth overall regional output) and smart curtailment (where solar capacity is oversized and excess generation is curtailed at key times to minimize the need for storage). This model leverages a database created in the context of this doctoral work of satellite-derived photovoltaic output spanning 10 years at a daily interval for 64,000 unique geographic points across the globe. Underpinning the model's design and results, the database was used to further the understanding of solar resource variability at timescales greater than 1 day. It is shown that, as at shorter timescales, cloud/weather-induced solar variability decreases with geographic extent and that the geographic extent at which variability is mitigated increases with timescale and is modulated by the prevailing speed of clouds/weather systems. 
Unpredictable solar variability up to the timescale of 30 days is shown to be mitigated across a geographic extent of only 1500km if that geographic extent is oriented in a north/south bearing. Using technical and economic data reflecting today's real costs for solar generation technology, storage and electric transmission in combination with this model, we determined the minimum cost combination of these solutions to transform the variable output from solar plants into 3 distinct output profiles: A constant output equivalent to a baseload power plant, a well-defined seasonally-variable output with no weather-induced variability and a variable output but one that is 100% predictable on a multi-day ahead basis. In order to do this, over 14,000 model runs were performed by varying the desired output profile, the amount of energy curtailment, the penetration of solar energy and the geographic region across the continental United States. Despite the cost of supplementary electric transmission, geographic interconnection has the potential to reduce the levelized cost of electricity when meeting any of the studied output profiles by over 65% compared to when only storage is used. Energy curtailment, despite the cost of underutilizing solar energy capacity, has the potential to reduce the total cost of electricity when meeting any of the studied output profiles by over 75% compared to when only storage is used. The three variability mitigation strategies are thankfully not mutually exclusive. When combined at their ideal levels, each of the regions studied saw a reduction in cost of electricity of over 80% compared to when only energy storage is used to meet a specified output profile. 
When including current costs for solar generation, transmission and energy storage, an optimum configuration can conservatively provide guaranteed baseload power generation with solar across the entire continental United States (equivalent to a nuclear power plant with no down time) for less than $0.19 per kilowatt-hour. If solar is preferentially clustered in the southwest instead of evenly spread throughout the United States, and we adopt future expected costs for solar generation of $1 per watt, optimal model results show that meeting a 100% predictable output target with solar will cost no more than $0.08 per kilowatt-hour.
Modelled vs. reconstructed past fire dynamics - how can we compare?
NASA Astrophysics Data System (ADS)
Brücher, Tim; Brovkin, Victor; Kloster, Silvia; Marlon, Jennifer R.; Power, Mitch J.
2015-04-01
Fire is an important process that affects climate through changes in CO2 emissions, albedo, and aerosols (Ward et al. 2012). Fire-history reconstructions from charcoal accumulations in sediment indicate that biomass burning has increased since the Last Glacial Maximum (Power et al. 2008; Marlon et al. 2013). Recent comparisons with transient climate model output suggest that this increase in global fire activity is linked primarily to variations in temperature and secondarily to variations in precipitation (Daniau et al. 2012). In this study, we discuss the best way to compare global fire model output with charcoal records. Fire models generate quantitative output for burned area and fire-related emissions of CO2, whereas charcoal data indicate relative changes in biomass burning for specific regions and time periods only. However, models can be used to relate trends in charcoal data to trends in quantitative changes in burned area or fire carbon emissions. Charcoal records are often reported as Z-scores (Power et al. 2008). Since Z-scores are non-linear power transformations of charcoal influxes, we must evaluate if, for example, a two-fold increase in the standardized charcoal reconstruction corresponds to a 2- or 200-fold increase in the area burned. In our study we apply the Z-score metric to the model output. This allows us to test how well the model can quantitatively reproduce the charcoal-based reconstructions and how Z-score metrics affect the statistics of model output. The Global Charcoal Database (GCD version 2.5; www.gpwg.org/gpwgdb.html) is used to determine regional and global paleofire trends from 218 sedimentary charcoal records covering part or all of the last 8 ka BP. To retrieve regional and global composites of changes in fire activity over the Holocene, the time series of Z-scores are linearly averaged. A coupled climate-carbon cycle model, CLIMBA (Brücher et al. 2014), is used for this study. 
It consists of the CLIMBER-2 Earth system model of intermediate complexity and the JSBACH land component of the Max Planck Institute Earth System Model. The fire algorithm in JSBACH assumes a constant annual lightning cycle as the sole fire ignition mechanism (Arora and Boer 2005). To eliminate data-processing differences as a source of potential discrepancies, the processing of both reconstructed and modeled data, including normalisation with respect to a given base period and aggregation of time series, was done in exactly the same way. Here, we compare the aggregated time series on hemispheric and regional scales.
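Applying the same Z-score metric to model output and proxy composites can be sketched as follows. The synthetic burned-area series, the square-root proxy relation, and the base period are all invented for illustration:

```python
import numpy as np

def zscores(series, base_slice):
    """Standardise a series against a base period, as done for charcoal influx."""
    base = series[base_slice]
    return (series - base.mean()) / base.std(ddof=0)

rng = np.random.default_rng(7)
t = np.arange(8000)                                               # toy annual steps
burned_area = 1.0 + 0.0001 * t + 0.05 * rng.normal(size=t.size)   # "model" output
charcoal = burned_area ** 0.5 + 0.05 * rng.normal(size=t.size)    # nonlinear proxy

base = slice(0, 1000)                     # common base period for both series
z_model = zscores(burned_area, base)
z_proxy = zscores(charcoal, base)
r = np.corrcoef(z_model, z_proxy)[0, 1]   # trends remain comparable after scaling
```

Because the transformation is applied identically to model and reconstruction, trend agreement can be assessed even though the proxy is only a relative, nonlinearly related measure of burning.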
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
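An additive Multiple Attribute Utility Theory ranking of the kind described can be sketched in a few lines. The lighting systems, attribute scores, and weights below are hypothetical, not the paper's expert-elicited values:

```python
# Hypothetical single-attribute utilities in [0, 1] for three candidate systems.
systems = {
    "HPS": {"cost": 0.4, "uniformity": 0.9, "efficiency": 0.6},
    "LED": {"cost": 0.7, "uniformity": 0.8, "efficiency": 0.9},
    "MH":  {"cost": 0.5, "uniformity": 0.7, "efficiency": 0.5},
}
weights = {"cost": 0.5, "uniformity": 0.2, "efficiency": 0.3}  # sum to 1

def utility(attrs, weights):
    """Additive multi-attribute utility: weighted sum of attribute utilities."""
    return sum(weights[a] * attrs[a] for a in weights)

best = max(systems, key=lambda s: utility(systems[s], weights))
print(best)  # the highest-utility system is selected
```

Varying the weights, as in the paper's parameter-variation analysis, checks whether the selection order responds sensibly.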
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
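The Dice similarity coefficient used above to quantify overlap is simple to compute on binary masks; the toy segmentations below are invented:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

algo = np.zeros((8, 8), int); algo[2:6, 2:6] = 1      # algorithm segmentation
expert = np.zeros((8, 8), int); expert[3:7, 2:6] = 1  # radiologist segmentation
print(round(dice(algo, expert), 2))  # → 0.75
```

DSC ranges from 0 (no overlap) to 1 (identical masks), so values like the reported 0.77 and 0.95 indicate substantial and near-perfect overlap, respectively.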
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose: Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. This study aimed to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of the commercial software CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. Conclusion: The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
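The LDS idea above can be illustrated in miniature: fit a first-order linear dynamic system to each voxel's enhancement curve and threshold the fitted parameters. This is a hypothetical simplification for illustration only; the AR(1) form, the function names, and the thresholds are assumptions, not the paper's algorithm:

```python
import numpy as np

def lds_params(ts):
    """Fit a first-order linear dynamic system x[t+1] = a*x[t] + b
    to one voxel's enhancement curve by least squares."""
    X = np.column_stack([ts[:-1], np.ones(len(ts) - 1)])
    a, b = np.linalg.lstsq(X, ts[1:], rcond=None)[0]
    return a, b

def segment(dce, a_thresh=0.8, b_thresh=0.05):
    """Label voxels whose fitted dynamics indicate sustained uptake."""
    labels = np.zeros(dce.shape[0], dtype=bool)
    for i, ts in enumerate(dce):
        a, b = lds_params(ts)
        labels[i] = (a > a_thresh) and (b > b_thresh)
    return labels

t = np.arange(40)
uptake = 1.0 - np.exp(-0.1 * t)   # enhancing (tumor-like) curve
flat = np.full(40, 0.05)          # non-enhancing background voxel
labels = segment(np.vstack([uptake, flat]))
print(labels)                     # [ True False]
```

A full implementation would fit a higher-order state-space model and segment in the space of all system parameters rather than two scalar thresholds.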
Variable camber wing based on pneumatic artificial muscles
NASA Astrophysics Data System (ADS)
Yin, Weilong; Liu, Libo; Chen, Yijin; Leng, Jinsong
2009-07-01
As a novel bionic actuator, the pneumatic artificial muscle has a high power-to-weight ratio. In this paper, a variable camber wing actuated by pneumatic artificial muscles is developed. Firstly, an experimental setup to measure the static output force of the pneumatic artificial muscle is designed. The relationship between the static output force and the air pressure is investigated. Experimental results show that the static output force of the pneumatic artificial muscle decreases nonlinearly with increasing contraction ratio. Secondly, a finite element model of the variable camber wing is developed. Numerical results show that the tip displacement of the trailing edge increases linearly with increasing external load and is limited by the maximum static output force of the pneumatic artificial muscles. Finally, a variable camber wing model is manufactured to validate the variable camber concept. Experimental results show that the wing camber increases with increasing air pressure and agrees well with the FEM results.
MTCLIM: a mountain microclimate simulation model
Roger D. Hungerford; Ramakrishna R. Nemani; Steven W. Running; Joseph C. Coughlan
1989-01-01
A model for calculating daily microclimate conditions in mountainous terrain is presented. Daily air temperature, shortwave radiation, relative humidity, and precipitation are extrapolated from data measured at National Weather Service stations. The model equations are given and the paper describes how to execute the model. Model outputs are compared with observed data...
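The station-to-site extrapolation step can be sketched with a fixed lapse-rate adjustment of daily air temperature to site elevation. This is a generic illustration under a constant-lapse-rate assumption; the rate value and function name are invented here and are not MTCLIM's actual formulation:

```python
def extrapolate_tair(t_base_c, elev_base_m, elev_site_m, lapse_c_per_km=6.5):
    """Adjust a base-station daily air temperature to a mountain site
    using a fixed environmental lapse rate (temperature falls with height)."""
    return t_base_c - lapse_c_per_km * (elev_site_m - elev_base_m) / 1000.0

# 20 C at a 500 m station, extrapolated to a site 1000 m higher
print(extrapolate_tair(20.0, 500.0, 1500.0))  # 13.5
```

MTCLIM itself uses separate corrections for radiation, humidity, and precipitation as well; this shows only the simplest of the four variables.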
Application of Artificial Neural Network to Optical Fluid Analyzer
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Nishida, Katsuhiko
1994-04-01
A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to a problem, the first step is training to determine the appropriate weight set for calculating the target values. A series of data sets, each comprising a set of input values and the associated set of output values that the network is required to produce, is used to tune the network's weighting parameters so that its output for a given set of input values is as close as possible to the required output. The physical model used to generate the series of training data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and comparing the results of the artificial neural network to the expected output values. The standard deviation between the expected and obtained values was approximately 10% (two sigma).
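The training procedure described (tuning weights so that network outputs match the required outputs over a series of data sets) can be sketched with a generic three-layer network fit by gradient descent. The architecture, learning rate, and toy data below are illustrative assumptions, not the OFA configuration or the effective flow stream model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in data: three "raw channel" inputs, one "oil fraction" target.
X = rng.uniform(size=(200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1]).reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)   # hidden -> output
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden layer activations
    out = h @ W2 + b2                 # linear output layer
    err = out - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)  # backpropagate through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1

rmse = float(np.sqrt((err ** 2).mean()))
print(round(rmse, 4))                 # small once training has converged
```

The verification step in the abstract corresponds to re-running the trained network on the training inputs and comparing `out` against `y`.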
The Flow Engine Framework: A Cognitive Model of Optimal Human Experience
Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie
2018-01-01
Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories describing the flow process: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms), and outputs (subjective and objective outcomes). Comparing flow with an engine, inputs are depicted as the fuel, core processes as the cylinder strokes, and outputs as the power created to provide motion. PMID:29899807
Probabilistic Evaluation of Competing Climate Models
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Chatterjee, S.; Heyman, M.; Cressie, N.
2017-12-01
A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. Here, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. We compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set, as an illustration.
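The pseudo-realization step can be sketched as follows, assuming (for illustration only) a given signal/noise split and an AR(1) noise model in place of the wavelet-based machinery the authors use:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_fit(x):
    """Fit an AR(1) noise model x[t] = phi*x[t-1] + e, e ~ N(0, sigma^2)."""
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - phi * x[:-1]
    return phi, resid.std(ddof=1)

def pseudo_realizations(signal, noise, n_boot=200):
    """Parametric bootstrap: hold the signal fixed, resimulate the noise."""
    phi, sigma = ar1_fit(noise)
    reals = np.empty((n_boot, len(signal)))
    for b in range(n_boot):
        e = rng.normal(0.0, sigma, len(signal))
        sim = np.zeros(len(signal))
        for t in range(1, len(signal)):
            sim[t] = phi * sim[t - 1] + e[t]
        reals[b] = signal + sim
    return reals

t = np.arange(240)                      # 20 years of monthly anomalies
signal = 0.01 * t                       # slow "climate-scale" component
noise = rng.normal(0, 0.2, 240)         # toy residual variability
reals = pseudo_realizations(signal, noise)
print(reals.shape)                      # (200, 240)
```

Summary statistics computed over `reals` then provide the null distribution against which model-versus-observation statistics are judged.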
Two models for identification and predicting behaviour of an induction motor system
NASA Astrophysics Data System (ADS)
Kuo, Chien-Hsun
2018-01-01
System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification using ARX (Auto Regressive with eXogenous input) and ARMAX (Auto Regressive Moving Average with eXogenous input) models. Through the identified system model, the predicted output can be compared with the measured one to help prevent motor faults from developing into a catastrophic machine failure and to avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
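ARX identification of the kind described reduces to an ordinary least-squares problem over lagged outputs and inputs. The sketch below is illustrative (the model orders and the toy system are assumptions, not the induction motor data):

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[t] + a1*y[t-1] + ... + a_na*y[t-na] = b1*u[t-1] + ... + b_nb*u[t-nb]."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]       # (a-coefficients, b-coefficients)

# Recover a known first-order system y[t] = 0.8*y[t-1] + 0.5*u[t-1].
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]
a, b = fit_arx(y, u, na=1, nb=1)
print(np.round(a, 3), np.round(b, 3))   # [-0.8] [0.5]
```

ARMAX adds a moving-average noise model and requires iterative estimation rather than a single least-squares solve.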
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction models with tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, on the premise of predicting two-dimensional variables, HOPLS had the same predictive ability and a lower dispersion degree compared with partial least squares (PLS).
Design of a Collapse-Mode CMUT With an Embossed Membrane for Improving Output Pressure.
Yu, Yuanyu; Pun, Sio Hang; Mak, Peng Un; Cheng, Ching-Hsiang; Wang, Jiujiang; Mak, Pui-In; Vai, Mang I
2016-06-01
Capacitive micromachined ultrasonic transducers (CMUTs) have emerged as a competitive alternative to piezoelectric ultrasonic transducers, especially in medical ultrasound imaging and therapeutic ultrasound applications, which require high output pressure. However, as compared with piezoelectric ultrasonic transducers, the output pressure capability of CMUTs remains to be improved. In this paper, a novel structure is proposed by forming an embossed vibrating membrane on a CMUT cell operating in the collapse mode to increase the maximum output pressure. By using a beam model in undamped conditions and finite-element analysis simulations, the proposed embossed structure showed improvement in the maximum output pressure of the CMUT cell when the embossed pattern was placed at the estimated location of the peak deflection. Compared with a uniform-membrane CMUT cell operated in the collapse mode, the proposed CMUT cell increased the maximum output pressure by 51.1% and 88.1% with a single embossed pattern made of Si3N4 and nickel, respectively. When the center frequencies of the original and embossed CMUT designs were kept similar, the maximum output pressures were improved over the uniform membrane by 34.9% (a single Si3N4 embossed pattern) and 46.7% (a single nickel embossed pattern).
Optimal pulse design for communication-oriented slow-light pulse detection.
Stenner, Michael D; Neifeld, Mark A
2008-01-21
We present techniques for designing pulses for linear slow-light delay systems that are optimal in the sense that they maximize the signal-to-noise ratio (SNR) and signal-to-noise-plus-interference ratio (SNIR) of the detected pulse energy. Given a communication model in which input pulses are created in a finite temporal window and output pulse energy is measured in a temporally offset output window, the SNIR-optimal pulses achieve typical improvements of 10 dB compared to traditional pulse shapes for a given output window offset. Alternatively, for fixed SNR or SNIR, the window offset (detection delay) can be increased by 0.3 times the window width. This approach also invites a communication-based model for delay and signal fidelity.
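For a linear system and an energy detector, maximizing the energy delivered inside a temporally offset output window reduces to an eigenvector problem. The sketch below assumes a toy discrete impulse response and window, not the paper's slow-light model:

```python
import numpy as np

def optimal_pulse(h, n_in, out_slice):
    """Unit-energy input pulse (length n_in) maximizing the energy that a
    linear system with impulse response h delivers inside the output window.
    This is the dominant eigenvector of Aw^T Aw, where Aw contains the rows
    of the convolution matrix that fall inside the window."""
    n_out = n_in + len(h) - 1
    A = np.zeros((n_out, n_in))
    for i in range(n_in):
        A[i:i + len(h), i] = h          # build the convolution matrix
    Aw = A[out_slice]                   # keep rows inside the window
    w, v = np.linalg.eigh(Aw.T @ Aw)
    return v[:, -1]                     # eigenvector of largest eigenvalue

h = np.array([0.2, 0.5, 0.2])           # toy delay-system impulse response
p = optimal_pulse(h, n_in=16, out_slice=slice(4, 18))
print(round(float(np.sum(p**2)), 6))    # 1.0 (unit input energy)
```

By construction this pulse captures at least as much windowed output energy as any other unit-energy pulse, e.g. a rectangular one.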
Synchronized Trajectories in a Climate "Supermodel"
NASA Astrophysics Data System (ADS)
Duane, Gregory; Schevenhoven, Francine; Selten, Frank
2017-04-01
Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, such as probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
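The synchronization-by-nudging idea can be sketched with two imperfect Lorenz-63 "models" coupled through their tendencies. The parameter errors, coupling strength, and integration scheme below are illustrative assumptions, not the SPEEDO configuration:

```python
import numpy as np

def lorenz_rhs(s, sigma, rho, beta=8.0 / 3.0):
    """Lorenz-63 tendencies for state s = (x, y, z)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, K = 0.01, 20.0                     # Euler time step, nudging strength
a = np.array([1.0, 1.0, 20.0])         # "model A" state (imperfect params)
b = np.array([-1.0, 2.0, 25.0])        # "model B" state (different errors)
for _ in range(5000):
    da = lorenz_rhs(a, 10.0, 28.5) + K * (b - a)   # A nudged toward B
    db = lorenz_rhs(b, 10.5, 27.5) + K * (a - b)   # B nudged toward A
    a, b = a + dt * da, b + dt * db

# Despite different parameters and initial states, the nudged models
# follow nearly the same trajectory, so a single synchronized trajectory
# (rather than an ensemble spread) can be analyzed.
print(round(float(np.abs(a - b).max()), 3))
```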
Comparative Analysis of Vertebrate Diurnal/Circadian Transcriptomes
Boyle, Greg; Richter, Kerstin; Priest, Henry D.; Traver, David; Mockler, Todd C.; Chang, Jeffrey T.; Kay, Steve A.
2017-01-01
From photosynthetic bacteria to mammals, the circadian clock evolved to track diurnal rhythms and enable organisms to anticipate daily recurring changes such as temperature and light. It orchestrates a broad spectrum of physiology, such as the sleep/wake and eating/fasting cycles. While we have made tremendous advances in our understanding of the molecular details of the circadian clock mechanism and how it is synchronized with the environment, we still have rudimentary knowledge of how it regulates diurnal physiology. One potential reason is the sheer size of the output network: diurnal/circadian transcriptomic studies report that around 10% of the expressed genome is rhythmically controlled. Zebrafish is an important model system for the study of the core circadian mechanism in vertebrates. As zebrafish shares more than 70% of its genes with humans, it could serve as a model alongside rodents for exploring the diurnal/circadian output, with potential for translational relevance. Here we performed comparative diurnal/circadian transcriptome analysis against established mouse liver and other tissue datasets. First, by combining liver tissue sampling in a 48 h time series, transcription profiling using oligonucleotide arrays, and bioinformatics analysis, we profiled rhythmic transcripts and identified 2609 rhythmic genes. The comparative analysis revealed interesting features of the output network regarding the number of rhythmic genes, the proportion of tissue-specific genes, and the extent of transcription factor family expression. Undoubtedly, the zebrafish model system will help identify new vertebrate outputs and their regulators and provides leads for further characterization of the diurnal cis-regulatory network. PMID:28076377
Reactive Power Pricing Model Considering the Randomness of Wind Power Output
NASA Astrophysics Data System (ADS)
Dai, Zhong; Wu, Zhou
2018-01-01
With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, power market reform puts forward higher requirements for reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that considers wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, on the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model pricing and marginal cost pricing, which shows that the model is accurate and effective.
Common evolutionary trends underlie the four-bar linkage systems of sunfish and mantis shrimp.
Hu, Yinan; Nelson-Maney, Nathan; Anderson, Philip S L
2017-05-01
Comparative biomechanics offers an opportunity to explore the evolution of disparate biological systems that share common underlying mechanics. Four-bar linkage modeling has been applied to various biological systems such as fish jaws and crustacean appendages to explore the relationship between biomechanics and evolutionary diversification. Mechanical sensitivity states that the functional output of a mechanical system will show differential sensitivity to changes in specific morphological components. We document similar patterns of mechanical sensitivity in two disparate four-bar systems from different phyla: the opercular four-bar system in centrarchid fishes and the raptorial appendage of stomatopods. We built dynamic linkage models of 19 centrarchid and 36 stomatopod species and used phylogenetic generalized least squares regression (PGLS) to compare evolutionary shifts in linkage morphology and mechanical outputs derived from the models. In both systems, the kinematics of the four-bar mechanism show significant evolutionary correlation with the output link, while travel distance of the output arm is correlated with the coupler link. This common evolutionary pattern seen in both fish and crustacean taxa is a potential consequence of the mechanical principles underlying four-bar systems. Our results illustrate the potential influence of physical principles on morphological evolution across biological systems with different structures, behaviors, and ecologies. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
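The kinematics underlying a four-bar model can be sketched by computing the output-link angle from the input angle and the four link lengths (open configuration, law of cosines). This is a generic textbook construction, not the authors' dynamic linkage models:

```python
import math

def fourbar_output_angle(L1, L2, L3, L4, theta2):
    """Output-link angle of a planar four-bar linkage (open configuration).
    L1 = ground link, L2 = input link, L3 = coupler, L4 = output link."""
    # Diagonal from the input-link tip to the output pivot (law of cosines).
    d = math.sqrt(L1**2 + L2**2 - 2 * L1 * L2 * math.cos(theta2))
    alpha = math.asin(L2 * math.sin(theta2) / d)        # diagonal vs ground
    gamma = math.acos((L4**2 + d**2 - L3**2) / (2 * L4 * d))  # at output pivot
    return math.pi - alpha - gamma

# Sanity check on a parallelogram linkage: output tracks the input exactly.
theta4 = fourbar_output_angle(2.0, 1.0, 2.0, 1.0, math.radians(90))
print(round(math.degrees(theta4), 3))   # 90.0
```

Kinematic transmission, the mechanical output studied across species, is the derivative of this map with respect to the input angle and can be obtained by finite differences.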
Improved system integration for integrated gasification combined cycle (IGCC) systems.
Frey, H Christopher; Zhu, Yunhua
2006-03-01
Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.
NASA Astrophysics Data System (ADS)
Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.
2017-04-01
Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out of bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate that AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, RMSE for AirSWOT measurements of WSEs is 9.0 cm when averaged over 1 km squared areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, while model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.
NASA Astrophysics Data System (ADS)
Enquist, C.
2012-12-01
Phenology, the study of seasonal life cycle events in plants and animals, is a well-recognized indicator of climate change impacts on people and nature. Models, experiments, and observational studies show changes in plant and animal phenology as a function of environmental change. Current research aims to improve our understanding of these changes by enhancing existing models, analyzing observations, synthesizing previous research, and comparing outputs. Local to regional climatology is a critical driver of phenological variation of organisms across scales. Because plants respond to the cumulative effects of daily weather over an extended period, the timing of life cycle events is an effective integrator of climate data. One specific measure, leaf emergence, is particularly important because it often shows a strong response to temperature change and is crucial for assessment of processes related to the start and duration of the growing season. Schwartz et al. (2006) developed a suite of models (the "Spring Indices") linking plant development, based on historical leafing and flowering data for cloned lilac and honeysuckle, with basic climatic drivers to monitor changes related to the start of the spring growing season. These models can be generated at any location that has a daily max-min temperature time series. The new version of these models is called the "Extended Spring Indices," or SI-x (Schwartz et al. in press). The SI-x model outputs (first leaf date and first bloom date) are produced similarly to the original models (SI-o) but do not incorporate accumulated chilling hours; rather, energy accumulation starts for all stations on January 1. This change extends the locations at which SI model output can be generated into the subtropics, allowing full coverage of the conterminous USA. Both SI model versions are highly correlated, with mean bias and mean absolute differences around two days or less, and similar bias and absolute errors when compared to cloned lilac data.
To qualitatively test SI-x output and synthesize climate-linked regional variation in phenological events across the United States, we conducted a review of the recent phenology literature and assembled this information into 8 geographic regions. Additionally, we compared these outputs to analyses of species data found in the USA National Phenology Network database. We found that (1) all outputs showed advancement of spring onset across regions and taxa, despite great variability in species and site-level response, (2) many studies suggest that there may be evolutionary selection for organisms that track climatic changes, (3) although some organisms may benefit from lengthening growing seasons, there may be a cost, such as susceptibility to late frost, or "false springs," and (4) invasive organisms may have more capacity to track these changes than natives. More work is needed to (1) better understand precipitation- and hydrology-related cues and (2) understand the demographic consequences of trophic mismatch and effects on ecosystem processes and services. Next steps in this research include performing quantitative analyses to further explore if SI-x can be used to indicate and forecast changes in ecological and hydrological processes across geographic regions.
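The SI-style accumulation from daily max-min temperatures can be sketched as a degree-day sum starting on January 1. The base temperature and threshold below are invented for illustration and are not the published SI-x coefficients:

```python
def first_leaf_doy(tmax, tmin, base_c=0.0, threshold=150.0):
    """Day of year when accumulated warmth (degree-days above a base
    temperature, summed from January 1) first exceeds a threshold."""
    acc = 0.0
    for doy, (hi, lo) in enumerate(zip(tmax, tmin), start=1):
        acc += max(0.0, (hi + lo) / 2.0 - base_c)
        if acc >= threshold:
            return doy
    return None  # threshold never reached this year

# Toy year: daily mean temperature warms linearly from -5 C on Jan 1.
tmax = [-3.0 + 0.2 * d for d in range(365)]
tmin = [-7.0 + 0.2 * d for d in range(365)]
print(first_leaf_doy(tmax, tmin))  # 65
```

The real SI-x additionally uses photoperiod-weighted forcing and separate first-leaf and first-bloom formulations calibrated on the cloned indicator plants.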
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
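The ration-and-temperature structure of a bioenergetic consumption term can be sketched generically. The exponential temperature dependence and all parameter values below are assumptions for illustration, not the esocid model's parameterization:

```python
def consumption(cmax, p, temp_c, t_opt=22.0, theta=2.3):
    """Daily consumption as a proportion p of the maximum ration C_max,
    scaled by a simple exponential temperature dependence that is capped
    at 1 at the optimum temperature (a crude stand-in for the usual
    temperature-dependence functions in fish bioenergetics models)."""
    ft = theta ** ((temp_c - t_opt) / 10.0)
    return cmax * p * min(ft, 1.0)

# Half ration (p = 0.5) at the optimum temperature: no thermal scaling.
print(round(consumption(0.3, 0.5, 22.0), 3))  # 0.15
```

The study's point is visible in this structure: if the temperature function is miscalibrated for winter conditions, consumption estimated at 7.5-10 °C will be systematically biased regardless of the ration term.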
COMPARISON OF SPATIAL PATTERNS OF POLLUTANT DISTRIBUTION WITH CMAQ PREDICTIONS
To evaluate the Models-3/Community Multiscale Air Quality (CMAQ) modeling system in reproducing the spatial patterns of aerosol concentrations over the country on timescales of months and years, the spatial patterns of model output are compared with those derived from observation...
Empirical measurement and model validation of infrared spectra of contaminated surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay
2015-05-01
Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases where the liquid contamination was well described by optical constants on optically flat surfaces were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 watts/(m2-sr-μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
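The spectral-angle metric used above to quantify agreement between measured and modeled emissivities can be sketched directly; the toy spectra below stand in for the D&P and DIRSIG outputs:

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Angle between two spectra treated as vectors; 0 deg = same shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

wl = np.linspace(8.0, 12.0, 50)                    # LWIR wavelengths, microns
measured = 0.95 - 0.02 * np.exp(-((wl - 9.5)**2))  # toy measured emissivity
modeled = measured + 0.001                         # nearly identical model
print(spectral_angle_deg(measured, modeled) < 1.0) # True
```

Because the metric is scale-invariant, it rewards matching spectral shape (absorption features) rather than absolute level, which is why it complements the RMS radiance error also reported.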
Burden of fibromyalgia and comparisons with osteoarthritis in the workforce.
Kleinman, Nathan; Harnett, James; Melkonian, Arthur; Lynch, Wendy; Kaplan-Machlis, Barbara; Silverman, Stuart L
2009-12-01
To calculate the fibromyalgia (FM) burden of illness (BOI) from the employer perspective and to compare annual prevalence, work output, absence, and health benefit costs of employees with FM versus osteoarthritis (OA). Retrospective regression model analysis comparing objective work output, total health benefit (health care, prescription drug, sick leave, disability, workers' compensation) costs, and absence days for FM, versus OA and NoFM cohorts, while controlling for differences in patient characteristics. FM prevalence was 0.73%; OA 0.90%. Total health benefit costs for FM were $8452 versus $11,253 (P < 0.0001) for OA and $4013 (P < 0.0001) for NoFM, with BOI = $4439. Total absence days were 16.8 versus 19.8 (P < 0.0001) and 6.4 (P < 0.0001), respectively. FM had significantly lower annual work output than NoFM (19.5%, P = 0.003) but comparable with OA. FM places a significant cost, absence, and productivity burden on employers.
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
NASA Astrophysics Data System (ADS)
Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari
2015-03-01
Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
NASA Astrophysics Data System (ADS)
Altenau, E. H.; Pavelsky, T.; Andreadis, K.; Bates, P. D.; Neal, J. C.
2017-12-01
Multichannel rivers continue to be challenging features to quantify, especially at regional and global scales, which is problematic because accurate representations of such environments are needed to properly monitor the Earth's water cycle as it adjusts to climate change. It has been demonstrated that higher-complexity 2D models outperform lower-complexity 1D models in simulating multichannel river hydraulics at regional scales because they include the channel network's connectivity. However, new remote sensing measurements from the future Surface Water and Ocean Topography (SWOT) mission and its airborne analog, AirSWOT, offer observations that can be used to improve the lower-complexity 1D models toward accuracies closer to those of the higher-complexity 2D codes. Here, we use an Ensemble Kalman Filter (EnKF) to assimilate AirSWOT water surface elevation (WSE) measurements from a 2015 field campaign into a 1D hydrodynamic model along a 90 km reach of the Tanana River, AK. This work is the first to test data assimilation methods using real SWOT-like data from AirSWOT. Additionally, synthetic SWOT observations of WSE are generated across the same study site using a fine-resolution 2D model and assimilated into the coarser-resolution 1D model. Lastly, we compare the abilities of the AirSWOT and synthetic SWOT observations to improve spatial and temporal model outputs of WSE. Results indicate that 1D model outputs of spatially distributed WSEs improve as observational coverage increases, and that improvements in temporal fluctuations in WSEs depend on the number of observations. Furthermore, results reveal that assimilation of AirSWOT observations produces greater error reductions in 1D model outputs than synthetic SWOT observations do, owing to lower measurement errors. Both AirSWOT and the synthetic SWOT observations significantly lower spatial and temporal errors in 1D model outputs of WSEs.
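The EnKF analysis step used to blend model WSEs with observations can be sketched in a few lines. This is the generic stochastic-EnKF update under an assumed linear observation operator and illustrative error levels, not the hydrodynamic model's actual configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis: nudge each ensemble member toward
    perturbed observations, weighted by the sample Kalman gain."""
    X = ensemble                        # (n_members, n_state)
    A = X - X.mean(axis=0)              # state anomalies
    HX = X @ H.T                        # predicted observations
    HA = HX - HX.mean(axis=0)
    n = X.shape[0]
    P_hh = HA.T @ HA / (n - 1) + obs_err_std**2 * np.eye(len(obs))
    P_xh = A.T @ HA / (n - 1)
    K = P_xh @ np.linalg.inv(P_hh)      # Kalman gain from ensemble statistics
    obs_pert = obs + obs_err_std * rng.standard_normal((n, len(obs)))
    return X + (obs_pert - HX) @ K.T
```

With accurate observations (as with AirSWOT's low measurement errors), the gain pulls the ensemble mean strongly toward the data and shrinks its spread.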
NASA Astrophysics Data System (ADS)
Petit, J.-M.; Kavelaars, J. J.; Gladman, B.; Alexandersen, M.
2018-05-01
Comparing properties of discovered trans-Neptunian Objects (TNOs) with dynamical models is impossible due to the observational biases that exist in surveys. The OSSOS Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so the biased simulated objects can be directly compared with real discoveries.
NASA Astrophysics Data System (ADS)
Nair, Archana; Acharya, Nachiketa; Singh, Ankita; Mohanty, U. C.; Panda, T. C.
2013-11-01
In this study, the predictability of northeast monsoon (Oct-Nov-Dec) rainfall over peninsular India by eight general circulation model (GCM) outputs was analyzed. These GCM outputs (forecasts for the whole season issued in September) were compared with high-resolution observed gridded rainfall data obtained from the India Meteorological Department for the period 1982-2010. Rainfall, interannual variability (IAV), correlation coefficients, and index of agreement were examined for the outputs of the eight GCMs and compared with observations. It was found that the models are able to reproduce rainfall and IAV to different extents. The predictive power of the GCMs was also judged by determining the signal-to-noise ratio and the external error variance; it was noted that the predictive power of the models was usually very low. To examine dominant modes of interannual variability, empirical orthogonal function (EOF) analysis was also conducted. EOF analysis of the models revealed that they were capable of representing the observed precipitation variability to some extent. The teleconnection between sea surface temperature (SST) and northeast monsoon rainfall was also investigated, and results suggest that during OND the SST over the equatorial Indian Ocean, the Bay of Bengal, the central Pacific Ocean (over the Nino3 region), and the north and south Atlantic Ocean enhances northeast monsoon rainfall. This observed phenomenon is only predicted by the CCM3v6 model.
NASA Astrophysics Data System (ADS)
Lawler, Samantha M.; Kavelaars, J. J.; Alexandersen, Mike; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Shankman, Cory
2018-05-01
All surveys include observational biases, which makes it impossible to directly compare properties of discovered trans-Neptunian Objects (TNOs) with dynamical models. However, by carefully keeping track of survey pointings on the sky, detection limits, tracking fractions, and rate cuts, the biases from a survey can be modelled in Survey Simulator software. A Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so that the biased simulated objects can be directly compared with real discoveries. This methodology has been used with great success in the Outer Solar System Origins Survey (OSSOS) and its predecessor surveys. In this chapter, we give four examples of ways to use the OSSOS Survey Simulator to gain knowledge about the true structure of the Kuiper Belt. We demonstrate how to statistically compare different dynamical model outputs with real TNO discoveries, how to quantify detection biases within a TNO population, how to measure intrinsic population sizes, and how to use upper limits from non-detections. We hope this will provide a framework for dynamical modellers to statistically test the validity of their models.
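The core biasing step a Survey Simulator performs can be illustrated with a toy detection-efficiency function. The logistic form, magnitude limit, and width below are illustrative assumptions only; OSSOS uses tabulated per-block efficiency functions together with pointing geometry, tracking fractions, and rate cuts.

```python
import numpy as np

def apply_survey_biases(apparent_mag, mag_limit=24.5, width=0.3, rng=None):
    """Conceptual sketch of survey biasing: each simulated object is
    detected with a probability that rolls off near the limiting
    magnitude (logistic efficiency curve)."""
    rng = rng or np.random.default_rng()
    efficiency = 1.0 / (1.0 + np.exp((apparent_mag - mag_limit) / width))
    return rng.random(len(apparent_mag)) < efficiency
```

Applying such biases to an intrinsic model population yields a simulated detected sample that can be compared statistically with real discoveries, which is the methodology the chapter demonstrates.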
Lam, H K
2012-02-01
This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, controller design and system analysis are more challenging than in the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is held constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, sharing or not sharing the same number of fuzzy rules, are considered. The system stability is investigated based on Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee system stability and to synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.
NASA Astrophysics Data System (ADS)
Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.
2018-03-01
In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero-crossing method. As a case study, a 241Am-9Be source was considered, and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
Thermal resistance of etched-pillar vertical-cavity surface-emitting laser diodes
NASA Astrophysics Data System (ADS)
Wipiejewski, Torsten; Peters, Matthew G.; Young, D. Bruce; Thibeault, Brian; Fish, Gregory A.; Coldren, Larry A.
1996-03-01
We discuss our measurements of the thermal impedance and thermal crosstalk of etched-pillar vertical-cavity lasers and laser arrays. The average thermal conductivity of AlAs-GaAs Bragg reflectors is estimated to be 0.28 W/(cm·K) and 0.35 W/(cm·K) for the transverse and lateral directions, respectively. Lasers with a Au-plated heat-spreading layer exhibit a 50% lower thermal impedance than standard etched-pillar devices, resulting in a significant increase in maximum output power. For an unmounted laser of 64 μm diameter we obtain an improvement in output power from 20 mW to 42 mW. The experimental results are compared with a simple analytical model, showing the importance of heat sinking for maximizing the output power of vertical-cavity lasers.
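As a rough illustration of the kind of simple analytic estimate referred to above, the spreading resistance of a circular heat source on a semi-infinite substrate is commonly approximated as R_th = 1/(4·k·a). The conductivity value below is an assumed GaAs-like number for illustration, not the paper's fitted Bragg-mirror values, and the formula ignores the layered mirror structure entirely.

```python
def thermal_impedance(diameter_cm, k=0.44):
    """Spreading-resistance sketch: isothermal circular source of
    radius a on a semi-infinite substrate, R_th = 1/(4*k*a).
    k is an assumed bulk conductivity in W/(cm K)."""
    a = diameter_cm / 2.0
    return 1.0 / (4.0 * k * a)
```

For a 64 μm device this gives a thermal impedance on the order of a couple hundred K/W, which shows why heat-spreading layers that enlarge the effective source area lower the impedance.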
Geophysical, archaeological and historical evidence support a solar-output model for climate change
Perry, C.A.; Hsu, K.J.
2000-01-01
Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2^N (where N is a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide, called 'little ice ages,' similar to the period anno Domini (A.D.) 1280-1860 and recurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate-sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries, with intermittent minor warm-ups, and a return to near little-ice-age conditions within the next 500 years. This cool period may then be followed, approximately 1,500 years from now, by a return to altithermal conditions similar to the previous Holocene Maximum.
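The additive harmonic construction can be sketched directly. The amplitude-growth factor and the range of N below are assumptions for illustration; the paper's actual amplitudes and the phase calibration to the 9,000 yr BP boundary are omitted.

```python
import numpy as np

def solar_output_model(years, n_min=-2, n_max=8, amp_growth=1.3):
    """Illustrative additive model: superimpose harmonics with periods
    11 * 2**N years, amplitudes growing slightly with period length
    (amp_growth is an assumed factor, not the paper's calibration)."""
    t = np.asarray(years, dtype=float)
    total = np.zeros_like(t)
    for N in range(n_min, n_max + 1):
        period = 11.0 * 2.0**N          # the 11-yr cycle scaled by 2^N
        total += amp_growth**N * np.cos(2 * np.pi * t / period)
    return total
```

Because long-period terms carry the largest amplitudes, the summed curve is dominated by millennial-scale swings with the shorter cycles superimposed, which is the qualitative behavior the model exploits.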
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can serve as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimating the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated into thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation for the DC voltage output of the piezoelectric patches in parallel configuration is derived for patches connected to a standard AC-DC circuit. The analytical model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytical results are validated numerically via SPICE simulations. Finally, the DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
Time does not cause forgetting in short-term serial recall.
Lewandowsky, Stephan; Duncan, Matthew; Brown, Gordon D A
2004-10-01
Time-based theories expect memory performance to decline as the delay between study and recall of an item increases. The assumption of time-based forgetting, central to many models of serial recall, underpins their key behaviors. Here we compare the predictions of time-based and event-based models by simulation and test them in two experiments using a novel manipulation of the delay between study and retrieval. Participants were trained, via corrective feedback, to recall at different speeds, thus varying total recall time from 6 to 10 sec. In the first experiment, participants used the keyboard to enter their responses but had to repeat a word (called the suppressor) aloud during recall to prevent rehearsal. In the second experiment, articulation was again required, but recall was verbal and was paced by the number of repetitions of the suppressor in between retrieval of items. In both experiments, serial position curves for all retrieval speeds overlapped, and output time had little or no effect. Comparative evaluation of a time-based and an event-based model confirmed that these results present a particular challenge to time-based approaches. We conclude that output interference, rather than output time, is critical in serial recall.
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
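The benchmark statistics mentioned above (root mean square error and correlation between model output and observed fluxes) can be computed directly; a minimal sketch with illustrative variable names:

```python
import numpy as np

def benchmark_scores(model_series, obs_series):
    """Benchmarking sketch: RMSE and Pearson correlation between a
    model output time series and an observed benchmark series."""
    m = np.asarray(model_series, dtype=float)
    o = np.asarray(obs_series, dtype=float)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    r = np.corrcoef(m, o)[0, 1]
    return rmse, r
```

High RMSE with low r, as reported for NEE at this site, indicates both amplitude errors and a failure to track the observed seasonality.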
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S.
Department of Energy under contract DE-AC05-00OR22725
The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2017-12-01
WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF-compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point-type display for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF-compliant netCDF point data format for the community.
ASSESSING A COMPUTER MODEL FOR PREDICTING HUMAN EXPOSURE TO PM2.5
This paper compares outputs of a model for predicting PM2.5 exposure with experimental data obtained from exposure studies of selected subpopulations. The exposure model is built on a WWW platform called pCNEM, "A PC Version of pNEM." Exposure models created by pCNEM are sim...
CMAQ-UCD (formerly known as CMAQ-AIM), is a fully dynamic, sectional aerosol model which has been coupled to the Community Multiscale Air Quality (CMAQ) host air quality model. Aerosol sulfate, nitrate, ammonium, sodium, and chloride model outputs are compared against MOUDI data...
Dynamics of nonlinear feedback control.
Snippe, H P; van Hateren, J H
2007-05-01
Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.
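The divisive variant described above can be sketched in a few lines of discrete-time simulation. The quadratic nonlinearity, time constant, and step amplitude below are illustrative choices, not the paper's; the ordering shown is nonlinearity first, then low-pass filtering.

```python
import numpy as np

def divisive_feedback(x, tau=20.0, dt=1.0, nonlin=np.square):
    """Divisive control sketch: output y = input / (1 + a), where the
    attenuation a is a first-order low-pass filter of an instantaneous
    nonlinearity applied to the loop output."""
    a = 0.0
    out = np.zeros_like(x, dtype=float)
    alpha = dt / tau
    for i, xi in enumerate(x):
        y = xi / (1.0 + a)
        a += alpha * (nonlin(y) - a)   # low-pass of f(y) closes the loop
        out[i] = y
    return out
```

For a step input the output first jumps (the attenuation has not yet built up) and then relaxes to the steady state where y·(1 + f(y)) equals the input, illustrating the asymmetric transient dynamics the paper analyzes.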
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg-1 and 66,112.94 MJ kg-1, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, on-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning to forecast energy output and environmental indices of agricultural production systems, owing to its higher computation speed compared to the ANN model, despite the ANN's higher accuracy.
Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model
NASA Astrophysics Data System (ADS)
Washington, M. H.; Kumar, S.
2017-12-01
The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's output prediction of water levels during a storm surge is inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to demonstrate the accuracy of the model's prediction of water levels using the realistic wind size input parameter compared to the default constant wind size parameter for Hurricane Matthew, with the water level data observed from October 4, 2016 to October 9, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water level output for the realistic wind size parameter, compared to the default constant size parameter, matches the NOAA reference water level data more accurately.
NASA Astrophysics Data System (ADS)
Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
In a previous study, the sensitivity of a long-range model was analyzed on the Fukushima Daiichi accident case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to IRSN's operational short-range atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in case of accidental releases of pollutant, to minimize population exposure during the accident and to obtain an accurate assessment of the short- and long-term environmental and sanitary impact. Long-range models are mostly used for consequence assessment, while short-range models are better adapted to the early phases of a crisis and are used to make prognoses. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced in the short-range model than in the long-range one. Afterward, the Sobol method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also confronts two parameterizations, Doury's and Pasquill's, to contrast their behavior. Doury's model seems to excessively inflate the influence of some inputs compared to Pasquill's model, such as the altitude of emission and the air stability, which do not play the same role in the two models. The outputs of the long-range model were dominated by only a few inputs; on the contrary, in this study the influence is shared more evenly among the inputs.
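The Morris screening method ranks inputs by their elementary effects; a minimal sketch on inputs normalized to [0, 1], with an illustrative step size (operational implementations sample trajectories more carefully and scale each input to its physical range):

```python
import numpy as np

def morris_elementary_effects(f, n_inputs, n_traj=20, delta=0.5, rng=None):
    """Morris screening sketch: along each random trajectory, perturb
    one input at a time by delta and record the elementary effect.
    mu* (mean |EE|) ranks influence; sigma flags nonlinearity and
    interactions."""
    rng = rng or np.random.default_rng()
    effects = np.zeros((n_traj, n_inputs))
    for t in range(n_traj):
        x = rng.random(n_inputs) * (1 - delta)   # keep x + delta inside [0, 1]
        order = rng.permutation(n_inputs)
        y = f(x)
        for i in order:
            x[i] += delta
            y_new = f(x)
            effects[t, i] = (y_new - y) / delta
            y = y_new
    return np.abs(effects).mean(axis=0), effects.std(axis=0)
```

For a purely linear model the sigma values vanish; the pronounced interactions and nonlinearity reported for the short-range model would show up as large sigma relative to mu*.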
Ollendorf, Daniel A; Pearson, Steven D
2014-01-01
Economic modeling has rarely been considered to be an essential component of healthcare policy-making in the USA, due to a lack of transparency in model design and assumptions, as well as political interests that equate examination of cost with unfair rationing. The Institute for Clinical and Economic Review has been involved in several efforts to bring economic modeling into public discussion of the comparative value of healthcare interventions, efforts that have evolved over time to suit the needs of multiple public forums. In this article, we review these initiatives and present a template that attempts to 'unpack' model output and present the major drivers of outcomes and cost. We conclude with a series of recommendations for effective presentation of economic models to US policy-makers.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of reservoir deterministic optimal operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was regarded as a variable in developing a short-term reservoir optimal operation model that reduces operational risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and an extreme value theory-genetic algorithm (EVT-GA) was then proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived from the model, presenting more flexible options for decision makers, and the highest assurance rate can reach 99%, far higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
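The VaR concept used above can be sketched as a quantile of a loss distribution. This empirical version is the simplest form; the paper instead fits an extreme value distribution to the tail of the output-error distribution (the EVT part of EVT-GA) before optimizing the schedule with a genetic algorithm.

```python
import numpy as np

def value_at_risk(losses, confidence=0.95):
    """Empirical VaR sketch: the loss level exceeded with probability
    1 - confidence. The paper refines the tail with an extreme value
    fit rather than using the raw empirical quantile."""
    return np.quantile(np.asarray(losses, dtype=float), confidence)
```

A schedule's assurance rate then corresponds to the confidence level at which its worst-case generation shortfall stays within an acceptable VaR.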
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
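The workhorse of history matching is the implausibility measure, which standardizes the distance between an observed output and the emulator's prediction; for stochastic models, the emulated variance term is itself input-dependent, as the paper proposes. A minimal sketch (the variance components and the cutoff of 3 are the conventional choices):

```python
import numpy as np

def implausibility(z, emu_mean, emu_var, model_disc_var=0.0, obs_err_var=0.0):
    """History-matching implausibility: |z - E[f(x)]| standardized by
    the total uncertainty (emulator variance + model discrepancy +
    observation error). Inputs with I > 3 are typically ruled out."""
    return np.abs(z - emu_mean) / np.sqrt(emu_var + model_disc_var + obs_err_var)
```

Iteratively discarding input regions with large implausibility ("waves") shrinks the non-implausible space, which is where the reported efficiency gains are measured.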
Hay, L.E.; Clark, M.P.
2003-01-01
This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) Reanalysis. Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically; but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will be valid in a future climate.
Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and improve DDS simulations of daily variability in local climate. Until then, SDS based simulations of runoff appear to be the safer downscaling choice.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Di; Xu, Lu; Liang, Xiao-Yan
2017-01-01
We theoretically analyzed the output beam quality of broad-bandwidth non-collinear optical parametric chirped pulse amplification (NOPCPA) in LiB3O5 (LBO) centered at 800 nm. With a three-dimensional numerical model, the influence of the pump intensity, pump and signal spatial modulations, and the walk-off effect on the OPCPA output beam quality is presented, together with the conversion efficiency and the gain spectrum. The pump modulation is the dominant factor affecting the output beam quality; by comparison, the influence of signal modulation is insignificant. For a low-energy system with small beam sizes, the walk-off effect has to be considered: pump modulation and walk-off together lead to an asymmetric output beam profile with increased modulation. A particular pump modulation type is found to optimize output beam quality and efficiency. For a high-energy system with large beam sizes, the walk-off effect can be neglected, and a certain amount of back-conversion is beneficial in reducing the output modulation. A trade-off must be made between output beam quality and conversion efficiency, especially when the pump modulation is large. A relatively high conversion efficiency and a low output modulation are both achievable by controlling the pump modulation and intensity.
The UK waste input-output table: Linking waste generation to the UK economy.
Salemdeeb, Ramy; Al-Tabbaa, Abir; Reynolds, Christian
2016-10-01
In order to achieve a circular economy, there must be a greater understanding of the links between economic activity and waste generation. This study introduces the first version of the UK waste input-output table, which can be used to quantify both direct and indirect waste arisings across the supply chain. The proposed waste input-output table features 21 industrial sectors and 34 waste types and covers the 2010 time period. Using the waste input-output table, the study results quantitatively confirm that sectors with a long supply chain (i.e. manufacturing and services sectors) have higher indirect waste generation rates compared with industrial primary sectors (e.g. mining and quarrying) and sectors with a shorter supply chain (e.g. construction). Results also reveal that the construction sector and the mining and quarrying sector have the highest waste generation rates: 742 and 694 tonnes per £1m of final demand, respectively. Owing to the aggregated format of this first version of the waste input-output table, the model does not address the relationship between waste generation and recycling activities. An updated version of the waste input-output table is therefore expected to be developed to address this issue. The expanded model would lead to a better understanding of waste and resource flows in the supply chain.
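The direct/indirect distinction in a waste input-output table follows standard Leontief algebra: direct waste intensities are propagated through the whole supply chain via the Leontief inverse. Below is a toy three-sector sketch with made-up numbers (not the actual UK table, which has 21 sectors and 34 waste types).

```python
import numpy as np

# Toy 3-sector economy (hypothetical coefficients, not the UK WIO table).
# A[i, j] = input from sector i needed per unit of sector j's output.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])

# w[0, j] = tonnes of one waste type generated directly per £1m of
# sector j's output (direct waste intensity).
w = np.array([[50.0, 700.0, 20.0]])

# Leontief inverse: total (direct + indirect) output required across
# the supply chain to deliver one unit of final demand in each sector.
L = np.linalg.inv(np.eye(3) - A)

# Total waste intensity per £1m of final demand in each sector.
total_intensity = w @ L

# Indirect waste is the difference; sectors with long supply chains
# (large off-diagonal requirements in L) show a high indirect share.
indirect = total_intensity - w
```

With a full waste table, `w` becomes a 34 × 21 matrix and the same two matrix operations yield all waste types at once.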
NASA Astrophysics Data System (ADS)
Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.
2012-12-01
General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km² des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.
Effects of laser phase fluctuations on squeezing in intracavity second-harmonic generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, T. A. B.; Anderson, T. B.; Walls, D. F.
1989-08-01
Excellent squeezing in intracavity second-harmonic generation has been predicted to occur on cavity resonance in the output intensity fluctuations. Cavity detunings cause laser phase noise to couple in and reduce the observable squeezing. Here we consider the effects of laser phase fluctuations on the output-squeezing spectrum. Laser phase noise is modeled as an Ornstein-Uhlenbeck (colored-noise) Gaussian stochastic process and its effects are compared with the white-noise limit. This indicates that the white-noise model may qualitatively overestimate the deleterious effects of laser fluctuations on sideband squeezing. We compare our results with the recently reported experiment of Pereira et al. (Phys. Rev. A 38, 4931 (1988)) and present an analysis of the empty cavity for comparison.
Real-time simulation of an F110/STOVL turbofan engine
NASA Technical Reports Server (NTRS)
Drummond, Colin K.; Ouzts, Peter J.
1989-01-01
A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach to component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. The simulation was validated by comparing output from the ADSIM simulation with that of a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.
The Radiological Physics Center's standard dataset for small field size output factors.
Followill, David S; Kry, Stephen F; Qin, Lihong; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Aguirre, Jose Francisco; Ibbott, Geoffrey S
2012-08-08
Delivery of accurate intensity-modulated radiation therapy (IMRT) or stereotactic radiotherapy depends on a multitude of steps in the treatment delivery process. These steps range from imaging of the patient to dose calculation to machine delivery of the treatment plan. Within the treatment planning system's (TPS) dose calculation algorithm, various unique small field dosimetry parameters are essential, such as multileaf collimator modeling and field size dependence of the output. One of the largest challenges in this process is determining accurate small field size output factors. The Radiological Physics Center (RPC), as part of its mission to ensure that institutions deliver comparable and consistent radiation doses to their patients, conducts on-site dosimetry review visits to institutions. As a part of the on-site audit, the RPC measures the small field size output factors as might be used in IMRT treatments, and compares the resulting field size dependent output factors to values calculated by the institution's TPS. The RPC has gathered multiple small field size output factor datasets for X-ray energies ranging from 6 to 18 MV from Varian, Siemens and Elekta linear accelerators. These datasets were measured at 10 cm depth and ranged from 10 × 10 cm² to 2 × 2 cm². The field sizes were defined by the MLC, and for the Varian machines the secondary jaws were maintained at 10 × 10 cm². The RPC measurements were made with a micro-ion chamber whose volume was small enough to gather a full ionization reading even for the 2 × 2 cm² field size. The RPC-measured output factors are tabulated and are reproducible with standard deviations (SD) ranging from 0.1% to 1.5%, while the institutions' calculated values had a much larger spread, with SDs ranging up to 7.9%. The absolute average percent differences were greater for the 2 × 2 cm² field size than for the other field sizes.
The RPC's measured small field output factors provide institutions with a standard dataset against which to compare their TPS calculated values. Any discrepancies noted between the standard dataset and calculated values should be investigated with careful measurements and with attention to the specific beam model.
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method for wind power output for very short-term prediction. The nonlinear model has a cascade structure composed of two parts. One is a linear dynamic part that is driven by a Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
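The cascade structure above can be sketched directly: a Gaussian-driven AR model supplies the dynamics, and a static quantile-mapping stage matches the output distribution to observed wind power. Everything below is illustrative, not the paper's fitted model: the AR order and coefficients, the Weibull stand-in for observed power, and the use of an empirical-CDF mapping for the static nonlinearity are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Linear dynamic part: AR(2) driven by Gaussian white noise ---
# (order and coefficients chosen for illustration)
a1, a2 = 0.6, 0.2
n = 5000
u = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    u[t] = a1 * u[t - 1] + a2 * u[t - 2] + e[t]

# --- Nonlinear static part: output distribution matching ---
# Pass the (roughly Gaussian) AR output through its own empirical CDF,
# then through the inverse empirical CDF of observed wind power, so the
# model output inherits the wind power distribution while keeping the
# AR model's temporal structure.
power_obs = rng.weibull(2.0, size=n) * 1000.0   # stand-in observations
ranks = u.argsort().argsort() / (n - 1)         # empirical CDF of u
y = np.quantile(power_obs, ranks)               # inverse CDF of power
```

For one-step-ahead prediction, the AR part forecasts `u[t+1]` and the same static map converts that forecast into a power value.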
NASA Astrophysics Data System (ADS)
Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.
2015-12-01
A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25–5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10 TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) A SCL data model enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop, (2) A parallel NetCDF-to-CSV converter supports NU-WRF and GCE model outputs, (3) A technique visualizes Hadoop-resident data with IDL, (4) A technique subsets Hadoop-resident data, compliant to the SCL data model, with HIVE or Impala via HUE's Web interface, (5) A prototype enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located.
We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a local computer, and inter-compare CRM output and data with GCE and NU-WRF.
Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo
2015-02-01
The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV⁻¹ while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV⁻¹. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. 
Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR, resulting in a bulk absorption coefficient of 5 × 10⁻⁵ μm⁻¹. The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5 × 10⁻⁵ μm⁻¹. To the authors' knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
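The Moving Discrete Fourier Transform named above can be illustrated with the standard sliding-DFT recurrence, which updates one monitored frequency bin per new sample instead of recomputing a full transform. This is a generic sketch of that recurrence under the assumption that the paper's MDFT works similarly; the function name is ours.

```python
import cmath

def moving_dft_bin(samples, k, N):
    """Track DFT bin k of the most recent N samples recursively.
    Each new sample costs one complex multiply-add per monitored bin:
    drop the oldest sample, add the new one, rotate the phase reference.
    After exactly N samples the value equals the direct N-point DFT bin."""
    w = cmath.exp(2j * cmath.pi * k / N)
    X = 0j
    window = [0.0] * N            # circular buffer of the last N samples
    out = []
    for i, x in enumerate(samples):
        oldest = window[i % N]
        window[i % N] = x
        X = (X + x - oldest) * w  # sliding-DFT update
        out.append(X)
    return out
```

Monitoring a handful of designer-chosen frequencies this way, on both the input and output signals, yields the frequency-response samples that the recursive parameter estimator then fits.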
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states, and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to map from the input domain to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating-trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than results based on a parameterized data-driven artificial neural network model.
Behavioral Implications of Piezoelectric Stack Actuators for Control of Micromanipulation
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Celanovic, Nikola
1996-01-01
A lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and in particular for microrobotic applications requiring accurate position and/or force control. In addition to describing the input-output dynamic behavior, the proposed model explains aspects of non-intuitive behavioral phenomena evinced by piezoelectric actuators, such as the input-output rate-independent hysteresis and the change in mechanical stiffness that results from altering electrical load. The authors incorporate a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data.
Description of the most current draft of the NONROAD model and how this version differs from prior versions. Nationwide model outputs are presented and compared for HC, CO, NOx, PM, SOx (SO2), and fuel consumption, for diesel and for spark-ignition engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana, S.; Damiani, R.; vanDam, J.
As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, NREL tested a small horizontal axis wind turbine in the field at the National Wind Technology Center (NWTC). The test turbine was a 2.1-kW downwind machine mounted on an 18-meter multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads. In this project, we compared fatigue loads as measured in the field, as predicted by the aeroelastic model, and as calculated using the simplified design equations.
Susan L. King
2003-01-01
The performance of two classifiers, logistic regression and neural networks, are compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1 and all of the trees below the threshold are classified as...
Geophysical, archaeological, and historical evidence support a solar-output model for climate change
Perry, Charles A.; Hsu, Kenneth J.
2000-01-01
Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2^N (where N is a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide called "little ice ages," similar to the period anno Domini (A.D.) 1280–1860 and recurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate-sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum.
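The additive harmonic construction is simple enough to sketch. The amplitude schedule and the range of N below are illustrative assumptions, not the paper's calibrated values; the point is only the superposition of cycles with periods 11 × 2^N years and slowly growing amplitudes.

```python
import math

def solar_output(t_years, n_min=-2, n_max=8, base=11.0):
    """Toy additive solar-output model: superimpose harmonic cycles of
    period base * 2**N years, with amplitude increasing slightly with
    period length (amplitudes and phases here are made up)."""
    total = 0.0
    for N in range(n_min, n_max + 1):
        period = base * 2.0 ** N
        amplitude = 1.0 + 0.1 * (N - n_min)   # slightly increasing
        total += amplitude * math.cos(2.0 * math.pi * t_years / period)
    return total
```

In the paper the time axis of such a sum is anchored by calibrating it to the Pleistocene/Holocene boundary at 9,000 years before present, after which warm and cold excursions can be read off against proxy evidence.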
A comparative study of linear and nonlinear MIMO feedback configurations
NASA Technical Reports Server (NTRS)
Desoer, C. A.; Lin, C. A.
1984-01-01
In this paper, a comparison is conducted of several feedback configurations which have appeared in the literature (e.g. unity-feedback, model-reference, etc.). The linear time-invariant multi-input multi-output case is considered. For each configuration, the stability conditions are specified, the relation between achievable I/O maps and the achievable disturbance-to-output maps is examined, and the effect of various subsystem perturbations on the system performance is studied. In terms of these considerations, it is demonstrated that one of the configurations considered is better than all the others. The results are then extended to the nonlinear multi-input multi-output case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, Kenneth L.; Sturcken, Noah Andrew
Power controller includes an output terminal having an output voltage, at least one clock generator to generate a plurality of clock signals and a plurality of hardware phases. Each hardware phase is coupled to the at least one clock generator and the output terminal and includes a comparator. Each hardware phase is configured to receive a corresponding one of the plurality of clock signals and a reference voltage, combine the corresponding clock signal and the reference voltage to produce a reference input, generate a feedback voltage based on the output voltage, compare the reference input and the feedback voltage using the comparator and provide a comparator output to the output terminal, whereby the comparator output determines a duty cycle of the power controller. An integrated circuit including the power controller is also provided.
Arctic Ocean Model Intercomparison Using Sound Speed
NASA Astrophysics Data System (ADS)
Dukhovskoy, D. S.; Johnson, M. A.
2002-05-01
The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m and 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer; they are filtered out in the upper layer. There is no seasonal signal in the deep layer and the monthly means oscillate insignificantly about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19-year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmosphere forcing. To compare data from the three models we have used a one-sample t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of tested data is violated), and the one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.
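Two of the test statistics used above are easy to compute from first principles. This is a generic sketch (pure NumPy rather than a stats library) of the one-sample t statistic and the one-way ANOVA F statistic; the Wilcoxon signed-rank test is omitted for brevity, and in practice each statistic would be compared against its reference distribution to get a p-value.

```python
import numpy as np

def one_sample_t(x, mu0):
    """t statistic for H0: mean(x) == mu0."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))

def anova_f(*groups):
    """One-way ANOVA F statistic across several model outputs:
    between-group mean square over within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F across the three models' layer-mean sound speeds rejects the hypothesis that the models share the same mean state.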
NASA Astrophysics Data System (ADS)
Lakshmi, V.; Fayne, J.; Bolten, J. D.
2016-12-01
We will use satellite data from TRMM (Tropical Rainfall Measuring Mission), AMSR (Advanced Microwave Scanning Radiometer), GRACE (Gravity Recovery and Climate Experiment) and MODIS (Moderate Resolution Imaging Spectroradiometer) and model output from NASA GLDAS (Global Land Data Assimilation System) to understand the linkages between hydrological variables. These hydrological variables include precipitation, soil moisture, vegetation index, surface temperature, ET, and total water. We will present results for major river basins such as the Amazon, Colorado, Mississippi, California, Danube, Nile, Congo, Yangtze, Mekong, Murray-Darling and Ganga-Brahmaputra. The major floods and droughts in these watersheds will be mapped in time and space using the satellite data and model outputs mentioned above. We will analyze the various hydrological variables and conduct a synergistic study during times of floods and droughts. In order to compare hydrological variables between river basins with vastly different climate and land use, we construct an index that is scaled by the climatology. This allows us to compare across different climate, topography, soils and land use regimes. The analysis shows that the hydrological variables derived from satellite data and NASA models clearly reflect the hydrological extremes. This is especially true when data from different sensors are analyzed together - for example rainfall data from TRMM and total water data from GRACE. Such analyses will help to construct prediction tools for water resources applications.
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty on rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters on the output of the hydrological model. In order to be able to treat the regular model parameters and input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To tackle the latter issue, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
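The rainfall-multiplier comparison described above can be sketched with a Saltelli-style Monte Carlo estimate of first-order Sobol' indices. The toy runoff model, the multiplier range, and the storage parameter below are illustrative assumptions, not the study's actual SWAT or NAM configuration:

```python
import random
import statistics

def toy_runoff(m, k):
    """Hypothetical event response: m is a rainfall multiplier
    (input uncertainty), k a storage-loss parameter."""
    rain = 10.0 * m          # perturbed event rainfall (mm)
    return rain * (1.0 - k)  # fraction not retained in storage

def sobol_first_order(model, bounds, n=20000, seed=1):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices."""
    rng = random.Random(seed)
    d = len(bounds)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    A = [sample() for _ in range(n)]
    B = [sample() for _ in range(n)]
    fA = [model(*x) for x in A]
    fB = [model(*x) for x in B]
    var = statistics.pvariance(fA + fB)
    indices = []
    for i in range(d):
        # A with column i taken from B
        AB = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fAB = [model(*x) for x in AB]
        indices.append(statistics.fmean(
            fb * (fab - fa) for fa, fb, fab in zip(fA, fB, fAB)) / var)
    return indices

# rainfall multiplier in [0.7, 1.3], storage parameter in [0.1, 0.5]
s_mult, s_k = sobol_first_order(toy_runoff, [(0.7, 1.3), (0.1, 0.5)])
```

In the study's terms, a multiplier index that is large relative to the indices of the regular parameters would indicate that rainfall uncertainty dominates the model output.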
NASA Astrophysics Data System (ADS)
Leuliette, E.; Nerem, S.; Jakub, T.
2006-07-01
Recently, multiple ensemble climate simulations have been produced for the forthcoming Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Nearly two dozen coupled ocean-atmosphere models have contributed output for a variety of climate scenarios. One scenario, the climate of the 20th century experiment (20C3M), produces model output that can be compared to the long record of sea level provided by altimetry. Generally, the output from the 20C3M runs is used to initialize simulations of future climate scenarios. Hence, validation of the 20C3M experiment results is crucial to the goals of the IPCC. We present comparisons of global mean sea level (GMSL), global mean steric sea level change, and regional patterns of sea level change from these models to results from altimetry, tide gauge measurements, and reconstructions.
General Circulation Model Output for Forest Climate Change Research and Applications
Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi
1993-01-01
This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research interest, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for the minimization. The TS-based SI algorithm is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) with line search. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
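A minimal Tabu Search of the kind proposed can be sketched as follows; the scalar objective is a hypothetical stand-in for the nuclear-norm cost, not the authors' actual Hankel-matrix formulation:

```python
def tabu_search(f, x0, step=0.1, iters=200, tabu_len=10):
    """Minimal Tabu Search sketch: evaluate a discrete neighbourhood,
    move to the best non-tabu neighbour (even uphill, to escape
    local minima), and keep a short-term memory of visited points."""
    x, best, best_f = x0, x0, f(x0)
    tabu = [x0]
    for _ in range(iters):
        neighbours = [round(x + step * d, 6) for d in (-3, -2, -1, 1, 2, 3)]
        candidates = [n for n in neighbours if n not in tabu] or neighbours
        x = min(candidates, key=f)      # best move, tabu moves excluded
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)                 # expire the oldest tabu entry
        if f(x) < best_f:
            best, best_f = x, f(x)      # track best-so-far solution
    return best, best_f

# toy non-convex objective standing in for the identification cost
objective = lambda x: (x - 1.0) ** 2 + 0.3 * abs(x + 2.0)
best_x, best_val = tabu_search(objective, x0=-4.0)
```

The short-term memory is what distinguishes this from greedy descent: the search is allowed to accept worsening moves, but it cannot immediately cycle back to recently visited points.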
NASA Astrophysics Data System (ADS)
Woodworth-Jefcoats, Phoebe A.; Polovina, Jeffrey J.; Howell, Evan A.; Blanchard, Julia L.
2015-11-01
We compare two ecosystem model projections of 21st century climate change and fishing impacts in the central North Pacific. Both a species-based and a size-based ecosystem modeling approach are examined. While both models project a decline in biomass across all sizes in response to climate change and a decline in large fish biomass in response to increased fishing mortality, the models vary significantly in their handling of climate and fishing scenarios. For example, based on the same climate forcing the species-based model projects a 15% decline in catch by the end of the century while the size-based model projects a 30% decline. Disparities in the models' output highlight the limitations of each approach by showing the influence model structure can have on model output. The aspects of bottom-up change to which each model is most sensitive appear linked to model structure, as does the propagation of interannual variability through the food web and the relative impact of combined top-down and bottom-up change. Incorporating integrated size- and species-based ecosystem modeling approaches into future ensemble studies may help separate the influence of model structure from robust projections of ecosystem change.
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
Regionalisation of statistical model outputs creating gridded data sets for Germany
NASA Astrophysics Data System (ADS)
Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas
2016-04-01
The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://.reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (a regionalisation method based on weather patterns). The WETTREG model output covers the period of 1951 to 2100 with a daily temporal resolution. For this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. This regionalisation thus allows statistical model outputs to be compared with those of dynamical climate models. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity and covers the period of 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models.
The presentation will consist of two parts describing the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG. First, an overview of the HYRAS data set and the regionalisation methods for precipitation (the REGNIE method, based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation) will be given. Second, results of the regionalisation of the WETTREG model output will be shown.
Auto- and hetero-associative memory using a 2-D optical logic gate
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1989-01-01
An optical associative memory system suitable for both auto- and hetero-associative recall is demonstrated. This system utilizes Hamming distance as the similarity measure between a binary input and a memory image with the aid of a two-dimensional optical EXCLUSIVE OR (XOR) gate and a parallel electronics comparator module. Based on the Hamming distance measurement, this optical associative memory performs a nearest neighbor search and the result is displayed in the output plane in real-time. This optical associative memory is fast and noniterative and produces no output spurious states as compared with that of the Hopfield neural network model.
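The XOR/Hamming-distance recall scheme described above can be sketched in a few lines; the bit patterns and stored associations below are hypothetical examples, not data from the demonstration:

```python
def hamming(a, b):
    """Hamming distance via XOR, as in the optical EXCLUSIVE OR gate."""
    return bin(a ^ b).count("1")

def recall(memories, probe):
    """Nearest-neighbour associative recall: return the stored
    (key, output) pair whose key is closest to the probe."""
    return min(memories, key=lambda kv: hamming(kv[0], probe))

# hetero-associative store: binary input pattern -> associated output
memories = [(0b101100, 0b0001), (0b010011, 0b0010), (0b111111, 0b0100)]
key, out = recall(memories, 0b101110)  # probe: one bit flipped from key 1
```

Because recall is a single nearest-neighbour search rather than an iterative relaxation, there are no spurious attractor states of the kind the abstract contrasts with the Hopfield model.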
NASA Technical Reports Server (NTRS)
Watson, Leela R.
2011-01-01
The 45th Weather Squadron Launch Weather Officers use the 12-km resolution North American Mesoscale model (MesoNAM) forecasts to support launch weather operations. In Phase I, the performance of the model at KSC/CCAFS was measured objectively by conducting a detailed statistical analysis of model output compared to observed values. The objective analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors in the KSC/CCAFS wind tower network. In Phase II, the AMU modified the current tool by adding an additional 15 months of model output to the database and recalculating the verification statistics. The bias, standard deviation of bias, Root Mean Square Error, and Hypothesis test for bias were calculated to verify the performance of the model. The results indicated that the accuracy decreased as the forecast progressed, there was a diurnal signal in temperature with a cool bias during the late night and a warm bias during the afternoon, and there was a diurnal signal in dewpoint temperature with a low bias during the afternoon and a high bias during the late night.
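The verification statistics named above (bias, standard deviation of bias, RMSE, and a hypothesis test for bias) can be sketched as follows; the temperature values are hypothetical, not AMU data:

```python
import math
import statistics

def verify(forecast, observed):
    """Point-verification statistics: bias, standard deviation of the
    bias, RMSE, and a t statistic for the hypothesis that the mean
    bias is zero."""
    errors = [f - o for f, o in zip(forecast, observed)]
    bias = statistics.fmean(errors)
    sd = statistics.stdev(errors)
    rmse = math.sqrt(statistics.fmean(e * e for e in errors))
    t = bias / (sd / math.sqrt(len(errors)))
    return {"bias": bias, "sd": sd, "rmse": rmse, "t": t}

# hypothetical afternoon 2 m temperatures (deg C): model vs tower sensor
stats = verify(forecast=[31.2, 30.8, 32.1, 31.5],
               observed=[30.1, 30.0, 31.0, 30.6])
# a positive bias here would match the documented afternoon warm bias
```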
Comparing estimates of EMEP MSC-W and UFORE models in air pollutant reduction by urban trees.
Guidolotti, Gabriele; Salviato, Michele; Calfapietra, Carlo
2016-10-01
There is growing interest in identifying and quantifying the benefits provided by the presence of trees in the urban environment in order to improve the environmental quality in cities. However, evaluating and estimating plant efficiency in removing atmospheric pollutants is rather complicated, because of the high number of factors involved and the difficulty of estimating the effect of the interactions between the different components. In this study, the EMEP MSC-W model was down-scaled to tree level, allowing its application to an industrial-urban green area in Northern Italy. Moreover, the annual outputs were compared with the outputs of UFORE (nowadays i-Tree), a leading model for urban forest applications. Although the EMEP MSC-W model and UFORE are semi-empirical models designed for different applications, the comparison, based on O3, NO2 and PM10 removal, showed good agreement in the estimates and highlighted how the down-scaling methodology presented in this study may offer significant opportunities for further development.
Comparative study of DPAL and XPAL systems and selection principle of parameters
NASA Astrophysics Data System (ADS)
Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui
2016-10-01
A theoretical model based on a common pump structure is proposed to analyze the laser output characteristics of DPAL (diode pumped alkali vapor laser) and XPAL (exciplex pumped alkali laser). The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems with broadband pumping, at a pump linewidth several times that of DPAL. Operating parameters including pump intensity, temperature, cell length, mixed-gas concentration, pump linewidth and output mirror reflectivity are analyzed for the DPAL and XPAL systems based on the kinetic model. The results show better performance for the Cs-Ar XPAL laser, which requires relatively high Ar concentration, high pump intensity and high temperature; conversely, the Cs DPAL laser requires lower temperature and lower pump intensity. In addition, predictions of the selection principle for temperature and cell length are also presented. The concept of an equivalent "alkali areal density", defined as the product of the alkali density and the cell length, is proposed. The results show that the output characteristics of DPAL (or XPAL) systems with the same alkali areal density but different temperatures turn out to be equal; it is the areal density that directly reflects the potential of DPAL or XPAL systems. A more detailed analysis of the similar influences of cavity parameters at the same areal density is also presented, along with detailed results for continuous-wave DPAL and XPAL performance as a function of pump linewidth and mixed-gas pressure and an analysis of the influence of the output coupler.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.
2014-08-15
Flattening filter free (FFF) beams have been adopted by many clinics and used for patient treatment. However, compared to the traditional flattened beams, we have limited knowledge of FFF beams. In this study, we successfully modeled the 6 MV FFF beam for Varian TrueBeam accelerator with the Monte Carlo (MC) method. Both the percentage depth dose and profiles match well to the Golden Beam Data (GBD) from Varian. MC simulations were then performed to predict the relative output factors. The in-water output ratio, Scp, was simulated in water phantom and data obtained agrees well with GBD. The in-air output ratio, Sc, was obtained by analyzing the phase space placed at isocenter, in air, and computing the ratio of water Kerma rates for different field sizes. The phantom scattering factor, Sp, can then be obtained from the traditional way of taking the ratio of Scp and Sc. We also simulated Sp using a recently proposed method based on only the primary beam dose delivery in water phantom. Because there is no concern of lateral electronic disequilibrium, this method is more suitable for small fields. The results from both methods agree well with each other. The flattened 6 MV beam was simulated and compared to 6 MV FFF. The comparison confirms that 6 MV FFF has less scattering from the Linac head and less phantom scattering contribution to the central axis dose, which will be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems.
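The traditional relation between the three output factors, Sp = Scp / Sc, can be sketched directly; the field sizes and factor values below are hypothetical, not the paper's measured data:

```python
def phantom_scatter_factor(scp, sc):
    """Traditional relation between output factors: the phantom
    scatter factor Sp is the ratio of the in-water output ratio Scp
    to the in-air (head scatter) output ratio Sc, per field size."""
    return {fs: scp[fs] / sc[fs] for fs in scp}

# hypothetical relative output factors, normalised to a 10x10 cm field
scp = {"5x5": 0.950, "10x10": 1.000, "20x20": 1.045}
sc  = {"5x5": 0.975, "10x10": 1.000, "20x20": 1.020}
sp = phantom_scatter_factor(scp, sc)
```

This ratio-based route is the one the abstract calls "traditional"; the alternative primary-beam method it describes avoids the lateral electronic disequilibrium that makes Sc hard to determine for small fields.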
NASA Astrophysics Data System (ADS)
Scolini, C.; Verbeke, C.; Gopalswamy, N.; Wijsen, N.; Poedts, S.; Mierla, M.; Rodriguez, L.; Pomoell, J.; Cramer, W. D.; Raeder, J.
2017-12-01
Coronal Mass Ejections (CMEs) and their interplanetary counterparts are considered to be the major space weather drivers. An accurate modelling of their onset and propagation up to 1 AU represents a key issue for more reliable space weather forecasts, and predictions about their actual geo-effectiveness can only be performed by coupling global heliospheric models to 3D models describing the terrestrial environment, e.g. magnetospheric and ionospheric codes in the first place. In this work we perform a Sun-to-Earth comprehensive analysis of the July 12, 2012 CME with the aim of testing the space weather predictive capabilities of the newly developed EUHFORIA heliospheric model integrated with the Gibson-Low (GL) flux rope model. In order to achieve this goal, we make use of a model chain approach by using EUHFORIA outputs at Earth as input parameters for the OpenGGCM magnetospheric model. We first reconstruct the CME kinematic parameters by means of single- and multi-spacecraft reconstruction methods based on coronagraphic and heliospheric CME observations. The magnetic field-related parameters of the flux rope are estimated based on imaging observations of the photospheric and low coronal source regions of the eruption. We then simulate the event with EUHFORIA, testing the effect of the different CME kinematic input parameters on simulation results at L1. We compare simulation outputs with in-situ measurements of the interplanetary CME and use them as input for the OpenGGCM model, so as to investigate the magnetospheric response to solar perturbations. From simulation outputs we extract global geomagnetic activity indices and compare them with actual data records and with results obtained using empirical relations. Finally, we discuss the forecasting capabilities of this kind of approach and its future improvements.
Developing Snow Model Forcing Data From WRF Model Output to Aid in Water Resource Forecasting
NASA Astrophysics Data System (ADS)
Havens, S.; Marks, D. G.; Watson, K. A.; Masarik, M.; Flores, A. N.; Kormos, P.; Hedrick, A. R.
2015-12-01
Traditional operational modeling tools used by water managers in the west are challenged by more frequently occurring uncharacteristic stream flow patterns caused by climate change. Water managers are now turning to new models based on the physical processes within a watershed to combat the increasing number of events that do not follow the historical patterns. The USDA-ARS has provided near real time snow water equivalent (SWE) maps using iSnobal since WY2012 for the Boise River Basin in southwest Idaho and since WY2013 for the Tuolumne Basin in California that feeds the Hetch Hetchy reservoir. The goal of these projects is to not only provide current snowpack estimates but to use the Weather Research and Forecasting (WRF) model to drive iSnobal in order to produce a forecasted stream flow when coupled to a hydrology model. The first step is to develop methods to create snow model forcing data from WRF outputs. Using a reanalysis 1 km WRF dataset from WY2009 over the Boise River Basin, WRF model fields such as surface air temperature, relative humidity, wind, precipitation, cloud cover, and incoming longwave radiation must be downscaled for use in iSnobal. iSnobal results forced with WRF output are validated at point locations throughout the basin, as well as compared with iSnobal results forced with traditional weather station data. The presentation will explore the differences in forcing data derived from WRF outputs and weather stations and how this affects the snowpack distribution.
USDA-ARS?s Scientific Manuscript database
Models are often used to quantify how land use change and management impact soil organic carbon (SOC) stocks because it is often not feasible to use direct measuring methods. Because models are simplifications of reality, it is essential to compare model outputs with measured values to evaluate mode...
An Economic Analysis of Investment in the United States Shipbuilding Industry
2010-06-01
using U.S. Bureau of Economic Analysis (BEA) input/output data and the "Leontief inversion process" modeled at Carnegie Mellon University. This sector was compared with five alternative investments. Second, the benefits of the shipyard-related...
USDA-ARS?s Scientific Manuscript database
Despite increased interest in watershed scale model simulations, literature lacks application of long-term data in fuzzy logic simulations and comparing outputs with physically based models such as APEX (Agricultural Policy Environmental eXtender). The objective of this study was to develop a fuzzy...
European Approaches to Quality Assurance: Models, Characteristics and Challenges.
ERIC Educational Resources Information Center
van Damme, D.
2000-01-01
Examines models, characteristics, and challenges of quality assurance in higher education in the Netherlands, Belgium, Germany, Denmark, France, Finland, Italy, and Spain. Notes a common move toward institutional autonomy and output oriented steering, and the absence of accreditation procedures comparable to those in Anglo-Saxon countries. Finds…
Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.
Boyd, Matthew T
2018-02-01
Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software PVsyst for each array, and its predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
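The monthly performance ratio (PR) metric used in the comparison can be sketched as follows; the energy and irradiation figures are hypothetical, not NIST measurements:

```python
def performance_ratio(energy_kwh, rated_kw, irradiation_kwh_m2):
    """IEC 61724-style performance ratio: delivered AC energy divided
    by what a loss-free array of the same rated DC power would produce
    at the measured plane-of-array irradiation (reference 1 kW/m2)."""
    reference_yield = irradiation_kwh_m2 / 1.0  # hours at 1 kW/m2
    array_yield = energy_kwh / rated_kw         # kWh per kW installed
    return array_yield / reference_yield

# hypothetical month for a 271 kW array
pr = performance_ratio(energy_kwh=33000.0, rated_kw=271.0,
                       irradiation_kwh_m2=155.0)
```

Because PR normalises by both rated power and received irradiation, it allows the three differently sized and oriented arrays to be compared on one scale; the residual spread then reflects losses such as operating temperature, as the abstract notes.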
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
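A minimal LN cascade of the kind analyzed above can be sketched with an exponential linear filter followed by a rectifying static nonlinearity; the filter shape, gain, and threshold below are illustrative assumptions, not the parameter-free forms derived in the paper:

```python
import math

def ln_rate(signal, dt, tau, gain, threshold):
    """Minimal linear-nonlinear cascade sketch: convolve the input
    with an exponential filter of timescale tau, then apply a static
    rectifying nonlinearity to obtain an instantaneous firing rate."""
    filtered, state = [], 0.0
    decay = math.exp(-dt / tau)
    for s in signal:
        state = state * decay + s * (1.0 - decay)  # leaky integration
        filtered.append(state)
    # static nonlinearity: rectified linear transform of the filtered input
    return [max(0.0, gain * (x - threshold)) for x in filtered]

# step input: the rate stays zero while the filtered input is below
# threshold, then rises smoothly on the filter timescale
inp = [0.0] * 50 + [1.0] * 150
rate = ln_rate(inp, dt=1.0, tau=10.0, gain=40.0, threshold=0.5)
```

The adaptive-timescale extension described in the abstract would amount to making `tau` a function of the instantaneous rate rather than a constant.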
Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)
Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.
2015-01-01
These datasets comprise the model output from Phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.
NASA Astrophysics Data System (ADS)
Korbacz, A.; Brzeziński, A.; Thomas, M.
2008-04-01
We use new estimates of the global atmospheric and oceanic angular momenta (AAM, OAM) to study their influence on LOD/UT1. The AAM series was calculated from the output fields of the ERA-40 atmospheric reanalysis. The OAM series is the outcome of a simulation with the global ocean model OMCT driven by global fields of atmospheric parameters from the ERA-40 reanalysis. The excitation data cover the period between 1963 and 2001. Our calculations concern atmospheric and oceanic effects in LOD/UT1 over periods between 20 days and decades. Results are compared to those derived from alternative AAM/OAM data sets.
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties, while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns, with appropriate weights being assigned to the back-propagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.
Evaluating vaccination strategies to control foot-and-mouth disease: a model comparison study.
Roche, S E; Garner, M G; Sanson, R L; Cook, C; Birch, C; Backer, J A; Dube, C; Patyk, K A; Stevenson, M A; Yu, Z D; Rawdon, T G; Gauntlett, F
2015-04-01
Simulation models can offer valuable insights into the effectiveness of different control strategies and act as important decision support tools when comparing and evaluating outbreak scenarios and control strategies. An international modelling study was performed to compare a range of vaccination strategies in the control of foot-and-mouth disease (FMD). Modelling groups from five countries (Australia, New Zealand, USA, UK, The Netherlands) participated in the study. Vaccination is increasingly being recognized as a potentially important tool in the control of FMD, although there is considerable uncertainty as to how and when it should be used. We sought to compare model outputs and assess the effectiveness of different vaccination strategies in the control of FMD. Using a standardized outbreak scenario based on data from an FMD exercise in the UK in 2010, the study showed general agreement between respective models in terms of the effectiveness of vaccination. Under the scenario assumptions, all models demonstrated that vaccination with 'stamping-out' of infected premises led to a significant reduction in predicted epidemic size and duration compared to the 'stamping-out' strategy alone. For all models there were advantages in vaccinating cattle-only rather than all species, using 3-km vaccination rings immediately around infected premises, and starting vaccination earlier in the control programme. This study has shown that certain vaccination strategies are robust even to substantial differences in model configurations. This result should increase end-user confidence in conclusions drawn from model outputs. These results can be used to support and develop effective policies for FMD control.
A network-based analysis of CMIP5 "historical" experiments
NASA Astrophysics Data System (ADS)
Bracco, A.; Foudalis, I.; Dovrolis, C.
2012-12-01
In computer science, "complex network analysis" refers to a set of metrics, modeling tools and algorithms commonly used in the study of complex nonlinear dynamical systems. Its main premise is that the underlying topology or network structure of a system has a strong impact on its dynamics and evolution. By making it possible to investigate local and non-local statistical interactions, network analysis provides a powerful, but only marginally explored, framework to validate climate models and investigate teleconnections, assessing their strength, range, and impacts on the climate system. In this work we propose a new, fast, robust and scalable methodology to examine, quantify, and visualize climate sensitivity, while constraining general circulation model (GCM) outputs with observations. The goal of our novel approach is to uncover relations in the climate system that are not (or not fully) captured by more traditional methodologies used in climate science and often adopted from nonlinear dynamical systems analysis, and to explain known climate phenomena in terms of the network structure or its metrics. Our methodology is based on a solid theoretical framework and employs mathematical and statistical tools that have so far been exploited only tentatively in climate research. Suitably adapted to the climate problem, these tools can assist in visualizing the trade-offs in representing global links and teleconnections among different data sets. Here we present the methodology and compare network properties for different reanalysis data sets and a suite of CMIP5 coupled GCM outputs. Through an extensive model intercomparison in terms of the climate network that each model leads to, we quantify how each model reproduces major teleconnections, rank model performances, and identify common or specific errors in comparing model outputs and observations.
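The basic construction of a correlation-based climate network, as described above, can be sketched as follows. The threshold, signals, and degree metric here are illustrative assumptions, not the authors' methodology.

```python
import numpy as np

def correlation_network(series, threshold=0.5):
    """Build an undirected network from time series: each row of `series`
    is a node (e.g. a grid point), and an edge links two nodes when the
    absolute Pearson correlation of their series exceeds the threshold."""
    corr = np.corrcoef(series)          # (n_nodes, n_nodes) correlation matrix
    adj = np.abs(corr) > threshold      # boolean adjacency matrix
    np.fill_diagonal(adj, False)        # no self-loops
    degree = adj.sum(axis=1)            # simplest network metric: node degree
    return adj, degree

# Toy example: two nearly in-phase signals and one in quadrature
t = np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
series = np.vstack([np.sin(t), np.sin(t) + 0.1 * np.cos(3 * t), np.cos(t)])
adj, degree = correlation_network(series)
```

On this toy data the two in-phase signals are linked while the quadrature signal remains isolated; on real fields, metrics such as degree distributions or clustering would then be compared between reanalyses and GCM outputs.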
Paleoclimate reconstruction through Bayesian data assimilation
NASA Astrophysics Data System (ADS)
Fer, I.; Raiho, A.; Rollinson, C.; Dietze, M.
2017-12-01
Methods of paleoclimate reconstruction from plant-based proxy data rely on the assumption of a static vegetation-climate link, which is often established between modern climate and vegetation. This approach might result in biased climate reconstructions, as it does not account for vegetation dynamics. Predictive tools such as process-based dynamic vegetation models (DVMs) and their Bayesian inversion could be used to construct the link between plant-based proxy data and palaeoclimate more realistically. In other words, given the proxy data, it is possible to infer the climate that could result in that particular vegetation composition by comparing the DVM outputs to the proxy data within a Bayesian state data assimilation framework. In this study, using fossil pollen data from five sites across the northern hardwood region of the US, we assimilate fractional composition and aboveground biomass into the dynamic vegetation models LINKAGES, LPJ-GUESS and ED2. To do this, starting from four Global Climate Model outputs, we generate an ensemble of downscaled meteorological drivers for the period 850-2015. Then, as a first pass, we weight these ensembles based on their fidelity to independent paleoclimate proxies. Next, we run the models with this ensemble of drivers and, comparing the ensemble model output to the vegetation data, adjust the model state estimates towards the data. At each iteration, we also reweight the climate values that make the model and data consistent, producing a reconstructed climate time-series dataset. We validated the method using present-day datasets, as well as a synthetic dataset, and then assessed the consistency of results across ecosystem models. Our method allows the combination of multiple data types to reconstruct the paleoclimate, with associated uncertainty estimates, based on ecophysiological and ecological processes rather than phenomenological correlations with proxy data.
Colors of attraction: Modeling insect flight to light behavior.
Donners, Maurice; van Grunsven, Roy H A; Groenendijk, Dick; van Langevelde, Frank; Bikker, Jan Willem; Longcore, Travis; Veenendaal, Elmar
2018-06-26
Light sources attract nocturnal flying insects, but some lamps attract more insects than others. The relation between the properties of a light source and the number of attracted insects is, however, poorly understood. We developed a model to quantify the attractiveness of light sources based on the spectral output. This model is fitted using data from field experiments that compare a large number of different light sources. We validated this model using two additional datasets, one for all insects and one excluding the numerous Diptera. Our model facilitates the development and application of light sources that attract fewer insects without the need for extensive field tests and it can be used to correct for spectral composition when formulating hypotheses on the ecological impact of artificial light. In addition, we present a tool allowing the conversion of the spectral output of light sources to their relative insect attraction based on this model. © 2018 Wiley Periodicals, Inc.
A two-stage DEA approach for environmental efficiency measurement.
Song, Malin; Wang, Shuhong; Liu, Wei
2014-05-01
The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing some undesirable outputs. The illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision-making units.
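For context, the classical input-oriented CCR efficiency score, the basic DEA model that the SBM approach extends, can be computed as a linear program. The sketch below is illustrative only; it does not implement the paper's two-stage SBM with undesirable outputs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k (multiplier form).
    X: (n, m) input matrix, Y: (n, s) output matrix for n DMUs.
    Variables are output weights u and input weights v; we maximize
    u.Y[k] subject to v.X[k] = 1 and u.Y[j] - v.X[j] <= 0 for all j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(m)])   # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])                  # u.Y[j] - v.X[j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)  # v.X[k] = 1
    b_eq = [1.0]
    # default bounds are (0, None): non-negative weights, as CCR requires
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    return -res.fun
```

With one input and one output, a DMU producing 2 units from 1 input scores 1.0 (efficient), while a DMU producing 2 units from 2 inputs scores 0.5.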
NASA Astrophysics Data System (ADS)
Van Pelt, S.; Kohfeld, K. E.; Allen, D. M.
2015-12-01
The decline of the Mayan Civilization is thought to have been caused by a series of droughts that affected the Yucatan Peninsula during the Terminal Classic Period (T.C.P.), 800-1000 AD. The goals of this study are two-fold: (a) to compare paleo-model simulations of the past 1000 years with a compilation of multiple proxies of changes in moisture conditions for the Yucatan Peninsula during the T.C.P. and (b) to use this comparison to inform the modeling of groundwater recharge in this region, with a focus on generating the daily climate data series needed as input to a groundwater recharge model. To achieve the first objective, we compiled a dataset of five proxies from seven locations across the Yucatan Peninsula, to be compared with temperature and precipitation output from the Community Climate System Model Version 4 (CCSM4), which is part of the Coupled Model Intercomparison Project Phase 5 (CMIP5) past1000 experiment. The proxy dataset includes oxygen isotopes from speleothems and gastropod/ostracod shells (11 records); and sediment density, mineralogy, and magnetic susceptibility records from lake sediment cores (3 records). The proxy dataset is supplemented by a compilation of reconstructed temperatures using pollen and tree ring records for North America (archived in the PAGES2k global network data). Our preliminary analysis suggests that many of these datasets show evidence of a drier and warmer climate on the Yucatan Peninsula around the T.C.P. when compared to modern conditions, although the amplitude and timing of individual warming and drying events varies between sites. This comparison with modeled output will ultimately be used to inform backward shift factors that will be input to a stochastic weather generator. These shift factors will be based on monthly changes in temperature and precipitation and applied to a modern daily climate time series for the Yucatan Peninsula to produce a daily climate time series for the T.C.P.
ISS Plasma Environment: Status of CCMC Products for ISS Mission Ops
NASA Technical Reports Server (NTRS)
Minow, Joseph
2010-01-01
The ISS Program is currently using FPMU Ne and Te in-situ measurements to support operations and anomaly investigations, and is working to acquire alternative data sources in case the FPMU is not available. Work is progressing on CCMC tools for low-Earth-orbit ionosphere characterization; validation against FPMU data is required before model output can be used for ISS operational support. MSFC plans to continue comparing CTIP output during FPMU campaigns. Results to date have been useful in identifying the ionospheric origins of high-latitude charging environments.
ARM Cloud Radar Simulator Package for Global Climate Models Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yuying; Xie, Shaocheng
It has been challenging to directly compare U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ground-based cloud radar measurements with climate model output because of limitations or features of the observing processes and the spatial gap between the model and the single-point measurements. To facilitate the use of ARM radar data in numerical models, an ARM cloud radar simulator was developed to convert model data into pseudo-ARM cloud radar observations that mimic the instrument view of a narrow atmospheric column (as compared to a large global climate model [GCM] grid-cell), thus allowing meaningful comparison between model output and ARM cloud observations. The ARM cloud radar simulator value-added product (VAP) was developed based on the CloudSat simulator contained in the community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) (Bodas-Salcedo et al., 2011), which has been widely used in climate model evaluation with satellite data (Klein et al., 2013; Zhang et al., 2010). The essential part of the CloudSat simulator is the QuickBeam radar simulator, which is used to produce CloudSat-like radar reflectivity but is capable of simulating reflectivity for other radars (Marchand et al., 2009; Haynes et al., 2007). Adapting QuickBeam to the ARM cloud radar simulator within COSP required two primary changes: one was to set the frequency to 35 GHz for the ARM Ka-band cloud radar, as opposed to the 94 GHz used for the CloudSat W-band radar, and the second was to invert the view from the ground to space so as to attenuate the beam correctly. In addition, the ARM cloud radar simulator uses a finer vertical resolution (100 m compared to 500 m for CloudSat) to resolve the more detailed structure of clouds captured by the ARM radars.
The ARM simulator has been developed following the COSP workflow (Figure 1) and using the capabilities available in COSP wherever possible. The ARM simulator is written in Fortran 90, as is COSP, and is incorporated into COSP to facilitate use by the climate modeling community. In order to evaluate simulator output, the observational counterpart of the simulator output, radar reflectivity-height histograms (CFADs), is also generated from the ARM observations. This report includes an overview of the ARM cloud radar simulator VAP and the required simulator-oriented ARM radar data product (radarCFAD) for validating simulator output, as well as a user guide for operating the ARM radar simulator VAP.
Numerical modeling of the hydrodynamics of the Northeastern Corridor Reserve in Puerto Rico
NASA Astrophysics Data System (ADS)
Salgado-Domínguez, G.; Canals, M.
2016-02-01
To develop an appropriate management plan for the marine section of the Northeast Corridor Reserve (NECR) of Puerto Rico, it is necessary to understand the hydrodynamic connectivity between the different regions within the NECR. The USACE CMS Flow model has been implemented for the NECR using very high resolution telescoping grids, with a special focus on the complex coral reef areas of the La Cordillera Reefs Natural Reserve, established by the Department of Natural and Environmental Resources of Puerto Rico. To ensure correct application of boundary conditions and realistic representation of the tidal elevation within the NECR, water elevation model output data were compared with the Fajardo tide gauge, while the ocean current model output was compared with the depth-integrated observed currents at the CariCOOS Vieques Sound buoy. Comparison of model performance with buoy and tide gauge data has shown good agreement; however, further model tuning is necessary to optimize model performance. Further improvement of our models depends largely on obtaining more accurate boundary conditions as well as better wind forcing. We are currently implementing the USACE Particle Tracking Model (PTM) to characterize particle dispersion within the NECR. In the long term, full 3D hydrodynamic models including riverine forcing hold the key to a complete understanding of larvae and sediment dispersion within the NECR.
Efficiency measurement and the operationalization of hospital production.
Magnussen, J
1996-04-01
To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. Norwegian hospitals over the three-year period 1989-1991. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals is compared across models using various distribution-free tests. Input and output data are collected by the Norwegian Central Bureau of Statistics. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile.
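One standard distribution-free way to check whether hospital rankings are invariant to the output specification, as examined above, is a rank correlation between the efficiency scores from two model variants. This is a minimal sketch (assuming no ties), not necessarily the exact tests used in the study.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation between two efficiency score vectors
    (no ties assumed). A value near 1 means the two model specifications
    rank the hospitals almost identically."""
    rx = np.argsort(np.argsort(x))   # ranks of x
    ry = np.argsort(np.argsort(y))   # ranks of y
    d = rx - ry                      # rank differences
    n = len(x)
    return 1.0 - 6.0 * float((d * d).sum()) / (n * (n * n - 1))
```

Identical rankings give rho = 1, fully reversed rankings give rho = -1; a low rho across DEA specifications would signal that resource allocation based on one specification alone is risky.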
Validation project. This report describes the procedure used to generate the noise model's output dataset, and then it compares that dataset to the...benchmark, the Engineer Research and Development Center's Long-Range Sound Propagation dataset. It was found that the models consistently underpredict the
O'Neill, Liam; Dexter, Franklin
2005-11-01
We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
NASA Astrophysics Data System (ADS)
Yang, Jing; Zhang, Da-hai; Chen, Ying; Liang, Hui; Tan, Ming; Li, Wei; Ma, Xian-dong
2017-10-01
A novel floating pendulum wave energy converter (WEC) with the ability of tide adaptation is designed and presented in this paper. Aiming at high efficiency, the buoy's hydrodynamic shape is optimized by enumeration and comparison. Furthermore, in order to keep the buoy's well-designed leading edge always facing directly into the incoming wave, a novel transmission mechanism is adopted, called the tidal adaptation mechanism in this paper. Time-domain numerical models of a floating pendulum WEC with and without the tide adaptation mechanism are built to compare their performance at various water levels. When comparing these two WECs in terms of their average output based on the linear passive control strategy, the output power of the WEC with the tide adaptation mechanism is much steadier with changes in the water level and is always larger than that of the WEC without it.
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooker, A.; Gonder, J.; Wang, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy's Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim's calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory's website (see www.nrel.gov/fastsim).
CPAP Devices for Emergency Prehospital Use: A Bench Study.
Brusasco, Claudia; Corradi, Francesco; De Ferrari, Alessandra; Ball, Lorenzo; Kacmarek, Robert M; Pelosi, Paolo
2015-12-01
CPAP is frequently used in prehospital and emergency settings. An air-flow output minimum of 60 L/min and a constant positive pressure are 2 important features for a successful CPAP device. Unlike hospital CPAP devices, which require electricity, CPAP devices for ambulance use need only an oxygen source to function. The aim of the study was to evaluate and compare on a bench model the performance of 3 orofacial mask devices (Ventumask, EasyVent, and Boussignac CPAP system) and 2 helmets (Ventukit and EVE Coulisse) used to apply CPAP in the prehospital setting. A static test evaluated air-flow output, positive pressure applied, and FIO2 delivered by each device. A dynamic test assessed airway pressure stability during simulated ventilation. Efficiency of devices was compared based on oxygen flow needed to generate a minimum air flow of 60 L/min at each CPAP setting. The EasyVent and EVE Coulisse devices delivered significantly higher mean air-flow outputs compared with the Ventumask and Ventukit under all CPAP conditions tested. The Boussignac CPAP system never reached an air-flow output of 60 L/min. The EasyVent had significantly lower pressure excursion than the Ventumask at all CPAP levels, and the EVE Coulisse had lower pressure excursion than the Ventukit at 5, 15, and 20 cm H2O, whereas at 10 cm H2O, no significant difference was observed between the 2 devices. Estimated oxygen consumption was lower for the EasyVent and EVE Coulisse compared with the Ventumask and Ventukit. Air-flow output, pressure applied, FIO2 delivered, device oxygen consumption, and ability to maintain air flow at 60 L/min differed significantly among the CPAP devices tested. Only the EasyVent and EVE Coulisse achieved the required minimum level of air-flow output needed to ensure an effective therapy under all CPAP conditions. Copyright © 2015 by Daedalus Enterprises.
Diffracted field distributions from the HE11 mode in a hollow optical fibre for an atomic funnel
NASA Astrophysics Data System (ADS)
Ni, Yun; Liu, Nanchun; Yin, Jianping
2003-06-01
The diffracted near field distribution from an LP01 mode in a hollow optical fibre was recently calculated using a scalar model based on the weakly waveguiding approximation (Yoo et al 1999 J. Opt. B: Quantum Semiclass. Opt. 1 364). It showed a dominant Gaussian-like distribution with an increased axial intensity in the central region (not a doughnut-like distribution), so the diffracted output beam from the hollow fibre cannot be used to form an atomic funnel. Using exact solutions of the Maxwell equations based on a vector model, however, we calculate the electric field and intensity distributions of the HE11 mode in the same hollow fibre and study the diffracted near- and far-field distributions of the HE11-mode output beam under the Fresnel approximation. We analyse and compare the differences between the output beams from the HE11 and LP01 modes. Our study shows that both the near- and far-field intensity distributions of the HE11-mode output beam are doughnut-like and can be used to form a simple atomic funnel. However, it is not suitable to use the weakly waveguiding approximation to calculate the diffracted near-field distribution of the hollow fibre due to the greater refractive-index difference between the hollow region (n0 = 1) and the core (n1 = 1.45 or 1.5). Finally, the 3D intensity distribution of the HE11-mode output beam is modelled and the corresponding optical potentials for cold atoms are calculated. Some potential applications of the HE11-mode output beam in an atomic guide and funnel are briefly discussed.
NASA Astrophysics Data System (ADS)
Srinivas, P. G.; Spencer, E. A.; Vadepu, S. K.; Horton, W., Jr.
2017-12-01
We compare satellite observations of substorm electric fields and magnetic fields to the output of a low-dimensional nonlinear physics model of the nightside magnetosphere called WINDMI. The electric and magnetic field satellite data are used to calculate the E X B drift, which is one of the intermediate variables of the WINDMI model. The model uses solar wind and IMF measurements from the ACE spacecraft as input into a system of 8 nonlinear ordinary differential equations. The state variables of the differential equations represent the energy stored in the geomagnetic tail, central plasma sheet, ring current and field-aligned currents. The outputs from the model are the ground-based geomagnetic westward auroral electrojet (AL) index and the Dst index. Using ACE solar wind data, IMF data and SuperMAG identification of substorm onset times up to December 2015, we constrain the WINDMI model to trigger substorm events, and compare the model intermediate variables to THEMIS and GEOTAIL satellite data in the magnetotail. By forcing the model to be consistent with satellite electric and magnetic field observations, we are able to track the magnetotail energy dynamics, the field-aligned current contributions, and energy injections into the ring current, and ensure that they are within allowable limits. In addition, we are able to constrain the physical parameters of the model, in particular the lobe inductance, the plasma sheet capacitance, and the resistive and conductive parameters in the plasma sheet and ionosphere.
Berlin, David A; Peprah-Mensah, Harrison; Manoach, Seth; Heerdt, Paul M
2017-02-01
The study tests the hypothesis that noninvasive cardiac output monitoring based upon bioreactance (Cheetah Medical, Portland, OR) has acceptable agreement with intermittent bolus thermodilution over a wide range of cardiac output in an adult porcine model of hemorrhagic shock and resuscitation. Prospective laboratory animal investigation. Preclinical university laboratory. Eight ~ 50 kg Yorkshire swine with a femoral artery catheter for blood pressure measurement and a pulmonary artery catheter for bolus thermodilution. With the pigs anesthetized and mechanically ventilated, 40 mL/kg of blood was removed yielding marked hypotension and a rise in plasma lactate. After 60 minutes, pigs were resuscitated with shed blood and crystalloid. Noninvasive cardiac output monitoring and intermittent thermodilution cardiac output were simultaneously measured at nine time points spanning baseline, hemorrhage, and resuscitation. Simultaneous noninvasive cardiac output monitoring and thermodilution measurements of cardiac output were compared by Bland-Altman analysis. A plot was constructed using the difference of each paired measurement expressed as a percentage of the mean of the pair plotted against the mean of the pair. Percent bias was used to scale the differences in the measurements for the magnitude of the cardiac output. Method concordance was assessed from a four-quadrant plot with a 15% zone of exclusion. Overall, noninvasive cardiac output monitoring percent bias was 1.47% (95% CI, -2.5 to 5.4) with limits of agreement of upper equal to 33.4% (95% CI, 26.5-40.2) and lower equal to -30.4% (95% CI, -37.3 to -23.6). Trending analysis demonstrated a 97% concordance between noninvasive cardiac output monitoring and thermodilution cardiac output. Over the wide range of cardiac output produced by hemorrhage and resuscitation in large pigs, noninvasive cardiac output monitoring has acceptable agreement with thermodilution cardiac output.
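The percent-bias Bland-Altman analysis described above, in which each paired difference is scaled by the pair mean, can be sketched as follows. This is an illustrative computation with the conventional 1.96 SD limits of agreement, not the study's exact code.

```python
import numpy as np

def bland_altman_percent(a, b):
    """Percent-bias Bland-Altman comparison of two measurement methods.
    Each paired difference is expressed as a percentage of the pair mean;
    returns (bias, lower limit of agreement, upper limit of agreement),
    with the limits at bias +/- 1.96 sample standard deviations."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pct = 100.0 * (a - b) / ((a + b) / 2.0)   # difference as % of pair mean
    bias = pct.mean()
    sd = pct.std(ddof=1)                       # sample SD across pairs
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Scaling by the pair mean makes the agreement criterion proportional to the magnitude of cardiac output, which matters when the measured range is wide, as in hemorrhage and resuscitation.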
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
Duan, Yuwen; McKay, Aaron; Jovanovic, Nemanja; Ams, Martin; Marshall, Graham D; Steel, M J; Withford, Michael J
2013-07-29
We present a model of a Yb-doped distributed Bragg reflector (DBR) waveguide laser fabricated in phosphate glass using the femtosecond laser direct-write technique. The model gives emphasis to transverse integrals to investigate the energy distribution in a homogeneously doped glass, which is an important feature of femtosecond laser inscribed waveguide lasers (WGLs). The model was validated with experiments comparing a DBR WGL and a fiber laser, and was then used to study the influence of distributed rare-earth dopants on the performance of such lasers. Approximately 15% of the pump power was absorbed by the doped "cladding" in the case of a femtosecond laser inscribed Yb-doped WGL with a length of 9.8 mm. Finally, we used the model to determine the parameters that optimize the laser output, such as the waveguide length, output coupler reflectivity and refractive index contrast.
Data-based virtual unmodeled dynamics driven multivariable nonlinear adaptive switching control.
Chai, Tianyou; Zhang, Yajun; Wang, Hong; Su, Chun-Yi; Sun, Jing
2011-12-01
For a complex industrial system, its multivariable and nonlinear nature generally makes it very difficult, if not impossible, to obtain an accurate model, especially when the model structure is unknown. This class of complex systems is difficult to control with traditional controller designs around their operating points. This paper, however, explores the concepts of the controller-driven model and virtual unmodeled dynamics to propose a new design framework. The design consists of two controllers with distinct functions. First, using input and output data, a self-tuning controller is constructed based on a linear controller-driven model. Then the output signals of the controller-driven model are compared with the true outputs of the system to produce so-called virtual unmodeled dynamics. Based on the compensator of the virtual unmodeled dynamics, a second controller based on a nonlinear controller-driven model is proposed. These two controllers are integrated by an adaptive switching control algorithm to take advantage of their complementary features: one offers a stabilizing function while the other provides improved performance. The conditions for the stability and convergence of the closed-loop system are analyzed. Both simulation and experimental tests on a heavily coupled nonlinear twin-tank system are carried out to confirm the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zhu, X.; Wen, X.; Zheng, Z.
2017-12-01
For better prediction and understanding of land-atmosphere interaction, in-situ meteorological observations acquired from the China Meteorological Administration (CMA) were assimilated in the Weather Research and Forecasting (WRF) model, together with monthly Green Vegetation Coverage (GVF) data calculated from the Normalized Difference Vegetation Index (NDVI) of the Earth Observing System Moderate-Resolution Imaging Spectroradiometer (EOS-MODIS) and Digital Elevation Model (DEM) data from the Shuttle Radar Topography Mission (SRTM). The WRF model was then used to produce a High-Resolution Assimilation Dataset of the water-energy cycle in China (HRADC). This dataset has a horizontal resolution of 25 km for near-surface meteorological data, such as air temperature, humidity, wind vectors and pressure (19 levels); soil temperature and moisture (four levels); surface temperature; downward/upward short/long radiation; and 3-h latent, sensible and ground heat fluxes. In this study, we 1) briefly introduce the cycling 3D-Var assimilation method and 2) compare results for meteorological elements, such as 2 m temperature and precipitation generated by the HRADC, with the gridded observation data from CMA, and surface temperature and specific humidity with Global Land Data Assimilation System (GLDAS) output data from the National Aeronautics and Space Administration (NASA). We found that the satellite-derived GVF from MODIS increased over southeast China compared with the default model over the whole year. The simulated results of soil temperature, net radiation and surface energy flux from the HRADC are improved compared with the control simulation and are close to the GLDAS outputs. The values of net radiation from HRADC are higher than the GLDAS outputs, and the differences in the simulations are large in the east region but smaller in northwest China and on the Qinghai-Tibet Plateau.
The spatial distribution of the sensible heat flux and the ground heat flux from HRADC is consistent with the GLDAS outputs in summer. In general, the simulated results from HRADC are an improvement on the control simulation and can present the characteristics of the spatial and temporal variation of the water-energy cycle in China.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor (BR) firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input and predicts receptor deformation as an output. Finally, the neural model takes receptor deformation as an input and predicts the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by the nonlinear elastic properties of the arterial wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, which gives rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework, in combination with sensitivity analysis and parameter estimation, can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
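The integrate-and-fire component described above can be sketched in a few lines of Python; the threshold, gain, leak, and pressure values below are illustrative placeholders, not parameters fitted in the study:

```python
import numpy as np

def firing(pressure, dt=0.001, threshold=1.0, gain=50.0, leak=5.0):
    """Leaky integrate-and-fire sketch: integrate a strain-like drive derived
    from pressure and emit a spike whenever the integrator crosses threshold.
    Firing ceases once pressure drops back to the (assumed) 80 mmHg baseline."""
    v = 0.0
    spikes = np.zeros(len(pressure), dtype=bool)
    for i, p in enumerate(pressure):
        drive = gain * max(p - 80.0, 0.0) / 40.0  # crude strain surrogate
        v += dt * (drive - leak * v)              # leaky integration
        if v >= threshold:
            spikes[i] = True
            v = 0.0                               # reset after each spike
    return spikes

# Square pressure stimulus: firing during the step, silence once it drops
p = np.concatenate([np.full(500, 80.0), np.full(500, 120.0), np.full(500, 80.0)])
s = firing(p)
```

With these placeholder values the unit fires only while the stimulus exceeds baseline, which is the qualitative behaviour (cessation of firing below threshold) that motivated including an integrate-and-fire stage.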
Camera traps can be heard and seen by animals.
Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, which was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
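For context, the conventional baseline that such fast estimators compete with is a direct nonlinear least-squares fit of the bi-exponential pulse shape A(e^(-t/tau_d) - e^(-t/tau_r)). A sketch on synthetic data (the pulse parameters and noise level are assumed for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, tau_d, tau_r):
    """Bi-exponential preamplifier pulse: rise time tau_r, decay time tau_d."""
    return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

# Synthetic noisy pulse with illustrative parameters
t = np.linspace(0.0, 10e-6, 500)
rng = np.random.default_rng(0)
noisy = biexp(t, 1.0, 3e-6, 0.2e-6) + rng.normal(0.0, 0.005, t.size)

# Full nonlinear least-squares fit; the paper's method avoids this iterative
# cost by using three samples plus first- and second-order integrals instead
popt, _ = curve_fit(biexp, t, noisy, p0=(0.8, 2e-6, 0.3e-6))
```

The iterative fit recovers the parameters accurately but is too slow for real-time pulse processing, which is the gap the integral-equation approach targets.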
Experimental comparison of conventional and nonlinear model-based control of a mixing tank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeggblom, K.E.
1993-11-01
In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.
Simscape Modeling of a Custom Closed-Volume Tank
NASA Technical Reports Server (NTRS)
Fischer, Nathaniel P.
2015-01-01
The library for MathWorks Simscape does not currently contain a model for a closed-volume fluid tank where the ullage pressure is variable. In order to model a closed-volume, variable-ullage-pressure tank, it was necessary to consider at least two separate cases: a vertical cylinder and a sphere. Using library components, it was possible to construct a rough model for the cylindrical tank. It was not possible to construct a model for a spherical tank using library components, due to the variable area. It was decided that, for these cases, it would be preferable to create a custom library component to represent each case, using the Simscape language. Once completed, the components were added to models in which filling and draining the tanks could be simulated. When the models were performing as expected, it was necessary to generate code from the models and run them in Trick (a real-time simulation program). The data output from Trick was then compared to the output from Simscape and found to be within acceptable limits.
González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio
2014-01-01
A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340
Digital automatic gain amplifier
NASA Technical Reports Server (NTRS)
Holley, L. D.; Ward, J. O. (Inventor)
1978-01-01
A circuit is described for adjusting the amplitude of a reference signal to a predetermined level so as to permit subsequent data signals to be interpreted correctly. The circuit includes an operational amplifier having a feedback circuit connected between an output terminal and an input terminal; a bank of relays operably connected to a plurality of resistors; and a comparator comparing an output voltage of the amplifier with a reference voltage and generating a compared signal responsive thereto. Means are provided for selectively energizing the relays according to the compared signal from the comparator until the output signal from the amplifier equals the reference signal. A second comparator is provided for comparing the output of the amplifier with a second voltage source so as to illuminate a lamp when the output signal from the amplifier exceeds the second voltage.
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundarylayer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
NASA Astrophysics Data System (ADS)
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and risk factors need to be taken into consideration when evaluating the efficiency of companies with the Data Envelopment Analysis (DEA) model. In the DEA model, the efficiency of a company is measured as the ratio of sum-weighted outputs to sum-weighted inputs. The objective of this paper is to propose a DEA model incorporating financial ratios and a risk factor for evaluating and comparing the efficiency of financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 until 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
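The DEA score, the ratio of sum-weighted outputs to sum-weighted inputs with weights chosen most favourably for each company, is conventionally computed by solving one linear program per company. The abstract does not name the DEA variant, so the sketch below assumes the classic input-oriented CCR multiplier form, with toy data rather than the Malaysian companies' figures:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (multiplier form).
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Maximise u.y0 subject to v.x0 = 1 and u.Yj - v.Xj <= 0 for every unit j."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[j0], np.zeros(m)])          # linprog minimises -u.y0
    A_ub = np.hstack([Y, -X])                          # u.Yj - v.Xj <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # normalisation v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy data: 3 companies, 2 inputs, 1 output (illustrative only)
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0]])
effs = [dea_efficiency(X, Y, j) for j in range(3)]
```

Here the third company uses twice the inputs of the first for the same output, so its best achievable weighted ratio is 0.5, while the other two lie on the efficient frontier and score 1.0.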
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
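The NRMSE metric used above can be computed directly. Note that normalization conventions vary (by observed mean, range, or plant capacity); the choice of the observed mean below is an assumption, since the abstract does not state which convention the study used:

```python
import numpy as np

def nrmse(forecast, observed):
    """Root mean squared error normalised by the observed mean, in percent.
    (Normalisation convention assumed; the study's exact choice is not stated.)"""
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

obs = np.array([100.0, 200.0, 300.0])  # e.g. observed power output, kW
fc = np.array([110.0, 190.0, 310.0])   # e.g. forecast power output, kW
err = nrmse(fc, obs)
```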
Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes
2011-01-01
Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, masticatory muscle ratios and muscle subgroups present, and numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variation that arises during the model-building process.
Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights on form and function. PMID:21559475
ERIC Educational Resources Information Center
Igra, Amnon
1980-01-01
Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)
Aeolian dunes as ground truth for atmospheric modeling on Mars
Hayward, R.K.; Titus, T.N.; Michaels, T.I.; Fenton, L.K.; Colaprete, A.; Christensen, P.R.
2009-01-01
Martian aeolian dunes preserve a record of atmosphere/surface interaction on a variety of scales, serving as ground truth for both Global Climate Models (GCMs) and mesoscale climate models, such as the Mars Regional Atmospheric Modeling System (MRAMS). We hypothesize that the location of dune fields, expressed globally by geographic distribution and locally by dune centroid azimuth (DCA), may record the long-term integration of atmospheric activity across a broad area, preserving GCM-scale atmospheric trends. In contrast, individual dune morphology, as expressed in slipface orientation (SF), may be more sensitive to localized variations in circulation, preserving topographically controlled mesoscale trends. We test this hypothesis by comparing the geographic distribution, DCA, and SF of dunes with output from the Ames Mars GCM and, at a local study site, with output from MRAMS. When compared to the GCM: 1) dunes generally lie adjacent to areas with strongest winds, 2) DCA agrees fairly well with GCM modeled wind directions in smooth-floored craters, and 3) SF does not agree well with GCM modeled wind directions. When compared to MRAMS modeled winds at our study site: 1) DCA generally coincides with the part of the crater where modeled mean winds are weak, and 2) SFs are consistent with some weak, topographically influenced modeled winds. We conclude that: 1) geographic distribution may be valuable as ground truth for GCMs, 2) DCA may be useful as ground truth for both GCM and mesoscale models, and 3) SF may be useful as ground truth for mesoscale models. Copyright 2009 by the American Geophysical Union.
A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.
Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm
2006-01-01
There are a range of UV models available, but one needs significant pre-existing knowledge and experience in order to be able to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine if the model produces realistic UV data, the output is compared with 1-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs and improved when pyranometer data instead of modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11-20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.
Werner, Jan; Griebeler, Eva Maria
2011-01-01
Janis and Carrano (1992) suggested that large dinosaurs might have faced a lower risk of extinction under ecological changes than similar-sized mammals because large dinosaurs had a higher potential reproductive output than similar-sized mammals (JC hypothesis). First, we tested the assumption underlying the JC hypothesis. We therefore analysed the potential reproductive output (reflected in clutch/litter size and annual offspring number) of extant terrestrial mammals and birds (as “dinosaur analogs”) and of extinct dinosaurs. With the exception of rodents, the differences in the reproductive output of similar-sized birds and mammals proposed by Janis and Carrano (1992) existed even at the level of single orders. Fossil dinosaur clutches were larger than litters of similar-sized mammals, and dinosaur clutch sizes were comparable to those of similar-sized birds. Because the extinction risk of extant species often correlates with a low reproductive output, the latter difference suggests a lower risk of population extinction in dinosaurs than in mammals. Second, we present a very simple, mathematical model that demonstrates the advantage of a high reproductive output underlying the JC hypothesis. It predicts that a species with a high reproductive output that usually faces very high juvenile mortalities will benefit more strongly in terms of population size from reduced juvenile mortalities (e.g., resulting from a stochastic reduction in population size) than a species with a low reproductive output that usually comprises low juvenile mortalities. Based on our results, we suggest that reproductive strategy could have contributed to the evolution of the exceptional gigantism seen in dinosaurs that does not exist in extant terrestrial mammals. Large dinosaurs, e.g., the sauropods, may have easily sustained populations of very large-bodied species over evolutionary time. PMID:22194835
Modelling and experimental study of temperature profiles in cw laser diode bars
NASA Astrophysics Data System (ADS)
Bezotosnyi, V. V.; Gordeev, V. P.; Krokhin, O. N.; Mikaelyan, G. T.; Oleshchenko, V. A.; Pevtsov, V. F.; Popov, Yu M.; Cheshev, E. A.
2018-02-01
Three-dimensional simulation is used to theoretically assess temperature profiles in proposed 10-mm-wide cw laser diode bars packaged in a standard heat spreader of the C-S mount type with the aim of raising their reliable cw output power. We obtain calculated temperature differences across the emitting aperture and along the cavity. Using experimental laser bar samples with up to 60 W of cw output power, the emission spectra of individual clusters are measured at different pump currents. We compare and discuss the simulation results and experimental data.
Multi-channel temperature measurement amplification system. [solar heating systems
NASA Technical Reports Server (NTRS)
Currie, J. R. (Inventor)
1981-01-01
A number of differential outputs of thermocouples are sequentially amplified by a common amplifier. The amplified outputs are compared with a reference temperature signal in an offset correction amplifier, and an output signal of a particular polarity is provided when a differential output is at a discrete level relative to the reference temperature signal.
NASA Astrophysics Data System (ADS)
Dmitriev, S. S.; Vasil'ev, K. E.; Mokhamed, S. M. S. O.; Gusev, A. A.; Barbashin, A. V.
2017-11-01
In modern combined-cycle gas turbine (CCGT) plants, the transition ducts (reducers) from the gas-turbine output diffuser to the heat-recovery boiler are designed as wide-angle diffusers, in which flow separation and transition to a jet-stream regime occur almost immediately downstream of the inlet. In such channels the energy losses rise sharply, and the velocity field at the outlet is highly nonuniform, which degrades heat transfer in the first tube bundles of the boiler. This paper presents experimental results on a method for reducing the energy losses and equalizing the outlet velocity field of a flat asymmetric diffuser channel with one deflecting wall and an opening angle of 40°: a flat plate is placed inside the channel parallel to the deflecting wall. With this placement of the plate, the energy losses can be reduced by 20%, the outlet velocity field is substantially equalized, and the dynamic loads on the walls at the outlet cross-section are decreased. The investigated method of reducing resistance and equalizing the velocity field in flat diffuser channels was used to optimize the reducer between the output diffuser of the gas turbine and the heat-recovery boiler of the PGU-450T CCGT at Kaliningrad Thermal Power Plant-2. The results indicate that the configuration of the reducer installed in the PGU-450T at Kaliningrad Thermal Power Plant-2 is not optimal. The data also show that refinement of the reducer should be based on tests of a channel consisting of the reducer model with a model of the heat-recovery boiler installed behind it.
Applying the investigated method of equalizing the outlet velocity field and reducing resistance in wide-angle diffusers to the known, aerodynamically poor diffusion-reducer model for the PGU-450T reduced the total loss coefficient by almost 20% compared with the model of the actual PGU-450T reducer.
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
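The core of any input-output model, mixed-unit or purely monetary, is the Leontief relation x = (I - A)^-1 d linking final demand to total (direct plus indirect) output. A sketch with an illustrative 3-sector direct-requirements matrix, not the BEA 13-by-13 data:

```python
import numpy as np

# Illustrative 3-sector direct-requirements matrix (not BEA's 13x13 data):
# A[i, j] = input required from sector i per unit of output of sector j
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
d = np.array([100.0, 50.0, 25.0])  # final demand vector

# Total output satisfying both direct and indirect requirements:
# solve (I - A) x = d, i.e. x = (I - A)^-1 d
x = np.linalg.solve(np.eye(3) - A, d)
```

In a mixed-unit formulation some rows of A and d would carry physical units (e.g., tonnes of lead) alongside monetary ones, but the algebra is unchanged; x then reports supply-chain-wide requirements in each row's native unit.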
Fligor, Brian J; Cox, L Clarke
2004-12-01
To measure the sound levels generated by the headphones of commercially available portable compact disc players and provide hearing healthcare providers with safety guidelines based on a theoretical noise dose model. Using a Knowles Electronics Manikin for Acoustical Research and a personal computer, output levels across volume control settings were recorded from headphones driven by a standard signal (white noise) and compared with output levels from music samples of eight different genres. Many commercially available models from different manufacturers were investigated. Several different styles of headphones (insert, supra-aural, vertical, and circumaural) were used to determine if style of headphone influenced output level. Free-field equivalent sound pressure levels measured at maximum volume control setting ranged from 91 dBA to 121 dBA. Output levels varied across manufacturers and style of headphone, although generally the smaller the headphone, the higher the sound level for a given volume control setting. Specifically, in one manufacturer, insert earphones increased output level 7-9 dB, relative to the output from stock headphones included in the purchase of the CD player. In a few headphone-CD player combinations, peak sound pressure levels exceeded 130 dB SPL. Based on measured sound pressure levels across systems and the noise dose model recommended by National Institute for Occupational Safety and Health for protecting the occupational worker, a maximum permissible noise dose would typically be reached within 1 hr of listening with the volume control set to 70% of maximum gain using supra-aural headphones. Using headphones that resulted in boosting the output level (e.g., insert earphones used in this study) would significantly decrease the maximum safe volume control setting; this effect was unpredictable from one manufacturer to another. 
In the interest of providing a straightforward recommendation that should protect the hearing of the majority of consumers, reasonable guidelines would include a recommendation to limit headphone use to 1 hr or less per day if using supra-aural style headphones at a gain control setting of 60% of maximum.
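The NIOSH noise dose model referenced above combines exposure level and duration using a 3-dB exchange rate. A minimal sketch of the calculation (function names are ours, not NIOSH's):

```python
def permissible_hours(level_dba, criterion=85.0, exchange=3.0):
    """NIOSH-style permissible exposure time (hours) at a given A-weighted level,
    using the 3-dB exchange rate: each 3 dB above 85 dBA halves the allowed time."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange)

def noise_dose(exposures):
    """Daily dose in percent for a list of (level_dba, hours) pairs; 100% = full dose."""
    return 100.0 * sum(t / permissible_hours(level) for level, t in exposures)
```

For example, `permissible_hours(94)` is 1.0 hour, which is why listening near 94 dBA (plausible at high volume settings measured here) exhausts the daily dose within about an hour.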
Asafu-Adjei, Josephine; Betensky, Rebecca A.; Palevsky, Paul M.; Waikar, Sushrut S.
2016-01-01
Background and objectives Intensive RRT may have adverse effects that account for the absence of benefit observed in randomized trials of more intensive versus less intensive RRT. We wished to determine the association of more intensive RRT with changes in urine output as a marker of worsening residual renal function in critically ill patients with severe AKI. Design, setting, participants, & measurements The Acute Renal Failure Trial Network Study (n=1124) was a multicenter trial that randomized critically ill patients requiring initiation of RRT to more intensive (hemodialysis or sustained low–efficiency dialysis six times per week or continuous venovenous hemodiafiltration at 35 ml/kg per hour) versus less intensive (hemodialysis or sustained low–efficiency dialysis three times per week or continuous venovenous hemodiafiltration at 20 ml/kg per hour) RRT. Mixed linear regression models were fit to estimate the association of RRT intensity with change in daily urine output in survivors through day 7 (n=871); Cox regression models were fit to determine the association of RRT intensity with time to ≥50% decline in urine output in all patients through day 28. Results Mean age of participants was 60±15 years old, 72% were men, and 30% were diabetic. In unadjusted models, among patients who survived ≥7 days, mean urine output was, on average, 31.7 ml/d higher (95% confidence interval, 8.2 to 55.2 ml/d) for the less intensive group compared with the more intensive group (P=0.01). More intensive RRT was associated with 29% greater unadjusted risk of decline in urine output of ≥50% (hazard ratio, 1.29; 95% confidence interval, 1.10 to 1.51). Conclusions More intensive versus less intensive RRT is associated with a greater reduction in urine output during the first 7 days of therapy and a greater risk of developing a decline in urine output of ≥50% in critically ill patients with severe AKI. PMID:27449661
Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the scale of Japan after the Fukushima disaster. A previous study with the screening method of Morris had shown that (1) the sensitivities depend strongly on the considered output; (2) only a few of the inputs are non-influential on all considered outputs; and (3) most influential inputs either have non-linear effects or interact. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators were built for each considered output in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaves in distinct regimes relative to certain thresholds. Complementing the initial sample with wind perturbations set to the extreme values sensibly improved some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies.
Indeed, our goal is to characterize the model output uncertainty, but too little information is available about input uncertainties. Hence, calibration of the input distributions against observations with a Bayesian approach seems necessary. This would probably involve methods such as MCMC, which would be intractable without emulators.
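The distinction the Morris screening could not resolve, first-order effects versus interactions, is exactly what Sobol' indices quantify. A brute-force double-loop estimate on a toy function with an interaction term (the function and all numbers are illustrative, not the dispersion model):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):
    # Toy model with a strong interaction term, standing in for a dispersion output.
    return x1 + 2.0 * x2 + 3.0 * x1 * x2

def first_order_index(which, n_grid=1000, n_inner=2000):
    # S_i = Var(E[f | x_i]) / Var(f), estimated by brute-force double-loop Monte Carlo.
    grid = rng.uniform(size=n_grid)
    cond_means = np.empty(n_grid)
    for j, v in enumerate(grid):
        other = rng.uniform(size=n_inner)
        cond_means[j] = f(v, other).mean() if which == 1 else f(other, v).mean()
    total = f(rng.uniform(size=100_000), rng.uniform(size=100_000))
    return cond_means.var() / total.var()

S1 = first_order_index(1)   # analytic value ~0.325 for this f
S2 = first_order_index(2)   # analytic value ~0.636 for this f
# S1 + S2 < 1: the remaining variance share is the interaction contribution.
```

In practice the double loop is far too expensive for a dispersion model, which is precisely why the emulators are built; the estimator above would then be run on the emulator instead of the simulator.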
NASA Technical Reports Server (NTRS)
Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Cunningham, Thomas J. (Inventor); Newton, Kenneth W. (Inventor)
2014-01-01
An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image sensor into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
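A sketch of the scheme with invented ramp segments: the code produced by comparing pixel voltages against a non-linear ramp is itself non-linear in voltage, and the return lookup table undoes that mapping:

```python
import numpy as np

# Hypothetical two-segment ramp: fine linear steps below 0.5 V, coarser
# quadratic steps above (segment shapes invented for illustration).
lin = np.linspace(0.0, 0.5, 512, endpoint=False)
t = np.linspace(0.0, 1.0, 512)
nonlin = 0.5 + 0.5 * t**2
ramp = np.concatenate([lin, nonlin])      # monotonically increasing ramp

def convert(pixel_volts):
    # Each comparator fires when the ramp passes its pixel voltage; the
    # digital output is the ramp step index at that instant.
    return np.searchsorted(ramp, pixel_volts)

# The return lookup table maps each (non-linear) code back to volts.
lut = ramp

test_volts = np.array([0.1, 0.499, 0.75, 0.99])
codes = convert(test_volts)
recon = lut[np.clip(codes, 0, len(ramp) - 1)]
```

The non-linear segment trades resolution for range at high voltages, and the lookup table restores a linear voltage scale to within one ramp step.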
NASA Astrophysics Data System (ADS)
Drapek, R. J.; Kim, J. B.
2013-12-01
We simulated ecosystem response to climate change in the USA and Canada at a 5 arc-minute grid resolution using the MC1 dynamic global vegetation model and nine CMIP3 future climate projections as input. The climate projections were produced by 3 GCMs simulating 3 SRES emissions scenarios. We examined MC1 outputs for the conterminous USA by summarizing them by EPA level II and III ecoregions to characterize model skill and evaluate the magnitude and uncertainties of simulated ecosystem response to climate change. First, we evaluated model skill by comparing outputs from the recent historical period with benchmark datasets. Distribution of potential natural vegetation simulated by MC1 was compared with Kuchler's map. Above ground live carbon simulated by MC1 was compared with the National Biomass and Carbon Dataset. Fire return intervals calculated by MC1 were compared with maximum and minimum values compiled for the United States. Each EPA Level III Ecoregion was scored for average agreement with corresponding benchmark data and an average score was calculated for all three types of output. The greatest agreement with benchmark data occurred in the Western Cordillera, the Ozark / Ouachita-Appalachian Forests, and the Southeastern USA Plains (EPA Level II Ecoregions). The lowest agreement occurred in the Everglades and the Tamaulipas-Texas Semiarid Plain. For simulated ecosystem response to future climate projections we examined MC1 output for shifts in vegetation type, vegetation carbon, runoff, and biomass consumed by fire. Each ecoregion was scored for the amount of change from historical conditions for each variable and an average score was calculated. The smallest changes were forecast for the Western Cordillera and Marine West Coast Forest ecosystems. The largest changes were forecast for the Cold Deserts, the Mixed Wood Plains, and the Central USA Plains.
By combining scores of model skill for the historical period for each EPA Level III Ecoregion with scores representing the magnitude of ecosystem changes in the future, we identified high and low uncertainty ecoregions. The largest anticipated changes and the lowest measures of model skill coincide in the Central USA Plains and the Mixed Wood Plains; the combination of low model skill and a high degree of ecosystem change elevates our uncertainty for these ecoregions. The highest projected changes coincide with relatively high model skill in the Cold Deserts. Climate adaptation efforts are the most likely to pay off in these regions. Finally, the highest model skill and lowest anticipated changes coincide in the Western Cordillera and the Marine West Coast Forests. These regions may be relatively low-risk for climate change impacts when compared to the other ecoregions. These results represent only the first step in this type of analysis; there exist many ways to strengthen it. First, MC1 calibrations can be optimized using a structured optimization technique. Second, a larger set of climate projections can be used to capture a fuller range of GCMs and emissions scenarios. Third, employing an ensemble of vegetation models would make the analysis more robust.
An optimal design of magnetostrictive material (MsM) based energy harvester
NASA Astrophysics Data System (ADS)
Hu, Jingzhen; Yuan, Fuh-Gwo; Xu, Fujun; Huang, Alex Q.
2010-04-01
In this study, an optimal vibration-based energy harvesting system using magnetostrictive material (MsM) has been designed to power the Wireless Intelligent Sensor Platform (WISP), developed at North Carolina State University. A linear MsM energy harvesting device has been modeled and optimized to maximize the power output. The effects of the number of MsM layers and glue layers, and of load matching, on the output power of the MsM energy harvester have been analyzed. From the measurements, the open circuit voltage can reach 1.5 V when the MsM cantilever beam operates at its second natural frequency of 324 Hz. The AC output power is 0.97 mW, giving a power density of 279 μW/cm3. Since the MsM device has low open circuit output voltage characteristics, a full-wave quadrupler has been designed to boost the rectified output voltage. To deliver the maximum output power to the load, a complex conjugate impedance matching between the load and the MsM device has been implemented using a discontinuous conduction mode (DCM) buck-boost converter. The maximum output power after the voltage quadrupler is 705 μW, and the power density reduces to 202.4 μW/cm3, which is comparable to the piezoelectric energy harvesters reported in the literature. The output power delivered to a lithium rechargeable battery is around 630 μW, independent of the load resistance.
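The complex conjugate matching step can be illustrated numerically: with the load reactance cancelled, delivered power peaks when the load resistance equals the source resistance. The source values below are invented for illustration, not the MsM device's measured impedance:

```python
import numpy as np

Vs = 1.0                 # open-circuit source amplitude (V), illustrative
Zs = 10 + 50j            # assumed source impedance (ohms), not the MsM device's

R_load = np.linspace(1.0, 100.0, 1000)
X_load = -Zs.imag        # reactance cancelled, as conjugate matching requires
Z_load = R_load + 1j * X_load

# Average power delivered to the load by a sinusoidal source
P = 0.5 * np.abs(Vs / (Zs + Z_load)) ** 2 * R_load

R_best = R_load[np.argmax(P)]
# R_best sits at Re(Zs): maximum power transfer requires Z_load = conj(Zs)
```

In the paper the DCM buck-boost converter plays the role of this conjugate load, presenting an effective matched impedance to the harvester.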
Three models intercomparison for Quantitative Precipitation Forecast over Calabria
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.
2004-11-01
In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations) funded by MIUR (Ministero dell'Università e della Ricerca Scientifica -- the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Modeling 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. The large-scale forcing is the same for all models considered, so differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models. The largest differences are in the frequency bias (BIA) compared with the other scores considered. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better on the second-day forecast.
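The BIA and related scores come from a 2x2 rain/no-rain contingency table at a chosen threshold; a minimal sketch with toy data (function name and numbers are ours):

```python
import numpy as np

def qpf_scores(forecast_mm, observed_mm, threshold_mm):
    """Frequency bias (BIA) and probability of detection (POD) from a 2x2
    contingency table of rain/no-rain events at a given threshold.
    Assumes at least one observed event (hits + misses > 0)."""
    f = np.asarray(forecast_mm) >= threshold_mm
    o = np.asarray(observed_mm) >= threshold_mm
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    bia = (hits + false_alarms) / (hits + misses)   # 1 = unbiased event frequency
    pod = hits / (hits + misses)
    return bia, pod
```

A BIA above 1 means the model forecasts rain events more often than they are observed at that threshold; below 1, less often.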
Computer simulations of neural mechanisms explaining upper and lower limb excitatory neural coupling
2010-01-01
Background When humans perform rhythmic upper and lower limb locomotor-like movements, there is an excitatory effect of upper limb exertion on lower limb muscle recruitment. To investigate potential neural mechanisms for this behavioral observation, we developed computer simulations modeling interlimb neural pathways among central pattern generators. We hypothesized that enhancement of muscle recruitment from interlimb spinal mechanisms was not sufficient to explain muscle enhancement levels observed in experimental data. Methods We used Matsuoka oscillators for the central pattern generators (CPG) and determined parameters that enhanced amplitudes of rhythmic steady state bursts. Potential mechanisms for output enhancement were excitatory and inhibitory sensory feedback gains, excitatory and inhibitory interlimb coupling gains, and coupling geometry. We first simulated the simplest case, a single CPG, and then expanded the model to have two CPGs and lastly four CPGs. In the two and four CPG models, the lower limb CPGs did not receive supraspinal input such that the only mechanisms available for enhancing output were interlimb coupling gains and sensory feedback gains. Results In a two-CPG model with inhibitory sensory feedback gains, only excitatory gains of ipsilateral flexor-extensor/extensor-flexor coupling produced reciprocal upper-lower limb bursts and enhanced output up to 26%. In a two-CPG model with excitatory sensory feedback gains, excitatory gains of contralateral flexor-flexor/extensor-extensor coupling produced reciprocal upper-lower limb bursts and enhanced output up to 100%. However, within a given excitatory sensory feedback gain, enhancement due to excitatory interlimb gains could only reach levels up to 20%. Interconnecting four CPGs to have ipsilateral flexor-extensor/extensor-flexor coupling, contralateral flexor-flexor/extensor-extensor coupling, and bilateral flexor-extensor/extensor-flexor coupling could enhance motor output up to 32%. 
Enhancement observed in experimental data exceeded 32%. Enhancement within this symmetrical four-CPG neural architecture was more sensitive to relatively small interlimb coupling gains. Excitatory sensory feedback gains could produce greater output amplitudes, but larger gains were required for entrainment compared to inhibitory sensory feedback gains. Conclusions Based on these simulations, symmetrical interlimb coupling can account for much, but not all of the excitatory neural coupling between upper and lower limbs during rhythmic locomotor-like movements. PMID:21143960
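A mutually inhibiting Matsuoka oscillator pair of the kind used for each flexor/extensor CPG can be simulated in a few lines. The parameters below are generic values in the known oscillatory regime, not those fitted in the study:

```python
import numpy as np

# Minimal two-neuron Matsuoka CPG (one flexor/extensor pair); parameters are
# illustrative choices in the oscillatory regime (1 + tau/T < a < 1 + b).
tau, T = 1.0, 12.0        # rise and adaptation time constants
a, b, s = 2.5, 2.5, 1.0   # mutual inhibition, self-adaptation, tonic drive

dt, steps = 0.01, 40000
x = np.array([0.1, 0.0])  # membrane states (asymmetric start breaks symmetry)
v = np.zeros(2)           # adaptation states
y1, y2 = [], []

for _ in range(steps):
    y = np.maximum(x, 0.0)            # half-wave rectified firing rates
    other = y[::-1]                   # reciprocal inhibition partner
    x += dt / tau * (-x - a * other - b * v + s)
    v += dt / T * (-v + y)
    y1.append(y[0]); y2.append(y[1])

y1 = np.array(y1[20000:])             # discard the transient
y2 = np.array(y2[20000:])
# y1 and y2 burst in anti-phase, like alternating flexor/extensor outputs
```

Coupling several such pairs through excitatory interlimb gains, as in the paper's two- and four-CPG models, amounts to adding cross terms to the `x` update.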
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
NASA Astrophysics Data System (ADS)
Kim, J. B.; Kerns, B. K.; Halofsky, J.
2014-12-01
GCM-based climate projections and downscaled climate data proliferate, and there are many climate-aware vegetation models in use by researchers. Yet application of fine-scale DGVM-based simulation output in national forest vulnerability assessments is not common, because there are technical, administrative, and social barriers to their use by managers and policy makers. As part of a science-management climate change adaptation partnership, we performed simulations of vegetation response to climate change for four national forests in the Blue Mountains of Oregon using the MC2 dynamic global vegetation model (DGVM) for use in vulnerability assessments. Our simulation results under business-as-usual scenarios suggest starkly different future forest conditions for three out of the four national forests in the study area, making their adoption by forest managers a potential challenge. However, using DGVM output to structure discussion of potential vegetation changes provides a suitable framework to discuss the dynamic nature of vegetation change compared to using more commonly available model output (e.g. species distribution models). From the onset, we planned and coordinated our work with national forest managers to maximize the utility and the consideration of the simulation results in planning. Key lessons from this collaboration were: (1) structured and strategic selection of a small number of climate change scenarios that capture the range of variability in future conditions simplified the results; (2) collecting and integrating data from managers for use in simulations increased support and interest in applying the output; (3) a structured, regionally focused, and hierarchical calibration of the DGVM produced well-validated results; (4) simple approaches to quantifying uncertainty in simulation results facilitated communication; and (5) interpretation of model results in a holistic context in relation to multiple lines of evidence produced balanced guidance.
This last point demonstrates the importance of using model output as a forum for discussion along with other information, rather than using model output in an inappropriately predictive sense. These lessons are currently being applied to other national forests in the Pacific Northwest to contribute to vulnerability assessments.
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin
2017-06-01
Based on input and output data from sandstone reservoirs in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that the SBM-Undesirable model avoids the defects caused by the radial and angular assumptions of traditional DEA models, improving the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block exhibits input redundancy, expected-output deficiency, and the negative external effects of undesirable output, and that there are large differences in production efficiency across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the expected output.
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
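The paper's central finding, that closed-loop causality decouples input-output correlation from the strength of the causal link, is easy to reproduce in simulation. Gains, lengths, and disturbance statistics below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Slowly varying disturbance (AR(1)), standing in for the experiment's
# disturbance tables; all parameters are illustrative.
n, k = 20000, 0.2
d = np.zeros(n)
for t in range(1, n):
    d[t] = 0.99 * d[t - 1] + 0.1 * rng.standard_normal()

o = np.zeros(n)   # mouse position (output)
c = np.zeros(n)   # cursor position (input); target is 0
for t in range(1, n):
    o[t] = o[t - 1] - k * c[t - 1]   # participant nudges the mouse against the error
    c[t] = o[t] + d[t]               # cursor = mouse + disturbance

r_io = np.corrcoef(c, o)[0, 1]   # input-output correlation: near zero
r_do = np.corrcoef(d, o)[0, 1]   # output closely mirrors the unseen disturbance
```

Even though cursor position is the only signal driving the mouse, `r_io` stays small because good control keeps the input nearly constant, while the output ends up strongly (negatively) correlated with the disturbance the participant never sees.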
Radiative transfer model validations during the First ISLSCP Field Experiment
NASA Technical Reports Server (NTRS)
Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine
1990-01-01
Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.
Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.
NASA Astrophysics Data System (ADS)
Vogelmann, A. M.; Gustafson, W. I., Jr.; Toto, T.; Endo, S.; Cheng, X.; Li, Z.; Xiao, H.
2015-12-01
The Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility's Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow is currently being designed to provide output from routine LES to complement the facility's extensive observations. The modeling portion of the LASSO workflow is presented by Gustafson et al. and will initially focus on shallow convection over the ARM megasite in Oklahoma, USA. This presentation describes how the LES output will be combined with observations to construct multi-dimensional and dynamically consistent "data cubes", aimed at providing the best description of the atmospheric state for use in analyses by the community. The megasite observations are used to constrain large-eddy simulations that provide complete spatial and temporal coverage of observables; further, the simulations also provide information on processes that cannot be observed. Statistical comparisons of model output with their observables are used to assess the quality of a given simulated realization and its associated uncertainties. A data cube is a model-observation package that provides: (1) metrics of model-observation statistical summaries to assess the simulations and the ensemble spread; (2) statistical summaries of additional model property output that cannot be, or is very difficult to, observe; and (3) snapshots of the 4-D simulated fields from the integration period. Searchable metrics are provided that characterize the general atmospheric state to assist users in finding cases of interest, such as categorization of daily weather conditions and their specific attributes. The data cubes will be accompanied by tools designed for easy access to cube contents from within the ARM archive and externally, the ability to compare multiple data streams within an event as well as across events, and the ability to use common grids and time sampling, where appropriate.
Grossöhmichen, Martin; Salcher, Rolf; Lenarz, Thomas; Maier, Hannes
2016-08-01
The electromagnetic transducers of implantable middle ear hearing devices or direct acoustic cochlear implants (DACIs) are intended for implantation in an air-filled middle ear cavity. When implanted in an obliterated radical mastoid cavity, they would be surrounded by fatty tissue of unknown elastic properties, potentially attenuating the mechanical output. Here, the elastic properties of this tissue were determined experimentally, and the vibrational output of commonly used electromagnetic transducers in an obliterated radical mastoid cavity was investigated in vitro using a newly developed method. The Young's moduli of human fatty tissue samples (3-mm diameter), taken fresh from the abdomen or from the radical mastoid cavity during revision surgeries, were determined by indentation tests. Two phantom materials having Young's moduli similar to and higher than (worst case scenario) the tissue were identified. The displacement output of a DACI, a middle ear transducer (MET), and a floating mass transducer (FMT) was measured when embedded in the phantom materials in a model radical cavity and compared with the output of the nonembedded transducers. The Young's moduli of fresh human abdominal fatty tissue determined here were comparable to those of human breast fat tissue. When embedded in the phantom materials, the displacement output amplitude at 0.1 to 10 kHz of the DACI and MET was attenuated by at most 5 dB. The attenuation of the output of the FMT was also minor at 0.5 to 10 kHz, but the output was significantly reduced, by up to 35 dB, at lower frequencies. Using the method developed here, the Young's moduli of small soft tissue samples could be estimated and the effect of obliteration on the mechanical output of electromagnetic transducers could be investigated in vitro. Our results demonstrate that the decrease in vibrational output of the DACI and MET in obliterated mastoid cavities is expected to be minor, with no major impact on clinical indication.
Although no major attenuation of vibrational output of the FMT was found for frequencies >0.5 kHz, for implantations in patients the attenuation at frequencies <0.5 kHz may have to be taken into account.
Efficiency measurement and the operationalization of hospital production.
Magnussen, J
1996-01-01
OBJECTIVE. To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. STUDY SETTING. Norwegian hospitals over the three-year period 1989-1991. STUDY DESIGN. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals is compared across models using various distribution-free tests. DATA COLLECTION. Input and output data are collected by the Norwegian Central Bureau of Statistics. PRINCIPAL FINDINGS. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. CONCLUSION. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile. PMID:8617607
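An input-oriented, constant-returns DEA efficiency score of the kind discussed can be computed as a small linear program. A sketch with a two-hospital toy data set (assumes SciPy is available; the CCR formulation here is a generic one, not necessarily the paper's exact specification):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k under constant returns to scale.
    X: (n_inputs, n_units), Y: (n_outputs, n_units). Returns theta in (0, 1]."""
    n_in, n_units = X.shape
    n_out = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(1 + n_units)
    c[0] = 1.0                                   # minimise theta
    # Inputs: sum_j lambda_j x_ij - theta x_ik <= 0
    A_in = np.hstack([-X[:, [k]], X])
    # Outputs: y_rk - sum_j lambda_j y_rj <= 0
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(n_in), -Y[:, k]]),
                  bounds=[(0, None)] * (1 + n_units))
    return res.x[0]
```

With one input and one output, a hospital producing 2 units of output from 2 units of input is efficient (theta = 1), while one producing the same output from 4 units of input scores 0.5. Changing which outputs enter `Y` changes these scores, which is exactly the specification sensitivity the paper warns about.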
Numerical considerations in the development and implementation of constitutive models
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Imbrie, P. K.
1985-01-01
Several unified constitutive models were tested in uniaxial form by specifying input strain histories and comparing output stress histories. The purpose of the tests was to evaluate several time integration methods with regard to accuracy, stability, and computational economy. The sensitivity of the models to slight changes in input constants was also investigated. Results are presented for In100 at 1350 F and Hastelloy-X at 1800 F.
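The stability issue at the heart of such integration-method comparisons shows up already on a scalar stiff relaxation equation, used here as a stand-in for a unified constitutive law (time constants and step sizes are illustrative, not the In100 or Hastelloy-X values):

```python
# Forward vs. backward Euler on a stiff relaxation equation
#   d(sigma)/dt = -sigma / tau_r
# standing in for a unified viscoplastic flow law (illustrative numbers).
tau_r = 0.01     # relaxation time
dt = 0.05        # step chosen well above the 2*tau_r explicit stability limit
n = 20

sig_fwd = 1.0
sig_bwd = 1.0
for _ in range(n):
    sig_fwd = sig_fwd + dt * (-sig_fwd / tau_r)   # explicit: amplification factor -4
    sig_bwd = sig_bwd / (1.0 + dt / tau_r)        # implicit: decays for any dt

# sig_fwd has blown up; sig_bwd has relaxed smoothly toward zero
```

Explicit schemes force the step size down to the shortest material time constant, while implicit schemes stay stable at the expense of solving for the new state, the basic accuracy/stability/economy trade-off the paper evaluates.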
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Baniukiewicz, P.
2010-02-22
Nondestructive testing and evaluation are under continuous development. Current research concentrates on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of a defect. Almost twenty carefully selected features, calculated for flaws extracted from real radiograms, were used. The radiograms were obtained from the shipbuilding industry and were verified by a qualified operator. Two weld defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of a single neural network, in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
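The external bias description can be caricatured as an autoregressive model of the systematic output error. A synthetic sketch (all series and parameters invented, far simpler than the paper's formulation) showing how an AR(1) bias term improves one-step-ahead predictions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: observations = deterministic model output + autocorrelated
# systematic error, mimicking the structural deficit an EBD is meant to absorb.
n = 2000
t = np.arange(n)
model = np.sin(t / 20.0)                     # stand-in for a runoff model output
bias = np.zeros(n)
for i in range(1, n):
    bias[i] = 0.95 * bias[i - 1] + 0.1 * rng.standard_normal()
obs = model + bias

# Fit an AR(1) to training residuals, then correct one step ahead.
resid = obs[:1000] - model[:1000]
phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

raw_err, corr_err = [], []
for i in range(1000, n):
    r_hat = phi * (obs[i - 1] - model[i - 1])    # predicted systematic deviation
    raw_err.append(obs[i] - model[i])
    corr_err.append(obs[i] - (model[i] + r_hat))

rmse_raw = np.sqrt(np.mean(np.square(raw_err)))
rmse_corr = np.sqrt(np.mean(np.square(corr_err)))
# rmse_corr < rmse_raw: describing the bias stochastically sharpens the forecast
```

On longer horizons the AR(1) prediction decays toward zero, which is consistent with the paper's observation that such corrections help most on short, online-style forecast horizons.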
NASA Astrophysics Data System (ADS)
Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.
2017-10-01
Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even optical communication systems. In this work, we present a model for the MMF output field that treats the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and mutually uncorrelated. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root-mean-square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Compared with the Gaussian-Schell model, the proposed model also shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model is compared with the theoretical predictions of the modified van Cittert-Zernike theorem, showing very good agreement, which strongly supports the assumption that a large-core MMF can be considered a quasi-homogeneous source.
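Under the quasi-homogeneous picture, the output intensity is approximately the weighting function of the shifted, uncorrelated elementary sources convolved with a single elementary-source intensity profile. The Gaussian shapes and widths below are illustrative assumptions, not the paper's measured profiles:

```python
import numpy as np

x = np.linspace(-200.0, 200.0, 4001)       # transverse position (um)
dx = x[1] - x[0]
w_weight, w_elem = 25.0, 10.0              # hypothetical 1/e half-widths (um)

weight = np.exp(-(x / w_weight) ** 2)      # elementary-source weighting
elem = np.exp(-(x / w_elem) ** 2)          # single elementary source
I_out = np.convolve(weight, elem, mode="same") * dx

# For Gaussians the convolution is again Gaussian with squared widths
# adding, so the effective output width should be sqrt(25^2 + 10^2) um.
sigma2 = np.sum(I_out * x ** 2) / np.sum(I_out)
w_out = np.sqrt(2.0 * sigma2)
print(f"output 1/e half-width: {w_out:.2f} um")
```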
Application of variable-gain output feedback for high-alpha control
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1990-01-01
A variable-gain, optimal, discrete, output feedback design approach applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.
Parameter reduction in nonlinear state-space identification of hysteresis
NASA Astrophysics Data System (ADS)
Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan
2018-05-01
Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials together with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model by up to about 50%, while maintaining a comparable output error level.
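The parameter-count argument behind decoupling can be made concrete. The sizes below are hypothetical, loosely inspired by a Bouc-Wen-type state-space setting, not the paper's actual configuration:

```python
from math import comb

def full_params(m, d, p):
    """Coefficients of p multivariate polynomials of total degree d in m variables."""
    return p * comb(m + d, d)

def decoupled_params(m, d, p, r):
    """Mixing matrices V (m x r) and W (p x r) plus r univariate
    polynomials of degree d (d + 1 coefficients each)."""
    return m * r + p * r + r * (d + 1)

m, d, p = 3, 7, 2          # nonlinearity inputs, degree, outputs (assumed)
for r in (2, 3, 4):        # number of univariate branches
    print(r, full_params(m, d, p), decoupled_params(m, d, p, r))
```

Even a handful of univariate branches cuts the coefficient count sharply, which is the parsimony the decoupled representation trades against output error.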
Logic elements for reactor period meter
McDowell, William P.; Bobis, James P.
1976-01-01
Logic elements are provided for a reactor period meter trip circuit. For one element, first and second inputs are applied to first and second chopper comparators, respectively. The output of each comparator is 0 if the input applied to it is greater than or equal to the trip level associated with that input, and is a square wave of frequency f if the input is less than the associated trip level. The outputs of the comparators are algebraically summed and applied to a bandpass filter tuned to f. For the other element, the output of each comparator is applied to a bandpass filter tuned to f to give a sine wave of frequency f. The outputs of the filters are multiplied by an analog multiplier whose output is 0 if either input is 0 and a sine wave of frequency 2f if both inputs are of frequency f.
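The multiplier element's frequency-doubling behavior follows from sin²(2πft) = (1 − cos(4πft))/2 and can be checked numerically. The frequencies and sample rate below are arbitrary choices for the demo:

```python
import numpy as np

f, fs, n = 50.0, 10000.0, 2000
t = np.arange(n) / fs
a = np.sin(2 * np.pi * f * t)        # filter output from comparator 1
b = np.sin(2 * np.pi * f * t)        # filter output from comparator 2

prod = a * b                         # analog multiplier
spec = np.abs(np.fft.rfft(prod - prod.mean()))   # drop the DC term
freqs = np.fft.rfftfreq(n, 1 / fs)
f_peak = freqs[np.argmax(spec)]
print(f"dominant frequency of the product: {f_peak:.0f} Hz")  # expect 2f

# If either comparator trips (filter output 0), the product vanishes.
print(np.allclose(a * np.zeros_like(b), 0.0))
```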
Prediction of Layer Thickness in Molten Borax Bath with Genetic Evolutionary Programming
NASA Astrophysics Data System (ADS)
Taylan, Fatih
2011-04-01
In this study, the vanadium carbide coating process in a molten borax bath is modeled by genetic evolutionary programming (GEP) using bath composition (borax, ferrovanadium (Fe-V), and boric acid percentages), bath temperature, immersion time, and layer thickness data. The model has five inputs and one output: the borax, Fe-V, and boric acid percentages, temperature, and immersion time are the inputs, and the layer thickness is the output. For selected bath compositions, immersion times, and temperatures, the layer thickness is derived from the resulting mathematical expression. The results of the mathematical expression are compared to experimental data; the derived expression has an accuracy of 89%.
The direct effects of gravity on the control and output matrices of controlled structure models
NASA Technical Reports Server (NTRS)
Rey, Daniel A.; Alexander, Harold L.; Crawley, Edward F.
1992-01-01
The effects of gravity on the dynamic performance of structural control actuators and sensors are dual forms of an additive perturbation that can attenuate or amplify the device response (input or output). The modal modeling of these perturbations is derived for the general case of arbitrarily oriented devices and arbitrarily oriented planes of deformation. A nondimensional sensitivity analysis to identify the circumstances under which the effects of gravity are important is presented. Results show that gravity effects become important when the product of the ratio of the normalized modal slope and the modal displacement is comparable to the ratio of the gravitational acceleration and the product of the beam length and the squared eigenfrequency for a given mode.
Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela
2018-02-01
The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; a further set of six input-output signals is used for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that the SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
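The abstract scores models by a percentage normalized MSE. The exact normalization is not stated there; a common definition (an assumption here) divides the MSE by the variance of the reference signal:

```python
import numpy as np

def nmse_percent(y_true, y_pred):
    """Normalized mean square error as a percentage of signal variance
    (one common convention; the paper's exact normalization may differ)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 100.0 * np.mean((y_true - y_pred) ** 2) / np.var(y_true)

y = np.sin(np.linspace(0.0, 10.0, 200))          # stand-in controller output
print(nmse_percent(y, y))                        # perfect prediction: 0.0
print(round(nmse_percent(y, np.full_like(y, y.mean())), 6))  # mean predictor: 100.0
```

Under this convention, values like the reported 1.20e-4% correspond to predictions whose error energy is a tiny fraction of the signal's variability.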
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
Agricultural production typically yields two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable for handling data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model is used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
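A basic DDF DEA program (without the paper's interval-data and climatic extensions) can be sketched as a linear program. The three-farm data set and the direction choice g = (y_o, b_o) below are hypothetical illustrations:

```python
import numpy as np
from scipy.optimize import linprog

# One input x, one desirable output y, one undesirable output b,
# for three hypothetical rice farms.
x = np.array([1.0, 1.0, 1.0])
y = np.array([2.0, 4.0, 3.0])
b = np.array([4.0, 2.0, 3.0])
n = len(x)

def ddf_beta(o):
    """max beta s.t. sum(l*x) <= x_o, sum(l*y) >= y_o + beta*y_o,
    sum(l*b) == b_o - beta*b_o, l >= 0: expand the good output and
    contract the bad output proportionally."""
    c = np.r_[-1.0, np.zeros(n)]                 # minimize -beta
    A_ub = np.vstack([np.r_[0.0, x],             # input constraint
                      np.r_[y[o], -y]])          # desirable-output constraint
    b_ub = np.array([x[o], -y[o]])
    A_eq = np.r_[b[o], b][None, :]               # undesirable output (equality)
    b_eq = np.array([b[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return -res.fun

for o in range(n):
    print(f"farm {o}: beta = {ddf_beta(o):.3f}")   # beta = 0 means efficient
```

Farm 1 (highest desirable, lowest undesirable output) comes out efficient (beta = 0), while the dominated farms receive positive inefficiency scores.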
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by a factor of 2/π (-1.96 dB), compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
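The classic 2/π low-SNR result (with a known zero threshold) can be illustrated by Monte Carlo: estimating a weak constant from sign measurements inflates the estimator variance by roughly π/2 relative to an infinite-resolution receiver. The scalar scenario below is a toy stand-in, not the paper's channel model:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
inv_cdf = NormalDist().inv_cdf

# Estimate a weak DC level theta from x = theta + noise, once from the
# raw samples and once from 1-bit outputs sign(x) with zero threshold.
theta, sigma, n, trials = 0.05, 1.0, 20000, 400
est_ideal, est_1bit = [], []
for _ in range(trials):
    x = theta + sigma * rng.normal(size=n)
    est_ideal.append(x.mean())              # infinite-resolution estimate
    p = np.mean(x > 0.0)                    # fraction of comparator "ones"
    est_1bit.append(sigma * inv_cdf(p))     # invert P(x > 0) = Phi(theta/sigma)

ratio = np.var(est_1bit) / np.var(est_ideal)
print(f"variance ratio 1-bit/ideal: {ratio:.2f} (pi/2 = {np.pi / 2:.2f})")
```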
NASA Astrophysics Data System (ADS)
Liu, Z.; Rajib, M. A.; Jafarzadegan, K.; Merwade, V.
2015-12-01
Application of land surface/hydrologic models within an operational flood forecasting system can provide the probable time of occurrence and magnitude of streamflow at specific locations along a stream. Creating a time-varying spatial extent of flood inundation and depth requires a hydraulic or hydrodynamic model. Models differ in how they represent river geometry and surface roughness, which can lead to different outputs depending on the particular model used. The result from a single hydraulic model provides just one possible realization of the flood extent without capturing the uncertainty associated with the input or the model parameters. The objective of this study is to compare multiple hydraulic models toward generating ensemble flood inundation extents. Specifically, the relative performances of four hydraulic models, including AutoRoute, HEC-RAS, HEC-RAS 2D, and LISFLOOD, are evaluated under different geophysical conditions in several locations across the United States. By using streamflow output from the same hydrologic model (SWAT in this case), hydraulic simulations are conducted for three configurations: (i) hindcasting mode, using past observed weather data at daily time scale with models calibrated against USGS streamflow observations; (ii) validation mode, using near real-time weather data at sub-daily time scale; and (iii) design mode, with extreme streamflow data having specific return periods. Model-generated inundation maps for observed flood events from both hindcasting and validation modes are compared with remotely sensed images, whereas the design-mode outcomes are compared with corresponding FEMA-generated flood hazard maps. The comparisons presented here will give insights into the probable model-specific nature of biases and the models' relative advantages and disadvantages as components of an operational flood forecasting system.
Efficacy of a new intraaortic propeller pump vs the intraaortic balloon pump: an animal study.
Dekker, André; Reesink, Koen; van der Veen, Erik; Van Ommen, Vincent; Geskes, Gijs; Soemers, Cecile; Maessen, Jos
2003-06-01
To compare the efficacy of a new intraaortic propeller pump (PP) to provide hemodynamic support to the intraaortic balloon pump (IABP) in an acute mitral regurgitation (MR) animal model. A new intraaortic PP (Reitan catheter pump; Jomed; Helsingborg, Sweden) recently has been introduced. The pump's aim is a reduction in afterload via a deployable propeller that is placed in the high descending aorta and can be set at rotational speeds of
NASA Astrophysics Data System (ADS)
Terando, A. J.; Grade, S.; Bowden, J.; Henareh Khalyani, A.; Wootten, A.; Misra, V.; Collazo, J.; Gould, W. A.; Boyles, R.
2016-12-01
Sub-tropical island nations may be particularly vulnerable to anthropogenic climate change because of predicted changes in the hydrologic cycle that would lead to significant drying in the future. However, decision makers in these regions have seen their adaptation planning efforts frustrated by the lack of island-resolving climate model information. Recently, two investigations have used statistical and dynamical downscaling techniques to develop climate change projections for the U.S. Caribbean region (Puerto Rico and U.S. Virgin Islands). We compare the results from these two studies with respect to three commonly downscaled CMIP5 global climate models (GCMs). The GCMs were dynamically downscaled at a convective-permitting scale using two different regional climate models. The statistical downscaling approach was conducted at locations with long-term climate observations and then further post-processed using climatologically aided interpolation (yielding two sets of projections). Overall, both approaches face unique challenges. The statistical approach suffers from a lack of observations necessary to constrain the model, particularly at the land-ocean boundary and in complex terrain. The dynamically downscaled model output has a systematic dry bias over the island despite ample availability of moisture in the atmospheric column. Notwithstanding these differences, both approaches are consistent in projecting a drier climate that is driven by the strong global-scale anthropogenic forcing.
Using the split Hopkinson pressure bar to validate material models.
Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian
2014-08-28
This paper gives a discussion of the use of the split Hopkinson pressure bar (SHPB), with particular reference to the requirements of materials modelling at QinetiQ: deploying validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The SHPB is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress vs. strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from the deployed instrumentation, including the raw strain-gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen, and recognize that this is itself a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
MacAyeal, D. R.; Rignot, E.; Hulbe, C. L.
1998-01-01
We compare Earth Remote Sensing (ERS) satellite synthetic-aperture radar (SAR) interferograms with artificial interferograms constructed using output of a finite-element ice-shelf flow model to study the dynamics of Filchner-Ronne Ice Shelf (FRIS) near Hemmen Ice Rise (HIR), where the iceberg-calving front intersects Berkner Island (BI).
Cognitive Task Complexity and Written Output in Italian and French as a Foreign Language
ERIC Educational Resources Information Center
Kuiken, Folkert; Vedder, Ineke
2008-01-01
This paper reports on a study on the relationship between cognitive task complexity and linguistic performance in L2 writing. In the study, two models proposed to explain the influence of cognitive task complexity on linguistic performance in L2 are tested and compared: Skehan and Foster's Limited Attentional Capacity Model (Skehan, 1998; Skehan…
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
Marshall, F.E.; Wingard, G.L.
2012-01-01
The upgraded method of coupled paleosalinity and hydrologic models was applied to the analysis of the circa-1900 CE segments of five estuarine sediment cores collected in Florida Bay. Comparisons of the observed mean stage (water level) data to the paleoecology-based model's averaged output show that the estimated stage in the Everglades wetlands was 0.3 to 1.6 feet higher at different locations. Observed mean flow data compared to the paleoecology-based model output show an estimated flow into Shark River Slough at Tamiami Trail of 401 to 2,539 cubic feet per second (cfs) higher than existing flows, and at Taylor Slough Bridge an estimated flow of 48 to 218 cfs above existing flows. For salinity in Florida Bay, the difference between paleoecology-based and observed mean salinity varies across the bay, from an aggregated average salinity of 14.7 less than existing in the northeastern basin to 1.0 less than existing in the western basin near the transition into the Gulf of Mexico. When the salinity differences are compared by region, the differences between paleoecology-based and existing conditions are spatially consistent.
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. At little extra computational cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation. The modeling and migration algorithms require a smooth velocity function, so I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.
Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G
2014-01-01
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
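The fusion idea can be sketched as a weighted combination of two predictors. The paper learns the fusion with DST, GA, or GP; here, purely for illustration, the weights minimizing in-sample RMSE come from least squares, and the glucose trace and both predictors are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(600)
g_true = 120.0 + 30.0 * np.sin(t / 40.0)                    # synthetic glucose (mg/dL)
pred_a = g_true + rng.normal(scale=12.0, size=t.size)       # noisy cARX-like output
pred_b = np.roll(g_true, 5) + rng.normal(scale=9.0, size=t.size)  # lagged RNN-like output

# Fuse by the least-squares weighted sum of the two predictions.
P = np.column_stack([pred_a, pred_b])
w, *_ = np.linalg.lstsq(P, g_true, rcond=None)
fused = P @ w

rmse = lambda p: np.sqrt(np.mean((p - g_true) ** 2))
print(f"RMSE A={rmse(pred_a):.2f}  B={rmse(pred_b):.2f}  fused={rmse(fused):.2f}")
```

Because the weight vectors (1, 0) and (0, 1) are both feasible, the in-sample fused RMSE can never exceed that of either predictor alone; the evolutionary schemes in the paper pursue the same complementarity with more flexible combiners.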
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
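A stripped-down version of the multi-fidelity idea places a GP prior on the discrepancy between a cheap low-fidelity model and a few expensive high-fidelity runs, so predictions follow the cheap model away from data and are corrected near it. Both models, the kernel, and its length-scale below are hypothetical illustrations, not the paper's GFR formulation:

```python
import numpy as np

f_hi = lambda x: np.sin(3 * x) + 0.3 * x       # "high-fidelity" output
f_lo = lambda x: np.sin(3 * x)                 # low-fidelity approximation

def rbf(a, b, ell=0.75):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

X = np.linspace(0.0, 3.0, 7)                   # training inputs (hi-fi runs)
yd = f_hi(X) - f_lo(X)                         # observed discrepancies
alpha = np.linalg.solve(rbf(X, X) + 1e-8 * np.eye(len(X)), yd)

Xs = np.linspace(0.0, 3.0, 101)
post_mean = f_lo(Xs) + rbf(Xs, X) @ alpha      # posterior output estimate
err = np.max(np.abs(post_mean - f_hi(Xs)))
print(f"max |posterior mean - high-fidelity output|: {err:.4f}")
```

Seven high-fidelity evaluations suffice here because the GP only has to learn the smooth discrepancy, not the full oscillatory response; this is the economy the paper exploits with its reduced basis surrogate.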
Exercise efficiency of low power output cycling.
Reger, M; Peterman, J E; Kram, R; Byrnes, W C
2013-12-01
Exercise efficiency at low power outputs, energetically comparable to activities of daily living, can be influenced by homeostatic perturbations (e.g., weight gain/loss). However, an appropriate efficiency calculation for the low power outputs used in these studies has not been determined. Fifteen active subjects (seven females, eight males) performed 14 five-minute cycling trials: two types of seated rest (cranks vertical and horizontal), passive (motor-driven) cycling, no-chain cycling, no-load cycling, and cycling at low (10, 20, 30, 40 W) and moderate (50, 60, 80, 100, 120 W) power outputs. Mean delta efficiency was 57% for low power outputs compared to 41.3% for moderate power outputs. Means for gross (3.6%) and net (5.7%) efficiency were low at the lowest power output. At low power outputs, delta and work efficiency values exceeded theoretical values. In conclusion, at low power outputs, none of the common exercise efficiency calculations gave values comparable to theoretical muscle efficiency. However, gross efficiency and the slope and intercept of the metabolic power vs. mechanical power output regression provide insights that are still valuable when studying homeostatic perturbations. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
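The efficiency definitions being compared can be written down directly as functions of mechanical power output and metabolic power (both in watts). The example numbers are hypothetical, chosen only to land near the reported low-power values, not taken from the study's data:

```python
def gross_eff(mech, metab):
    """Gross efficiency: work rate / total metabolic rate, in percent."""
    return 100.0 * mech / metab

def net_eff(mech, metab, metab_rest):
    """Net efficiency: resting metabolism subtracted from the denominator."""
    return 100.0 * mech / (metab - metab_rest)

def delta_eff(mech1, metab1, mech2, metab2):
    """Delta efficiency: slope between two work rates."""
    return 100.0 * (mech2 - mech1) / (metab2 - metab1)

# Hypothetical low-power data: 10 W mechanical at 280 W metabolic with
# 100 W resting metabolism, and a second point of 40 W at 330 W.
print(round(gross_eff(10, 280), 1))           # 3.6 -- low, as at 10 W
print(round(net_eff(10, 280, 100), 1))        # 5.6
print(round(delta_eff(10, 280, 40, 330), 1))  # 60.0 -- exceeds muscle efficiency
```

The shallow slope of metabolic power against mechanical power at low workloads is what drives the implausibly high delta-efficiency values the abstract reports.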
Research on the Dynamic Hysteresis Loop Model of the Residence Times Difference (RTD)-Fluxgate
Wang, Yanzhang; Wu, Shujun; Zhou, Zhijian; Cheng, Defu; Pang, Na; Wan, Yunxia
2013-01-01
While working, the RTD-fluxgate core is repeatedly driven into saturation by the excitation field, in accordance with its hysteresis characteristics. In fluxgate simulation, an accurate characteristic model of the core is therefore required for precise results. Because the shape of the ideal hysteresis loop model is fixed, it cannot reflect the actual dynamic behavior of the hysteresis loop. To improve fluxgate simulation accuracy, a dynamic hysteresis loop model whose parameters have actual physical meanings is proposed, based on how the permeability changes while the fluxgate is working. Compared with the ideal hysteresis loop model, this model accounts for the dynamic features of the loop, bringing the simulation results closer to the actual output. In addition, the described model can explain the hysteresis loops of other magnetic materials; an amorphous magnetic material is used as an example in this manuscript. The model has been validated by comparing experimental output responses with those fitted by the model. PMID:24002230
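A minimal two-branch hysteresis-loop sketch illustrates why a single fixed curve cannot represent the loop: B depends on the sweep direction of H. The tanh shape and all parameters below are generic assumptions, not the paper's core model or its amorphous material data:

```python
import numpy as np

Bs, Hc, h0 = 0.6, 2.0, 1.5        # saturation, coercivity, shape (hypothetical)

def loop_B(H):
    """B for a triangular excitation sweep H: rising branch shifted right
    by the coercive field Hc, falling branch shifted left."""
    dH = np.gradient(H)
    branch = np.where(dH >= 0, -1.0, 1.0)
    return Bs * np.tanh((H + branch * Hc) / h0)

H = np.concatenate([np.linspace(-10, 10, 200), np.linspace(10, -10, 200)])
B = loop_B(H)

# Remanence: B at H ~ 0 differs between the two branches (an open loop).
Br_up = B[np.argmin(np.abs(H[:200]))]
Br_dn = B[200 + np.argmin(np.abs(H[200:]))]
print(f"remanence rising/falling: {Br_up:.3f} / {Br_dn:.3f}")
```

Making Hc and h0 functions of the excitation rate, instead of constants as here, is the kind of dynamic refinement the paper pursues.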
A cellular automata model of Ebola virus dynamics
NASA Astrophysics Data System (ADS)
Burkhead, Emily; Hawkins, Jane
2015-11-01
We construct a stochastic cellular automaton (SCA) model for the spread of the Ebola virus (EBOV). We make substantial modifications to an existing SCA model used for HIV, introduced by others and studied by the authors. We give a rigorous analysis of the similarities between models due to the spread of virus and the typical immune response to it, and the differences which reflect the drastically different timing of the course of EBOV. We demonstrate output from the model and compare it with clinical data.
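A toy SCA in the same spirit can be written in a few lines. The grid size, infection probability, and infected-cell lifespan below are invented parameters for illustration, not the paper's EBOV update rules:

```python
import numpy as np

rng = np.random.default_rng(5)

# States: 0 = healthy, 1 = infected, 2 = dead. A healthy cell is infected
# with probability 1-(1-p)^m given m infected von Neumann neighbors;
# infected cells die after a fixed number of steps.
N, steps, p, lifespan = 100, 60, 0.3, 4
state = np.zeros((N, N), np.int8)
age = np.zeros((N, N), np.int16)
state[N // 2:N // 2 + 2, N // 2:N // 2 + 2] = 1    # initial infected block

for _ in range(steps):
    inf = state == 1
    m = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0)
         + np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
    catch = (state == 0) & (rng.random((N, N)) < 1 - (1 - p) ** m)
    age[inf] += 1
    state[inf & (age > lifespan)] = 2              # infected cells die
    state[catch] = 1                               # new infections

counts = [int((state == s).sum()) for s in (0, 1, 2)]
print("healthy / infected / dead:", counts)
```

The short infected lifespan relative to the infection probability is the kind of timing parameter that distinguishes an EBOV-like course from the HIV model the authors modified.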
A Simple Sensor Model for THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Bryant, Robert G.
2009-01-01
A quasi-static (low-frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for determining rough estimates of sensor output for various loads, laminate configurations, and thicknesses.
Can Geoengineering Effectively Reduce the Land Warming?
NASA Astrophysics Data System (ADS)
Wang, W.; MacMartin, D.; Moore, J. C.; Ji, D.
2017-12-01
Permafrost, defined as ground that remains at or below 0 °C for two or more consecutive years, underlies 24% of the land in the Northern Hemisphere. Under recent climate warming, permafrost has begun to thaw, causing changes in ecosystems and impacting northern communities. Using multiple land model outputs from the Permafrost Carbon Network and applying five commonly used permafrost diagnostic methods, we assess the projected Northern Hemisphere permafrost area under the RCP8.5 scenario. The relative warming of both air and soil is compared to highlight the soil warming pattern and intensity. Using output from multiple Earth System Models under the abrupt 4×CO2, G1, PI-control, G3, G4, and RCP4.5 experiments, a preliminary attempt is also made to examine the effectiveness of geoengineering schemes in reducing land warming. Although there is uncertainty in the projected results due to model and method differences, all soil-temperature-based diagnostics show an intense decrease in permafrost area of 48% - 68% by 2100. The soil temperature projected by the more physically complex models shows a warming pattern different from that of the air, indicating that land processes mediate the land response to atmospheric change. The simulated soil temperature can be effectively cooled by 2 - 9 degrees under G1 compared with abrupt 4×CO2, and by less than 4 degrees under G3 and G4 compared with RCP4.5.
Spectral characterization of the LANDSAT-D multispectral scanner subsystems
NASA Technical Reports Server (NTRS)
Markham, B. L. (Principal Investigator); Barker, J. L.
1982-01-01
Relative spectral response data for the multispectral scanner subsystems (MSS) to be flown on LANDSAT-D and LANDSAT-D backup, the protoflight and flight models, respectively, are presented and compared to similar data for the Landsat 1, 2, and 3 subsystems. Channel-by-channel (six channels per band) outputs for soil and soybean targets were simulated and compared within each band and between scanners. The two LANDSAT-D scanners proved to be nearly identical in mean spectral response, but they exhibited some differences from the previous MSSs. Principal differences between the spectral responses of the D-scanners and previous scanners were: (1) a mean upper-band edge in the green band of 606 nm compared to previous means of 593 to 598 nm; (2) an average upper-band edge of 697 nm in the red band compared to previous averages of 701 to 710 nm; and (3) an average bandpass for the first near-IR band of 702-814 nm compared to a range of 693-793 to 697-802 nm for previous scanners. These differences caused the simulated D-scanner outputs to be 3 to 10 percent lower in the red band and 3 to 11 percent higher in the first near-IR band than previous scanners for the soybeans target. Otherwise, outputs from soil and soybean targets were only slightly affected. The D-scanners were generally more uniform from channel to channel within bands than previous scanners.
Reduced order modeling and active flow control of an inlet duct
NASA Astrophysics Data System (ADS)
Ge, Xiaoqing
Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust ducts, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization-based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable for modeling laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and the POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between the CFD response and the identified model response onto a set of POD basis vectors.
This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimate for the feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, S; Ahmad, S; Chen, Y
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies together correction factors (D/MU = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCR). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M×100%). Results: GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system.
However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
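A minimal sketch of how such a multiplicative correction-factor model can be evaluated is shown below. The calibration tables and numbers are hypothetical placeholders, not the paper's data, and only a subset of the factors (ROF, SOBPF, FSF, GACF, ISF) is included; the real model also interpolates RSF and OCR in 2D:

```python
import numpy as np

# Hypothetical calibration tables (illustrative values, not the paper's data);
# each factor is measured at commissioning and interpolated at delivery time.
ROF = 0.95                                           # relative output factor
SOBP_M = np.array([2.0, 5.0, 10.0])                  # modulation width (cm)
SOBP_F = np.array([1.00, 0.97, 0.92])
FS_A   = np.array([25.0, 100.0, 400.0])              # field size (cm^2)
FS_F   = np.array([0.96, 1.00, 1.02])
GA_DEG = np.array([0.0, 90.0, 180.0, 270.0, 360.0])  # gantry angle
GA_F   = np.array([1.000, 0.983, 1.005, 0.990, 1.000])

def predicted_output(M, field_size, gantry_deg, ssd, ssd_ref=227.0):
    """Output (cGy/MU) as a product of 1D-interpolated correction factors,
    in the spirit of ROF x SOBPF x FSF x GACF x ISF (RSF and OCR omitted)."""
    sobpf = np.interp(M, SOBP_M, SOBP_F)
    fsf = np.interp(field_size, FS_A, FS_F)
    gacf = np.interp(gantry_deg, GA_DEG, GA_F)
    isf = (ssd_ref / ssd) ** 2                       # inverse-square factor
    return ROF * sobpf * fsf * gacf * isf

out = predicted_output(M=5.0, field_size=100.0, gantry_deg=0.0, ssd=227.0)
print(round(out, 4))  # 0.9215 (= 0.95 * 0.97 at reference conditions)
```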
Comparisons between data assimilated HYCOM output and in situ Argo measurements in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Wilson, E. A.; Riser, S.
2014-12-01
This study evaluates the performance of data assimilated Hybrid Coordinate Ocean Model (HYCOM) output for the Bay of Bengal from September 2008 through July 2013. We find that while HYCOM assimilates Argo data, the model still suffers from significant temperature and salinity biases in this region. These biases are most severe in the northern Bay of Bengal, where the model tends to be too saline near the surface and too fresh at depth. The maximum magnitude of these biases is approximately 0.6 PSS. We also find that the model's salinity biases have a distinct seasonal cycle. The most problematic periods are the months following the summer monsoon (Oct-Jan). HYCOM's near surface temperature estimates compare more favorably with Argo, but significant errors exist at deeper levels. We argue that optimal interpolation will tend to induce positive salinity biases in the northern regions of the Bay. Further, we speculate that these biases are introduced when the model relaxes to climatology and assimilates real-time data.
NASA Technical Reports Server (NTRS)
Shafer, Jaclyn; Watson, Leela R.
2015-01-01
NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allows the AMU more control over and access to the model output archive. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. 
Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.
NASA Astrophysics Data System (ADS)
Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui
2017-03-01
A theoretical model based on a common pump structure is proposed to analyze the output characteristics of a diode-pumped alkali vapor laser (DPAL) and an XPAL (exciplex-pumped alkali laser). Cs-DPAL and Cs-Ar XPAL systems are used as examples. The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems with broadband pumping, whose linewidth is several times the pump linewidth of a DPAL. Operating parameters including pump intensity, temperature, cell length, mixed gas concentration, pump linewidth, and output coupler are analyzed for DPAL and XPAL systems based on the kinetic model. In addition, predictions of the selection principle for temperature and cell length are also presented. The concept of an equivalent "alkali areal density" is proposed. The results show that the output characteristics with the same alkali areal density but different temperatures turn out to be equal for either the DPAL or the XPAL system. It is the areal density that directly reflects the potential of DPAL or XPAL systems. A more detailed analysis of the similar influences of cavity parameters with the same areal density is also presented.
A Water-Withdrawal Input-Output Model of the Indian Economy.
Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu
2016-02-02
Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by northern states.
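The direct-plus-indirect accounting such a model performs rests on the Leontief inverse: the total water intensity vector is the direct intensity vector multiplied by (I − A)⁻¹. A toy sketch with hypothetical numbers, not the study's data:

```python
import numpy as np

# Toy 3-sector economy (hypothetical coefficients): agriculture, industry, services
A = np.array([[0.20, 0.05, 0.01],    # technical coefficient matrix
              [0.10, 0.30, 0.10],
              [0.05, 0.10, 0.15]])
w_direct = np.array([5.0, 0.5, 0.1]) # direct withdrawal per unit of output

# Total (direct + indirect) intensity: w_total = w_direct (I - A)^-1
leontief_inv = np.linalg.inv(np.eye(3) - A)
w_total = w_direct @ leontief_inv

final_demand = np.array([100.0, 200.0, 300.0])
direct = w_direct @ final_demand
total = w_total @ final_demand
print(f"direct withdrawal: {direct:.1f}, total incl. indirect: {total:.1f}")
```

Because the Leontief inverse equals I + A + A² + ..., total intensity is always at least the direct intensity; the gap is the indirect (supply-chain) withdrawal.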
Camera Traps Can Be Heard and Seen by Animals
Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured the ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356
A fault injection experiment using the AIRLAB Diagnostic Emulation Facility
NASA Technical Reports Server (NTRS)
Baker, Robert; Mangum, Scott; Scheper, Charlotte
1988-01-01
The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities are described. An objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of the test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggested opportunities for reducing fault latency.
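The detectability rule described above (a fault counts as detected if any test vector's output differs from the fault-free output) can be sketched as follows, with invented outputs standing in for the C/I model's:

```python
def fault_coverage(golden_outputs, faulty_runs):
    """Fraction of injected faults detected by a self-test sequence.
    A fault is detectable if any test vector's output differs from the
    fault-free (golden) output for that vector."""
    detected = sum(
        any(f != g for f, g in zip(run, golden_outputs))
        for run in faulty_runs)
    return detected / len(faulty_runs)

golden = [0, 1, 1, 0]          # fault-free outputs for 4 test vectors
runs = [[0, 1, 1, 0],          # fault never visible -> latent
        [0, 0, 1, 0],          # differs on vector 1 -> detected
        [1, 1, 1, 1]]          # differs on vectors 0 and 3 -> detected
cov = fault_coverage(golden, runs)
print(f"coverage: {cov:.2%}")  # coverage: 66.67%
```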
A software tool for determination of breast cancer treatment methods using data mining approach.
Cakır, Abdülkadir; Demirel, Burçin
2011-12-01
In this work, breast cancer treatment methods are determined using data mining. For this purpose, software was developed to help oncologists choose treatment methods for breast cancer patients. Records of 462 breast cancer patients, obtained from Ankara Oncology Hospital, are used to determine treatment methods for new patients. This dataset is processed with the Weka data mining tool. Classification algorithms are applied one by one to this dataset and the results are compared to find the proper treatment method. The developed software program, called "Treatment Assistant", uses different algorithms (IB1, Multilayer Perceptron and Decision Table) to find out which one gives the better result for each attribute to predict, through a Java NetBeans interface. Treatment methods are determined for the post-surgical treatment of breast cancer patients using this software tool. At the modeling step of the data mining process, different Weka algorithms are used for the output attributes: for the hormonotherapy output IB1, for the tamoxifen and radiotherapy outputs Multilayer Perceptron, and for the chemotherapy output the Decision Table algorithm shows the best accuracy. In conclusion, this work shows that the data mining approach can be a useful tool for medical applications, particularly at the treatment decision step. Data mining helps the doctor decide in a short time.
Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks
2018-01-01
Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603
Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Kenward, T.; Lettenmaier, D. P.
1997-01-01
The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
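The kind of derived-quantity comparison described above can be illustrated with slope: the sketch below computes slope from a smooth reference surface and from a version with added vertical error, then reports the slope RMSE. The surfaces and the 2 m error level are synthetic, not the study's DEMs:

```python
import numpy as np

def slope_degrees(dem, cell=30.0):
    """Slope magnitude (degrees) from a gridded DEM via central differences."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(42)
coords = 30.0 * np.arange(100)                   # 100 x 100 grid, 30 m cells
reference = 0.05 * np.add.outer(coords, coords)  # smooth tilted plane (m)
noisy = reference + rng.normal(0.0, 2.0, reference.shape)  # 2 m vertical error

slope_ref = slope_degrees(reference)
slope_noisy = slope_degrees(noisy)
rmse = np.sqrt(np.mean((slope_noisy - slope_ref) ** 2))
print(f"slope RMSE introduced by 2 m vertical error: {rmse:.2f} deg")
```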
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed based on polynomials up to fourth order.
ROM-1 is able to accurately reproduce the power output of the numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces the numerical results better than ROM-1; however, there is a considerable deviation from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten, and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data. ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...
2017-07-10
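ROM-1's polynomial-in-time form can be illustrated with an ordinary linear least-squares fit; the paper's richer nonlinear forms (ROM-2, ROM-3) require an iterative solver such as Levenberg–Marquardt. The power curve below is synthetic, chosen only to mimic a thermal drawdown shape:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)        # time (years)
power = 30.0 * np.exp(-t / 5.0)       # synthetic drawdown curve (MW)

# ROM-1-style surrogate: fourth-order polynomial in time, linear least squares
coeffs = np.polyfit(t, power, deg=4)
rom = np.polyval(coeffs, t)

rmse = np.sqrt(np.mean((rom - power) ** 2))
print(f"degree-4 polynomial ROM, RMSE: {rmse:.3f} MW")
```

Once the coefficients are stored, evaluating `np.polyval` is effectively instantaneous, which is the source of the 10⁴ speed-up over a full simulation that the abstract reports.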
H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua
2014-10-01
This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system, including the standard neural network model, the reference model, and the state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and the linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
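The command-generator-tracker MRAC in the paper is considerably more elaborate, but the core mechanism, an adaptive law driving the plant output toward a reference model output, can be sketched for a first-order plant with Lyapunov-rule adaptation (all plant parameters and gains here are hypothetical):

```python
# First-order plant dy/dt = -a*y + b*u tracking reference dym/dt = -am*ym + bm*r
# via Lyapunov-rule adaptive gains th1 (feedforward) and th2 (feedback).
a, b = 1.0, 1.0            # "unknown" plant parameters (hypothetical)
am, bm = 2.0, 2.0          # reference model
gamma, dt = 2.0, 0.001     # adaptation gain, Euler step
steps, window = 100_000, 10_000
y = ym = th1 = th2 = 0.0
errs = []
for k in range(steps):
    r = 1.0 if (k * dt) % 20.0 < 10.0 else -1.0   # square-wave command
    u = th1 * r - th2 * y
    e = y - ym                                    # tracking error
    th1 += dt * (-gamma * e * r)                  # adaptation laws drive e -> 0
    th2 += dt * (gamma * e * y)
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)
    errs.append(abs(e))
early = sum(errs[:window]) / window
late = sum(errs[-window:]) / window
print(f"mean |e|: first 10 s {early:.3f}, last 10 s {late:.3f}")
```

With the Lyapunov function V = e²/2 + (b/γ)(θ̃₁² + θ̃₂²)/2, these update laws give dV/dt = −am·e² ≤ 0, so the tracking error shrinks as the gains approach their ideal values θ₁* = bm/b, θ₂* = (am − a)/b.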
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable ones (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firm efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
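A DDF formulation of the kind described, expanding desirable outputs by (1+β) while contracting undesirable outputs to (1−β) without increasing inputs, can be posed as a linear program. The three-farm data below are hypothetical, and this sketch omits the paper's climatic-factor and interval-data extensions:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 3 rice farms: one input, one desirable output (yield),
# one undesirable output (e.g. nitrate leaching). Columns are farms.
X = np.array([[10.0, 12.0, 8.0]])    # inputs
Y = np.array([[20.0, 30.0, 15.0]])   # desirable outputs
B = np.array([[5.0, 4.0, 6.0]])      # undesirable outputs

def ddf_inefficiency(j):
    """Directional distance beta for farm j: the largest beta such that a
    convex combination of farms yields >= (1+beta) of j's desirable output,
    exactly (1-beta) of its undesirable output, and uses no more input."""
    n = X.shape[1]
    x0, y0, b0 = X[:, j], Y[:, j], B[:, j]
    c = np.append(np.zeros(n), -1.0)                   # maximize beta
    A_ub = np.block([[-Y, y0[:, None]],                # Y lam >= (1+beta) y0
                     [X, np.zeros((X.shape[0], 1))]])  # X lam <= x0
    b_ub = np.concatenate([-y0, x0])
    A_eq = np.hstack([B, b0[:, None]])                 # B lam = (1-beta) b0
    res = linprog(c, A_ub, b_ub, A_eq, b0,
                  bounds=[(0.0, None)] * (n + 1))
    return res.x[-1]

beta = ddf_inefficiency(2)
print(f"DDF inefficiency of farm 2: {beta:.3f}")
```

β = 0 means the farm is on the frontier; β > 0 measures how far its goods could expand and its bads contract simultaneously.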
Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla
2015-01-01
Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. 
We concluded that the triple-ensemble probabilistic approach, which accounts for uncertainties from multiple important sources, provides more comprehensive information for quantifying uncertainties in climate change impact assessments than conventional approaches that are deterministic or account for only one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
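The ANOVA-based variance partitioning described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's crop-model outputs: a purely additive ensemble over 7 model structures, 3 parameter sets and 8 climate projections, whose total variance decomposes exactly into the three main effects.

```python
import numpy as np

# Synthetic additive ensemble: 7 crop models x 3 parameter sets x 8 climate
# projections (effect sizes are assumptions, not the paper's estimates)
rng = np.random.default_rng(0)
n_m, n_p, n_c = 7, 3, 8
y = (rng.normal(0, 4, (n_m, 1, 1))      # model-structure effect
     + rng.normal(0, 1, (1, n_p, 1))    # parameter effect
     + rng.normal(0, 2, (1, 1, n_c)))   # climate-projection effect

grand = y.mean()
total_ss = ((y - grand) ** 2).sum()

def main_effect_ss(axis_keep):
    """Sum of squares of the main effect of one factor."""
    other = tuple(a for a in range(3) if a != axis_keep)
    means = y.mean(axis=other)           # factor-level means
    n_reps = y.size // means.size        # observations per factor level
    return n_reps * ((means - grand) ** 2).sum()

frac = {}
for name, ax in [("model structure", 0), ("parameters", 1), ("climate", 2)]:
    frac[name] = main_effect_ss(ax) / total_ss
    print(f"{name}: {frac[name]:.1%} of ensemble variance")
```

Because the synthetic ensemble is additive with no interactions, the three fractions sum to one; in a real ensemble the remainder would be interaction variance.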
Pandemic recovery analysis using the dynamic inoperability input-output model.
Santos, Joost R; Orsi, Mark J; Bond, Erik J
2009-12-01
Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model(1,2) and the dynamic inoperability input-output model (DIIM).(3) These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, several aspects of the current formulation do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
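The DIIM recurrence underlying this kind of analysis can be sketched in a few lines. The two-sector interdependency matrix, resilience coefficients and demand perturbation below are illustrative assumptions, not values from the article: inoperability decays toward the interdependent equilibrium q = (I − A*)⁻¹ c*.

```python
import numpy as np

# Illustrative two-sector example (A*, K and c* are assumed values)
A_star = np.array([[0.1, 0.3],
                   [0.2, 0.1]])         # normalized interdependency matrix
K = np.diag([0.5, 0.3])                 # sector resilience (recovery) rates
c_star = np.array([0.2, 0.0])           # demand-side perturbation

q = np.array([0.4, 0.1])                # initial inoperability (e.g. workforce loss)
trajectory = [q.copy()]
for t in range(60):
    # DIIM recurrence: q(t+1) = q(t) + K [A* q(t) + c*(t) - q(t)]
    q = q + K @ (A_star @ q + c_star - q)
    trajectory.append(q.copy())

q_eq = np.linalg.solve(np.eye(2) - A_star, c_star)  # equilibrium inoperability
print("final q:", np.round(q, 4), " equilibrium:", np.round(q_eq, 4))
```

The recovery path in `trajectory` is what a pandemic-recovery analysis would inspect; sector-specific workforce recovery enters through the resilience matrix K.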
Bayesian Processor of Output for Probabilistic Quantitative Precipitation Forecasting
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.; Maranzano, C. J.
2006-05-01
The Bayesian Processor of Output (BPO) is a new, theoretically based technique for probabilistic forecasting of weather variates. It processes output from a numerical weather prediction (NWP) model and optimally fuses it with climatic data in order to quantify uncertainty about a predictand. The BPO is being tested by producing Probabilistic Quantitative Precipitation Forecasts (PQPFs) for a set of climatically diverse stations in the contiguous U.S. For each station, the PQPFs are produced for the same 6-h, 12-h, and 24-h periods up to 84-h ahead for which operational forecasts are produced by the AVN-MOS (Model Output Statistics technique applied to output fields from the Global Spectral Model run under the code name AVN). The inputs into the BPO are estimated as follows. The prior distribution is estimated from a (relatively long) climatic sample of the predictand; this sample is retrieved from the archives of the National Climatic Data Center. The family of the likelihood functions is estimated from a (relatively short) joint sample of the predictor vector and the predictand; this sample is retrieved from the same archive that the Meteorological Development Laboratory of the National Weather Service utilized to develop the AVN-MOS system. This talk gives a tutorial introduction to the principles and procedures behind the BPO, and highlights some results from the testing: a numerical example of the estimation of the BPO, and a comparative verification of the BPO forecasts and the MOS forecasts. It concludes with a list of demonstrated attributes of the BPO (vis-à-vis the MOS): more parsimonious definitions of predictors, more efficient extraction of predictive information, better representation of the distribution function of the predictand, and equal or better performance (in terms of calibration and informativeness).
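The prior-likelihood fusion at the heart of the BPO can be illustrated with the simplest conjugate case. This is a hedged stand-in: the actual BPO fits meta-Gaussian distributions to precipitation data, whereas the sketch below uses a Gaussian climatic prior and a Gaussian likelihood centered on the NWP output (all numbers are assumed).

```python
from math import sqrt

def fuse(prior_mean, prior_var, model_output, error_var):
    """Posterior mean/variance of the predictand given one NWP predictor,
    for a Gaussian prior and Gaussian model-error likelihood."""
    w = prior_var / (prior_var + error_var)      # weight on the model output
    post_mean = (1 - w) * prior_mean + w * model_output
    post_var = prior_var * error_var / (prior_var + error_var)
    return post_mean, post_var

# Climatology says 5 units with variance 9; the NWP model says 12,
# with error variance 3 (illustrative values)
m, v = fuse(prior_mean=5.0, prior_var=9.0, model_output=12.0, error_var=3.0)
print(f"posterior mean {m:.2f}, std {sqrt(v):.2f}")
```

Note the key property the BPO exploits: the posterior variance is smaller than both the climatic and the model-error variance, i.e. fusing the two sources always sharpens the forecast.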
Characterization of Magma-Driven Hydrothermal Systems at Oceanic Spreading Centers
NASA Astrophysics Data System (ADS)
Farough, A.; Lowell, R. P.; Corrigan, R.
2012-12-01
Fluid circulation in high-temperature hydrothermal systems involves complex water-rock chemical reactions and phase separation. Numerical modeling of reactive transport in multi-component, multiphase systems is required to obtain a full understanding of the characteristics and evolution of hydrothermal vent systems. We use a single-pass parameterized model of high-temperature hydrothermal circulation at oceanic spreading centers constrained by observational parameters such as vent temperature, heat output, and vent field area, together with surface area and depth of the sub-axial magma chamber, to deduce fundamental hydrothermal parameters such as mass flow rate, bulk permeability, conductive boundary layer thickness at the base of the system, magma replenishment rate, and residence time in the discharge zone. All of these key subsurface characteristics are known for fewer than 10 sites out of 300 known hydrothermal systems. The principal limitations of this approach stem from the uncertainty in heat output and vent field area. For systems where data are available on partitioning of heat and chemical output between focused and diffuse flow, we determined the fraction of high-temperature vent fluid incorporated into diffuse flow using a two-limb single-pass model. For EPR 9°50′N and ASHES, the diffuse flow temperatures calculated assuming conservative mixing are nearly equal to the observed temperatures, indicating that approximately 80%-90% of the hydrothermal heat output occurs as high-temperature flow derived from magmatic heat even though most of the heat output appears as low-temperature diffuse discharge. For the Main Endeavour Field and Lucky Strike, diffuse flow fluids show significant conductive cooling and heating, respectively.
Finally, we calculate the transport of various geochemical constituents in focused and diffuse flow at the vent field scale and compare the results with estimates of geochemical transports from the Rainbow hydrothermal field where diffuse flow is absent.
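The conservative-mixing assumption used above reduces to a simple lever rule: if diffuse fluid is an advective mixture of high-temperature end-member fluid and seawater, its temperature fixes the mass fraction of vent fluid. The sketch below uses illustrative temperatures, not data from the sites named in the abstract.

```python
def vent_fluid_fraction(t_diffuse, t_hot, t_seawater=2.0):
    """Mass fraction of high-temperature vent fluid in diffuse flow,
    assuming conservative (purely advective) mixing with seawater:
    T_d = x*T_h + (1-x)*T_0  =>  x = (T_d - T_0)/(T_h - T_0)."""
    return (t_diffuse - t_seawater) / (t_hot - t_seawater)

# Illustrative numbers: 20 degC diffuse fluid fed by a 350 degC end-member
# fluid mixing with 2 degC seawater
x = vent_fluid_fraction(t_diffuse=20.0, t_hot=350.0)
print(f"high-T fluid fraction in diffuse flow: {x:.3f}")
```

Departures of observed diffuse temperatures from this mixing line are what the abstract interprets as conductive cooling (observed colder) or heating (observed warmer).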
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Component Analysis (ICA), for choosing the set of filters that yield the most informative marginals, meaning that the product of the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that, compared to Principal Component Analysis (PCA), ICA provides superior performance for modeling of natural and synthetic textures.
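The "product of marginals approximates the joint density" criterion can be made concrete with a mutual-information check: MI is exactly the KL divergence between the joint and the product of its marginals, and is zero iff they factorize. The sketch below (synthetic signals, not steerable-filter outputs) compares a linearly mixed, dependent pair against independent sources.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y): KL divergence between the joint
    density and the product of its marginals (0 iff independent)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

n = 20000
s = rng.laplace(size=(2, n))                      # sparse, independent "sources"
mixed = np.array([[1.0, 0.8], [0.4, 1.0]]) @ s    # overlapping "filter" outputs

mi_mixed = mutual_information(mixed[0], mixed[1])
mi_indep = mutual_information(s[0], s[1])
print(f"MI of mixed outputs {mi_mixed:.3f} vs independent sources {mi_indep:.3f}")
```

An ICA-style filter choice would rotate the representation toward the low-MI configuration, where marginal histograms alone describe the texture well.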
Response of a piezoelectric pressure transducer to IR laser beam impingement
NASA Technical Reports Server (NTRS)
Smith, William C.; Leiweke, Robert J.; Beeson, Harold
1992-01-01
The non-pressure response of a PCB Model 113A transducer to a far infrared radiation impulse from a carbon dioxide laser was investigated. Incident radiation was applied both to the bare transducer diaphragm and to coated diaphragms. Coatings included two common ablative materials and a reflective gold coating. High-flux radiation impulses induced an immediate brief negative output followed by a longer-duration positive output. Both timing and amplitude of the responses will be discussed, and the effects of coatings will be compared. Bursts of blackbody radiation from a 1500 K source produced qualitatively similar responses.
NASA Astrophysics Data System (ADS)
Rydberg, Anders
1990-03-01
Second-harmonic InP-TED oscillators are investigated for frequencies above 110 GHz using different mounts and TEDs. It is found that state-of-the-art output powers, comparable to those of Schottky-varactor multipliers, of more than 2 mW can be generated above 190 GHz by reducing the capsule parasitics. Output power is observed at frequencies up to 216 GHz. The tuning range above 110 GHz is found to be more than 40 percent. Using theoretical waveguide models, the tuning behavior of the oscillators is also investigated.
Numerical modeling of rapidly varying flows using HEC-RAS and WSPG models.
Rao, Prasada; Hromadka, Theodore V
2016-01-01
The performance of two popular hydraulic models (HEC-RAS and WSPG) for modeling hydraulic jump in an open channel is investigated. The numerical solutions are compared with a new experimental data set obtained for varying channel bottom slopes and flow rates. Both models satisfactorily predict the flow depths and the location of the jump. The results indicate that the output of both numerical models is sensitive to the chosen roughness coefficient. For this application, the WSPG model is easier to implement, requiring fewer input variables.
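The quantity both models must reproduce is the sequent-depth relation of the jump. A minimal sketch of the momentum (Belanger) equation for a rectangular channel, with assumed approach-flow values rather than the study's experimental data:

```python
from math import sqrt

def conjugate_depth(y1, fr1):
    """Sequent (downstream) depth of a hydraulic jump in a rectangular
    channel, from the momentum (Belanger) equation."""
    return 0.5 * y1 * (sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def froude(q_unit, y, g=9.81):
    """Froude number for unit discharge q (m^2/s) at depth y (m)."""
    return q_unit / (y * sqrt(g * y))

# Illustrative supercritical approach flow (values assumed, not measured)
y1, q = 0.10, 0.25
fr1 = froude(q, y1)
y2 = conjugate_depth(y1, fr1)
print(f"Fr1 = {fr1:.2f}, jump from {y1:.2f} m to {y2:.2f} m")
```

A jump only forms for Fr1 > 1; at Fr1 = 1 the relation returns y2 = y1, i.e. no jump.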
Coupling between the lower and middle atmosphere observed during a very severe cyclonic storm 'Madi'
NASA Astrophysics Data System (ADS)
Hima Bindu, H.; Venkat Ratnam, M.; Yesubabu, V.; Narayana Rao, T.; Eswariah, S.; Naidu, C. V.; Vijaya Bhaskara Rao, S.
2018-04-01
Synoptic-scale systems like cyclones can generate a broad spectrum of waves, which propagate from their source into the middle atmosphere. Coupling between the lower and middle atmosphere over Tirupati (13.6°N, 79.4°E) is studied during a very severe cyclonic storm, 'Madi' (06-13 December 2013), using Weather Research and Forecasting (WRF) model assimilated fields and simultaneous meteor radar observations. Since measurements with high temporal and spatial resolution are difficult to obtain during such disturbances, WRF model simulations are obtained by assimilating conventional and satellite observations using the 3DVAR technique. The obtained outputs are validated for their consistency in predicting cyclone track and vertical structure by comparing them with independent observations. The good agreement between the assimilated outputs and independent observations prompted us to use the model outputs to investigate the gravity waves (GWs) and tides over Tirupati. GWs with periods of 1-5 h, showing clear downward phase propagation, are observed in the lower stratosphere. These upward propagating waves obtained from the model are also noticed in the meteor radar horizontal wind observations in the MLT region (70-110 km). Interestingly, an enhancement in the tidal activity in both the zonal and meridional winds in the mesosphere and lower thermosphere (MLT) region is noticed during the peak cyclonic activity, except for a suppression of the semi-diurnal tide in the meridional wind. A very good agreement in the tidal activity is also observed in the horizontal winds in the troposphere and lower stratosphere from the WRF model outputs and ERA5. These results thus provide evidence of the vertical coupling of the lower and middle atmosphere induced by the tropical cyclone.
Regulated dc-to-dc converter for voltage step-up or step-down with input-output isolation
NASA Technical Reports Server (NTRS)
Feng, S. Y.; Wilson, T. G. (Inventor)
1973-01-01
A closed loop regulated dc-to-dc converter employing an unregulated two winding inductive energy storage converter is provided by using a magnetically coupled multivibrator acting as duty cycle generator to drive the converter. The multivibrator is comprised of two transistor switches and a saturable transformer. The output of the converter is compared with a reference in a comparator which transmits a binary zero until the output exceeds the reference. When the output exceeds the reference, the binary output of the comparator drives transistor switches to turn the multivibrator off. The multivibrator is unbalanced so that a predetermined transistor will always turn on first when the binary feedback signal becomes zero.
Preliminary results and assessment of the MAR outputs over High Mountain Asia
NASA Astrophysics Data System (ADS)
Linares, M.; Tedesco, M.; Margulis, S. A.; Cortés, G.; Fettweis, X.
2017-12-01
Lack of ground measurements has made the use of regional climate models (RCMs) over High Mountain Asia (HMA) pivotal for understanding the impact of climate change on the hydrological cycle and on the cryosphere. Here, we show an analysis of the assessment of the outputs of the Modèle Atmosphérique Régional (MAR) RCM over the HMA region as part of the NASA-funded project 'Understanding and forecasting changes in High Mountain Asia snow hydrology via a novel Bayesian reanalysis and modeling approach'. The first step was to evaluate the impact of the different forcings on MAR outputs. To this aim, we performed simulations for the years 2007-2008 and 2014-2015, forcing MAR at its boundaries either with reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF) or from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2). The comparison between the outputs obtained with the two forcings indicates that the impact on MAR simulations depends on the specific parameter. For example, in the case of surface pressure the maximum percentage error is 0.09%, while the 2-m air temperature has a maximum percentage error of 103.7%. Next, we compared the MAR outputs with reanalysis data fields over the region of interest. In particular, we evaluated the following parameters: surface pressure, snow depth, total cloud cover, 2-m temperature, horizontal wind speed, vertical wind speed, wind speed, surface net solar radiation, skin temperature, surface sensible heat flux, and surface latent heat flux. Lastly, we report results concerning the assessment of MAR surface albedo and surface temperature over the region through MODIS remote sensing products. Next steps are to determine whether RCMs and reanalysis datasets are effective at capturing snow and snowmelt runoff processes in the HMA region through a comparison with in situ datasets. This will help determine what refinements are necessary to improve RCM outputs.
Shiba, Kenji; Nukaya, Masayuki; Tsuji, Toshio; Koshiji, Kohji
2008-01-01
This paper reports on the current density and specific absorption rate (SAR) analysis of biological tissue surrounding an air-core transcutaneous transformer for an artificial heart. The electromagnetic field in the biological tissue is analyzed by the transmission line modeling method, and the current density and SAR as a function of frequency, output voltage, output power, and coil dimension are calculated. The biological tissue of the model has three layers: skin, fat, and muscle. The simulation results show SARs to be very small for any given transmission conditions, about 2-14 mW/kg, compared to the basic restrictions of the International Commission on Non-Ionizing Radiation Protection (ICNIRP; 2 W/kg), while the current density divided by the ICNIRP's basic restrictions becomes smaller as the frequency rises and the output voltage falls. It is possible to transfer energy below the ICNIRP's basic restrictions when the frequency is over 250 kHz and the output voltage is under 24 V. Also, the tissue layer in which the current density is maximal differs with frequency: muscle at low frequencies and skin at high frequencies, with the boundary in the vicinity of 600-1000 kHz.
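The two dosimetric quantities compared against the ICNIRP limits follow directly from the local field: J = σE and SAR = σ|E|²/ρ. A minimal sketch with assumed muscle-like tissue properties (not the paper's transmission-line-model values):

```python
def sar(sigma, e_field, density):
    """Specific absorption rate (W/kg) from conductivity sigma (S/m),
    E-field magnitude (V/m) and tissue mass density (kg/m^3)."""
    return sigma * e_field ** 2 / density

def current_density(sigma, e_field):
    """Induced current density J = sigma * E (A/m^2)."""
    return sigma * e_field

# Illustrative muscle-like values (assumed): sigma = 0.6 S/m, E = 1 V/m,
# density = 1050 kg/m^3
sigma, e, rho = 0.6, 1.0, 1050.0
print(f"SAR = {sar(sigma, e, rho) * 1e3:.3f} mW/kg, "
      f"J = {current_density(sigma, e):.2f} A/m^2")
```

Even this rough estimate lands orders of magnitude below the 2 W/kg ICNIRP basic restriction, consistent with the abstract's mW/kg-scale results.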
Modeling the water isotopes in Greenland precipitation 1959-2001 with the meso-scale model REMO-iso
NASA Astrophysics Data System (ADS)
Sjolte, J.; Hoffmann, G.; Johnsen, S. J.; Vinther, B. M.; Masson-Delmotte, V.; Sturm, C.
2011-09-01
Ice core studies have proved the δ18O in Greenland precipitation to be correlated to the phase of the North Atlantic Oscillation (NAO). This subject has also been investigated in modeling studies. However, these studies have either had severe biases in the δ18O levels, or have not been designed to be compared directly with observations. In this study we nudge a meso-scale climate model fitted with stable water isotope diagnostics (REMO-iso) to follow the actual weather patterns for the period 1959-2001. We evaluate this simulation using meteorological observations from stations along the Greenland coast, and δ18O from several Greenland ice core stacks and Global Network In Precipitation (GNIP) data from Greenland, Iceland and Svalbard. The REMO-iso output explains up to 40% of the interannual δ18O variability observed in ice cores, which is comparable to the model performance for precipitation. In terms of reproducing the observed variability the global model, ECHAM4-iso performs on the same level as REMO-iso. However, REMO-iso has smaller biases in δ18O and improved representation of the observed spatial δ18O-temperature slope compared to ECHAM4-iso. Analysis of the main modes of winter variability of δ18O shows a coherent signal in Central and Western Greenland similar to results from ice cores. The NAO explains 20% of the leading δ18O pattern. Based on the model output we suggest that methods to reconstruct the NAO from Greenland ice cores employ both δ18O and accumulation records.
Comparison of numerical simulations to experiments for atomization in a jet nebulizer.
Lelong, Nicolas; Vecellio, Laurent; Sommer de Gélicourt, Yann; Tanguy, Christian; Diot, Patrice; Junqua-Moullet, Alexandra
2013-01-01
The development of jet nebulizers for medical purposes is an important challenge of aerosol therapy. The performance of a nebulizer is characterized by its output rate of droplets with a diameter under 5 µm. However, the optimization of this parameter through experiments has reached a plateau. The purpose of this study is to design a numerical model simulating the nebulization process and to compare it with experimental data. Such a model could provide a better understanding of the atomization process and the parameters influencing the nebulizer output. A model based on the Updraft nebulizer (Hudson) was designed with ANSYS Workbench. Boundary conditions were set with experimental data, then transient 3D calculations were run on a 4 µm mesh with ANSYS Fluent. Two air flow rates (2 L/min and 8 L/min, the limits of the operating range) were considered to account for different turbulence regimes. Numerical and experimental results were compared according to phenomenology and droplet size. The behavior of the liquid was compared to images acquired through shadowgraphy with a CCD camera. Three experimental methods, laser diffractometry, phase Doppler anemometry (PDA) and shadowgraphy, were used to characterize the droplet size distributions. Camera images showed similar patterns as numerical results. Droplet sizes obtained numerically are overestimated relative to PDA and diffractometry, which only consider spherical droplets. However, at both flow rates, size distributions extracted from numerical image processing were similar to distributions obtained from shadowgraphy image processing. The simulation then provides a good understanding and prediction of the phenomena involved in the fragmentation of droplets over 10 µm. The laws of dynamics apply to droplets down to 1 µm, so we can assume the continuity of the distribution and extrapolate the results for droplets between 1 and 10 µm. Thus, this model could help predict nebulizer output with defined geometrical and physical parameters.
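The figure of merit named above, the fraction of output below 5 µm, is commonly read off a log-normal fit to the droplet size distribution. This is a hedged illustration with assumed distribution parameters (the abstract does not report a mass median diameter or geometric standard deviation):

```python
from math import erf, log, sqrt

def fraction_below(d_cut, mmd, gsd):
    """Volume fraction of a log-normal droplet distribution below d_cut,
    given mass median diameter (MMD) and geometric standard deviation (GSD)."""
    z = log(d_cut / mmd) / (sqrt(2.0) * log(gsd))
    return 0.5 * (1.0 + erf(z))

# Illustrative nebulizer distribution: MMD 4 um, GSD 2.0 (assumed values)
f5 = fraction_below(5.0, mmd=4.0, gsd=2.0)
print(f"fraction of output below 5 um: {f5:.1%}")
```

By construction, exactly half the volume lies below the MMD, which gives a quick sanity check on the fit.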
Hamdy, M; Hamdan, I
2015-07-01
In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via a Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design the fuzzy controller. The stability conditions of the overall closed-loop T-S fuzzy bilinear model are formulated in terms of a Lyapunov function via linear matrix inequalities (LMIs). The control law is robustified in the H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMIs. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for the d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Sun, Shanxia; Delgado, Michael S; Sesmero, Juan P
2016-07-15
Input- and output-based economic policies designed to reduce water pollution from fertilizer runoff by adjusting management practices are theoretically justified and well-understood. Yet, in practice, adjustment in fertilizer application or land allocation may be sluggish. We provide practical guidance for policymakers regarding the relative magnitude and speed of adjustment of input- and output-based policies. Through a dynamic dual model of corn production that takes fertilizer as one of several production inputs, we measure the short- and long-term effects of policies that affect the relative prices of inputs and outputs through the short- and long-term price elasticities of fertilizer application, and also the total time required for different policies to affect fertilizer application through the adjustment rates of capital and land. These estimates allow us to compare input- and output-based policies based on their relative cost-effectiveness. Using data from Indiana and Illinois, we find that input-based policies are more cost-effective than their output-based counterparts in achieving a target reduction in fertilizer application. We show that input- and output-based policies yield adjustment in fertilizer application at the same speed, and that most of the adjustment takes place in the short-term. Copyright © 2016 Elsevier Ltd. All rights reserved.
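The short- versus long-term distinction drawn above is the standard partial-adjustment logic: a short-run elasticity and an adjustment rate jointly imply the long-run elasticity and the time needed to close most of the gap to the new equilibrium. The numbers below are illustrative assumptions, not the paper's Indiana/Illinois estimates.

```python
from math import log

def long_run_elasticity(short_run, adjustment_rate):
    """Long-run elasticity implied by partial adjustment:
    x_t - x_{t-1} = lam * (x* - x_{t-1}), so long-run = short-run / lam."""
    return short_run / adjustment_rate

def years_to_close(fraction, adjustment_rate):
    """Periods until `fraction` of the gap to equilibrium is closed."""
    return log(1.0 - fraction) / log(1.0 - adjustment_rate)

eps_sr, lam = -0.3, 0.6   # assumed short-run elasticity and adjustment rate
print(f"long-run elasticity {long_run_elasticity(eps_sr, lam):.2f}, "
      f"90% adjustment in {years_to_close(0.9, lam):.1f} years")
```

A high adjustment rate is what produces the paper's finding that "most of the adjustment takes place in the short-term".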
NASA Astrophysics Data System (ADS)
Vislocky, Robert L.; Fritsch, J. Michael
1997-12-01
A prototype advanced model output statistics (MOS) forecast system that was entered in the 1996-97 National Collegiate Weather Forecast Contest is described, and its performance is compared to that of widely available objective guidance and to contest participants. The prototype system uses an optimal blend of aviation (AVN) and nested grid model (NGM) MOS forecasts, explicit output from the NGM and Eta guidance, and the latest surface weather observations from the forecast site. The forecasts are totally objective and can be generated quickly on a personal computer. Other "objective" forms of guidance tracked in the contest are 1) the consensus forecast (i.e., the average of the forecasts from all of the human participants), 2) the combination of NGM raw output (for precipitation forecasts) and NGM MOS guidance (for temperature forecasts), and 3) the combination of Eta Model raw output (for precipitation forecasts) and AVN MOS guidance (for temperature forecasts). Results show that the advanced MOS system finished in 20th place out of 737 original entrants, or better than approximately 97% of the human forecasters who entered the contest. Moreover, the advanced MOS system was slightly better than consensus (23rd place). The fact that an objective forecast system finished ahead of consensus is a significant accomplishment, since consensus is traditionally a very formidable "opponent" in forecast competitions. Equally significant is that the advanced MOS system was superior to the traditional guidance products available from the National Centers for Environmental Prediction (NCEP). Specifically, the combination of NGM raw output and NGM MOS guidance finished in 175th place, and the combination of Eta Model raw output and AVN MOS guidance finished in 266th place.
The latter result is most intriguing since the proposed elimination of all NGM products would likely result in a serious degradation of objective products disseminated by NCEP, unless they are replaced with equal or better substitutes. On the other hand, the positive performance of the prototype advanced MOS system shows that it is possible to create a single objective product that is not only superior to currently available objective guidance products, but is also on par with some of the better human forecasters.
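The "optimal blend" of two MOS products can be sketched as a least-squares regression of the verifying observation on the two guidance forecasts. The synthetic verification sample below is an assumption for illustration; the error characteristics stand in for AVN MOS and NGM MOS and are not the contest statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic verification sample: truth plus two imperfect guidance products
truth = rng.normal(15.0, 8.0, 500)
avn = truth + rng.normal(0.5, 2.0, 500)    # slight warm bias, more skillful
ngm = truth + rng.normal(-1.0, 3.0, 500)   # cool bias, noisier

# Fit blend weights (intercept removes bias) by least squares
X = np.column_stack([np.ones_like(truth), avn, ngm])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
blend = X @ coef

def mae(f):
    return float(np.mean(np.abs(f - truth)))

print("MAE  avn %.2f  ngm %.2f  blend %.2f" % (mae(avn), mae(ngm), mae(blend)))
```

Because the regression simultaneously removes each product's bias and weights them by skill, the blend beats either input alone, the same mechanism that lets a single objective product rival consensus.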
System dynamic modeling: an alternative method for budgeting.
Srijariya, Witsanuchai; Riewpaiboon, Arthorn; Chaikledkaew, Usa
2008-03-01
To construct, validate, and simulate a system dynamic financial model and compare it against the conventional method. The study was a cross-sectional analysis of secondary data retrieved from the National Health Security Office (NHSO) in the fiscal year 2004. The sample consisted of all emergency patients who received emergency services outside their registered hospital catchment area. The dependent variable used was the amount of reimbursed money. Two types of model were constructed, namely, a system dynamic model using the STELLA software and a multiple linear regression model, and the outputs of the two methods were compared. The study covered 284,716 patients from various levels of providers. The system dynamic model was capable of producing various types of outputs, for example, financial and graphical analyses. For the regression analysis, statistically significant predictors included service type (outpatient or inpatient), operating procedures, length of stay, illness type (accident or not), hospital characteristics, age, and hospital location (adjusted R² = 0.74). The total budget obtained from the system dynamic model and the regression model was US$12,159,614.38 and US$7,301,217.18, respectively, whereas the actual NHSO reimbursement cost was US$12,840,805.69. The study illustrated that the system dynamic model is a useful financial management tool, although it is not easy to construct. The model is not only more accurate in prediction but also more capable of analyzing large and complex real-world situations than the conventional method.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014), we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
NASA Technical Reports Server (NTRS)
Dreher, Joseph G.
2009-01-01
For expedience in delivering dispersion guidance in a diversity of operational situations, National Weather Service Melbourne (MLB) and the Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests guidance from the National Centers for Environmental Prediction (NCEP), such as the North American Mesoscale (NAM) and the Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecasting (WRF) model output into HYSPLIT format.
NASA Astrophysics Data System (ADS)
Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio
2010-05-01
Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
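The core idea of a statistical bias correction — forcing corrected model output to share the observed intensity distribution — can be illustrated with a minimal empirical quantile-mapping sketch. The gamma-distributed data below are synthetic stand-ins, not WATCH observations, and the actual WATCH methodology differs in detail:

```python
import numpy as np

# Synthetic "observed" and biased "simulated" daily precipitation intensities.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=10000)
sim = rng.gamma(shape=2.0, scale=4.5, size=10000)   # wet-biased model output

def quantile_map(x, sim_ref, obs_ref):
    # Evaluate each value on the model's empirical CDF, then invert
    # that probability on the observed distribution.
    q = (np.searchsorted(np.sort(sim_ref), x) / len(sim_ref)).clip(0.0, 1.0)
    return np.quantile(obs_ref, q)

corrected = quantile_map(sim, sim, obs)
# The corrected field now follows the observed intensity distribution.
```

Because the mapping is monotonic, the day-to-day rank ordering of the simulation is preserved while its distribution is pulled onto the observations.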
Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.
Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard
2011-02-01
The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
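The compressive input/output function followed by a linear summation stage can be sketched as below; the power-law form and the compression exponent are illustrative assumptions, not the fitted values from the study:

```python
import math

def db_to_intensity(db):
    return 10 ** (db / 10)

def combined_masking(single_masking_db, n_maskers, compression=0.2):
    # Compressive input/output stage: internal effect grows as intensity**compression.
    single_internal = db_to_intensity(single_masking_db) ** compression
    total_internal = n_maskers * single_internal      # linear summation stage
    # Invert the compressive I/O function to express combined masking in dB.
    return 10 * math.log10(total_internal ** (1 / compression))

# One masker reproduces its own 8 dB of masking; four maskers under strong
# compression predict far more than the ~14 dB expected from linear additivity.
```

The stronger the compression (smaller exponent), the larger the predicted excess masking, which is the direction of the interpretation given above for very short stimuli.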
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
On cup anemometer rotor aerodynamics.
Pindado, Santiago; Pérez, Javier; Avila-Sanchez, Sergio
2012-01-01
The influence of anemometer rotor shape parameters, such as the cups' front area or their center rotation radius, on the anemometer's performance was analyzed. This analysis was based on calibrations performed on two different anemometers (one based on a magnet system output signal, and the other on an opto-electronic system output signal), tested with 21 different rotors. The results were compared to those of classical analytical models. The results clearly showed a linear dependency of both calibration constants, the slope and the offset, on the cups' center rotation radius; the influence of the front area of the cups was also observed. The analytical model of Kondo et al. proved to be accurate when based on precise data on the aerodynamic behavior of a rotor's cup.
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
NASA Astrophysics Data System (ADS)
Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng
2018-05-01
Based on the Wiedemann effect and the inverse magnetostrictive effect, an output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated in different magnetic fields, and the calculated results are found to be in agreement with the experimental ones. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostrictive difference, (λl-λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively, and the output voltage of the Fe-Ga wire sensor is obviously higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to provide guidance for the optimization design of the sensor.
A Solar-luminosity Model and Climate
NASA Technical Reports Server (NTRS)
Perry, Charles A.
1990-01-01
Although the mechanisms of climatic change are not completely understood, the potential causes include changes in the Sun's luminosity. Solar activity in the form of sunspots, flares, proton events, and radiation fluctuations has displayed periodic tendencies. Two types of proxy climatic data that can be related to periodic solar activity are varved geologic formations and freshwater diatom deposits. A model for solar luminosity was developed by using the geometric progression of harmonic cycles that is evident in solar and geophysical data. The model assumes that variation in global energy input is a result of many periods of individual solar-luminosity variations. The 0.1-percent variation of the solar constant measured during the last sunspot cycle provided the basis for determining the amplitude of each luminosity cycle. Model output is a summation of the amplitudes of each cycle of a geometric progression of harmonic sine waves that are referenced to the 11-year average solar cycle. When the last eight cycles in Emiliani's oxygen-18 variations from deep-sea cores were standardized to the average length of glaciations during the Pleistocene (88,000 years), correlation coefficients with the model output ranged from 0.48 to 0.76. In order to calibrate the model to real time, model output was graphically compared to indirect records of glacial advances and retreats during the last 24,000 years and with sea-level rises during the Holocene. Carbon-14 production during the last millennium and elevations of the Great Salt Lake for the last 140 years demonstrate significant correlations with modeled luminosity. Major solar flares during the last 90 years match well with the time-calibrated model.
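The summation of a geometric progression of harmonic sine waves referenced to the 11-year cycle can be sketched as below. The progression ratio and per-cycle amplitudes are hypothetical placeholders; only the 11-year base period and the 0.1-percent amplitude scale come from the text:

```python
import numpy as np

BASE_PERIOD = 11.0   # years: average solar cycle (from the text)
RATIO = 2.0          # hypothetical ratio of the geometric progression
N_HARMONICS = 8

def luminosity_anomaly(t_years, amp0=0.001):
    # Sum sine waves whose periods form a geometric progression anchored on
    # the 11-year cycle, each scaled from the 0.1% solar-constant variation.
    t = np.asarray(t_years, dtype=float)
    total = np.zeros_like(t)
    for k in range(N_HARMONICS):
        period = BASE_PERIOD * RATIO ** k
        total += amp0 * np.sin(2 * np.pi * t / period)
    return total   # fractional variation in modeled solar luminosity
```

Long-period terms dominate slow trends while the base cycle supplies the short-period structure, which is the qualitative behavior the model exploits when matched against proxy records.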
Climate Expressions in Cellulose Isotopologues Over the Southeast Asian Monsoon Domain
NASA Astrophysics Data System (ADS)
Herzog, M. G.; LeGrande, A. N.; Anchukaitis, K. J.
2013-12-01
Southeast Asia experiences a highly variable climate, strongly influenced by the Southeast Asian monsoon. Oxygen isotopes in the alpha cellulose of tree rings can be used as a proxy measure of climate, but it is not clear which parameter (precipitation, temperature, water vapor, etc.) is the most influential. Earlier forward models using observed meteorological data have been successful in simulating tree ring cellulose oxygen isotopes in the tropics. However, by creating a cellulose oxygen isotope model which uses input data from GISS ModelE climate runs, we are able to reduce model variability and integrate δ18O in tree ring cellulose over the entire monsoon domain for the past millennium. Simulated timescales of δ18O in cellulose show a consistent annual cycle, allowing confidence in the identification of interdecadal and interannual climate variability. By comparing paleoclimate data with Global Circulation Model (GCM) outputs and a forward tree cellulose δ18O model, this study explores how δ18O can be used as a proxy measure of the monsoon on both local and regional scales. Simulated δ18O in soil water and δ18O in water vapor were found to explain the most variability in the paleoclimate data. Precipitation amount and temperature held little significance. Our results suggest that δ18O in tree cellulose is most influenced by regional controls directly related to cellulose production. Figure: (top) monthly modeled output for δ18O cellulose; (center) annually averaged model output of δ18O cellulose; (bottom) observed monthly paleoproxy data for δ18O cellulose.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
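The combination of dimensionality reduction with statistical emulation can be sketched in a minimal form: project the expensive model's spatial fields onto leading principal components, then regress the component scores on the fast model's output. All data below are synthetic stand-ins for the PLASIM and GENIE-1 fields, and the linear regression stands in for the study's emulators:

```python
import numpy as np

# Synthetic ensemble: a fast model with 2 outputs drives a rank-2 "expensive"
# spatial field of 200 grid cells, plus small noise.
rng = np.random.default_rng(2)
n_runs, n_grid = 40, 200
fast = rng.normal(size=(n_runs, 2))
true_modes = rng.normal(size=(2, n_grid))
fine = fast @ true_modes + 0.01 * rng.normal(size=(n_runs, n_grid))

# Dimensionality reduction: leading principal components of the expensive fields.
mean = fine.mean(axis=0)
U, S, Vt = np.linalg.svd(fine - mean, full_matrices=False)
scores = U[:, :2] * S[:2]

# Emulator: regression from fast-model outputs to the PC scores.
A = np.column_stack([np.ones(n_runs), fast])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def emulate(fast_input):
    s = np.concatenate([[1.0], fast_input]) @ coef   # predicted PC scores
    return mean + s @ Vt[:2]                         # reconstructed spatial field
```

Only a handful of scalar scores are emulated per run, yet the full spatial field is recovered through the retained components, which is what makes this approach tractable for high-dimensional outputs.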
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Differences in spatial overlap were even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques.
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
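A spatial overlap metric of the kind used above to compare prediction maps can be computed on binary suitability grids. The exact definition used in the study is not given here, so this intersection-over-union form is an assumption for illustration:

```python
import numpy as np

def spatial_overlap(a, b):
    # Fraction of cells predicted suitable by either map on which both agree.
    either = a | b
    return (a & b).sum() / either.sum()

# Two tiny synthetic binary suitability maps (e.g. expert vs statistical).
map_expert = np.array([[True, False], [True, True]])
map_stat = np.array([[True, True], [False, True]])
overlap = spatial_overlap(map_expert, map_stat)
```

Low overlap under this metric flags regions where the two variable-selection approaches disagree, which the text suggests using as targets for learning and additional data collection.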
Characteristics of Tropical Cyclones in High-Resolution Models of the Present Climate
NASA Technical Reports Server (NTRS)
Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; Jonas, Jeffery A.; Kim, Daeyhun; Kumar, Arun; LaRow, Timothy E.; Lim, Young-Kwon; Murakami, Hiroyuki; Roberts, Malcolm J.;
2014-01-01
The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) in two types of experiments, using a climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.
Goal Directed Model Inversion: Learning Within Domain Constraints
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Friedland, Peter (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm makes use of these intermediate "successes" to achieve the final goal. A unique and potentially very important feature of this algorithm is the ability to modify the output of the learning module to force upon it a desired syntactic structure. This differs from ordinary supervised learning in the following way: in supervised learning the exact desired output pattern must be provided. In GDMI instead, it is possible to require simply that the output obey certain rules, i.e., that it "make sense" in some way determined by the knowledge domain. The exact pattern that will achieve the desired outcome is then found by the system. The ability to impose rules while allowing the system to search for its own answers in the context of neural networks is potentially a major breakthrough in two ways: (1) it may allow the construction of networks that can incorporate immediately some important knowledge, i.e., would not need to learn everything from scratch as normally required at present; and (2) learning and searching would be limited to the areas where it is necessary, thus facilitating and speeding up the process. These points are illustrated with examples from robotic path planning and parametric design.
Goal Directed Model Inversion: A Study of Dynamic Behavior
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm then proceeds as follows: (1) store the action that produced the wrong outcome as a "target"; (2) redefine the wrong outcome as a desired goal; (3) submit the new desired goal to the system; (4) compare the new action with the target action and modify the system using a suitable algorithm for credit assignment (backpropagation in our example); and (5) resubmit the original goal. Prior publications by our group in this area focused on demonstrating empirical results based on the inverse kinematic problem for a simulated robotic arm. In this paper we apply the inversion process to much simpler analytic functions in order to elucidate the dynamic behavior of the system and to determine the sensitivity of the learning process to various parameters. This understanding will be necessary for the acceptance of GDMI as a practical tool.
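The five-step loop can be sketched for a scalar toy problem. The linear learner, the least-squares refit standing in for backpropagation-based credit assignment, and the two exploratory probes (added so the first fit is identifiable) are all simplifications of the actual GDMI system:

```python
import numpy as np

# External transformation: maps the learner's action to an outcome.
# Unknown to the learner, which only ever sees (action, outcome) pairs.
f = lambda a: 2.0 * a + 1.0

def gdmi(goal, attempts=10):
    # Steps (1)-(2) of the loop: every observed outcome is recycled as a
    # goal whose correct action is the one that produced it.
    pairs = [(0.0, f(0.0)), (1.0, f(1.0))]   # two exploratory probes
    action = 0.0
    for _ in range(attempts):
        acts = np.array([p[0] for p in pairs])
        outs = np.array([p[1] for p in pairs])
        # Steps (3)-(4): refit the learner on the accumulated "successes"
        # (least squares here, in place of backpropagation).
        A = np.column_stack([np.ones_like(outs), outs])
        c, *_ = np.linalg.lstsq(A, acts, rcond=None)
        action = c[0] + c[1] * goal          # step (5): resubmit original goal
        outcome = f(action)
        if abs(outcome - goal) < 1e-9:
            break
        pairs.append((action, outcome))
    return action
```

For this linear transformation the recycled pairs pin down the inverse mapping after two attempts, so the resubmitted goal is achieved exactly; nonlinear transformations would need the iterative refinement the loop provides.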
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
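The Volterra functional power series underlying the IO model can be illustrated with a discrete second-order expansion: the output is a first-order convolution term plus a second-order sum over pairs of past inputs. The kernels below are arbitrary illustrative values, not the fitted synapse kernels of the study:

```python
import numpy as np

# Illustrative first- and second-order Volterra kernels (short memory).
k1 = np.array([0.5, 0.3, 0.1])
k2 = 0.05 * np.outer([1.0, 0.5], [1.0, 0.5])

def volterra_output(x):
    # y[n] = sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(len(k1)):
            if n - i >= 0:
                y[n] += k1[i] * x[n - i]
        for i in range(k2.shape[0]):
            for j in range(k2.shape[1]):
                if n - i >= 0 and n - j >= 0:
                    y[n] += k2[i, j] * x[n - i] * x[n - j]
    return y
```

The second-order term is what lets such a model capture interactions between pairs of input events (e.g. facilitation-like nonlinearities) that a purely linear convolution misses; the study's model carries this out to third order.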
Use of medium-range numerical weather prediction model output to produce forecasts of streamflow
Clark, M.P.; Hay, L.E.
2004-01-01
This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3 °C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive.
MOS guidance increases the accuracy of precipitation forecasts over the northeastern United States, but overall, the accuracy of MOS-based precipitation forecasts is slightly lower than the raw NCEP forecasts. Four basins in the United States were chosen as case studies to evaluate the value of MRF output for predictions of streamflow. Streamflow forecasts using MRF output were generated for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). Hydrologic model output forced with measured-station data was used as "truth" to focus attention on the hydrologic effects of errors in the MRF forecasts. Eight-day streamflow forecasts produced using the MOS-corrected MRF output as input (MOS) were compared with those produced using the climatic Ensemble Streamflow Prediction (ESP) technique. MOS-based streamflow forecasts showed increased skill in the snowmelt-dominated river basins, where daily variations in streamflow are strongly forced by temperature. In contrast, the skill of MOS forecasts in the rainfall-dominated basin (the Alapaha River) was equivalent to the skill of the ESP forecasts. Further improvements in streamflow forecasts require more accurate local-scale forecasts of precipitation and temperature, more accurate specification of basin initial conditions, and more accurate model simulations of streamflow. © 2004 American Meteorological Society.
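The forward screening multiple linear regression at the heart of the MOS downscaling can be sketched as below, with synthetic stand-ins for the forecast predictors and station observations: predictors are added one at a time by their correlation with the current residual, and the refitted regression removes the systematic bias:

```python
import numpy as np

# Synthetic data: 5 candidate forecast predictors, of which only
# predictors 1 and 3 actually drive the station observation.
rng = np.random.default_rng(3)
n = 300
preds = rng.normal(size=(n, 5))
obs = 3.0 + 2.0 * preds[:, 1] - 1.0 * preds[:, 3] + 0.1 * rng.normal(size=n)

def forward_screen(X, y, k=2):
    chosen, coef, residual = [], None, y - y.mean()
    for _ in range(k):
        # Screening step: pick the unused predictor most correlated
        # with the current residual.
        corrs = [0.0 if j in chosen else abs(np.corrcoef(X[:, j], residual)[0, 1])
                 for j in range(X.shape[1])]
        chosen.append(int(np.argmax(corrs)))
        # Refit the multiple linear regression on all chosen predictors.
        A = np.column_stack([np.ones(len(y)), X[:, chosen]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return chosen, coef

chosen, coef = forward_screen(preds, obs)
```

The fitted intercept absorbs the systematic bias, mirroring how the MOS step removes the mean biases of the raw forecasts before they are used to force the hydrologic model.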
The light output and the detection efficiency of the liquid scintillator EJ-309.
Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G
2014-07-01
The light output response and the neutron and gamma-ray detection efficiency are determined for liquid scintillator EJ-309. The light output function is compared to those of previous studies. Experimental efficiency results are compared to predictions from GEANT4, MCNPX and PENELOPE Monte Carlo simulations. The differences associated with the use of different light output functions are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Simon, E.; Nowicki, S.; Neumann, T.; Tyahla, L.; Saba, J. L.; Guerber, J. R.; Bonin, J. A.; DiMarzio, J. P.
2017-12-01
The Cryosphere model Comparison tool (CmCt) is a web-based ice sheet model validation tool being developed by NASA to facilitate direct comparison between observational data and various ice sheet models. The CmCt allows the user to take advantage of several decades' worth of observations from Greenland and Antarctica. Currently, the CmCt can be used to compare ice sheet models provided by the user with remotely sensed satellite data from ICESat (Ice, Cloud, and land Elevation Satellite) laser altimetry, the GRACE (Gravity Recovery and Climate Experiment) satellite, and radar altimetry (ERS-1, ERS-2, and Envisat). One or more models can be uploaded through the CmCt website and compared with observational data, or compared to each other or to other models. The CmCt calculates statistics on the differences between the model and observations, and other quantitative and qualitative metrics, which can be used to evaluate the different model simulations against the observations. The qualitative metrics consist of a range of visual outputs, and the quantitative metrics consist of several whole-ice-sheet scalar values that can be used to assign an overall score to a particular simulation. The comparison results from CmCt are useful in quantifying improvements within a specific model (or within a class of models) as a result of differences in model dynamics (e.g., shallow vs. higher-order dynamics approximations), model physics (e.g., representations of ice sheet rheological or basal processes), or model resolution (mesh resolution and/or changes in the spatial resolution of input datasets). The framework and metrics could also be used as a model-to-model intercomparison tool, simply by substituting outputs from another model for the observational datasets. Future versions of the tool will include comparisons with other datasets that are of interest to the modeling community, such as ice velocity, ice thickness, and surface mass balance.
Rossa, Carlos; Lehmann, Thomas; Sloboda, Ronald; Usmani, Nawaid; Tavakoli, Mahdi
2017-08-01
Global modelling has traditionally been the approach taken to estimate needle deflection in soft tissue. In this paper, we propose a new method based on local data-driven modelling of needle deflection. External measurement of needle-tissue interactions is collected from several insertions in ex vivo tissue to form a cloud of data. Inputs to the system are the needle insertion depth, axial rotations, and the forces and torques measured at the needle base by a force sensor. When a new insertion is performed, the just-in-time learning method estimates the model outputs given the current inputs to the needle-tissue system and the historical database. The query is compared to every observation in the database and assigned weights according to similarity criteria. Only the subset of historical data that is most relevant to the query is selected, and a local linear model is fit to the selected points to estimate the query output. The model outputs the 3D deflection of the needle tip and the needle insertion force. The proposed approach is validated in ex vivo multilayered biological tissue in different needle insertion scenarios. Experimental results in five different case studies indicate an accuracy in predicting needle deflection of 0.81 and 1.24 mm in the horizontal and vertical planes, respectively, and an accuracy of 0.5 N in predicting the needle insertion force over 216 needle insertions.
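The just-in-time learning step described above can be sketched roughly as locally weighted linear regression: weight the stored observations by similarity to the query, keep the most relevant subset, and fit a local linear model to it. The Gaussian similarity weighting, all variable names, and the toy input-output map below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def jit_predict(X_hist, y_hist, x_query, k=50, bandwidth=1.0):
    """Just-in-time (lazy) local linear prediction.

    Weights historical observations by Gaussian similarity to the
    query, keeps the k most relevant, and fits a weighted linear
    model to estimate the query output.
    """
    d = np.linalg.norm(X_hist - x_query, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)            # similarity weights
    idx = np.argsort(w)[-k:]                     # most relevant subset
    Xs, ys, ws = X_hist[idx], y_hist[idx], w[idx]
    A = np.hstack([Xs, np.ones((len(idx), 1))])  # local linear model + intercept
    W = np.sqrt(ws)[:, None]                     # weighted least squares
    coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * ys, rcond=None)
    return np.append(x_query, 1.0) @ coef

# Toy historical database standing in for the needle-tissue data cloud
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
pred = jit_predict(X, y, np.array([0.3, -0.4]))
```

Because the model is fit lazily at query time, no global input-output map is ever constructed; only the neighborhood of the current query matters.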
NASA Astrophysics Data System (ADS)
Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi
2015-04-01
The objective of project North State, funded by Framework Programme 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated from remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the ultimate objective of completely automatic estimation. North State has two sites for intensive studies in southern and northern Finland, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated, and the models applied, over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to the earth observation variables, and flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, along with an example of completely automatic land cover class prediction.
NASA Astrophysics Data System (ADS)
Verhoef, Anne; Cook, Peter; Black, Emily; Macdonald, David; Sorensen, James
2017-04-01
This research addresses the terrestrial water balance of West Africa. The emphasis is on the prediction of groundwater recharge and how this may change in the future, which is relevant to the management of surface and groundwater resources. The study was conducted as part of the BRAVE research project, "Building understanding of climate variability into planning of groundwater supplies from low storage aquifers in Africa - Second Phase", funded under the NERC/DFID/ESRC programme Unlocking the Potential of Groundwater for the Poor (UPGro). We used model output of water balance components (precipitation, surface and subsurface run-off, evapotranspiration and soil moisture content) from the ERA-Interim/ERA-LAND reanalyses, CMIP5, and high-resolution model runs with HadGEM3 (UPSCALE; Mizielinski et al., 2014), for current and future time periods. Water balance components varied widely between the different models; the variation was particularly large for sub-surface runoff (defined as drainage from the bottom-most soil layer of each model). In-situ estimates of groundwater recharge obtained from the peer-reviewed literature were compared with the model outputs. Separate off-line sensitivity studies with key land surface models were performed to understand the reasons behind the model differences. These analyses were centered on vegetation and soil hydraulic parameters. The modelled current and future recharge time series with the greatest degree of confidence were used to examine the spatiotemporal variability in groundwater storage. Finally, the implications for water supply planning were assessed. Mizielinski, M.S. et al., 2014. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign. Geoscientific Model Development, 7(4), pp.1629-1640.
Modeling power flow in the induction cavity with a two dimensional circuit simulation
NASA Astrophysics Data System (ADS)
Guo, Fan; Zou, Wenkang; Gong, Boyi; Jiang, Jihao; Chen, Lin; Wang, Meng; Xie, Weiping
2017-02-01
We have proposed a two-dimensional (2D) circuit model of an induction cavity. The oil elbow and azimuthal transmission line are modeled with one-dimensional transmission line elements, while 2D transmission line elements are employed to represent the regions inward of the azimuthal transmission line. The voltage waveforms obtained by the 2D circuit simulation and a transient electromagnetic simulation are compared and show satisfactory agreement. The influence of impedance mismatch on the power flow in the induction cavity is investigated with this 2D circuit model. The simulation results indicate that the peak value of the load voltage approaches its maximum if the azimuthal transmission line roughly matches the pulse forming section. The amplitude of the output transmission line voltage is strongly influenced by its impedance, but the peak value of the load voltage is insensitive to the actual output transmission line impedance. When the load impedance rises, the voltage across the dummy load increases, and the pulse durations at the oil elbow inlet and insulator stack regions also increase slightly.
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
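The systematic error that the exact model eliminates can be illustrated by comparing the exact spherical-wavefront propagation distance from a near-field source to an array sensor against the usual second-order (Fresnel) approximation. The geometry and all numerical values below are made-up illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Exact vs. Fresnel-approximated propagation distance from a source at
# range r and angle theta to a sensor at offset d on a linear array.
wavelength = 1.0
d = 2.0 * wavelength               # sensor offset from array reference point
r = 10.0 * wavelength              # near-field source range
theta = np.deg2rad(30.0)           # source bearing

# Exact model: spherical wavefront (law of cosines)
exact = np.sqrt(r**2 + d**2 - 2.0 * r * d * np.sin(theta))

# Approximated model: second-order Taylor (Fresnel) expansion in d/r
fresnel = r - d * np.sin(theta) + d**2 * np.cos(theta)**2 / (2.0 * r)

# Residual phase error of the approximation, in radians
phase_error = 2.0 * np.pi * (exact - fresnel) / wavelength
```

Even for this mild geometry the approximation leaves a phase residual of roughly a tenth of a radian, which is the kind of systematic bias an exact-model method avoids.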
NASA Astrophysics Data System (ADS)
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resources on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and the uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and on where and when that output is most relevant.
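Variance-based global sensitivity analysis of the kind applied here can be sketched with a brute-force pick-freeze (Saltelli-type) estimator of first-order Sobol indices. The toy model, sample size, and variable names below are assumptions for illustration, not the LISFLOOD-FP setup.

```python
import numpy as np

def first_order_sobol(f, d, n=20000, seed=5):
    """Pick-freeze estimator of first-order Sobol indices S_i = V_i / Var(Y).

    Uses two independent sample matrices A and B; for each input i,
    matrix AB_i equals A with column i swapped in from B.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, (n, d))
    B = rng.uniform(0.0, 1.0, (n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # vary only input i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var  # Saltelli estimator
    return S

# Toy "model": output linear in two inputs, third input inactive,
# so the analytic indices are S = [0.2, 0.8, 0.0]
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = first_order_sobol(f, d=3)
```

An index near zero (the third input here) flags a factor whose uncertainty the modeler need not prioritize for that particular output.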
Control of large flexible structures - An experiment on the NASA Mini-Mast facility
NASA Technical Reports Server (NTRS)
Hsieh, Chen; Kim, Jae H.; Liu, Ketao; Zhu, Guoming; Skelton, Robert E.
1991-01-01
The output variance constraint controller design procedure is integrated with model reduction by modal cost analysis. A procedure is given for tuning MIMO controller designs to find the maximal rms performance of the actual system. Controller designs based on a finite-element model of the system are compared with controller designs based on an identified model (obtained using the Q-Markov Cover algorithm). The identified model and the finite-element model led to similar closed-loop performance, when tested in the Mini-Mast facility at NASA Langley.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
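A minimal instance of the convex, optimization-based construction described above: for a family of bounding lines, minimizing the average interval spread subject to containing every observation is a linear program. The data, the choice of a linear parametric family, and all variable names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 40)
y = 2.0 * x + rng.uniform(-0.3, 0.3, 40)   # observations with bounded spread

# Decision variables: [a_u, b_u, a_l, b_l], the upper and lower lines.
# Objective: minimize the average spread (a_u - a_l) * mean(x) + (b_u - b_l)
# subject to the interval containing every observation.
c = [x.mean(), 1.0, -x.mean(), -1.0]
A_ub, b_ub = [], []
for xi, yi in zip(x, y):
    A_ub.append([-xi, -1.0, 0.0, 0.0]); b_ub.append(-yi)  # upper line >= y_i
    A_ub.append([0.0, 0.0, xi, 1.0]);   b_ub.append(yi)   # lower line <= y_i
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)

a_u, b_u, a_l, b_l = res.x
upper, lower = a_u * x + b_u, a_l * x + b_l    # predicted interval at the data
```

Because the program is convex, this is the setting in which the paper's non-asymptotic reliability bound would apply.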
NASA Astrophysics Data System (ADS)
Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha
2018-03-01
Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
NASA Astrophysics Data System (ADS)
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, highly accurate visibility forecasts can significantly reduce, or even make avoidable, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting and post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. A linear combination of the two outputs was then used to create an ultra-short-term hybrid visibility prediction that gradually shifts the focus from statistical to numerical products, taking advantage of each during the forecast period. This makes it possible to bring the numerical visibility forecast closer to the observations even if it is initially wrong. Complete verification of the categorical forecasts was carried out; results are also available for persistence and terminal aerodrome forecasts (TAF) for comparison. The average Heidke Skill Score (HSS) of the examined airports is very similar for the analogue and hybrid forecasts, even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. In the case of poor visibility (1000-2500 m), hybrid (0.65) and analogue forecasts (0.64) have a similar average HSS in the first 6 h of the forecast period and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes into account the physics and dynamics of the atmosphere through the increasing contribution of the numerical weather prediction. In spite of this, its performance is similar to the most effective visibility forecasting methods and does not follow the poor verification results of purely numerical outputs.
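The two ingredients above can be sketched under assumed details: a lead-time-dependent linear blend of the analogue and NWP visibility products, and the Heidke Skill Score computed from a 2x2 contingency table. The weight schedule and all numbers are made up for illustration; the paper's actual weights are only known to fall to 0.1-0.2 at the end of the period.

```python
import numpy as np

def hybrid_visibility(analogue, nwp, lead_hours, t_max=9.0):
    """Blend analogue and NWP visibility forecasts, shifting weight
    from the statistical to the numerical product with lead time.
    The linear weight schedule is an assumption for illustration."""
    w = np.clip(1.0 - lead_hours / t_max, 0.1, 1.0)
    return w * analogue + (1.0 - w) * nwp

def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """HSS for a dichotomous (categorical) forecast."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    num = 2.0 * (a * d - b * c)
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

# Example: 6 h lead time, visibility in metres (illustrative values)
vis = hybrid_visibility(analogue=1500.0, nwp=3000.0, lead_hours=6.0)
hss = heidke_skill_score(hits=30, false_alarms=10, misses=15, correct_negatives=45)
```

HSS = 0 corresponds to random-chance skill and HSS = 1 to a perfect categorical forecast, which is why it is a natural yardstick against persistence and TAF.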
Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore
2009-01-01
Currently, the assessment of sperm function in a raw or processed semen sample cannot reliably predict sperm ability to withstand freezing and thawing procedures, in vivo fertility, and/or assisted reproductive biotechnology (ART) outcomes. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability, and hence blastocyst output, in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). To assess the predictive value of viability, computer-assisted sperm analyzer (CASA) motility parameters and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output after an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained a deviance of 72% (p < 0.0001), directly related to the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and two of the three comet parameters considered, i.e. tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value for IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of many possible ways to explain differences in embryo output following IVF with different semen donors and may represent a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-09-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time from a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the vegetation model LPJ-GUESS and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. 
Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results, and it opens the way to future studies over large spatial domains: it allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
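The core of the GAPPARD post-processing step can be sketched in a few lines: under a stationary disturbance probability, patch ages follow a geometric distribution, and the expected value of any output variable is the age-distribution-weighted average of the undisturbed run. The toy biomass trajectory and parameter values below are assumptions for illustration, not LPJ-GUESS output.

```python
import numpy as np

def gappard_expectation(v_undisturbed, p_dist):
    """Post-process an undisturbed cohort-model trajectory: the expected
    output is the average over the stationary patch-age distribution
    implied by a per-timestep disturbance probability p_dist."""
    ages = np.arange(len(v_undisturbed))
    w = p_dist * (1.0 - p_dist) ** ages   # P(patch age = a), geometric
    w /= w.sum()                          # renormalise over the finite run
    return float(np.sum(w * v_undisturbed))

# Toy biomass trajectory from a single deterministic, undisturbed run
t = np.arange(400)
biomass = 250.0 * (1.0 - np.exp(-t / 80.0))   # saturating growth (made up)

# Expected landscape biomass with a 1-in-100-year disturbance probability
expected = gappard_expectation(biomass, p_dist=1.0 / 100.0)
```

A single deterministic run thus replaces the ensemble of stochastic replicate patches, which is where the roughly tenfold speed-up comes from.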
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. 
Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
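For contrast with the joint method, the independent (univariate) step that IBC applies to each climate variable separately can be sketched as empirical quantile mapping; correcting the P-T correlation jointly, as JBC does, requires more machinery than shown here. The data and names below are illustrative assumptions.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Independent (univariate) empirical quantile mapping: map each
    model value to the observed value at the same quantile."""
    q = np.interp(model_future,
                  np.sort(model_hist),
                  np.linspace(0.0, 1.0, len(model_hist)))  # model CDF
    return np.interp(q,
                     np.linspace(0.0, 1.0, len(obs_hist)),
                     np.sort(obs_hist))                    # obs inverse CDF

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 5000)               # "observed" precipitation
mod = rng.gamma(2.0, 3.0, 5000) * 1.3 + 1.0   # biased model output
corrected = quantile_map(mod, obs, mod)
```

Applied variable by variable, this corrects each marginal distribution but, as the study stresses, leaves any bias in the inter-variable correlation untouched.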
NASA Astrophysics Data System (ADS)
Maksimov, German A.; Radchenko, Aleksei V.
2006-05-01
Acoustic stimulation (AS) of oil production from a well is a promising technology for the oil industry, but the physical mechanisms of acoustic action are not clearly understood owing to the complex character of the phenomena involved. In practice, the role of these mechanisms appears only indirectly, in the form of additional oil output. The validity of any physical model therefore has to be examined taking into account both the mechanism of acoustic action itself and the preceding and subsequent stages of fluid filtration into the well. An advanced model of the physical processes taking place during acoustic stimulation is considered in the framework of the heating mechanism of acoustic action, but for a two-component fluid in a porous permeable medium. The pore fluid is treated as consisting of light and heavy hydrocarbon phases in thermodynamic equilibrium. Filtration or acoustic stimulation can shift the equilibrium between the phases, so that the heavy phase is either precipitated on the pore walls or dissolved. A coupled set of acoustic, heat, and filtration problems was solved numerically to describe the oil output from a well, the final result of acoustic action, which can be compared with experimental data. It is shown that the suggested numerical model reproduces the basic features of fluid filtration in a well before, during, and after acoustic stimulation.
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found to be capable of developing the controller based on input/output information only. This controller achieves performance comparable to the standard linear quadratic regulator.
Tunable narrow band difference frequency THz wave generation in DAST via dual seed PPLN OPG.
Dolasinski, Brian; Powers, Peter E; Haus, Joseph W; Cooney, Adam
2015-02-09
We report a widely tunable narrowband terahertz (THz) source based on difference frequency generation (DFG). The source combines the outputs of dual-seeded periodically poled lithium niobate (PPLN) optical parametric generators (OPGs) in the nonlinear crystal 4-dimethylamino-N-methyl-4-stilbazolium tosylate (DAST). We demonstrate a seamlessly tunable THz output from 1.5 THz to 27 THz with a minimum bandwidth of 3.1 GHz. The effects of dispersive phase matching, two-photon absorption, and polarization were examined and compared to a power emission model based on the currently accepted parameters of DAST.
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. The experiment uses a function generator to provide the input signal and an oscilloscope to record the input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by finite element analysis.
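The least-squares step can be sketched as a linear fit of sine and cosine components at the known excitation frequency, from which the response amplitude and phase follow directly. The frequency, amplitude, and noise level below are made-up illustrative values, not the Mini-Mast arm data.

```python
import numpy as np

def fit_sine_response(t, y, omega):
    """Least-squares fit of y ≈ A·sin(ωt) + B·cos(ωt) + C at a known
    excitation frequency; returns response amplitude and phase."""
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.hypot(A, B), np.arctan2(B, A)

# Synthetic oscilloscope record: 4 Hz excitation, noisy response
t = np.linspace(0.0, 2.0, 500)
omega = 2.0 * np.pi * 4.0
y = 1.7 * np.sin(omega * t + 0.4) \
    + 0.05 * np.random.default_rng(3).normal(size=t.size)
amp, phase = fit_sine_response(t, y, omega)
```

Because the frequency is fixed, the fit is linear in its unknowns and needs no iterative optimization, which suits the simple function-generator/oscilloscope setup described.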
Fan, Feng-Ru; Tang, Wei; Yao, Yan; Luo, Jianjun; Zhang, Chi; Wang, Zhong Lin
2014-04-04
Recently, a triboelectric generator (TEG) has been invented to convert mechanical energy into electricity by a conjunction of triboelectrification and electrostatic induction. Compared to the traditional electromagnetic generator (EMG) that produces a high output current but low voltage, the TEG has different output characteristics of low output current but high output voltage. In this paper, we present a comparative study regarding the fundamentals of TEGs and EMGs. The power output performances of the EMG and the TEG have a special complementary relationship, with the EMG being a voltage source and the TEG a current source. Utilizing a power transformed and managed (PTM) system, the current output of a TEG can reach as high as ∼3 mA, which can be coupled with the output signal of an EMG to enhance the output power. We also demonstrate a design to integrate a TEG and an EMG into a single device for simultaneously harvesting mechanical energy. In addition, the integrated NGs can independently output a high voltage and a high current to meet special needs.
Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model
2017-03-01
set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection . Unpublished doctoral dissertation
A comparison of methods of fitting several models to nutritional response data.
Vedenov, D; Pesti, G M
2008-02-01
A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
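One of the models compared above, the ascending-linear "broken line" with a plateau, can also be fit with open-source tools; the sketch below uses SciPy rather than Excel or PROC NLIN, and the synthetic dose-response data and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, breakpoint, slope):
    """Ascending-linear / plateau ('broken line') nutritional response."""
    return np.where(x < breakpoint,
                    plateau - slope * (breakpoint - x),
                    plateau)

# Synthetic dose-response data (illustrative, not from the paper)
x = np.linspace(0.0, 1.0, 30)
y_true = broken_line(x, plateau=95.0, breakpoint=0.55, slope=80.0)
rng = np.random.default_rng(4)
y = y_true + rng.normal(0.0, 1.0, x.size)

# Nonlinear least-squares fit; p0 gives the required starting guesses
params, _ = curve_fit(broken_line, x, y, p0=[90.0, 0.5, 60.0])
```

As with any nonlinear model of this kind, convergence depends on reasonable starting values, which is exactly the practical difficulty the Excel workbook was built to ease.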
Computer code for off-design performance analysis of radial-inflow turbines with rotor blade sweep
NASA Technical Reports Server (NTRS)
Meitner, P. L.; Glassman, A. J.
1983-01-01
The analysis procedure of an existing computer program was extended to include rotor blade sweep, to model the flow more accurately at the rotor exit, and to provide more detail to the loss model. The modeling changes are described and all analysis equations and procedures are presented. Program input and output are described and are illustrated by an example problem. Results obtained from this program and from a previous program are compared with experimental data.
Mentoring for junior medical faculty: Existing models and suggestions for low-resource settings.
Menon, Vikas; Muraleedharan, Aparna; Bhat, Ballambhattu Vishnu
2016-02-01
Globally, there is increasing recognition of the positive benefits and impact of mentoring on faculty retention rates, career satisfaction, and scholarly output. However, emphasis on the research and practice of mentoring is comparatively meagre in low- and middle-income countries. In this commentary, we critically examine two existing models of mentorship for medical faculty and offer a few suggestions for an integrated hybrid model that can be adapted for use in low-resource settings. Copyright © 2016 Elsevier B.V. All rights reserved.
Simulations of coupled, Antarctic ice-ocean evolution using POP2x and BISICLES (Invited)
NASA Astrophysics Data System (ADS)
Price, S. F.; Asay-Davis, X.; Martin, D. F.; Maltrud, M. E.; Hoffman, M. J.
2013-12-01
We present initial results from Antarctic, ice-ocean coupled simulations using large-scale ocean circulation and land ice evolution models. The ocean model, POP2x is a modified version of POP, a fully eddying, global-scale ocean model (Smith and Gent, 2002). POP2x allows for circulation beneath ice shelf cavities using the method of partial top cells (Losch, 2008). Boundary layer physics, which control fresh water and salt exchange at the ice-ocean interface, are implemented following Holland and Jenkins (1999), Jenkins (1999), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008; Kimura et al., 2013) and with results from other idealized ice-ocean coupling test cases (e.g., Goldberg et al., 2012). The land ice model, BISICLES (Cornford et al., 2012), includes a 1st-order accurate momentum balance (L1L2) and uses block structured, adaptive-mesh refinement to more accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. For idealized test cases focused on marine-ice sheet dynamics, BISICLES output compares very favorably relative to simulations based on the full, nonlinear Stokes momentum balance (MISMIP-3d; Pattyn et al., 2013). Here, we present large-scale (southern ocean) simulations using POP2x with fixed ice shelf geometries, which are used to obtain and validate modeled submarine melt rates against observations. These melt rates are, in turn, used to force evolution of the BISICLES model. An offline-coupling scheme, which we compare with the ice-ocean coupling work of Goldberg et al. (2012), is then used to sequentially update the sub-shelf cavity geometry seen by POP2x.
Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James
2017-02-15
Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6–13,500 km²), climate (tropical, subtropical, Mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge of the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution. Copyright © 2016 Elsevier B.V. All rights reserved.
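The DEM-resolution sensitivity test described above can be illustrated with a minimal sketch: compute the mean slope of a synthetic DEM at its native resolution, then again after block-averaging to a coarser grid. The terrain, cell sizes, and aggregation factor below are illustrative assumptions, not the InVEST implementation.

```python
import numpy as np

def mean_slope_deg(dem, cell_size):
    """Mean slope (degrees) of a DEM with square cells of width cell_size (m)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return float(np.degrees(np.arctan(np.hypot(dz_dx, dz_dy))).mean())

def coarsen(dem, factor):
    """Aggregate a DEM by block-averaging factor x factor groups of cells."""
    n, m = dem.shape
    trimmed = dem[:n - n % factor, :m - m % factor]
    return trimmed.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

# Synthetic 10 m DEM: smooth hills plus fine-scale roughness
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 3000, 300), np.linspace(0, 3000, 300))
dem = 100 * np.sin(x / 400) * np.cos(y / 600) + rng.normal(0, 2.0, x.shape)

slope_10m = mean_slope_deg(dem, cell_size=10.0)
slope_40m = mean_slope_deg(coarsen(dem, 4), cell_size=40.0)
# Block-averaging removes fine-scale roughness, so the coarser DEM yields a
# lower mean slope here -- one direction the resolution sensitivity can take.
```

Because slope-dependent terms feed the sediment delivery ratio, a systematic slope change of this kind propagates directly into sediment export estimates.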
Surface Water and Energy Budgets for Sub-Saharan Africa in GFDL Coupled Climate Model
NASA Astrophysics Data System (ADS)
Tian, D.; Wood, E. F.; Vecchi, G. A.; Jia, L.; Pan, M.
2015-12-01
This study compares surface water and energy budget variables from the Geophysical Fluid Dynamics Laboratory (GFDL) FLOR models with the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR), the Princeton University Global Meteorological Forcing Dataset (PGF), and PGF-driven Variable Infiltration Capacity (VIC) model outputs, as well as available observations over sub-Saharan Africa. The comparison was made for four configurations of the FLOR models: FLOR phase 1 (FLOR-p1), FLOR phase 2 (FLOR-p2), and the two corresponding flux-adjusted versions (FLOR-FA-p1 and FLOR-FA-p2). Compared to p1, simulated atmospheric states in p2 were nudged to the Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. The seasonal cycle and annual mean of major surface water variables (precipitation, evapotranspiration, runoff, and change of storage) and energy variables (sensible heat, ground heat, latent heat, net solar radiation, net longwave radiation, and skin temperature) over the 34-yr period 1981-2014 were compared in different regions of sub-Saharan Africa (West Africa, East Africa, and Southern Africa). In addition to evaluating the means in the three sub-regions, empirical orthogonal function (EOF) analyses were conducted to compare both spatial and temporal characteristics of water and energy budget variables from the four versions of GFDL FLOR, NCEP CFSR, PGF, and VIC outputs. This presentation will show how well each coupled climate model represents land surface physics and reproduces the spatiotemporal characteristics of surface water and energy budget variables. We discuss what causes differences in surface water and energy budgets among the land surface components of the coupled climate models, the climate reanalysis, and the reanalysis-driven land surface model. The comparisons will reveal whether flux adjustment and nudging improve the depiction of the surface water and energy budgets in coupled climate models.
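An EOF decomposition of the kind used above can be sketched as a singular value decomposition of the space-time anomaly matrix. This is a generic illustration on toy data; the variable names and the synthetic field are assumptions, not the study's datasets.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """EOF analysis of a (time, space) field: returns the leading spatial
    patterns, their principal-component time series, and the fraction of
    total variance each mode explains."""
    anomalies = field - field.mean(axis=0)      # remove the time mean at each point
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    eofs = vt[:n_modes]                         # spatial patterns (modes x space)
    pcs = u[:, :n_modes] * s[:n_modes]          # time series (time x modes)
    return eofs, pcs, var_frac[:n_modes]

# Toy field: 408 monthly maps (34 years) on a grid flattened to 500 points,
# built from one dominant spatial pattern plus weak noise
rng = np.random.default_rng(1)
pattern = np.sin(np.linspace(0, np.pi, 500))
field = np.outer(rng.standard_normal(408), pattern) + 0.1 * rng.standard_normal((408, 500))

eofs, pcs, var_frac = eof_analysis(field)
# The first mode should capture most of the variance of this toy field.
```

Comparing the leading EOFs and PC time series from each dataset is what allows the spatial and temporal characteristics of the budgets to be contrasted across models and reanalyses.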
Modeling laser brightness from cross Porro prism resonators
NASA Astrophysics Data System (ADS)
Forbes, Andrew; Burger, Liesl; Litvin, Igor Anatolievich
2006-08-01
Laser brightness is a parameter often used to compare high power laser beam delivery from various sources; it incorporates both the power contained in the particular mode and the propagation of that mode, through the beam quality factor M². In this study a cross Porro prism resonator is considered; crossed Porro prism resonators have been known for some time, but until recently have not been modeled as a complete physical optics system that allows the modal output to be determined as a function of the rotation angle of the prisms. In this paper we consider the diffraction losses as a function of the prism rotation angle relative to one another, and combine this with the propagation of the specific modes to determine the laser output brightness as a function of the prism orientation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Jin; Zhang, Yingchen; You, Shutang
Power grid primary frequency response will be significantly impaired by increasing photovoltaic (PV) penetration because of the decrease in inertia and governor response. PV inertia and governor emulation requires reserving PV output and leads to solar energy waste. This paper exploits current grid resources and explores energy storage for primary frequency response under high PV penetration at the interconnection level. Based on actual models of the U.S. Eastern Interconnection grid and the Texas grid, the effects of multiple factors associated with primary frequency response, including the governor ratio, governor deadband, droop rate, and fast load response, are assessed under high PV penetration scenarios. In addition, the performance of batteries and supercapacitors using different control strategies is studied in the two interconnections. The paper quantifies the potential of various resources to improve interconnection-level primary frequency response under high PV penetration without curtailing solar output.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
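The baseline that the predictive protocol above improves upon is routine discrete-time consensus over a directed graph containing a spanning tree. The sketch below shows that baseline update only; the graph, gain, and initial states are illustrative assumptions, and the multi-step predictive mechanism would layer its q-step output predictions on top of this update.

```python
import numpy as np

# Directed graph on 4 agents (a cycle, which contains a spanning tree)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
eps = 0.4                           # control gain; must satisfy eps < 1/max in-degree

# Routine consensus: x(k+1) = x(k) - eps * L x(k)
x = np.array([1.0, 3.0, -2.0, 6.0])
for _ in range(200):
    x = x - eps * (L @ x)

# All states converge to a common value; for this balanced graph the
# consensus value is the average of the initial states (here 2.0).
spread = x.max() - x.min()
```

The article's point is that predicting each neighbor's output several steps ahead and feeding that into the protocol shrinks the asymptotic convergence factor by a power of q + 1 relative to this routine scheme, so the same spread is reached in far fewer iterations.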
Characteristics of tropical cyclones in high-resolution models in the present climate
Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; ...
2014-12-05
The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, the intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.
Evaluation of a Kinematically-Driven Finite Element Footstrike Model.
Hannah, Iain; Harland, Andy; Price, Dan; Schlarb, Heiko; Lucas, Tim
2016-06-01
A dynamic finite element model of a shod running footstrike was developed and driven with 6 degree of freedom foot segment kinematics determined from a motion capture running trial. Quadratic tetrahedral elements were used to mesh the footwear components, with material models determined from appropriate mechanical tests. Model outputs were compared with experimental high-speed video (HSV) footage, vertical ground reaction force (GRF), and center of pressure (COP) excursion to determine whether such an approach is appropriate for the development of athletic footwear. Although unquantified, good visual agreement with the HSV footage was observed, but significant discrepancies were found between the model and experimental GRF and COP readings (9% and 61% of model readings outside of the mean experimental reading ± 2 standard deviations, respectively). Model output was also found to be highly sensitive to input kinematics, with a 120% increase in maximum GRF observed when translating the force platform 2 mm vertically. While representing an alternative approach to existing dynamic finite element footstrike models, loading highly representative of an experimental trial was not found to be achievable when employing exclusively kinematic boundary conditions. This significantly limits the usefulness of such an approach in the footwear development process.
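The agreement metric quoted above (the percentage of model readings falling outside the experimental mean ± 2 standard deviations) can be computed with a short sketch like this. The synthetic traces and the function name are assumptions for illustration, not the study's data.

```python
import numpy as np

def fraction_outside_band(model, trials):
    """Percentage of model samples outside the mean +/- 2 SD band of the
    repeated experimental trials, evaluated sample-by-sample.

    model:  (n_samples,) model time series.
    trials: (n_trials, n_samples) repeated experimental time series."""
    mean = trials.mean(axis=0)
    sd = trials.std(axis=0, ddof=1)
    outside = (model < mean - 2 * sd) | (model > mean + 2 * sd)
    return 100.0 * outside.mean()

# Synthetic stand-ins: 10 experimental GRF trials (in newtons) and a model
# trace that happens to sit on the true mean
rng = np.random.default_rng(2)
trials = rng.normal(loc=1000.0, scale=50.0, size=(10, 200))
model = np.full(200, 1000.0)

pct = fraction_outside_band(model, trials)   # near zero for this well-matched model
```

Against this metric, the paper reports 9% of GRF readings and 61% of COP readings outside the experimental band, which is what flags the COP prediction as the weaker output.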
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C
2012-08-01
To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
The effect of output-input isolation on the scaling and energy consumption of all-spin logic devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jiaxi; Haratipour, Nazila; Koester, Steven J., E-mail: skoester@umn.edu
All-spin logic (ASL) is a novel approach for digital logic applications wherein spin is used as the state variable instead of charge. One of the challenges in realizing a practical ASL system is the need to ensure non-reciprocity, meaning that information flows from input to output, not vice versa. One approach described previously is to introduce an asymmetric ground contact; while this approach was shown to be effective, it remains unclear what the optimal approach is for achieving non-reciprocity in ASL. In this study, we quantitatively analyze techniques to achieve non-reciprocity in ASL devices, and we specifically compare the effect of using an asymmetric ground position and dipole-coupled output/input isolation. For this analysis, we simulate the switching dynamics of multiple-stage logic devices with FePt and FePd perpendicular magnetic anisotropy materials using a combination of a matrix-based spin circuit model coupled to the Landau–Lifshitz–Gilbert equation. The dipole field is included in this model and can act as both a desirable means of coupling magnets and a source of noise. The dynamic energy consumption has been calculated for these schemes as a function of input/output magnet separation, and the results show that a scheme that electrically isolates logic stages produces superior non-reciprocity, thus allowing both improved scaling and reduced energy consumption.
[Air pollution in an urban area nearby the Rome-Ciampino city airport].
Di Menno di Bucchianico, Alessandro; Cattani, Giorgio; Gaeta, Alessandra; Caricchia, Anna Maria; Troiano, Francesco; Sozzi, Roberto; Bolignano, Andrea; Sacco, Fabrizio; Damizia, Sesto; Barberini, Silvia; Caleprico, Roberta; Fabozzi, Tina; Ancona, Carla; Ancona, Laura; Cesaroni, Giulia; Forastiere, Francesco; Gobbi, Gian Paolo; Costabile, Francesca; Angelini, Federico; Barnaba, Francesca; Inglessis, Marco; Tancredi, Francesco; Palumbo, Lorenzo; Fontana, Luca; Bergamaschi, Antonio; Iavicoli, Ivo
2014-01-01
To assess the spatial and temporal variability of air pollution in the urban area near Ciampino International Airport (Rome) and to investigate the contribution of airport-related emissions. The study domain was a 64 km² area around the airport. Two fifteen-day monitoring campaigns (late spring, winter) were carried out. Results were evaluated using the outputs of several runs of a Lagrangian particle model for airport-related sources and of a photochemical model (the Flexible Air quality Regional Model, FARM). Both standard and high time resolution air pollutant concentration measurements were used: CO, NO, NO2, C6H6, and the mass and number concentrations of several PM fractions; NO2 and volatile organic compound concentrations (fifteen-day averages) at 46 fixed points spread over the study area; and deterministic model outputs. Standard time resolution measurements, as well as model outputs, showed that the airport contribution to air pollution levels was small compared to the main source in the area (i.e., vehicular traffic). However, using high time resolution measurements, peaks of particles associated with aircraft takeoff (total number concentration and soot mass concentration) and landing (coarse mass concentration) were observed when the measurement site was downwind of the runway. The frequently observed transient spikes associated with aircraft movements could make a non-negligible contribution to the exposure of people living around the airport to ultrafine, soot and coarse particles. This contribution and its spatial and temporal variability should be investigated when assessing the air quality impact of airports.
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
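The empirical criterion above can be sketched directly: evaluate each regression term at the balance load capacity, express its contribution as a percentage of the gage output at capacity, and keep terms above the 0.05% threshold. The coefficients, capacities, and normalization below are illustrative assumptions; the paper defines the exact form.

```python
import numpy as np

def percent_contribution(coeffs, term_values_at_capacity, output_at_capacity):
    """Percent contribution of each regression model term.

    coeffs:                  fitted regression coefficients of the gage output.
    term_values_at_capacity: each regression term evaluated with the loads set
                             to the balance load capacities.
    output_at_capacity:      gage output at load capacity (same units as the
                             products coeffs * terms)."""
    return 100.0 * np.abs(coeffs * term_values_at_capacity) / abs(output_at_capacity)

# Hypothetical linear, quadratic, and cubic terms of one gage output,
# evaluated at a 5000-unit load capacity with a 30-unit output at capacity
coeffs = np.array([2.0e-3, 5.0e-7, 1.0e-13])
terms = np.array([5000.0, 5000.0**2, 5000.0**3])

contrib = percent_contribution(coeffs, terms, output_at_capacity=30.0)
significant = contrib > 0.05          # the empirical 0.05% threshold
```

In this toy example the cubic term contributes about 0.04% and would be dropped, illustrating why the paper recommends confirming such borderline decisions with the statistical criterion.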
NASA Astrophysics Data System (ADS)
Ma, X.; Yoshikane, T.; Hara, M.; Adachi, S. A.; Wakazuki, Y.; Kawase, H.; Kimura, F.
2014-12-01
To check the influence of boundary input data on modeling results, we conducted a numerical investigation of river discharge, using runoff data derived from a regional climate model with a 4.5-km resolution as input to a hydrological model. A hindcast experiment to reproduce the current climate was carried out for two decades, the 1980s and 1990s. We used the Advanced Research WRF (ARW) model (ver. 3.2.1) with a two-way nesting technique and the WRF single-moment 6-class microphysics scheme. Noah-LSM was adopted to simulate land surface processes. The NCEP/NCAR and ERA-Interim 6-hourly reanalysis datasets were used as lateral boundary conditions for the respective runs. The output variables from the WRF model used for the river discharge simulation were underground runoff and surface runoff. Four rivers (Mogami, Agano, Jinzu and Tone) were selected in this study. The results showed that the seasonal variation of river discharge could be represented, although discharges were overestimated compared with measurements.
Efficient Provision of Employment Service Outputs: A Production Frontier Analysis.
ERIC Educational Resources Information Center
Cavin, Edward S.; Stafford, Frank P.
1985-01-01
This article develops a production frontier model for the Employment Service and assesses the relative efficiency of the 51 State Employment Security Agencies in attaining program outcomes close to that frontier. This approach stands in contrast to such established practices as comparing programs to their own previous performance. (Author/CT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Taraphdar, Sourav; Wang, Taiping
This paper presents a modeling study conducted to evaluate the uncertainty of a regional model in simulating hurricane wind and pressure fields, and the feasibility of driving coastal storm surge simulation using an ensemble of regional model outputs produced by 18 combinations of three convection schemes and six microphysics parameterizations, using Hurricane Katrina as a test case. Simulated wind and pressure fields were compared to observed H*Wind data for Hurricane Katrina, and simulated storm surge was compared to observed high-water marks on the northern coast of the Gulf of Mexico. The ensemble modeling analysis demonstrated that the regional model was able to reproduce the characteristics of Hurricane Katrina with reasonable accuracy and can be used to drive the coastal ocean model for simulating coastal storm surge. Results indicated that the regional model is sensitive to both convection and microphysics parameterizations, which simulate moist processes closely linked to the tropical cyclone dynamics that influence hurricane development and intensification. The Zhang and McFarlane (ZM) convection scheme and the Lim and Hong (WDM6) microphysics parameterization are the most skillful in simulating Hurricane Katrina's maximum wind speed and central pressure among the three convection and six microphysics parameterizations. Error statistics of simulated maximum water levels were calculated for a baseline simulation with H*Wind forcing and for the 18 ensemble simulations driven by the regional model outputs. The storm surge model produced the overall best results in simulating the maximum water levels using wind and pressure fields generated with the ZM convection scheme and the WDM6 microphysics parameterization.
Impact of climate change on global malaria distribution.
Caminade, Cyril; Kovats, Sari; Rocklov, Joacim; Tompkins, Adrian M; Morse, Andrew P; Colón-González, Felipe J; Stenlund, Hans; Martens, Pim; Lloyd, Simon J
2014-03-04
Malaria is an important disease that has a global distribution and significant health burden. The spatial limits of its distribution and seasonal activity are sensitive to climate factors, as well as the local capacity to control the disease. Malaria is also one of the few health outcomes that has been modeled by more than one research group and can therefore facilitate the first model intercomparison for health impacts under a future with climate change. We used bias-corrected temperature and rainfall simulations from the Coupled Model Intercomparison Project Phase 5 climate models to compare the metrics of five statistical and dynamical malaria impact models for three future time periods (2030s, 2050s, and 2080s). We evaluated three malaria outcome metrics at global and regional levels: climate suitability, additional population at risk and additional person-months at risk across the model outputs. The malaria projections were based on five different global climate models, each run under four emission scenarios (Representative Concentration Pathways, RCPs) and a single population projection. We also investigated the modeling uncertainty associated with future projections of populations at risk for malaria owing to climate change. Our findings show an overall global net increase in climate suitability and a net increase in the population at risk, but with large uncertainties. The model outputs indicate a net increase in the annual person-months at risk when comparing from RCP2.6 to RCP8.5 from the 2050s to the 2080s. The malaria outcome metrics were highly sensitive to the choice of malaria impact model, especially over the epidemic fringes of the malaria distribution.
Impact of climate change on global malaria distribution
Caminade, Cyril; Kovats, Sari; Rocklov, Joacim; Tompkins, Adrian M.; Morse, Andrew P.; Colón-González, Felipe J.; Stenlund, Hans; Martens, Pim; Lloyd, Simon J.
2014-01-01
Malaria is an important disease that has a global distribution and significant health burden. The spatial limits of its distribution and seasonal activity are sensitive to climate factors, as well as the local capacity to control the disease. Malaria is also one of the few health outcomes that has been modeled by more than one research group and can therefore facilitate the first model intercomparison for health impacts under a future with climate change. We used bias-corrected temperature and rainfall simulations from the Coupled Model Intercomparison Project Phase 5 climate models to compare the metrics of five statistical and dynamical malaria impact models for three future time periods (2030s, 2050s, and 2080s). We evaluated three malaria outcome metrics at global and regional levels: climate suitability, additional population at risk and additional person-months at risk across the model outputs. The malaria projections were based on five different global climate models, each run under four emission scenarios (Representative Concentration Pathways, RCPs) and a single population projection. We also investigated the modeling uncertainty associated with future projections of populations at risk for malaria owing to climate change. Our findings show an overall global net increase in climate suitability and a net increase in the population at risk, but with large uncertainties. The model outputs indicate a net increase in the annual person-months at risk when comparing from RCP2.6 to RCP8.5 from the 2050s to the 2080s. The malaria outcome metrics were highly sensitive to the choice of malaria impact model, especially over the epidemic fringes of the malaria distribution. PMID:24596427
To publish or not to publish? On the aggregation and drivers of research performance
De Witte, Kristof
2010-01-01
This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to the traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher’s control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, retention, etc.) of the researchers explain variations in measured research performance. PMID:21057573
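The endogenous-weighting idea can be sketched with a minimal "benefit of the doubt" linear program: each researcher receives the output weights most favorable to them, subject to no researcher scoring above 1 under those same weights. The toy data and this particular formulation are assumptions; the paper's tailored DEA specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(outputs, unit):
    """Benefit-of-the-doubt score for one unit.

    outputs: (n_units, n_dims) matrix of research outputs (e.g. papers,
    citations). Maximize w . y_unit subject to w . y_k <= 1 for all k, w >= 0."""
    n_units, n_dims = outputs.shape
    res = linprog(-outputs[unit],                  # linprog minimizes, so negate
                  A_ub=outputs, b_ub=np.ones(n_units),
                  bounds=[(0, None)] * n_dims, method="highs")
    return -res.fun

Y = np.array([[10.0, 2.0],    # researcher 0: many papers, few citations
              [ 2.0, 9.0],    # researcher 1: few papers, many citations
              [ 4.0, 4.0]])   # researcher 2: middling on both dimensions

scores = [bod_score(Y, k) for k in range(len(Y))]
# Researchers on the "best practice" frontier score 1 under their own most
# favorable weights; dominated researchers score below 1.
```

This is how specialization is rewarded rather than penalized: researchers 0 and 1 each reach the frontier under weights emphasizing their strong dimension, while researcher 2, dominated along every weighting, scores below 1.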
Qu, Jianjun; Sun, Fengyan; Zhao, Chunsheng
2006-12-01
A new visco-elastic contact model of the traveling wave ultrasonic motor (TWUSM) is proposed. In this model, the rotor is assumed to be a rigid body and the friction material on the stator teeth surface a visco-elastic body. Both the load characteristics of the TWUSM, such as rotation speed, torque and efficiency, and the effects of interface parameters between stator and rotor on the output characteristics can be calculated and simulated numerically in MATLAB based on this model. This model is compared with one assuming a compliant slider and a rigid stator; the results show that the proposed model yields a larger stall torque. The simulated results are compared with test results, and the load characteristics show good agreement.
GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output
2002-03-26
and less than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network...profiles from 38 upper-air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) determined that the error...Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and
NASA Astrophysics Data System (ADS)
Yang, C.-H.; Itoh, K.; Tomita, H.; Obara, M.
1995-07-01
Theoretical analysis of the output performance of a transverse discharge pumped neon Penning laser (585.3 nm) using a Ne/H2 mixture is described. The validity of the kinetic model is confirmed by comparing its results to the experimental discharge and laser performance. It is theoretically shown that the optimum mixing ratio of the Ne/H2 mixture is 1:2.5, and the optimum operating pressure is about 56 Torr. The model also predicts that the intrinsic efficiency reaches a peak of 8.5×10⁻⁶ at an excitation rate of 0.5 MW/cm³ under the optimum mixing ratio and operating pressure conditions. At excitation rates in excess of 0.5 MW/cm³ the laser output power increases slowly and then saturates due to electron collisional quenching of the upper laser level. The laser power extraction is increased by laser injection seeding in order to rapidly build up the lasing. The improved intrinsic efficiency is about two times higher than without injection seeding. The improved specific laser output is 8 W/cm³; a discharge volume of 125 cm³ should therefore be able to generate a peak laser power of 1 kW. This power is sufficient to obtain the same treatment effect as the gold vapor laser used in photodynamic therapy. Moreover, by fitting this model to the experimental laser output energies with a Ne/D2 mixture, it is shown that the Penning ionization rate constant of H2 is larger than that of D2.
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erie C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for the quarter of the sample data not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and four regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced the best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high-resolution, species-level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
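The hold-out validation described above (fit on three quarters of the records, score RMSE and the correlation coefficient on the withheld quarter) follows a generic pattern that can be sketched as below. The predictor, response, and simple linear model are synthetic stand-ins, not the study's census data or information-theoretic models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: one agro-ecological predictor and an "observed"
# poultry density that depends on it linearly plus noise
x = rng.uniform(0, 10, 400)
density = 3.0 * x + rng.normal(0, 2.0, 400)

# Withhold a quarter of the sample for validation
idx = rng.permutation(400)
train, test = idx[:300], idx[300:]

# Fit on the training portion, predict the held-out quarter
slope, intercept = np.polyfit(x[train], density[train], 1)
pred = slope * x[test] + intercept

# Goodness-of-fit measures on the held-out data
rmse = float(np.sqrt(np.mean((pred - density[test]) ** 2)))
corr = float(np.corrcoef(pred, density[test])[0, 1])
```

Scoring only the withheld quarter guards against rewarding models that merely memorize the training census records, which is why the study uses these measures to rank predictor datasets and stratification methods.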
Designing ecological climate change impact assessments to reflect key climatic drivers
Sofaer, Helen R.; Barsugli, Joseph J.; Jarnevich, Catherine S.; Abatzoglou, John T.; Talbert, Marian; Miller, Brian W.; Morisette, Jeffrey T.
2017-01-01
Identifying the climatic drivers of an ecological system is a key step in assessing its vulnerability to climate change. The climatic dimensions to which a species or system is most sensitive – such as means or extremes – can guide methodological decisions for projections of ecological impacts and vulnerabilities. However, scientific workflows for combining climate projections with ecological models have received little explicit attention. We review Global Climate Model (GCM) performance along different dimensions of change and compare frameworks for integrating GCM output into ecological models. In systems sensitive to climatological means, it is straightforward to base ecological impact assessments on mean projected changes from several GCMs. Ecological systems sensitive to climatic extremes may benefit from what we term the ‘model space’ approach: a comparison of ecological projections based on simulated climate from historical and future time periods. This approach leverages the experimental framework used in climate modeling, in which historical climate simulations serve as controls for future projections. Moreover, it can capture projected changes in the intensity and frequency of climatic extremes, rather than assuming that future means will determine future extremes. Given the recent emphasis on the ecological impacts of climatic extremes, the strategies we describe will be applicable across species and systems. We also highlight practical considerations for the selection of climate models and data products, emphasizing that the spatial resolution of the climate change signal is generally coarser than the grid cell size of downscaled climate model output. Our review illustrates how an understanding of how climate model outputs are derived and downscaled can improve the selection and application of climatic data used in ecological modeling.
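The 'model space' idea above can be illustrated with a toy calculation: rather than shifting historical climate by a projected mean change, extremes are computed directly from the future simulation, so changes in variability are retained. All numbers below are synthetic stand-ins for GCM output, not values from any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for daily temperature (deg C) from a historical
# control run and a future scenario run; values are invented for the demo.
hist = rng.normal(loc=15.0, scale=5.0, size=30 * 365)
fut = rng.normal(loc=17.0, scale=6.5, size=30 * 365)  # warmer AND more variable

# "Delta" approach: shift historical data by the projected mean change only.
delta = fut.mean() - hist.mean()
shifted = hist + delta

# "Model space" approach: use the future simulation directly, so changes in
# variability (and hence extremes) are retained.
threshold = 30.0  # hot-day threshold, arbitrary for the demo
hot_days_shifted = np.mean(shifted > threshold)
hot_days_future = np.mean(fut > threshold)

# The mean-shift approach underestimates the change in extremes because it
# assumes future variability equals historical variability.
print(f"hot-day frequency, mean-shift:  {hot_days_shifted:.4f}")
print(f"hot-day frequency, model space: {hot_days_future:.4f}")
```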
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China’s chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for ¼ of the sample data which was not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China’s first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567
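The hold-out validation step described above can be sketched as follows. The data here are synthetic stand-ins for the poultry census densities and model predictions (a gamma-distributed density field with multiplicative prediction error is assumed purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins for census densities and model predictions; the real
# study used poultry census data and agro-ecological predictor variables.
observed = rng.gamma(shape=2.0, scale=50.0, size=400)   # birds per km^2
predicted = observed * rng.normal(1.0, 0.2, size=400)   # imperfect model

# Hold out 1/4 of the samples for validation, as in the study design.
n_test = len(observed) // 4
idx = rng.permutation(len(observed))
test = idx[:n_test]

rmse = np.sqrt(np.mean((predicted[test] - observed[test]) ** 2))
corr = np.corrcoef(predicted[test], observed[test])[0, 1]

print(f"RMSE: {rmse:.1f} birds/km^2, correlation: {corr:.3f}")
```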
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties.
The present study consists of a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
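For contrast with the resampling scheme proposed above, a minimal sketch of a conventional univariate bias correction (empirical quantile mapping) is shown below on synthetic data. Applied variable by variable, such mappings repair the marginal distribution but not the multivariate correlation structure, which is exactly the criticism the abstract raises; the data and bias values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic example: a model run with a warm bias relative to observations.
obs = rng.normal(10.0, 3.0, size=5000)
model = rng.normal(12.5, 4.0, size=5000)

def quantile_map(x, model_ref, obs_ref):
    """Empirical quantile mapping: replace each model value with the
    observed value at the same quantile."""
    ranks = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_ref, ranks)

corrected = quantile_map(model, model, obs)

# The marginal distribution is repaired, but note: applied per variable,
# such mappings do not preserve cross-variable correlation structure.
print(f"bias before: {model.mean() - obs.mean():+.2f}")
print(f"bias after:  {corrected.mean() - obs.mean():+.2f}")
```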
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:0000-0001-8828-528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
TUTORIAL: Validating biorobotic models
NASA Astrophysics Data System (ADS)
Webb, Barbara
2006-09-01
Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.
System, method and apparatus for conducting a keyterm search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
System, method and apparatus for conducting a phrase search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
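Both patent abstracts above share the same skeleton: build a model of each database subset, build a model of the query, compare the two, and output the identifiers of the relevant subsets. The sketch below substitutes a plain bag-of-terms model with cosine similarity for the patented relational and "gleaning" models, so it only illustrates the compare-and-rank structure, not the patented modelling itself; the documents are invented.

```python
from collections import Counter
import math

# Invented database subsets, keyed by identifier.
subsets = {
    "doc1": "engine fault isolation procedure for hydraulic pump",
    "doc2": "crew scheduling and fatigue report",
    "doc3": "hydraulic pump pressure fault during climb",
}

def model(text):
    """Stand-in subset/query model: a bag of lowercased terms."""
    return Counter(text.lower().split())

def similarity(m1, m2):
    """Cosine similarity between two term-count models."""
    shared = set(m1) & set(m2)
    num = sum(m1[t] * m2[t] for t in shared)
    den = math.sqrt(sum(v * v for v in m1.values())) * \
          math.sqrt(sum(v * v for v in m2.values()))
    return num / den if den else 0.0

query = model("hydraulic pump fault")
ranked = sorted(subsets, key=lambda k: similarity(query, model(subsets[k])),
                reverse=True)
print(ranked)  # identifiers of the most relevant subsets first
```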
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only indicate the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, G; Cao, F; Szpala, S
2016-06-15
Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of the RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK™ device, with machine output 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse™ with normalization values ranging from 92.80% to 102% to mimic machine output ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3mm) and (2%, 2mm). Calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criteria. Results: The gamma index (3%, 3mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration), and showed failures when the machine output deviation reached 3% or more. Gamma (2%, 2mm) was found to be more sensitive to the output variation than gamma (3%, 3mm), and showed failures when the output deviation reached 1.7% or more. The variation of the gamma indices with output variability also showed dependence upon the plan parameters (e.g. MLC movement and gantry rotation). The variation of the percentage of points passing the gamma criteria with output variation followed a non-linear decrease beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2mm) is more sensitive to output fluctuations than gamma (3%, 3mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may be able to establish the quantitative dependence of the gamma indices on machine output, and hence the effect on the quality of delivered RapidArc plans.
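A minimal 1-D version of the gamma comparison used above shows why the 2%/2mm criterion catches an output error that 3%/3mm forgives. This sketch uses a synthetic Gaussian dose profile and a global dose normalisation; clinical gamma analysis is performed on 3-D dose grids with interpolation.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dose_pct, dta_mm):
    """Simplified 1-D global gamma index (illustrative only)."""
    dose_crit = dose_pct / 100.0 * dose_ref.max()   # global normalisation
    gammas = np.empty(dose_ref.size)
    for i, (xi, d) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - d) / dose_crit            # dose-difference term
        dx = (x - xi) / dta_mm                      # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

# A Gaussian dose profile delivered with a 4% machine-output error: the
# distance-to-agreement term lets it pass 3%/3mm at essentially all points,
# while the tighter 2%/2mm criterion fails near the peak, which matches the
# sensitivity ordering reported above.
x = np.arange(201) * 0.25                           # positions in mm
dose = 100.0 * np.exp(-0.5 * ((x - 25.0) / 8.0) ** 2)
delivered = 1.04 * dose

pass_33 = np.mean(gamma_index_1d(dose, delivered, x, 3.0, 3.0) <= 1.0)
pass_22 = np.mean(gamma_index_1d(dose, delivered, x, 2.0, 2.0) <= 1.0)
print(f"points passing 3%/3mm: {100 * pass_33:.1f}%")
print(f"points passing 2%/2mm: {100 * pass_22:.1f}%")
```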
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
Design of vaccination and fumigation on Host-Vector Model by input-output linearization method
NASA Astrophysics Data System (ADS)
Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning
2017-03-01
Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, especially the input-output linearization method. The host population is divided into three compartments: susceptible, infectious and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to stabilize the system so that the output asymptotically tends to zero. We also present examples to illustrate the design model.
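A sketch of a host-vector system of the kind described above is given below, with vaccination (u1) and fumigation (u2) applied as simple constant inputs. The parameter values and initial conditions are invented, and the constant inputs only stand in for the paper's input-output linearizing feedback law; the point is that sufficient control drives the infectious-host output toward zero.

```python
# Illustrative host-vector model: SIR host with vaccination u1, SI vector
# with fumigation u2 (extra vector removal). All numbers are made up.
mu_h, gamma = 0.02, 0.1          # host birth/death and recovery rates (1/day)
beta_h, beta_v = 0.0005, 0.0004  # host<-vector and vector<-host transmission
Lambda_v, mu_v = 50.0, 0.05      # vector recruitment and natural death
N_h = 1000.0                     # total host population

def simulate(u1, u2, days=400.0, dt=0.01):
    """Forward-Euler integration; returns infectious hosts at the end."""
    Sh, Ih, Rh, Sv, Iv = 100.0, 150.0, 750.0, 450.0, 500.0
    for _ in range(int(days / dt)):
        dSh = mu_h * N_h - beta_h * Sh * Iv - (mu_h + u1) * Sh
        dIh = beta_h * Sh * Iv - (gamma + mu_h) * Ih
        dRh = gamma * Ih + u1 * Sh - mu_h * Rh
        dSv = Lambda_v - beta_v * Sv * Ih - (mu_v + u2) * Sv
        dIv = beta_v * Sv * Ih - (mu_v + u2) * Iv
        Sh, Ih, Rh = Sh + dt * dSh, Ih + dt * dIh, Rh + dt * dRh
        Sv, Iv = Sv + dt * dSv, Iv + dt * dIv
    return Ih  # the output to be driven to zero

print(f"infectious hosts, no control:   {simulate(0.0, 0.0):8.3f}")
print(f"infectious hosts, with control: {simulate(0.1, 0.1):8.3f}")
```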
Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition
2006-04-05
classes. In this study we evaluate the effectiveness of one of these formulations that was developed by Crammer and Singer [9], which leads to...significantly more complex model can be learned by directly applying the Crammer-Singer multiclass formulation on the outputs of the binary classifiers...will refer to this as the Crammer-Singer (CS) model. Comparing the scaling approach to the Crammer-Singer approach we can see that the Crammer-Singer
Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.
Fintelman, D M; Sterling, M; Hemida, H; Li, F-X
2014-06-03
The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
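The speed dependence found above follows from the cubic growth of aerodynamic power: the watts saved by a lower torso angle (smaller CdA) grow with v³, while the physiological cost of the position is roughly speed independent. The CdA values and the 15 W penalty below are assumed for illustration, not the study's measured values.

```python
rho = 1.225                      # air density, kg/m^3
cda_high, cda_low = 0.30, 0.27   # assumed drag areas: upright vs lowered torso
power_penalty = 15.0             # assumed physiological cost of the low position, W

def aero_power(cda, v_kmh):
    """Aerodynamic power demand, P = 0.5 * rho * CdA * v^3."""
    v = v_kmh / 3.6
    return 0.5 * rho * cda * v ** 3

# At low speed the fixed physiological penalty outweighs the aero saving;
# at high speed the v^3 term wins, so the lower position pays off.
for v_kmh in (30, 40, 50):
    saving = aero_power(cda_high, v_kmh) - aero_power(cda_low, v_kmh)
    better = "lower" if saving > power_penalty else "more upright"
    print(f"{v_kmh} km/h: aero saving {saving:5.1f} W vs "
          f"{power_penalty:.0f} W penalty -> {better} position")
```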
A comparative verification of high resolution precipitation forecasts using model output statistics
NASA Astrophysics Data System (ADS)
van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees
2017-04-01
Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite their realism, Harmonie precipitation forecasts perform only similarly to or somewhat better than those from the two lower-resolution models, at least in the Netherlands.
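The ELR step above can be sketched as follows. In extended logistic regression the precipitation threshold q enters the equation as a predictor, so a single fitted equation yields exceedance probabilities for any threshold without the probability curves crossing. The coefficients and square-root transforms below are invented for illustration; in the study the equations are fitted to radar- and SYNOP-based predictands.

```python
import math

# Invented ELR coefficients: intercept, forecast-predictor weight, and the
# (negative) weight on the threshold term.
a, b, c = -1.0, 0.8, -1.2

def prob_exceed(forecast_mm, q_mm):
    """P(precip >= q) given a precipitation predictor (e.g. an area-mean
    forecast), with the threshold q included as a predictor."""
    z = a + b * math.sqrt(forecast_mm) + c * math.sqrt(q_mm)
    return 1.0 / (1.0 + math.exp(-z))

# Probabilities decrease with the threshold and increase with the forecast
# signal, so the implied distribution is coherent.
for q in (0.3, 1.0, 5.0):
    print(f"P(>= {q:3.1f} mm | forecast 2 mm) = {prob_exceed(2.0, q):.3f}")
```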
Large-Signal Klystron Simulations Using KLSC
NASA Astrophysics Data System (ADS)
Carlsten, B. E.; Ferguson, P.
1997-05-01
We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.
Modeling of static and flowing-gas diode pumped alkali lasers
NASA Astrophysics Data System (ADS)
Barmashenko, Boris D.; Auslender, Ilya; Yacoby, Eyal; Waichman, Karol; Sadot, Oren; Rosenwaks, Salman
2016-03-01
Modeling of static and flowing-gas subsonic, transonic and supersonic Cs and K Ti:Sapphire and diode pumped alkali lasers (DPALs) is reported. A simple optical model applied to the static K and Cs lasers shows good agreement between the calculated and measured dependence of the laser power on the incident pump power. The model reproduces the observed threshold pump power in the K DPAL, which is much higher than that predicted by standard DPAL models. Scaling up flowing-gas DPALs to megawatt-class power is studied using an accurate three-dimensional computational fluid dynamics model, taking into account the effects of temperature rise and losses of alkali atoms due to ionization. Both the maximum achievable power and the laser beam quality are estimated for Cs and K lasers. The performance of subsonic and, in particular, supersonic DPALs is compared with that of transonic devices, in which the supersonic nozzle and diffuser are spared and the high-power mechanical pump (needed to recover the gas total pressure, which drops strongly in the diffuser) is not required for continuous closed-cycle operation. For pumping by beams of the same rectangular cross section, a comparison between end-pumping and transverse-pumping shows that the output power is not affected by the pump geometry; however, the intensity of the output laser beam in transverse-pumped DPALs is strongly non-uniform across the beam cross section, resulting in higher brightness and better beam quality in the far field for the end-pumping geometry, where the intensity of the output beam is uniform.
Fish schooling as a basis for vertical axis wind turbine farm design.
Whittlesey, Robert W; Liska, Sebastian; Dabiri, John O
2010-09-01
Most wind farms consist of horizontal axis wind turbines (HAWTs) due to the high power coefficient (mechanical power output divided by the power of the free-stream air through the turbine cross-sectional area) of an isolated turbine. However, when in close proximity to neighboring turbines, HAWTs suffer from a reduced power coefficient. In contrast, previous research on vertical axis wind turbines (VAWTs) suggests that closely spaced VAWTs may experience only small decreases (or even increases) in an individual turbine's power coefficient when placed in close proximity to neighbors, thus yielding much higher power outputs for a given area of land. A potential flow model of inter-VAWT interactions is developed to investigate the effect of changes in VAWT spatial arrangement on the array performance coefficient, which compares the expected average power coefficient of turbines in an array to that of a spatially isolated turbine. A geometric arrangement based on the configuration of shed vortices in the wake of schooling fish is shown to significantly increase the array performance coefficient for an array of 16 x 16 wind turbines. The results suggest increases in power output of over one order of magnitude for a given area of land as compared to HAWTs.
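The array performance coefficient used above is simply the mean power coefficient over the turbines in the array divided by the isolated-turbine value. A toy computation with assumed per-turbine coefficients (the study obtained these from its potential flow model of a 16 x 16 VAWT array):

```python
import numpy as np

cp_isolated = 0.35                                      # assumed isolated-turbine Cp
cp_array = np.array([0.33, 0.36, 0.31, 0.37, 0.34, 0.35])  # assumed per-turbine Cp

# Values above 1 would indicate that the arrangement helps neighbors on
# average; values below 1 indicate net interference losses.
array_performance = cp_array.mean() / cp_isolated
print(f"array performance coefficient: {array_performance:.3f}")
```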
Uncertainty and sensitivity analysis for photovoltaic system modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
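The propagation strategy described above, representing each modeling step's uncertainty by an empirical distribution of its residuals and sampling through the model chain, can be sketched with a two-step toy chain. The "models" and residual pools below are invented stand-ins for the plane-of-array irradiance and DC power steps.

```python
import numpy as np

rng = np.random.default_rng(3)

# Empirical residual pools, as would be collected from validation data
# (values invented here).
poa_residuals = rng.normal(0.0, 20.0, size=500)    # W/m^2
power_residuals = rng.normal(0.0, 5.0, size=500)   # W

def poa_model(ghi):            # step 1: plane-of-array irradiance (toy)
    return 1.1 * ghi

def power_model(poa):          # step 2: DC power (toy)
    return 0.18 * poa

# Propagate: sample a residual for each step and push it through the chain.
ghi = 800.0
n = 10000
poa = poa_model(ghi) + rng.choice(poa_residuals, n)
power = power_model(poa) + rng.choice(power_residuals, n)

print(f"DC power: mean {power.mean():.1f} W, 90% interval "
      f"[{np.quantile(power, 0.05):.1f}, {np.quantile(power, 0.95):.1f}] W")
```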
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
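The two pin-level metrics above can be computed from a boolean error trace sampled once per cycle. The trace below is made up, and Mean Time Between Errors is taken here as the spacing between error-burst onsets, which is one reasonable reading of the metric.

```python
import numpy as np

# Invented per-cycle error trace at an output pin (True = erroneous value).
trace = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0], dtype=bool)

def error_bursts(trace):
    """Return (start, length) of each contiguous run of erroneous cycles."""
    edges = np.diff(np.concatenate(([0], trace.astype(np.int8), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return list(zip(starts, ends - starts))

bursts = error_bursts(trace)
med = np.mean([length for _, length in bursts])      # Mean Error Duration
gaps = np.diff([start for start, _ in bursts])       # onset-to-onset spacing
mtbe = gaps.mean() if len(gaps) else float("inf")    # Mean Time Between Errors

print(f"bursts (start, length): {bursts}")
print(f"MED = {med:.2f} cycles, MTBE = {mtbe:.2f} cycles")
```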
A transition-based joint model for disease named entity recognition and normalization.
Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong
2017-08-01
Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
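Technique (i) above, a prediction interval around a least-squares fit, can be sketched for the single input-single output case. A normal approximation (z = 2) is used in place of the proper Student-t quantile, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Data Generating Mechanism: y = 2 + 0.5 x + noise.
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.4, size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
n, p = X.shape
sigma2 = res[0] / (n - p)                 # residual variance estimate

def prediction_interval(x_new, z=2.0):
    """Classical prediction interval for a new observation at x_new
    (roughly 95% under the normal approximation)."""
    x_vec = np.array([1.0, x_new])
    var = sigma2 * (1.0 + x_vec @ np.linalg.inv(X.T @ X) @ x_vec)
    half = z * np.sqrt(var)
    center = x_vec @ beta
    return center - half, center + half

lo, hi = prediction_interval(5.0)
print(f"predicted y(5.0) in [{lo:.2f}, {hi:.2f}]")
```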
Integrated Mecical Model (IMM) 4.0 Verification and Validation (VV) Testing (HRP IWS 2016)
NASA Technical Reports Server (NTRS)
Walton, M; Kerstman, E.; Arellano, J.; Boley, L.; Reyes, D.; Young, M.; Garcia, Y.; Saile, L.; Myers, J.
2016-01-01
Timeline, partial treatment, and alternate medications were added to the IMM to improve the fidelity of this model to enhance decision support capabilities. Using standard design reference missions, IMM VV testing compared outputs from the current operational IMM (v3) with those from the model with added functionalities (v4). These new capabilities were examined in a comparative, stepwise approach as follows: a) comparison of the current operational IMM v3 with the enhanced functionality of timeline alone (IMM 4.T), b) comparison of IMM 4.T with the timeline and partial treatment (IMM 4.TPT), and c) comparison of IMM 4.TPT with timeline, partial treatment and alternative medication (IMM 4.0).
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
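The direction-preserving idea behind the MAW scheme can be sketched as follows; the function name and the assumption that each actuator range contains zero are mine, not the paper's, and a real controller would also feed the scaled output back into the integrator states.

```python
def maw_scale(u, u_min, u_max):
    """Modified Anti-Windup (MAW) style limiting: shrink the whole
    controller output vector u by a single scalar k in (0, 1] so that
    every channel respects its actuator limits while the direction of
    the vector is preserved. Assumes each interval [u_min, u_max]
    contains zero, so scaling toward the origin always helps."""
    k = 1.0
    for ui, lo, hi in zip(u, u_min, u_max):
        if ui > hi:
            k = min(k, hi / ui)   # shrink until this channel hits its upper limit
        elif ui < lo:
            k = min(k, lo / ui)   # shrink until this channel hits its lower limit
    return [k * ui for ui in u], k
```

By contrast, a CAW-style clamp saturates each channel independently, which changes the ratio between control channels; the scalar k is what keeps the vector direction intact here.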
NASA Astrophysics Data System (ADS)
Subekti, R. M.; Suroso, D. S. A.
2018-05-01
Calculation of environmental carrying capacity can be done by various approaches. The selection of an appropriate approach determines the success of determining and applying environmental carrying capacity. This study aimed to compare the ecological footprint approach and the ecosystem services approach for calculating environmental carrying capacity. It attempts to describe two relatively new models that require further explanation if they are used to calculate environmental carrying capacity. In their application, attention needs to be paid to their respective advantages and weaknesses. Conceptually, the ecological footprint model is more complete than the ecosystem services model, because it describes the supply and demand of resources, including supportive and assimilative capacity of the environment, and measurable output through a resource consumption threshold. However, this model also has weaknesses, such as not considering technological change and resources beneath the earth’s surface, as well as the requirement to provide trade data between regions for calculating at provincial and district level. The ecosystem services model also has advantages, such as being in line with strategic environmental assessment (SEA) of ecosystem services, using spatial analysis based on ecoregions, and a draft regulation on calculation guidelines formulated by the government. Meanwhile, weaknesses are that it only describes the supply of resources, that the assessment of the different types of ecosystem services by experts tends to be subjective, and that the output of the calculation lacks a resource consumption threshold.
ISODA, Norikazu; ASANO, Akihiro; ICHIJO, Michiru; WAKAMORI, Shiho; OHNO, Hiroshi; SATO, Kazuhiko; OKAMOTO, Hirokazu; NAKAO, Shigeru; KATO, Hajime; SAITO, Kazuma; ITO, Naoki; USUI, Akira; TAKAYAMA, Hiroaki; SAKODA, Yoshihiro
2017-01-01
A scenario tree model was developed to propose efficient bovine viral diarrhea (BVD) control measures. The model used field data from eastern Hokkaido, where the risk of BVDV infection in cattle has been reduced by an eradication program including mass vaccination, individual tests prior to communal pasture grazing, herd screening tests using bulk milk, and outbreak investigations of newly infected herds. These four activities were then used as hypothesized control measures in the simulation. In each simulation, the numbers of cattle persistently and transiently infected with BVDV that were detected by clinical manifestations and diagnostic tests, and the number missed by all of the diagnostic tests, were calculated; these numbers were used as indicators for comparing the efficacy of the control measures. The model outputs indicated that the adoption of mass vaccination decreased the number of missed BVD cattle, although it did not increase the number of detected BVD cattle. Under mass vaccination, the efficacy of individual tests on a selected 20% of the young and adult cattle was equal to that of the herd screening test performed in all herds. When virus prevalence or the number of susceptible animals becomes low, the efficacy of the herd screening test was superior to that of individual tests. Taken together, the model outputs show that the scenario tree model developed in the present study is useful for comparing the efficacy of control measures for BVD. PMID:28539533
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
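The Monte Carlo baseline that the closed-form lognormal approximation is compared against can be sketched as below. The two-gate fault tree here is an invented toy example, not one from the article.

```python
import math
import random

def sample_top_event(medians, gsds, n=20000, seed=1):
    """Monte Carlo propagation of lognormal basic-event uncertainty
    through a toy fault tree: top = OR(AND(e1, e2), e3), i.e.
    P(top) = 1 - (1 - p1*p2) * (1 - p3).
    Each basic event probability is lognormal with the given median and
    geometric standard deviation, truncated at 1. Returns the sorted
    samples so percentiles can be read off directly."""
    random.seed(seed)
    out = []
    for _ in range(n):
        p = [min(1.0, m * math.exp(math.log(g) * random.gauss(0.0, 1.0)))
             for m, g in zip(medians, gsds)]
        out.append(1 - (1 - p[0] * p[1]) * (1 - p[2]))
    out.sort()
    return out
```

For comparison with the Wilks approach discussed in the abstract: with only 59 independent samples, the largest sampled value is already a one-sided 95%/95% tolerance bound on the top event probability, which is why the order-statistics method is so much cheaper than estimating percentiles from full Monte Carlo runs.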
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages.
Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values not exceeding 6.7% and GOF values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
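The core un-mixing step has a closed form in the two-source case, sketched below. Real fingerprinting studies solve a constrained optimisation over many sources with tracer weightings and goodness-of-fit diagnostics, so this is only illustrative of the least-squares idea.

```python
def unmix_two_sources(mixture, src_a, src_b):
    """Closed-form two-source mixing model: find the proportion w of
    source A (in [0, 1]) minimizing
        sum_j (m_j - w*a_j - (1 - w)*b_j)^2
    over the tracer concentrations j, then clip to the physical range."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixture, src_a, src_b))
    den = sum((a - b) ** 2 for a, b in zip(src_a, src_b))
    return min(1.0, max(0.0, num / den))
```

With laboratory mixtures of known proportions, as in this study, the recovered w can be checked directly against the true mixing percentage.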
Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E
2017-08-01
Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. 
© Society for Leukocyte Biology.
Henne, Erik; Kesten, Steven; Herth, Felix J F
2013-01-01
A method of achieving endoscopic lung volume reduction for emphysema has been developed that utilizes precise amounts of thermal energy in the form of water vapor to ablate lung tissue. This study evaluates the energy output and implications of the commercial InterVapor system and compares it to the clinical trial system. Two methods of evaluating the energy output of the vapor systems were used: a direct energy measurement and a quantification of the resultant thermal profile in a lung model. Direct measurement of total energy and the component attributable to gas (vapor energy) was performed by condensing vapor in a water bath and measuring the temperature and mass changes. Infrared images of a lung model were taken after vapor delivery, and the images were quantified to characterize the thermal profile. The total energy and vapor energy of the InterVapor system were measured at various dose levels and compared to the clinical trial system at a dose of 10.0 cal/g. An InterVapor dose of 8.5 cal/g was found to have the most similar vapor energy output with the smallest associated reduction in total energy. This was supported by characterization of the thermal profile in the lung model, which demonstrated that the profile of InterVapor at 8.5 cal/g did not exceed the profile of the clinical trial system. Considering both total energy and vapor energy is important during the development of clinical vapor applications. For InterVapor, a closer study of both energy types justified a reduced target vapor-dosing range for lung volume reduction. The clinical implication is a potential improvement in the risk profile. Copyright © 2013 S. Karger AG, Basel.
SU-F-T-479: Estimation of the Accuracy in Respiratory-Gated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosawa, T; Miyakawa, S; Sato, M
Purpose: Irregular respiratory patterns affect dose outputs in respiratory-gated radiotherapy, and there is no commercially available quality assurance (QA) system for it. We designed and developed a patient-specific QA system for respiratory-gated radiotherapy to estimate the irradiated output. Methods: Our in-house QA system for gating was composed of a personal computer with a USB-FSIO electronic circuit connected to the linear accelerator (ONCOR-K, Toshiba Medical Systems). The linac implements a respiratory gating system (AZ-733V, Anzai Medical). While the beam was on, 4.2 V square-wave pulses were continually sent to the system, which can receive and count the pulses. First, our system was compared against an oscilloscope to check its performance. Next, basic estimation models were generated from ionization-chamber measurements performed during gating with regular sinusoidal wave patterns (2.0, 2.5, 4.0, 8.0, 15 sec/cycle). During gated irradiation with the regular patterns, the number of pulses per gating window was measured using our system. The correlation between the number of pulses per gating window and the dose per gating window was assessed to generate the estimation model. Finally, two irregular respiratory patterns were created and the accuracy of the estimation was evaluated. Results: Our system performed similarly to the oscilloscope. The basic models were generated with an accuracy within 0.1%. The results of the gated irradiations with the two irregular respiratory patterns show good agreement, within 0.4% estimation accuracy. Conclusion: Our developed system provides good estimation even for irregular respiration patterns and would be a useful tool to verify the output for respiratory-gated radiotherapy.
Integrating predictive information into an agro-economic model to guide agricultural management
NASA Astrophysics Data System (ADS)
Zhang, Y.; Block, P.
2016-12-01
Skillful season-ahead climate predictions linked with responsive agricultural planning and management have the potential to reduce losses, if adopted by farmers, particularly for rainfed-dominated agriculture such as in Ethiopia. Precipitation predictions during the growing season in major agricultural regions of Ethiopia are used to generate predicted climate yield factors, which reflect the influence of precipitation amounts on crop yields and serve as inputs into an agro-economic model. The adapted model, originally developed by the International Food Policy Research Institute, produces outputs of economic indices (GDP, poverty rates, etc.) at zonal and national levels. Forecast-based approaches, in which farmers' actions are in response to forecasted conditions, are compared with no-forecast approaches in which farmers follow business as usual practices, expecting "average" climate conditions. The effects of farmer adoption rates, including the potential for reduced uptake due to poor predictions, and increasing forecast lead-time on economic outputs are also explored. Preliminary results indicate superior gains under forecast-based approaches.
Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation
NASA Technical Reports Server (NTRS)
Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.
2010-01-01
Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics
NASA Astrophysics Data System (ADS)
Lazarus, S. M.; Holman, B. P.; Splitt, M. E.
2017-12-01
A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
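Applying fitted EMOS coefficients to a single ensemble forecast can be sketched as follows; the coefficients a, b, c, d are assumed to have been estimated beforehand (e.g. by minimum-CRPS training over a historical period), a step this sketch does not show.

```python
import math

def emos_normal_params(ens, a, b, c, d):
    """Gaussian EMOS predictive distribution for one forecast case:
    predictive mean     = a + b * ensemble mean,
    predictive variance = c + d * ensemble variance.
    Coefficients a, b, c, d come from a prior training step."""
    n = len(ens)
    m = sum(ens) / n
    v = sum((x - m) ** 2 for x in ens) / n
    return a + b * m, c + d * v

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2); used to turn the calibrated predictive
    distribution into exceedance probabilities."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
```

The gridded step described in the abstract would then spread the locally fitted (mean, variance) parameters across the domain using the flow-dependent relationships extracted from the downscaled WRF winds.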
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models
Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam
2016-01-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
Sassani, Farrokh
2014-01-01
The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063
Lim, Einly; Salamonsen, Robert Francis; Mansouri, Mahdi; Gaddum, Nicholas; Mason, David Glen; Timms, Daniel L; Stevens, Michael Charles; Fraser, John; Akmeliawati, Rini; Lovell, Nigel Hamilton
2015-02-01
The present study investigates the response of implantable rotary blood pump (IRBP)-assisted patients to exercise and head-up tilt (HUT), as well as the effect of alterations in the model parameter values on this response, using validated numerical models. Furthermore, we comparatively evaluate the performance of a number of previously proposed physiologically responsive controllers, including constant speed, constant flow pulsatility index (PI), constant average pressure difference between the aorta and the left atrium, constant average differential pump pressure, constant ratio between mean pump flow and pump flow pulsatility (ratioPI or linear Starling-like control), as well as constant left atrial pressure (P̄la) control, with regard to their ability to increase cardiac output during exercise while maintaining circulatory stability upon HUT. Although native cardiac output increases automatically during exercise, increasing pump speed was able to further improve total cardiac output and reduce elevated filling pressures. At the same time, reduced venous return associated with upright posture was not shown to induce left ventricular (LV) suction. Although P̄la control outperformed other control modes in its ability to increase cardiac output during exercise, it caused a fall in the mean arterial pressure upon HUT, which may cause postural hypotension or patient discomfort. To the contrary, maintaining a constant average pressure difference between the aorta and the left atrium demonstrated superior performance in both exercise and HUT scenarios. Due to their strong dependence on the pump operating point, PI and ratioPI control performed poorly during exercise and HUT.
Our simulation results also highlighted the importance of the baroreflex mechanism in determining the response of the IRBP-assisted patients to exercise and postural changes, where desensitized reflex response attenuated the percentage increase in cardiac output during exercise and substantially reduced the arterial pressure upon HUT. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Originally developed in 1999, an updated version 8.8.0 with bug fixes was released on September 30th, 2017. EnergyPlus™ is a whole building energy simulation program that engineers, architects, and researchers use to model both energy consumption—for heating, cooling, ventilation, lighting and plug and process loads—and water use in buildings. EnergyPlus is a console-based program that reads input and writes output to text files. It ships with a number of utilities including IDF-Editor for creating input files using a simple spreadsheet-like interface, EP-Launch for managing input and output files and performing batch simulations, and EP-Compare for graphically comparing the results of two or more simulations. Several comprehensive graphical interfaces for EnergyPlus are also available. DOE does most of its work with EnergyPlus using the OpenStudio® software development kit and suite of applications. DOE releases major updates to EnergyPlus twice annually.
Pulse-shape discrimination between electron and nuclear recoils in a NaI(Tl) crystal
NASA Astrophysics Data System (ADS)
Lee, H. S.; Adhikari, G.; Adhikari, P.; Choi, S.; Hahn, I. S.; Jeon, E. J.; Joo, H. W.; Kang, W. G.; Kim, G. B.; Kim, H. J.; Kim, H. O.; Kim, K. W.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Lee, J. H.; Lee, M. H.; Leonard, D. S.; Li, J.; Oh, S. Y.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, K. S.; Shim, J. H.; So, J. H.
2015-08-01
We report on the response of a high light-output NaI(Tl) crystal to nuclear recoils induced by neutrons from an Am-Be source and compare the results with the response to electron recoils produced by Compton-scattered 662 keV γ-rays from a 137Cs source. The measured pulse-shape discrimination (PSD) power of the NaI(Tl) crystal is found to be significantly improved because of the high light output of the NaI(Tl) detector. We quantify the PSD power with a quality factor and estimate the sensitivity to the interaction rate for weakly interacting massive particles (WIMPs) with nucleons, and the result is compared with the annual modulation amplitude observed by the DAMA/LIBRA experiment. The sensitivity to spin-independent WIMP-nucleon interactions based on 100 kg·year of data from NaI detectors is estimated with simulated experiments, using the standard halo model.
Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.
Zhang, Xuyang; Goh, Kean S
2015-11-01
Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. Models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulations during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM using measured values for model inputs matched the observed values closely. The MAPE ranged from 28 to 384 and 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' ability to mimic reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time-step, and were able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time-step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
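The MAPE score used to compare the models can be computed as below; skipping zero observations is my assumption (the standard formula is undefined there), not a detail from the paper.

```python
def mape(observed, simulated):
    """Mean absolute percentage error between observed and simulated
    values, e.g. runoff volumes or pesticide masses. Pairs with a zero
    observation are excluded to avoid division by zero."""
    pairs = [(o, s) for o, s in zip(observed, simulated) if o != 0]
    return 100.0 * sum(abs((o - s) / o) for o, s in pairs) / len(pairs)
```

For example, simulations of [9, 22] against observations of [10, 20] give a MAPE of 10%, each value being off by a tenth of the observation.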
NASA Astrophysics Data System (ADS)
Krkošková, Katarína; Papán, Daniel; Papánová, Zuzana
2017-10-01
The technical seismicity negatively affects the environment, buildings and structures. Technical seismicity means seismic shakes caused by force impulse, random process and unnatural origin. The vibration influence on buildings is evaluated in the Eurocode 8 in Slovak Republic, however, the Slovak Technical Standard STN 73 0036 includes solution of the technical seismicity. This standard also classes bridges into the group of structures that are significant in light of the technical seismicity - the group “U”. Using the case studies analysis by FEM simulation and comparison is necessary because of brief norm evaluation of this issue. In this article, determinate dynamic parameters by experimental measuring and numerical method on two real bridges are compared. First bridge, (D201 - 00) is Scaffold Bridge on the road I/11 leading to the city of Čadca and is situated in the city of Žilina. It is eleven - span concrete road bridge. The railway is the obstacle, which this bridge spans. Second bridge (M5973 Brodno) is situated in the part of Žilina City on the road of I/11. It is concrete three - span road bridge built as box girder. The computing part includes 3D computational models of the bridges. First bridge (D201 - 00) was modelled in the software of IDA Nexis as the slab - wall model. The model outputs are natural frequencies and natural vibration modes. Second bridge (M5973 Brodno) was modelled in the software of VisualFEA. The technical seismicity corresponds with the force impulse, which was put into this model. The model outputs are vibration displacements, velocities and accelerations. The aim of the experiments was measuring of the vibration acceleration time record of bridges, and there was need to systematic placement of accelerometers. 
For the first bridge (D201 - 00), the vibration acceleration time record during a train crossing under the bridge is of interest; for the second bridge (M5973 Brodno), the record during application of the force impulse under the bridge is of interest. The analysis was carried out in Sigview. For the first bridge (D201 - 00), the analysis outputs were power spectral density values and their associated frequencies; these frequencies were compared with the natural frequency values from the computational model, revealing the influence of technical seismicity on the bridge's natural frequencies. For the second bridge (M5973 Brodno), the recorded vibration velocity time history displayed in Sigview was compared with the final vibration velocity time history from the computational model, and the results were found to coincide.
A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews
NASA Astrophysics Data System (ADS)
Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi
2018-03-01
Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and multi-output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. Lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels with the hidden layer as features to capture crucial correlations between output labels. Experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.
Obs4MIPS: Satellite Observations for Model Evaluation
NASA Astrophysics Data System (ADS)
Ferraro, R.; Waliser, D. E.; Gleckler, P. J.
2017-12-01
This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted at their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to perform evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.
Hodson, Nicholas A; Dunne, Stephen M; Pankhurst, Caroline L
2005-04-01
Dental curing lights are vulnerable to contamination with oral fluids during routine intra-oral use. This controlled study aimed to evaluate whether or not disposable transparent barriers placed over the light-guide tip would affect light output intensity or the subsequent depth of cure of a composite restoration. The impact on light intensity emitted from high-, medium- and low-output light-cure units in the presence of two commercially available disposable infection-control barriers was evaluated against a no-barrier control. Power density measurements from the three intensity light-cure units were recorded with a radiometer, then converted to a digital image using an intra-oral camera and values determined using a commercial computer program. For each curing unit, the measurements were repeated on ten separate occasions with each barrier and the control. Depth of cure was evaluated using a scrape test in a natural tooth model. At each level of light output, the two disposable barriers produced a significant reduction in the mean power density readings compared to the no-barrier control (P<0.005). The cure sleeve inhibited light output to a greater extent than either the cling film or the control (P<0.005). Only composite restorations light-activated by the high level unit demonstrated a small but significant decrease in the depth of cure compared to the control (P<0.05). Placing disposable barriers over the light-guide tip reduced the light intensity from all three curing lights. There was no impact on depth of cure except for the high-output light, where a small decrease in cure depth was noted but this was not considered clinically significant. Disposable barriers can be recommended for use with light-cure lights.
DeWitt, Elizabeth S.; Black, Katherine J.; Thiagarajan, Ravi R.; DiNardo, James A.; Colan, Steven D.; McGowan, Francis X.
2016-01-01
Inotropic medications are routinely used to increase cardiac output and arterial blood pressure during critical illness. However, few comparative data exist between these medications, particularly independent of their effects on venous capacitance and systemic vascular resistance. We hypothesized that an isolated working heart model that maintained constant left atrial pressure and aortic blood pressure could identify load-independent differences between inotropic medications. In an isolated heart preparation, the aorta and left atrium of Sprague Dawley rats were cannulated and placed in working mode with fixed left atrial and aortic pressure. Hearts were then exposed to common doses of a catecholamine (dopamine, epinephrine, norepinephrine, or dobutamine), milrinone, or triiodothyronine (n = 10 per dose per combination). Cardiac output, contractility (dP/dtmax), diastolic performance (dP/dtmin and tau), stroke work, heart rate, and myocardial oxygen consumption were compared during each 10-min infusion to an immediately preceding baseline. Of the catecholamines, dobutamine increased cardiac output, contractility, and diastolic performance more than clinically equivalent doses of norepinephrine (second most potent), dopamine, or epinephrine (P < 0.001). The use of triiodothyronine and milrinone was not associated with significant changes in cardiac output, contractility or diastolic function, either alone or added to a baseline catecholamine infusion. Myocardial oxygen consumption was closely related to dP/dtmax (r2 = 0.72), dP/dtmin (r2 = 0.70), and stroke work (r2 = 0.53). In uninjured, isolated working rodent hearts under constant ventricular loading conditions, dobutamine increased contractility and cardiac output more than clinically equivalent doses of norepinephrine, dopamine, and epinephrine; milrinone and triiodothyronine did not have significant effects on contractility. PMID:27150829
Large-scale modelling permafrost distribution in Ötztal, Pitztal and Kaunertal (Tyrol)
NASA Astrophysics Data System (ADS)
Hoinkes, S.; Sailer, R.; Lehning, M.; Steinkogler, W.
2012-04-01
Permafrost is an important element of the global cryosphere that is seriously affected by climate change. Because permafrost is a mostly invisible phenomenon, its area-wide distribution is not properly known. Point measurements are conducted to determine whether permafrost is present at particular places. For area-wide distribution mapping, models have to be built and applied. Different kinds of permafrost distribution models already exist, based on different approaches and of different complexities. Differences in model approaches are mainly due to scaling issues, the availability of input data and the type of output parameters. In the presented work, we map and model the distribution of permafrost in the most elevated parts of the Ötztal, Pitztal and Kaunertal, which are situated in the Eastern European Alps and cover an area of approximately 750 km2. As air temperature is believed to be the best and simplest proxy for the energy balance in mountainous regions, we took only the mean annual air temperature from the interpolated ÖKLIM dataset of the Central Institute of Meteorology and Geodynamics to calculate areas with a possible presence of permafrost. In a second approach we took a high-resolution digital elevation model (DEM) derived by airborne laser scanning and calculated possible permafrost areas based on elevation and aspect only, an approach established in the permafrost community for years. These two simple approaches are compared with each other, and to validate the models we will compare the outputs with point measurements such as temperature recorded at the snow-soil interface (BTS), continuous temperature data, rock glacier inventories and geophysical measurements. We show that the model based on the mean annual air temperature (≤ -2°C) alone predicts less permafrost on northerly exposed slopes and at lower elevations than the model based on elevation and aspect.
In the southern aspects, more permafrost area is predicted, but the overall pattern of permafrost distribution is similar. Given the different spatial resolutions of the input parameters and the complex topography of high alpine terrain, these differences in the results are to be expected. In a next step, these two very simple approaches will be compared to a more complex hydro-meteorological three-dimensional simulation (ALPINE3D). First, a one-dimensional model will be used to model permafrost presence at certain points and to calibrate the model parameters; the model will then be applied to the whole investigation area. The model output will be a map of probable permafrost distribution, in which energy balance, topography, snow cover, (sub)surface material and land cover play a major role.
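The two simple mapping rules described above can be sketched as threshold classifiers. The MAAT threshold (≤ -2 °C) is taken from the abstract; the elevation/aspect limits below are illustrative assumptions, not values from the study.

```python
import math

def permafrost_by_maat(maat_celsius, threshold=-2.0):
    """MAAT rule from the text: permafrost possible where MAAT <= -2 degC."""
    return maat_celsius <= threshold

def permafrost_by_elevation_aspect(elevation_m, aspect_deg):
    """Illustrative elevation/aspect rule: the lower limit of possible
    permafrost rises from ~2500 m on north-facing slopes to ~3000 m on
    south-facing slopes (these thresholds are assumptions, not from the text)."""
    # aspect_deg: 0 = north, 180 = south
    northness = math.cos(math.radians(aspect_deg))  # 1 north, -1 south
    lower_limit = 2750.0 - 250.0 * northness
    return elevation_m >= lower_limit

print(permafrost_by_maat(-3.1))                   # → True
print(permafrost_by_elevation_aspect(2600.0, 0))  # north slope at 2600 m → True
```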
SDG and qualitative trend based model multiple scale validation
NASA Astrophysics Data System (ADS)
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
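A minimal sketch of positive inference on a signed directed graph may clarify the idea: qualitative trends (+1 up, -1 down, 0 normal) are propagated along signed edges until no node changes. The toy graph and trends below are illustrative, not taken from the paper.

```python
# Positive inference on a signed directed graph (SDG).
# An edge (u, v, sign) means a deviation in u drives v in direction
# trend(u) * sign.

def positive_inference(edges, initial_trends):
    trends = dict(initial_trends)
    changed = True
    while changed:
        changed = False
        for u, v, sign in edges:
            if trends.get(u, 0) != 0 and trends.get(v, 0) == 0:
                trends[v] = trends[u] * sign
                changed = True
    return trends

# Toy reactor fragment: more feed raises level; higher level raises outflow.
edges = [("feed", "level", +1), ("level", "outflow", +1),
         ("coolant", "temperature", -1)]
scenario = positive_inference(edges, {"feed": +1})
print(scenario)  # → {'feed': 1, 'level': 1, 'outflow': 1}
```

A testing scenario such as this one would then be compared against the simulation model's output trends at each scale.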
An Evaluation of Output Quality of Machine Translation (Padideh Software vs. Google Translate)
ERIC Educational Resources Information Center
Azer, Haniyeh Sadeghi; Aghayi, Mohammad Bagher
2015-01-01
This study aims to evaluate the translation quality of two machine translation systems in translating six different text-types, from English to Persian. The evaluation was based on criteria proposed by Van Slype (1979). The proposed model for evaluation is a black-box type, comparative and adequacy-oriented evaluation. To conduct the evaluation, a…
Scolletta, Sabino; Franchi, Federico; Romagnoli, Stefano; Carlà, Rossella; Donati, Abele; Fabbri, Lea P; Forfori, Francesco; Alonso-Iñigo, José M; Laviola, Silvia; Mangani, Valerio; Maj, Giulia; Martinelli, Giampaolo; Mirabella, Lucia; Morelli, Andrea; Persona, Paolo; Payen, Didier
2016-07-01
Echocardiography and pulse contour methods allow, respectively, noninvasive and less invasive cardiac output estimation. The aim of the present study was to compare Doppler echocardiography with the pulse contour method MostCare for cardiac output estimation in a large, nonselected critically ill population. This was a prospective multicenter observational comparison study conducted in 15 European medicosurgical ICUs. We assessed cardiac output in 400 patients in whom an echocardiographic evaluation was performed as part of routine care or for cardiocirculatory assessment. There were no interventions. One echocardiographic cardiac output measurement was compared with the corresponding MostCare cardiac output value per patient, considering different ICU admission categories and clinical conditions. For statistical analysis, we used Bland-Altman and linear regression analyses. To assess heterogeneity in the results of the individual centers, the Cochran Q and I² statistics were applied. A total of 400 paired echocardiographic and MostCare cardiac output measures were compared. MostCare cardiac output values ranged from 1.95 to 9.90 L/min, and echocardiographic cardiac output ranged from 1.82 to 9.75 L/min. A significant correlation was found between echocardiographic cardiac output and MostCare cardiac output (r = 0.85; p < 0.0001). Among the different ICUs, the mean bias between echocardiographic cardiac output and MostCare cardiac output ranged from -0.40 to 0.45 L/min, and the percentage error ranged from 13.2% to 47.2%. Overall, the mean bias was -0.03 L/min, with 95% limits of agreement of -1.54 to 1.47 L/min and a relative percentage error of 30.1%. The percentage error was 24% in the sepsis category, 26% in the trauma category, 30% in the surgical category, and 33% in the medical admission category. The final overall percentage error was 27.3% with a 95% CI of 22.2-32.4%.
Our results suggest that MostCare could be an alternative to echocardiography to assess cardiac output in ICU patients with a large spectrum of clinical conditions.
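The Bland-Altman statistics reported above (bias, 95% limits of agreement, percentage error) can be sketched as follows; the paired cardiac output values are hypothetical placeholders, not study data.

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias, 95% limits of agreement, and percentage error
    (1.96 * SD of the differences divided by the mean cardiac output)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    mean_co = statistics.mean(method_a + method_b)
    pct_error = 100.0 * 1.96 * sd / mean_co
    return bias, loa, pct_error

# Hypothetical paired cardiac output values (L/min)
echo = [4.2, 5.1, 3.8, 6.0, 4.9]
mostcare = [4.0, 5.4, 3.9, 5.7, 5.0]
bias, loa, pe = bland_altman(echo, mostcare)
```

A percentage error below the conventional 30% threshold is usually read as clinical interchangeability of the two methods.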
NASA Astrophysics Data System (ADS)
Tang, U. W.; Wang, Z. S.
2008-10-01
Each city has its unique urban form. The importance of urban form on sustainable development has been recognized in recent years. Traditionally, air quality modelling in a city is in a mesoscale with grid resolution of kilometers, regardless of its urban form. This paper introduces a GIS-based air quality and noise model system developed to study the built environment of highly compact urban forms. Compared with traditional mesoscale air quality model system, the present model system has a higher spatial resolution down to individual buildings along both sides of the street. Applying the developed model system in the Macao Peninsula with highly compact urban forms, the average spatial resolution of input and output data is as high as 174 receptor points per km2. Based on this input/output dataset with a high spatial resolution, this study shows that even the highly compact urban forms can be fragmented into a very small geographic scale of less than 3 km2. This is due to the significant temporal variation of urban development. The variation of urban form in each fragment in turn affects air dispersion, traffic condition, and thus air quality and noise in a measurable scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, R.R.; McLellan, T.M.; Withey, W.R.
This report presents the results of TTCP-UTP6 efforts on modeling aspects that need to be considered when chemical protective ensembles are worn in warm environments. Since 1983, a significant database has been collected from human experimental studies with a wide range of clothing systems, from which predictive modeling equations have been developed for individuals working in temperate and hot environments; however, few comparisons of the results from various model outputs have ever been carried out. This initial comparison study was part of a key technical area (KTA) project for The Technical Cooperation Program (TTCP) UTP-6 working party. A modeling workshop was conducted in Toronto, Canada on 9-10 June 1994 to discuss the data reduction and results acquired in an initial TTCP clothing analysis study using various chemical protective garments. To our knowledge, no comprehensive study to date has focused on comparing experimental results obtained using an international standardized heat stress procedure with physiological outputs from various model predictions for individuals dressed in chemical protective clothing systems. This is the major focus of this TTCP key technical study. This technical report covers one aspect of the working party's results.
Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure
NASA Astrophysics Data System (ADS)
Tsai, C.; Yeh, J. J. J.
2017-12-01
A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments, and the major sources of uncertainty should be taken into consideration by an optimal flood management strategy. This study focuses on the 20 km reach downstream of the Shihmen Reservoir in Taiwan. A dam-failure-induced flood provides the upstream boundary conditions for flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are uncertainty in the dam break model and uncertainty in the roughness coefficient. The perturbance moment method is applied to a dam break model and the hydro-system model to develop probabilistic flood inundation maps. Various numbers of uncertain variables can be considered in these models, and the variability of the outputs can be quantified. Probabilistic flood inundation mapping for dam-break-induced floods can thus be developed, with the variability of the output taken into account, using the widely used HEC-RAS model. Different probabilistic flood inundation maps are discussed and compared, and are expected to provide new physical insights in support of evaluating areas at risk of flooding below the reservoir.
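The idea of a moment-based (perturbation) uncertainty method can be sketched as first-order second-moment propagation: the output variance is approximated from a finite-difference sensitivity at the mean input. The stage-discharge relation below is a toy stand-in, not the study's dam break or HEC-RAS model.

```python
# First-order second-moment (mean-value perturbation) sketch: propagate
# uncertainty in Manning's roughness n through a model output.

def flood_stage(n, discharge=500.0):
    # Toy monotone relation: higher roughness -> higher stage for a given flow
    return 2.0 + 15.0 * n * discharge ** 0.4

def fosm(model, mean_x, sd_x, h=1e-4):
    mean_y = model(mean_x)
    dydx = (model(mean_x + h) - model(mean_x - h)) / (2.0 * h)
    sd_y = abs(dydx) * sd_x          # first-order variance propagation
    return mean_y, sd_y

# Assumed roughness statistics: mean n = 0.035, standard deviation 0.005
mean_stage, sd_stage = fosm(flood_stage, mean_x=0.035, sd_x=0.005)
```

Repeating this for each uncertain variable gives the output variability that a probabilistic inundation map encodes.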
NASA Astrophysics Data System (ADS)
Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.
2012-04-01
Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models (GCMs/RCMs), and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of the coarse resolution of GCMs/RCMs and their deficiencies in modelling high-intensity precipitation events. Thus decisions must be made on how to appropriately pre-process the meteorological variables from GCMs/RCMs, such as the selection of downscaling methods and the application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCMs/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resulting grand ensemble of future river discharge projections from the GCM/RCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed physics experiment, creating a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that the future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal rather than just an annual basis is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even under present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.
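One of the simplest MOS techniques alluded to above is multiplicative "linear scaling" of RCM precipitation so that its calibration-period mean matches observations. The values below are illustrative; real MOS for a catchment like the Upper Severn would typically be more elaborate (e.g. quantile mapping with seasonal factors).

```python
# Minimal Model Output Statistics sketch: multiplicative linear scaling.

def linear_scaling_factor(obs_calib, rcm_calib):
    return sum(obs_calib) / sum(rcm_calib)

def correct(rcm_series, factor):
    return [p * factor for p in rcm_series]

obs_calib = [2.0, 0.0, 5.0, 1.0]     # observed daily precipitation (mm)
rcm_calib = [1.0, 0.5, 3.0, 1.5]     # RCM precipitation over the same days
f = linear_scaling_factor(obs_calib, rcm_calib)
future = correct([2.0, 4.0], f)      # factor ~1.33 scales a future series
```

The "strong assumption" flagged in the abstract is visible here: the same calibration-period factor is assumed to hold under a future climate.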
Comparison of P&O and INC Methods in Maximum Power Point Tracker for PV Systems
NASA Astrophysics Data System (ADS)
Chen, Hesheng; Cui, Yuanhui; Zhao, Yue; Wang, Zhisen
2018-03-01
In the context of renewable energy, the maximum power point tracker (MPPT) is often used to increase solar power efficiency, taking into account the randomness and volatility of solar energy due to changes in temperature and irradiance. Among MPPT techniques, perturb & observe (P&O) and incremental conductance (INC) are widely used in MPPT controllers because of their simplicity and ease of implementation. Based on the internal structure of the photovoltaic cell and its output volt-ampere characteristic, this paper establishes the circuit model and a dynamic simulation model in Matlab/Simulink using an S-function. The perturb & observe and incremental conductance MPPT methods were analysed and compared through theoretical analysis and digital simulation. The simulation results show that the system with the INC MPPT method has better dynamic performance and improves the output power of photovoltaic power generation.
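The P&O loop compared above can be sketched in a few lines: perturb the operating voltage, observe the power change, and keep moving in the direction that increases power (INC instead compares dI/dV with -I/V). The single-peak P-V curve below is a toy stand-in for a real panel.

```python
# Sketch of the perturb & observe (P&O) MPPT algorithm.

def pv_power(v):
    # Toy single-peak P-V curve with its maximum near v = 17.0 V
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=12.0, step=0.5, iterations=40):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:                 # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()  # converges near v = 17 V, then oscillates
```

The steady-state oscillation around the maximum visible here is exactly the dynamic-performance weakness for which INC is usually preferred.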
Analysis of information systems for hydropower operations
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W. W. G.
1976-01-01
The operations of hydropower systems were analyzed with emphasis on water resource management, to determine how aerospace derived information system technologies can increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept outlined.
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement by simulating the variation tendency of ocean waves. Although existing work has devoted effort to improving the performance of MASNUM from various aspects, substantial room remains for further performance improvement. In this paper, we aim to improve the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O, and data access optimization. The experimental results demonstrate that our approach achieves higher performance than the state-of-the-art work, with about a 3.5x speedup and no degradation of prediction accuracy. In addition, a parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
RM-SORN: a reward-modulated self-organizing recurrent neural network.
Aswolinskiy, Witali; Pipa, Gordon
2015-01-01
Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
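The interaction of an eligibility trace with a reward signal can be sketched in the spirit of the model above: an STDP-like trace records pre/post coincidences, and the weight update is gated by a scalar reward. Network size, constants, and the reward rule below are illustrative; the actual RM-SORN model also includes intrinsic plasticity and synaptic normalization.

```python
import random

# Minimal reward-modulated plasticity sketch (not the full RM-SORN model).

random.seed(1)

def rm_stdp_step(w, pre_spike, post_spike, trace, reward,
                 lr=0.01, trace_decay=0.9):
    # Eligibility trace accumulates pre/post spike coincidences
    trace = trace_decay * trace + (1.0 if pre_spike and post_spike else 0.0)
    # Reward gates whether the trace is converted into a weight change
    w = w + lr * reward * trace
    return w, trace

w, trace = 0.5, 0.0
for t in range(100):
    pre, post = random.random() < 0.3, random.random() < 0.3
    reward = 1.0 if (pre and post) else 0.0   # toy rewarded coincidence
    w, trace = rm_stdp_step(w, pre, post, trace, reward)
```

With reward only ever non-negative here, the synapse can only be strengthened; a signed reward would allow depression as well.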
Analysis of information systems for hydropower operations: Executive summary
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W.
1976-01-01
An analysis was performed of the operations of hydropower systems, with emphasis on water resource management, to determine how aerospace derived information system technologies can effectively increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined in detail to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results were used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept was outlined.
Economic impacts of a transition to higher oil prices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tessmer, Jr, R. G.; Carhart, S. C.; Marcuse, W.
1978-06-01
Economic impacts of sharply higher oil and gas prices in the eighties are estimated using a combination of optimization and input-output models. A 1985 Base Case is compared with a High Case in which crude oil and crude natural gas are, respectively, 2.1 and 1.4 times as expensive as in the Base Case. Impacts examined include delivered energy prices and demands, resource consumption, emission levels and costs, aggregate and compositional changes in gross national product, balance of payments, output, employment, and sectoral prices. Methodology is developed for linking models in both quantity and price space for energy-service-specific fuel demands. A set of energy demand elasticities is derived which is consistent between alternative 1985 cases and between the 1985 cases and an historical year (1967). A framework and methodology are also presented for allocating portions of the DOE Conservation budget according to broad policy objectives and allocation rules.
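The input-output side of such a linked model is a Leontief system x = Ax + d, solved for gross sectoral output x given final demand d. The two-sector coefficients below are illustrative, not values from the 1985 cases.

```python
# Two-sector Leontief input-output sketch: solve (I - A) x = d directly.

def solve_leontief_2x2(A, d):
    a, b = 1.0 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1.0 - A[1][1]
    det = a * e - b * c
    x1 = (e * d[0] - b * d[1]) / det   # Cramer's rule
    x2 = (a * d[1] - c * d[0]) / det
    return [x1, x2]

A = [[0.1, 0.2],    # illustrative input coefficients (energy, non-energy)
     [0.3, 0.1]]
d = [100.0, 200.0]  # final demand by sector
x = solve_leontief_2x2(A, d)           # gross output consistent with d
```

Raising energy prices in a linked model shifts the coefficients in A and the demands in d, which is how the sectoral output and employment impacts above propagate.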
NASA Technical Reports Server (NTRS)
Bozeman, Robert E.
1987-01-01
An analytic technique for accounting for the joint effects of Earth oblateness and atmospheric drag on close-Earth satellites is investigated. The technique is analytic in the sense that explicit solutions to the Lagrange planetary equations are given; consequently, no numerical integrations are required in the solution process. The atmospheric density in the technique described is represented by a rotating spherical exponential model with superposed effects of the oblate atmosphere and the diurnal variations. A computer program implementing the process is discussed and sample output is compared with output from program NSEP (Numerical Satellite Ephemeris Program). NSEP uses a numerical integration technique to account for atmospheric drag effects.
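The core of a spherical exponential density model of the kind described is a density that decays exponentially with altitude above a reference height. The reference values below are rough upper-atmosphere figures chosen for illustration, and the oblate-atmosphere and diurnal corrections of the actual technique are omitted.

```python
import math

def exponential_density(h_km, h0_km=400.0, rho0=2.8e-12, scale_height_km=58.0):
    """Density (kg/m^3) at altitude h_km via rho0 * exp(-(h - h0) / H).

    rho0, h0_km and the scale height are illustrative assumptions."""
    return rho0 * math.exp(-(h_km - h0_km) / scale_height_km)

rho = exponential_density(450.0)   # density 50 km above the reference height
```

In a drag computation, this density multiplies the satellite's velocity relative to the rotating atmosphere, which is where the "rotating" part of the model enters.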
Tso, P; Lee, T; Demichele, S J
1999-08-01
Comparison was made between the intestinal absorption and lymphatic transport of a randomly interesterified fish oil and medium-chain triglyceride (MCT) structured triglycerides (STG) vs. the physical mix in rat small intestine following ischemia and reperfusion (I/R) injury. Under halothane anesthesia, the superior mesenteric artery (SMA) was occluded for 20 min and then reperfused in I/R rats. The SMA was isolated but not occluded in control rats. In both treatment groups, the mesenteric lymph duct was cannulated and a gastric tube was inserted. Each treatment group received 1 ml of the fish oil-MCT STG or physical mix (7 rats/group) through the gastric tube followed by an infusion of PBS at 3 ml/h for 8 h. Lymph was collected hourly for 8 h. Lymph triglyceride, cholesterol, and decanoic and eicosapentaenoic acids increased rapidly and maintained a significantly higher output (P < 0.01) with STG compared with physical mix in control rats over 8 h. After I/R, lymphatic triglyceride output decreased 50% compared with control. Gastric infusion of STG significantly improved lipid transport by having a twofold higher triglyceride, cholesterol, and decanoic and eicosapentaenoic acids output to lymph compared with its physical mix (P < 0.01). We conclude that STG is absorbed into lymph significantly better than physical mix by both the normal intestine and the intestine injured by I/R.
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
NASA Astrophysics Data System (ADS)
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
NASA Astrophysics Data System (ADS)
Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.
2015-12-01
Models are important tools for assessing the potential of wind energy sites, but the accuracy of these projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23-March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of the wind field images, time-series analysis, and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speed and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and directions were also plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles in forecasting winds at heights of modern turbine rotors so that in the future the model can be improved for the industry.
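The validation statistics named above (bias, RMSE, correlation coefficient, coefficient of determination) can be sketched in a few lines. The observed/modeled wind speeds below are invented placeholders, not campaign data:

```python
import numpy as np

def validation_stats(observed, modeled):
    """Basic paired model-validation statistics between two series."""
    obs = np.asarray(observed, dtype=float)
    mod = np.asarray(modeled, dtype=float)
    bias = np.mean(mod - obs)                      # mean error
    rmse = np.sqrt(np.mean((mod - obs) ** 2))      # root mean squared error
    r = np.corrcoef(obs, mod)[0, 1]                # correlation coefficient
    ss_res = np.sum((obs - mod) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return {"bias": bias, "rmse": rmse, "r": r, "r2": r2}

# Placeholder wind speeds (m/s): "observed" lidar vs "modeled" forecast.
obs = [5.0, 6.2, 7.1, 8.4, 6.9]
mod = [5.4, 6.0, 7.5, 8.1, 7.3]
stats = validation_stats(obs, mod)
```

In the study these quantities would be computed per height bin and per forecast time to form the vertical profiles and time series described.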
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2^5 two-level factorial design of experiments (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. The process model of rubber blend preparation that correlates the relationships between the mixer process input parameters and the output response of blend compatibility was developed. Model analysis of variance (ANOVA) and model fitting through curve evaluation finalized an R² of 99.60% with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
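The coded design matrix behind such a study can be generated in a few lines. The abstract does not state which fraction was used, so a resolution-V half-fraction (the fifth factor E aliased to the four-way interaction ABCD) is shown purely as an assumption:

```python
import itertools
import numpy as np

# Half-fraction 2^(5-1) design: full 2^4 factorial in factors A-D,
# with E = ABCD (resolution V). Levels are coded -1/+1.
base = np.array(list(itertools.product([-1, 1], repeat=4)))
e_col = np.prod(base, axis=1, keepdims=True)   # E column from the defining relation
design = np.hstack([base, e_col])              # 16 runs x 5 factors

# By construction A*B*C*D*E = E*E = +1 for every run.
row_products = np.prod(design, axis=1)
```

Each row would then be mapped to physical settings (blend ratio, temperature, rotor speed, mixing time, compatibilizer level) before running the mixer trials.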
NASA Astrophysics Data System (ADS)
Alligné, S.; Nicolet, C.; Béguin, A.; Landry, C.; Gomes, J.; Avellan, F.
2017-04-01
The prediction of pressure and output power fluctuation amplitudes on a Francis turbine prototype is a challenge for the hydro-equipment industry since it is subject to guarantees to ensure smooth and reliable operation of the hydro units. The European FP7 research project Hyperbole aims to set up a methodology to transpose the pressure fluctuations induced by the cavitation vortex rope from the reduced scale model to the prototype generating units. A Francis turbine unit of 444 MW with a specific speed value of ν = 0.29 is considered as the case study. A SIMSEN model of the power station including the electrical system, controllers, rotating train, and hydraulic system with transposed draft tube excitation sources is set up. Based on this model, a frequency analysis of the hydroelectric system is performed for all technologies to analyse potential interactions between hydraulic excitation sources and electrical components. Three technologies have been compared: the classical fixed speed configuration with a Synchronous Machine (SM) and the two variable speed technologies, which are the Doubly Fed Induction Machine (DFIM) and the Full Size Frequency Converter (FSFC).
Aircraft Fault Detection Using Real-Time Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2016-01-01
A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
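A crude single-window version of the input/output frequency-response estimate described above can be sketched as follows; a simulated first-order system stands in for the flight data, and all parameter values are arbitrary choices rather than the paper's:

```python
import numpy as np

def frf_at(u, y, dt, f0):
    """Single-frequency FRF estimate: ratio of the output and input
    Fourier coefficients at the excitation frequency (one crude
    window, without the paper's sliding-window update)."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(u), d=dt)
    k = int(np.argmin(np.abs(freqs - f0)))
    return Y[k] / U[k]

# Simulate a first-order lag y' = a*(u - y) excited by a 0.5 Hz sine.
dt, a, f0 = 0.01, 5.0, 0.5
t = np.arange(0.0, 20.0, dt)
u = np.sin(2.0 * np.pi * f0 * t)
y = np.zeros_like(u)
for i in range(1, len(t)):                       # Euler integration
    y[i] = y[i - 1] + dt * a * (u[i - 1] - y[i - 1])

H = frf_at(u, y, dt, f0)
gain = abs(H)               # analytic value a/sqrt(w^2 + a^2), about 0.85 here
phase = float(np.angle(H))  # analytic value -atan(w/a), about -0.56 rad here
```

The flight method instead uses orthogonal multisines so that many frequencies can be estimated simultaneously from one 20-second window, but the core ratio-of-transforms idea is the same.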
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most suitable latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting faults of the gearbox. The filter parameters are estimated by using the LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of the simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox by gear profile error detection in an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered as the filter output and the Yule-Walker and Kalman filter approaches are implemented for estimating the parameters. The results confirm the high performance of the new proposed fault detection method.
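The AR-based comparison filters mentioned at the end can be sketched concretely: fit AR coefficients to a healthy signal by Yule-Walker, then use the one-step prediction residual as the filter output. The AR(2) signal below is simulated, and all values are illustrative, not gearbox data:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from sample autocorrelations (Yule-Walker)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def residual(x, a):
    """One-step-ahead prediction error: the 'filter output' whose energy
    rises when the signal deviates from the healthy-signal model."""
    order = len(a)
    x = np.asarray(x, float) - np.mean(x)
    res = np.array(x)
    for t in range(order, len(x)):
        res[t] = x[t] - np.dot(a, x[t - order:t][::-1])
    return res[order:]

# Fit on a simulated 'healthy' AR(2) signal: x[t] = 0.6 x[t-1] - 0.2 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t]

a_hat = yule_walker(x, 2)
res = residual(x, a_hat)
```

For fault detection one would fit `a_hat` on the undamaged gearbox and flag records whose residual variance exceeds the healthy baseline.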
Microfabricated Bulk Piezoelectric Transformers
NASA Astrophysics Data System (ADS)
Barham, Oliver M.
Piezoelectric voltage transformers (PTs) can be used to transform an input voltage into a different, required output voltage needed in electronic and electromechanical systems, among other varied uses. On the macro scale, they have been commercialized in electronics powering consumer laptop liquid crystal displays, and compete with an older, more prevalent technology, inductive electromagnetic voltage transformers (EMTs). The present work investigates PTs on smaller size scales that are currently in the academic research sphere, with an eye towards applications including micro-robotics and other small-scale electronic and electromechanical systems. PTs and EMTs are compared on the basis of power and energy density, with PTs trending towards higher values of power and energy density, comparatively, indicating their suitability for small-scale systems. Among PT topologies, bulk disc-type PTs, operating in their fundamental radial extension mode, and free-free beam PTs, operating in their fundamental length extensional mode, are good candidates for microfabrication and are considered here. Analytical modeling based on the Extended Hamilton Method is used to predict device performance and integrate mechanical tethering as a boundary condition. This model differs from previous PT models in that the electric enthalpy is used to derive constituent equations of motion with Hamilton's Method, and therefore this approach is also more generally applicable to other piezoelectric systems outside of the present work. Prototype devices are microfabricated using a two-mask process consisting of traditional photolithography combined with micropowder blasting, and are tested with various output electrical loads. 4 mm diameter tethered disc PTs with volume on the order of 0.002 cm³, two orders of magnitude smaller than in the bulk PT literature, had the following performance: a prototype with electrode area ratio (input area / output area) = 1 had peak gain of 2.3 (±0.1), efficiency of 33 (±0.1)% and output power density of 51.3 (±4.0) W cm⁻³ (for output power of 80 (±6) mW) at 1 MΩ load, for an input voltage range of 3 V-6 V (± one standard deviation). The gain results are similar to those of several much larger bulk devices in the literature, but the efficiencies of the present devices are lower. Rectangular-topology, free-free beam devices were also microfabricated across 3 orders of scale by volume, with the smallest device on the order of 0.00002 cm³. These devices exhibited higher quality factors and efficiencies, in some cases, compared to circular devices, but lower peak gain (by roughly 1/2). Limitations of the microfabrication process are determined, and future work is proposed. Overall, the devices fabricated in the present work show promise for integration into small-scale engineered systems, but improvements can be made in efficiency, and potentially voltage gain, depending on the application.
Guinotte, J.M.; Bartley, J.D.; Iqbal, A.; Fautin, D.G.; Buddemeier, R.W.
2006-01-01
We demonstrate the KGSMapper (Kansas Geological Survey Mapper), a straightforward, web-based biogeographic tool that uses environmental conditions of places where members of a taxon are known to occur to find other places containing suitable habitat for them. Using occurrence data for anemonefishes or their host sea anemones, and data for environmental parameters, we generated maps of suitable habitat for the organisms. The fact that the fishes are obligate symbionts of the anemones allowed us to validate the KGSMapper output: we were able to compare the inferred occurrence of the organism to that of the actual occurrence of its symbiont. Characterizing suitable habitat for these organisms in the Indo-West Pacific, the region where they naturally occur, can be used to guide conservation efforts, field work, etc.; defining suitable habitat for them in the Atlantic and eastern Pacific is relevant to identifying areas vulnerable to biological invasions. We advocate distinguishing between these 2 sorts of model output, terming the former maps of realized habitat and the latter maps of potential habitat. Creation of a niche model requires adding biotic data to the environmental data used for habitat maps: we included data on fish occurrences to infer anemone distribution and vice versa. Altering the selection of environmental variables allowed us to investigate which variables may exert the most influence on organism distribution. Adding variables does not necessarily improve precision of the model output. KGSMapper output distinguishes areas that fall within 1 standard deviation (SD) of the mean environmental variable values for places where members of the taxon occur, within 2 SD, and within the entire range of values; eliminating outliers or data known to be imprecise or inaccurate improved output precision mainly in the 2 SD range and beyond. Thus, KGSMapper is robust in the face of questionable data, offering the user a way to recognize and clean such data. 
It also functions well with sparse datasets. These features make it useful for biogeographic meta-analyses with the diverse, distributed datasets that are typical for marine organisms lacking direct commercial value. © Inter-Research 2006.
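The 1 SD / 2 SD / full-range output zones that KGSMapper distinguishes can be expressed as a small environmental-envelope classifier. The occurrence statistics below (two variables, e.g. temperature and salinity) are hypothetical stand-ins, not KGSMapper data:

```python
import numpy as np

def habitat_class(env, mean, sd, vmin, vmax):
    """Return 1 if every environmental value at a location lies within
    1 SD of the taxon's occurrence mean, 2 if within 2 SD, 3 if within
    the full observed range, and 0 if outside suitable habitat."""
    env, mean, sd = np.asarray(env), np.asarray(mean), np.asarray(sd)
    vmin, vmax = np.asarray(vmin), np.asarray(vmax)
    if np.all(np.abs(env - mean) <= sd):
        return 1
    if np.all(np.abs(env - mean) <= 2 * sd):
        return 2
    if np.all((env >= vmin) & (env <= vmax)):
        return 3
    return 0

# Hypothetical occurrence statistics for two environmental variables.
MEAN, SD = [25.0, 35.0], [2.0, 1.0]
VMIN, VMAX = [20.0, 32.0], [30.0, 38.0]
```

Trimming outliers before computing `MEAN`/`SD`/`VMIN`/`VMAX` mirrors the observation that cleaning imprecise data mainly sharpens the 2 SD zone and beyond.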
Enhanced electrostatic vibrational energy harvesting using integrated opposite-charged electrets
NASA Astrophysics Data System (ADS)
Tao, Kai; Wu, Jin; Tang, Lihua; Hu, Liangxing; Woh Lye, Sun; Miao, Jianmin
2017-04-01
This paper presents a sandwich-structured MEMS electret-based vibrational energy harvester (e-VEH) that has two opposite-charged electrets integrated into a single electrostatic device. Compared to the conventional two-plate configuration where the maximum charge can only be induced when the movable mass reaches its lowest position, the proposed harvester is capable of creating maximum charge induction at both the highest and the lowest extremes, leading to an enhanced output performance. As a proof of concept, an out-of-plane MEMS e-VEH device with an overall volume of about 0.24 cm3 is designed, modeled, fabricated and characterized. A holistic equivalent circuit model incorporating the mechanical dynamic model and two capacitive circuits has been established to study the charge circulations. With the fabricated prototype, the experimental analysis demonstrates the superior performance of the proposed sandwiched e-VEH: the output voltage increases by 80.9% and 18.6% at an acceleration of 5 m s-2 compared to the top electret alone and bottom electret alone configurations, respectively. The experimental results also confirm the waveform derivation with the increase of excitation, which is in good agreement with the circuit simulation results. The proposed sandwiched e-VEH topology provides an effective and convenient methodology for improving the performance of electrostatic energy harvesting devices.
Chan, Caroline; Heinbokel, John F; Myers, John A; Jacobs, Robert R
2012-10-01
A complex interplay of factors determines the degree of bioaccumulation of Hg in fish in any particular basin. Although certain watershed characteristics have been associated with higher or lower bioaccumulation rates, the relationships between these characteristics are poorly understood. To add to this understanding, a dynamic model was built to examine these relationships in stream systems. The model follows Hg from the water column, through microbial conversion and subsequent concentration, through the food web to piscivorous fish. The model was calibrated to 7 basins in Kentucky and further evaluated by comparing output to 7 sites in, or proximal to, the Ohio River Valley, an underrepresented region in the bioaccumulation literature. Water quality and basin characteristics were inputs to the model, with Hg tissue concentrations of generic trophic level 3, 3.5, and 4 fish as the output. Regulatory and monitoring data were used to calibrate and evaluate the model. Mean average prediction error for Kentucky sites was 26%, whereas mean error for evaluation sites was 51%. Variability within natural systems can be substantial and was quantified for fish tissue by analysis of the US Geological Survey National Fish Database. This analysis pointed to the need for more systematic sampling of fish tissue. Analysis of model output indicated that the parameters that had the greatest impact on bioaccumulation influenced the system at several points. These parameters included forested and wetland coverage and nutrient levels. Factors that were less sensitive modified the system at only 1 point and included the unfiltered total Hg input and the portion of the basin that is developed. Copyright © 2012 SETAC.
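The prediction-error summaries quoted (26% and 51%) correspond to a statistic like the following; the tissue concentrations are invented placeholders, not the study's data:

```python
def mean_prediction_error(observed, predicted):
    """Mean absolute prediction error as a percentage of the observed
    values, the kind of summary quoted for calibration/evaluation sites."""
    pairs = list(zip(observed, predicted))
    return 100.0 * sum(abs(p - o) / o for o, p in pairs) / len(pairs)

# Hypothetical fish-tissue Hg concentrations (mg/kg): observed vs modeled.
err = mean_prediction_error([0.30, 0.45, 0.60], [0.36, 0.40, 0.54])
```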
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.
A first approach to the distortion analysis of nonlinear analog circuits utilizing X-parameters
NASA Astrophysics Data System (ADS)
Weber, H.; Widemann, C.; Mathis, W.
2013-07-01
In this contribution a first approach to the distortion analysis of nonlinear 2-port networks with X-parameters¹ is presented. The X-parameters introduced by Verspecht and Root (2006) offer the possibility to describe nonlinear microwave 2-port networks under large-signal conditions. On the basis of X-parameter measurements with a nonlinear network analyzer (NVNA), behavioral models can be extracted for the networks. These models can be used to consider the nonlinear behavior during the design process of microwave circuits. The idea of the present work is to extract the behavioral models in order to describe the influence of interfering signals on the output behavior of the nonlinear circuits. Here, a simulator is used instead of an NVNA to extract the X-parameters. Assuming that the interfering signals are relatively small compared to the nominal input signal, the output signal can be described as a superposition of the effects of each input signal. In order to determine the functional correlation between the scattering variables, a polynomial dependency is assumed. The required datasets for the approximation of the describing functions are simulated by a directional coupler model in Cadence Design Framework. The polynomial coefficients are obtained by a least-squares method. The resulting describing functions can be used to predict the system's behavior under certain conditions as well as the effects of the interfering signal on the output signal. ¹X-parameter is a registered trademark of Agilent Technologies, Inc.
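The least-squares polynomial fit of a describing function can be sketched as follows. The input amplitudes and output scattering-variable magnitudes below are invented stand-ins for the Cadence simulation data:

```python
import numpy as np

# Hypothetical input amplitudes and output scattering-variable magnitudes.
a_in = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
b_out = np.array([0.20, 0.41, 0.64, 0.91, 1.25, 1.68])

# Cubic polynomial model b = c0 + c1*a + c2*a^2 + c3*a^3; the coefficients
# are obtained by least squares on a Vandermonde system.
A = np.vander(a_in, 4, increasing=True)
coeffs, *_ = np.linalg.lstsq(A, b_out, rcond=None)

b_pred = A @ coeffs                                # model prediction
rms = float(np.sqrt(np.mean((b_pred - b_out) ** 2)))
```

The fitted polynomial then serves as the describing function used to predict the output under superposed interfering signals.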
A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation
Wang, Zeyu; Wang, Jianhui; Chen, Chen
2016-12-07
Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.
Design and experimental verification of an improved magnetostrictive energy harvester
NASA Astrophysics Data System (ADS)
Germer, M.; Marschner, U.; Flatau, A. B.
2017-04-01
This paper summarizes and extends the modeling state of the art of magnetostrictive energy harvesters with a focus on the pick-up coil design. The harvester is a one-sided clamped galfenol unimorph loaded with two brass pieces, each containing a permanent magnet to create a biased magnetic field. Measurements on different pick-up coils were conducted and compared with results from an analytic model. Resistance, mass, and inductance were formulated and verified by measurements. Both the length for a constant number of turns and the number of turns for a constant coil length were also modeled and varied. The results confirm that the output voltage depends on the coil length for a constant number of turns and is higher for smaller coils. In contrast to the case of a uniform magnetic field, the maximal output voltage is obtained if the coil is placed not directly at but near the fixation. Two effects explain this behavior: due to the permanent magnet next to the fixation, the magnetic force is higher and orients the magnetic domains more strongly; the clamping locally increases the stress and forces the magnetic domains to orient as well. For that reason the material is stiffer and therefore the strain smaller. The tradeoff between a higher induced voltage in the coil and the increasing inductance and resistance of every additional turn is presented together with an experimental validation of the models. Based on the results, guidelines are given to design an optimal coil which maximizes the output power for a given unimorph.
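The resistance and mass scaling with turn count discussed above can be sketched under a simple solenoid assumption; wire diameter, coil radius, and material constants (copper) are illustrative choices, not the paper's values:

```python
import numpy as np

def coil_properties(n_turns, r_coil=2e-3, d_wire=50e-6,
                    rho=1.7e-8, density=8960.0):
    """Resistance (ohm) and mass (kg) of an n-turn pick-up coil, assuming
    a single-layer solenoid of radius r_coil wound with round wire."""
    wire_len = n_turns * 2.0 * np.pi * r_coil   # total wire length, m
    area = np.pi * (d_wire / 2.0) ** 2          # wire cross-section, m^2
    resistance = rho * wire_len / area          # DC resistance
    mass = density * wire_len * area            # conductor mass
    return resistance, mass

r1, m1 = coil_properties(100)
r2, m2 = coil_properties(200)   # doubling turns doubles both R and m
```

The design tradeoff in the paper comes from the fact that each added turn raises the induced voltage but also raises this series resistance (and the inductance), which loads down the output power.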
Signals and circuits in the purkinje neuron.
Abrams, Zéev R; Zhang, Xiang
2011-01-01
Purkinje neurons (PN) in the cerebellum have over 100,000 inputs organized in an orthogonal geometry, and a single output channel. As the sole output of the cerebellar cortex layer, their complex firing pattern has been associated with motor control and learning. As such they have been extensively modeled and measured using tools ranging from electrophysiology and neuroanatomy, to dynamic systems and artificial intelligence methods. However, there is an alternative approach to analyze and describe the neuronal output of these cells using concepts from electrical engineering, particularly signal processing and digital/analog circuits. By viewing the PN as an unknown circuit to be reverse-engineered, we can use the tools that provide the foundations of today's integrated circuits and communication systems to analyze the Purkinje system at the circuit level. We use Fourier transforms to analyze and isolate the inherent frequency modes in the PN and define three unique frequency ranges associated with the cells' output. Comparing the PN to a signal generator that can be externally modulated adds an entire level of complexity to the functional role of these neurons both in terms of data analysis and information processing, relying on Fourier analysis methods in place of statistical ones. We also re-describe some of the recent literature in the field, using the nomenclature of signal processing. Furthermore, by comparing the experimental data of the past decade with basic electronic circuitry, we can resolve the outstanding controversy in the field, by recognizing that the PN can act as a multivibrator circuit.
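The Fourier-based isolation of frequency modes described above can be illustrated on a synthetic two-tone signal standing in for a firing-rate trace; the frequencies and amplitudes are arbitrary examples, not Purkinje-cell bands:

```python
import numpy as np

fs = 1000.0                          # sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
# Two-tone stand-in for a firing-rate signal: a strong 50 Hz mode
# plus a weaker 150 Hz mode.
x = np.sin(2 * np.pi * 50.0 * t) + 0.5 * np.sin(2 * np.pi * 150.0 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
dominant = float(freqs[np.argmax(spectrum)])   # strongest mode, Hz
```

Applied to spike-train-derived rate signals, the same transform separates the distinct frequency ranges the paper associates with the cell's output.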
NASA Technical Reports Server (NTRS)
Rockey, D. E.
1979-01-01
A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
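The iterative temperature/efficiency calculation can be illustrated with a simple fixed-point loop: efficiency falls with temperature, and temperature depends on the flux not converted to electrical power. Every coefficient below (reference efficiency, temperature coefficient, thermal constant) is an assumed placeholder, not a value from the paper:

```python
def cell_operating_point(intensity, t_ref=28.0, eta_ref=0.14,
                         temp_coeff=-0.0005, k_thermal=0.03, tol=1e-6):
    """Fixed-point iteration between cell temperature (°C) and efficiency
    for a given concentrated intensity (W/m^2). All constants illustrative."""
    t = t_ref
    for _ in range(200):
        eta = eta_ref + temp_coeff * (t - t_ref)         # efficiency at t
        p_out = eta * intensity                           # electrical output
        t_new = t_ref + k_thermal * (intensity - p_out)   # waste heat warms cell
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t, eta, p_out

# Operating point under a concentrator delivering ~1 sun equivalent flux.
t_op, eta_op, p_op = cell_operating_point(1000.0)
```

The same loop, run per cell with the ray-traced local intensity, gives the per-cell power outputs that are then combined via the single-diode circuit model.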
Gohean, Jeffrey R; George, Mitchell J; Pate, Thomas D; Kurusz, Mark; Longoria, Raul G; Smalling, Richard W
2013-01-01
The purpose of this investigation is to use a computational model to compare a synchronized valveless pulsatile left ventricular assist device with continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate the support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous pulsatile valveless dual-piston positive displacement pump. These results were compared with measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared with the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device.
Gohean, Jeffrey R.; George, Mitchell J.; Pate, Thomas D.; Kurusz, Mark; Longoria, Raul G.; Smalling, Richard W.
2012-01-01
The purpose of this investigation is to utilize a computational model to compare a synchronized valveless pulsatile left ventricular assist device to continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous, pulsatile, valveless, dual piston positive displacement pump. These results were compared to measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared to the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device. PMID:23438771
Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles
Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.
2009-01-01
The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382
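The ROC-based threshold selection can be sketched as follows; the continuous scores (standing in for a MISO module's pre-threshold output) and the binary spike labels are fabricated toy values, not recorded data:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """TPR/FPR obtained by thresholding a continuous model output
    against recorded binary spike labels."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    fpr, tpr = [], []
    for th in thresholds:
        pred = scores >= th
        tpr.append(np.sum(pred & (labels == 1)) / np.sum(labels == 1))
        fpr.append(np.sum(pred & (labels == 0)) / np.sum(labels == 0))
    return np.array(fpr), np.array(tpr)

# Made-up continuous module outputs and true spike labels.
scores = np.array([0.05, 0.10, 0.20, 0.40, 0.65, 0.70, 0.80, 0.90])
labels = np.array([0,    0,    0,    0,    1,    1,    1,    1])
ths = np.linspace(0.0, 1.0, 101)
fpr, tpr = roc_points(scores, labels, ths)
# One common optimum: the ROC point closest to the ideal corner (FPR=0, TPR=1).
best = float(ths[np.argmin(fpr ** 2 + (1.0 - tpr) ** 2)])
```

The closest-to-corner rule is only one possible optimality criterion; the paper does not specify which point on the ROC curve it selects.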
NASA Astrophysics Data System (ADS)
Shin, Henry; Suresh, Nina L.; Zev Rymer, William; Hu, Xiaogang
2018-02-01
Objective. Chronic muscle weakness impacts the majority of individuals after a stroke. The origins of this hemiparesis are multifaceted, and an altered spinal control of the motor unit (MU) pool can lead to muscle weakness. However, the relative contributions of different forms of MU recruitment and discharge organization are not well understood. In this study, we sought to examine these different effects by utilizing a MU simulation with variations set to mimic the changes of MU control in stroke. Approach. Using a well-established model of the MU pool, this study quantified the changes in force output caused by changes in MU recruitment range and recruitment order, as well as MU firing rate organization at the population level. We additionally expanded the original model to include a fatigue component, which variably decreased the output force with increasing length of contraction. Differences in the force output at both the peak and fatigued time points across different excitation levels were quantified and compared across different sets of MU parameters. Main results. Across the different simulation parameters, we found that the main driver of the reduced force output was the compressed range of MU recruitment. Recruitment compression caused a decrease in total force across all excitation levels. Additionally, a compression of the range of MU firing rates also produced a decrease in the force output, mainly at the higher excitation levels. Lastly, changes to the recruitment order of MUs appeared to minimally impact the force output. Significance. We found that altered control of MUs alone, as simulated in this study, can lead to a substantial reduction in muscle force generation in stroke survivors. These findings may provide valuable insight for both clinicians and researchers in prescribing and developing different types of therapies for the rehabilitation and restoration of lost strength after stroke.
NASA Astrophysics Data System (ADS)
Stunder, B.
2009-12-01
Because airborne volcanic ash is hazardous, atmospheric transport and dispersion (ATD) models are used in real time at Volcanic Ash Advisory Centers to predict its location at a future time. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters such as eruption duration and ash column depth had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations such as using only large particles had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) meteorological analyses. In addition, information on aggregation or other ash particle transformation processes would be useful.
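The probabilistic ensemble output mentioned above is commonly summarized as an exceedance probability: the fraction of ensemble members predicting ash concentration above a threshold at each grid cell. A minimal sketch (array shape and threshold are illustrative assumptions):

```python
import numpy as np

def exceedance_probability(ensemble, threshold):
    """Fraction of ensemble members whose ash concentration exceeds a
    threshold at each grid cell. ensemble: (n_members, ny, nx) array of
    concentrations; returns an (ny, nx) probability field in [0, 1]."""
    return (ensemble > threshold).mean(axis=0)
```

A field like this turns a set of deterministic runs with varied ESP inputs into a single map expressing input uncertainty.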
Fernandez, Fernando R.; Malerba, Paola; White, John A.
2015-01-01
The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971
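The exponential leaky integrate-and-fire model referred to above can be sketched in a few lines; a large (shallow) spike-initiation slope factor delta_T gives the gradual voltage dependence the authors use to match stellate cell membrane properties. All parameter values here are illustrative, not those of the study.

```python
import numpy as np

def simulate_eif(I, dt=0.1, tau=10.0, EL=-70.0, VT=-50.0, delta_T=5.0,
                 Vpeak=-30.0, Vreset=-65.0):
    """Exponential leaky integrate-and-fire neuron (voltages in mV, time in ms):
    tau dV/dt = -(V - EL) + delta_T * exp((V - VT)/delta_T) + I(t).
    Returns the voltage trace and spike-time indices."""
    V = EL
    trace, spikes = [], []
    for i, Ii in enumerate(I):
        dV = (-(V - EL) + delta_T * np.exp((V - VT) / delta_T) + Ii) / tau
        V += dt * dV
        if V >= Vpeak:          # spike detected: record and reset
            spikes.append(i)
            V = Vreset
        trace.append(V)
    return np.array(trace), spikes
```

Driving `I` with filtered noise of varying variance would be one way to probe how fluctuations modulate the input-output curve of such a model.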
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
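The two families of tests named above (temporal sequencing via correlation, distributional properties via equality-of-distribution tests) might be implemented as follows with scipy; the specific statistics, pass/fail conventions, and significance level are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np
from scipy import stats

def validate_index(downscaled, observed, alpha=0.05):
    """Two complementary split-sample checks on an extremes index:
    Spearman rank correlation (temporal sequencing) and the two-sample
    Kolmogorov-Smirnov test (equality of probability distributions)."""
    rho, p_rho = stats.spearmanr(downscaled, observed)
    ks_stat, p_ks = stats.ks_2samp(downscaled, observed)
    return {
        "sequencing_pass": bool(p_rho < alpha and rho > 0),  # significant positive correlation
        "distribution_pass": bool(p_ks >= alpha),            # fail to reject equal distributions
        "rho": rho,
        "ks": ks_stat,
    }
```

Note the asymmetry: the sequencing test passes by *rejecting* a null (no correlation), while the distribution test passes by *failing to reject* one (equal distributions).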
Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2017-04-01
Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models mostly depends on their input rainfall and parameter values. Both the model parameters and the input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) allows one to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, due to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
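The grouped-factor property exploited above can be illustrated with a pick-freeze Monte Carlo estimator of first-order Sobol' indices, in which all columns belonging to a group are frozen together so the group behaves as a single factor. This sketch assumes independent uniform inputs and a vectorized model `f`; it is a generic estimator, not the study's implementation.

```python
import numpy as np

def grouped_sobol_first_order(f, groups, n=100000, seed=0):
    """Pick-freeze estimate of first-order Sobol' indices for groups of
    independent U(0,1) inputs. `groups` maps a group name to the list of
    input columns it contains. S_G = Cov(f(A), f(C_G)) / Var(f(A)),
    where C_G shares only the group's columns with A."""
    rng = np.random.default_rng(seed)
    d = max(c for cols in groups.values() for c in cols) + 1
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    var = yA.var()
    indices = {}
    for name, cols in groups.items():
        C = B.copy()
        C[:, cols] = A[:, cols]        # freeze the whole group's columns from A
        yC = f(C)
        indices[name] = (np.mean(yA * yC) - yA.mean() * yC.mean()) / var
    return indices
```

For a purely additive model, the grouped indices sum to one, so each index reads directly as the share of output variance attributable to that uncertainty type.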
Safaei, Soroush; Blanco, Pablo J; Müller, Lucas O; Hellevik, Leif R; Hunter, Peter J
2018-01-01
We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data.
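A lumped-parameter peripheral compartment of the kind described above can be sketched as a two-element Windkessel, C dP/dt = Q(t) - P/R, integrated here with forward Euler. Parameter values and the function name are illustrative; the paper's bond-graph formulation is considerably richer.

```python
import numpy as np

def windkessel_pressure(q, dt, R=1.0, C=1.5, p0=80.0):
    """Two-element Windkessel compartment: C dP/dt = Q(t) - P/R.
    Integrates compartment pressure (mmHg) for an inflow waveform Q (mL/s);
    R in mmHg*s/mL, C in mL/mmHg, dt in s."""
    p = np.empty(len(q))
    P = p0
    for i, Q in enumerate(q):
        P += dt * (Q - P / R) / C   # forward Euler step
        p[i] = P
    return p
```

Under constant inflow the pressure relaxes toward Q*R with time constant R*C, the basic behavior a lumped peripheral bed contributes to a 1-D arterial network.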
NASA Astrophysics Data System (ADS)
Jamali, M. S.; Ismail, K. A.; Taha, Z.; Aiman, M. F.
2017-10-01
In designing suitable isolators to reduce unwanted vibration in vehicles, the response from a mathematical model which characterizes the transmissibility ratio between the input and output of the vehicle is required. In this study, a Matlab Simulink model is developed to study the dynamic behaviour of a passive suspension system for a lightweight electric vehicle. The Simulink model is based on a two-degree-of-freedom quarter-car model. The model is compared to the theoretical plots of the transmissibility ratios between the amplitudes of the displacements and accelerations of the sprung and unsprung masses and the amplitude of the ground, against the frequencies at different damping values. It was found that the frequency responses obtained from the theoretical calculations and from the Simulink simulation are comparable to each other. Hence, the model may be extended to a full vehicle model.
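The theoretical transmissibility curves mentioned above can be computed directly from the frequency response of the 2-DOF quarter-car equations by solving the complex impedance system at each frequency for a harmonic road input. The masses, stiffnesses, and damping value below are illustrative, not those of the vehicle in the study.

```python
import numpy as np

def transmissibility(freqs_hz, ms=250.0, mu=40.0, ks=16000.0,
                     kt=160000.0, c=1000.0):
    """Sprung-mass displacement transmissibility |Xs/Y| of a 2-DOF
    quarter-car model (sprung mass ms, unsprung mass mu, suspension
    stiffness ks and damping c, tire stiffness kt) for a harmonic
    road input y = Y exp(jwt)."""
    out = []
    for f in freqs_hz:
        s = 1j * 2 * np.pi * f
        # Impedance matrix of the coupled sprung/unsprung equations of motion
        A = np.array([[ms * s**2 + c * s + ks, -(c * s + ks)],
                      [-(c * s + ks), mu * s**2 + c * s + ks + kt]])
        b = np.array([0.0, kt])      # road input enters through the tire
        Xs, Xu = np.linalg.solve(A, b)
        out.append(abs(Xs))
    return np.array(out)
```

Sweeping `freqs_hz` and repeating for several damping values reproduces the family of transmissibility plots against which the Simulink model is checked.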
NASA Astrophysics Data System (ADS)
Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.
1998-03-01
Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, A.D.
1979-05-01
The report provides an overview of the MULTIREGION model and its use to determine the regional economic implications of three energy and economic projections developed for use in the EIA's 1977 Annual Report to Congress. The MULTIREGION projections are compared with similar projections undertaken using the Regional Earnings Impact System (REIS), developed and maintained by EIA. The strengths and weaknesses of the two modeling systems are reviewed. Examples of the MULTIREGION projection output are presented in an appendix. (MCW)
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
Because the output power of a photovoltaic system is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
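The decompose-predict-reconstruct structure of this pipeline can be sketched with scikit-learn's SVR. Here the components are assumed to be given already (in the paper they are the IMFs and residual trend from an EMD of the power series, and the SVM hyperparameters are tuned by the ABC algorithm rather than fixed as below); the lag count and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def predict_by_components(components, lags=4):
    """Fit one SVR per component on lagged values, forecast each component
    one step ahead, and reconstruct the total forecast as the sum of the
    component forecasts (the recombination step of an EMD-SVM pipeline)."""
    total = 0.0
    for comp in components:
        # Build a lagged design matrix: predict comp[t] from the previous `lags` values
        X = np.array([comp[i:i + lags] for i in range(len(comp) - lags)])
        y = comp[lags:]
        model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
        total += model.predict(comp[-lags:].reshape(1, -1))[0]
    return total
```

The rationale is that each component is smoother and more regular than the raw series, so per-component regressors are easier to fit than one model on the composite signal.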
NASA Astrophysics Data System (ADS)
Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang
2014-04-01
This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.
Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei
2014-07-20
We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are a bi-plane computation domain with related areas of numerical-grid (NG) cells in the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by an acceptance circle, and adjustment of the NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodic or symmetrical electrode configurations.
NASA Astrophysics Data System (ADS)
Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.
2017-12-01
The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and by near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations to the fine scales at which cold-regions hydrological processes operate, to assess their spatial variability in complex terrain and to quantify uncertainties by comparison to field observations. In this research, three high-resolution numerical weather prediction models, namely the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models, are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering the high mountains and foothills of the Canadian Rockies was selected to assess and compare high-resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs against station-based meteorological measurements. ICAR, with its very low computational cost, was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high-resolution cold-regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.
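Comparing gridded model output against station measurements, as done above, usually starts from a few matched-sample statistics; a minimal sketch (the statistics chosen are generic, not the study's exact verification suite):

```python
import numpy as np

def verification_stats(model, obs):
    """Basic point verification of model output against station data:
    mean bias, root-mean-square error, and Pearson correlation of
    time-matched series."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    err = model - obs
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(model, obs)[0, 1]
    return bias, rmse, r
```

Applied per station and per variable (precipitation, temperature, wind, humidity), these numbers support the kind of intercomparison reported for ICAR, WRF, and GEM.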
Generalized Polynomial Chaos Based Uncertainty Quantification for Planning MRgLITT Procedures
Fahrenholtz, S.; Stafford, R. J.; Maier, F.; Hazle, J. D.; Fuentes, D.
2014-01-01
Purpose A generalized polynomial chaos (gPC) method is used to incorporate constitutive parameter uncertainties within the Pennes representation of bioheat transfer phenomena. The stochastic temperature predictions of the mathematical model are critically evaluated against MR thermometry data for planning MR-guided Laser Induced Thermal Therapies (MRgLITT). Methods Pennes bioheat transfer model coupled with a diffusion theory approximation of laser tissue interaction was implemented as the underlying deterministic kernel. A probabilistic sensitivity study was used to identify parameters that provide the most variance in temperature output. Confidence intervals of the temperature predictions are compared to MR temperature imaging (MRTI) obtained during phantom and in vivo canine (n=4) MRgLITT experiments. The gPC predictions were quantitatively compared to MRTI data using probabilistic linear and temporal profiles as well as 2-D 60 °C isotherms. Results Within the range of physically meaningful constitutive values relevant to the ablative temperature regime of MRgLITT, the sensitivity study indicated that the optical parameters, particularly the anisotropy factor, created the most variance in the stochastic model's output temperature prediction. Further, within the statistical sense considered, a nonlinear model of the temperature and damage dependent perfusion, absorption, and scattering is captured within the confidence intervals of the linear gPC method. Multivariate stochastic model predictions using parameters with the dominant sensitivities show good agreement with experimental MRTI data. Conclusions Given parameter uncertainties and mathematical modeling approximations of the Pennes bioheat model, the statistical framework demonstrates conservative estimates of the therapeutic heating and has potential for use as a computational prediction tool for thermal therapy planning. PMID:23692295
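The projection step underlying a one-dimensional Hermite-basis gPC expansion can be illustrated with Gauss-Hermite quadrature: output moments of a model with one Gaussian-distributed parameter are obtained from a handful of deterministic model runs. This is a generic 1-D sketch, not the paper's multivariate implementation over the Pennes bioheat model.

```python
import numpy as np

def gh_moments(model, mu, sigma, order=8):
    """Mean and variance of model(theta) for theta ~ N(mu, sigma^2),
    computed by probabilists' Gauss-Hermite quadrature -- the spectral
    projection used in a 1-D Hermite (gPC) expansion."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    theta = mu + sigma * nodes                 # quadrature nodes in parameter space
    w = weights / np.sqrt(2 * np.pi)           # normalize to the standard normal density
    y = np.array([model(t) for t in theta])    # deterministic model runs at the nodes
    mean = np.sum(w * y)
    var = np.sum(w * (y - mean) ** 2)
    return mean, var
```

With an `order`-point rule these moments are exact for polynomial models up to degree 2*order - 1, which is why a few deterministic runs suffice for smooth responses.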
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures by fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Shabani, Farzin; Kumar, Lalit
2013-01-01
Global climate model outputs involve uncertainties in prediction, which could be reduced by identifying agreements between the output results of different models, covering all assumptions included in each. Fusarium oxysporum f.sp. is an invasive pathogen that poses a risk to date palm cultivation, among other crops. Therefore, in this study, the future distribution of invasive Fusarium oxysporum f.sp., confirmed by the CSIRO-Mk3.0 (CS) and MIROC-H (MR) GCMs, was modeled and combined with the future distribution of date palm predicted by the same GCMs, to identify areas suitable for date palm cultivation with different risk levels of invasive Fusarium oxysporum f.sp. for 2030, 2050, 2070 and 2100. Results showed that 40%, 37%, 33% and 28% of the areas projected to become highly conducive to date palm are under high risk from its lethal fungus, compared with 37%, 39%, 43% and 42% under low risk, for the chosen years respectively. Our study also indicates that areas with marginal risk will be limited to 231, 212, 186 and 172 million hectares by 2030, 2050, 2070 and 2100. The study further demonstrates that CLIMEX outputs refined by combining the results of different GCMs for different species that have a symbiotic or parasitic relationship ensure that the predictions become robust, rather than producing hypothetical findings limited purely to publication.
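Combining host and pathogen suitability projections into risk classes, as done above, amounts to a raster overlay. The thresholds, class codes, and function name here are illustrative assumptions, not the CLIMEX classification used in the study.

```python
import numpy as np

def risk_overlay(host_suitability, pathogen_suitability,
                 host_thr=0.5, path_hi=0.6, path_lo=0.3):
    """Classify cells that are suitable for the host crop by pathogen risk:
    0 = unsuitable for host, 1 = low risk, 2 = marginal risk, 3 = high risk.
    Inputs are suitability arrays scaled to [0, 1] on the same grid."""
    out = np.zeros(np.shape(host_suitability), dtype=int)
    host_ok = np.asarray(host_suitability) >= host_thr
    path = np.asarray(pathogen_suitability)
    out[host_ok & (path < path_lo)] = 1
    out[host_ok & (path >= path_lo) & (path < path_hi)] = 2
    out[host_ok & (path >= path_hi)] = 3
    return out
```

Running the overlay once per GCM and keeping only cells where the classifications agree is one way to realize the model-agreement refinement the abstract describes.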
Shin, Dong Ah; Park, Jiheum; Lee, Jung Chan; Shin, Sang Do; Kim, Hee Chan
2017-03-01
The passive leg-raising (PLR) maneuver has been used for patients with circulatory failure to improve hemodynamic responsiveness by increasing cardiac output, which should also be beneficial and may exert synergistic effects during cardiopulmonary resuscitation (CPR). However, the impact of the PLR maneuver on CPR remains unclear due to difficulties in monitoring cardiac output in real time during CPR and a lack of clinical evidence. We developed a computational model that couples hemodynamic behavior during standard CPR and the PLR maneuver, and simulated the model by applying different angles of leg raising from 0° to 90° and compression rates from 80/min to 160/min. The simulation results showed that the PLR maneuver during CPR significantly improves cardiac output (CO), systemic perfusion pressure (SPP) and coronary perfusion pressure (CPP) by ∼40-65%, particularly under the recommended range of compression rates between 100/min and 120/min with 45° of leg raise, compared to standard CPR. However, such effects start to wane with further leg lifts, indicating the existence of an optimal angle of leg raise for each person to achieve the best hemodynamic responses. We developed a CPR-PLR model and demonstrated the effects of PLR on hemodynamics by investigating changes in CO, SPP, and CPP under different compression rates and angles of leg raising. Our computational model will facilitate study of PLR effects during CPR and the development of an advanced model combined with circulatory disorders, which will be a valuable asset for further studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guiot, J.
2017-12-01
In recent decades, climate reconstruction has evolved considerably. An important step was reached with the inverse modelling approach proposed by Guiot et al. (2000). It is based on appropriate algorithms, within the framework of Bayesian statistical theory, to estimate the inputs of a vegetation model when the outputs are known. The inputs are the climate variables that we want to reconstruct, and the outputs are vegetation characteristics that can be compared to pollen data. The Bayesian framework consists in defining a prior distribution of the desired climate variables and in using data and a model to estimate the posterior probability distribution. The main interest of the method is the possibility of setting different values of exogenous variables such as the atmospheric CO2 concentration. Because the CO2 concentration influences photosynthesis and its level differs between the calibration period (the 20th century) and the past, there is a substantial risk of bias in the reconstructions. Since that initial paper, numerous papers have been published demonstrating the interest of the method. In that approach, the prior distribution is fixed by educated guess or by using complementary information on the expected climate (other proxies or other records). In the data assimilation approach, the prior distribution is provided by a climate model. The use of a vegetation model together with proxy data enables the calculation of posterior distributions. Data assimilation consists in constraining a climate model to reproduce estimates relatively close to the data, taking into account the respective errors of the data and of the climate model (Dubinkina et al, 2011). We compare both approaches using pollen data for the Holocene from the Mediterranean. Pollen data have been extracted from the European Pollen Database. The earth system model, LOVECLIM, is run to simulate Holocene climate with appropriate boundary conditions and realistic forcing.
Simulated climate variables (temperature, precipitation and sunshine) are used as the forcing parameters of a vegetation model, BIOME4, which calculates the equilibrium distribution of vegetation types and associated phenological, hydrological and biogeochemical properties. BIOME4 outputs, constrained with the pollen observations, are off-line coupled using a particle filter technique.
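As a toy illustration of the inverse-modelling idea (not the actual BIOME4/LOVECLIM setup), a climate input can be inferred from a vegetation output by weighting prior samples with a likelihood; the forward model and all numbers below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def vegetation_model(temp_c):
    """Toy forward model: a vegetation index that increases with temperature."""
    return 0.1 * temp_c + 0.5

# Prior: particles drawn from a broad prior on the climate input (deg C)
particles = rng.normal(loc=10.0, scale=5.0, size=10_000)

# "Observed" vegetation index (e.g. pollen-derived) and its assumed error
obs, sigma = 1.7, 0.05

# Likelihood weights: Gaussian misfit between model outputs and the observation
weights = np.exp(-0.5 * ((vegetation_model(particles) - obs) / sigma) ** 2)
weights /= weights.sum()

# Posterior estimate of the climate input given the vegetation output;
# the toy model inverts analytically to (1.7 - 0.5) / 0.1 = 12 deg C
posterior_mean = float(np.sum(weights * particles))
```

The same weighting step is the core of a particle filter: the prior samples play the role of model-generated particles, and the pollen-derived observation reweights them.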
NASA Astrophysics Data System (ADS)
Bossuyt, Juliaan; Howland, Michael; Meneveau, Charles; Meyers, Johan
2015-11-01
To optimize wind farm layouts for maximum power output and wind turbine lifetime, mean power output measurements in wind tunnel studies are not sufficient. Instead, detailed temporal information about the power output and unsteady loading from every single wind turbine in the wind farm is needed. A very small porous disc model with a realistic thrust coefficient of 0.75-0.85 was designed. The model is instrumented with a strain gage, allowing measurements of the thrust force, incoming velocity and power output with a frequency response up to the natural frequency of the model. This is shown by reproducing the -5/3 spectrum from the incoming flow. Thanks to its small size and compact instrumentation, the model allows wind tunnel studies of large wind turbine arrays with detailed temporal information from every wind turbine. Translating to field conditions with a length-scale ratio of 1:3,000, the frequencies studied from the data range from 10^-4 Hz up to about 6x10^-2 Hz. The model's capabilities are demonstrated with a large wind farm measurement consisting of close to 100 instrumented models. A high correlation is found between the power outputs of streamwise-aligned wind turbines, which is in good agreement with results from prior LES studies. Work supported by ERC (ActiveWindFarms, grant no. 306471) and by NSF (grants CBET-113380 and IIA-1243482, the WINDINSPIRE project).
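The lab-to-field frequency translation follows from Strouhal-number matching, f·L/U = const. Assuming, for this sketch only, equal velocity scales in tunnel and field, the conversion reduces to a length-scale ratio; the lab frequencies below are invented values chosen to land on the quoted field range:

```python
# Strouhal matching f*L/U = const with equal velocity scales (an assumption
# made here for simplicity) gives f_field = f_lab * (L_lab / L_field).
length_ratio = 1.0 / 3000.0  # L_lab / L_field for the 1:3,000 scale model

def to_field_frequency(f_lab_hz):
    """Convert a lab-measured frequency to the equivalent field frequency."""
    return f_lab_hz * length_ratio

# Invented lab frequencies chosen to land on the quoted field range:
f_low = to_field_frequency(0.3)     # about 1e-4 Hz at field scale
f_high = to_field_frequency(180.0)  # about 6e-2 Hz at field scale
```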
Energetics of glucose metabolism: a phenomenological approach to metabolic network modeling.
Diederichs, Frank
2010-08-12
A new formalism to describe metabolic fluxes as well as membrane transport processes was developed. The new flux equations are comparable to other phenomenological laws. Michaelis-Menten like expressions, as well as flux equations of nonequilibrium thermodynamics, can be regarded as special cases of these new equations. For metabolic network modeling, variable conductances and driving forces are required to enable pathway control and to allow a rapid response to perturbations. When applied to oxidative phosphorylation, results of simulations show that oxidative phosphorylation as a whole cannot be described as a two-flux-system according to nonequilibrium thermodynamics, although all coupled reactions per se fulfill the equations of this theory. Simulations show that activation of ATP-coupled load reactions plus glucose oxidation is brought about by an increase of only two different conductances: a [Ca2+] dependent increase of cytosolic load conductances, and an increase of phosphofructokinase conductance by [AMP], which in turn becomes increased through [ADP] generation by those load reactions. In ventricular myocytes, this feedback mechanism is sufficient to increase cellular power output and O2 consumption several fold, without any appreciable impairment of energetic parameters. Glucose oxidation proceeds near maximal power output, since transformed input and output conductances are nearly equal, yielding an efficiency of about 0.5. This conductance matching is fulfilled also by glucose oxidation of β-cells. But, as a price for the metabolic mechanism of glucose recognition, β-cells have only a limited capability to increase their power output.
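The conductance-matching statement parallels the classical maximum-power-transfer result: for a source and load conductance in series, delivered power peaks and efficiency equals 0.5 when the two are equal. A minimal sketch of that generic circuit analogy (not the paper's flux equations; all values invented):

```python
# Series two-conductance analogy: a driving force F across a source
# conductance g_in and a load conductance g_out gives a flux
# J = F / (1/g_in + 1/g_out); efficiency works out to g_in / (g_in + g_out).
def load_power_and_efficiency(g_in, g_out, force=1.0):
    flux = force / (1.0 / g_in + 1.0 / g_out)
    p_load = flux ** 2 / g_out     # power dissipated in the load
    p_total = force * flux         # total power delivered by the source
    return p_load, p_load / p_total

# Matched conductances: maximal load power at efficiency 0.5
p_matched, eff_matched = load_power_and_efficiency(1.0, 1.0)

# Mismatched loads deliver less power either way
p_hi, _ = load_power_and_efficiency(1.0, 2.0)
p_lo, _ = load_power_and_efficiency(1.0, 0.5)
```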
A mission-based productivity compensation model for an academic anesthesiology department.
Reich, David L; Galati, Maria; Krol, Marina; Bodian, Carol A; Kahn, Ronald A
2008-12-01
We replaced a nearly fixed-salary academic physician compensation model with a mission-based productivity model with the goal of improving attending anesthesiologist productivity. The base salary system was stratified according to rank and clinical experience. The supplemental pay structure was linked to electronic patient records and a scheduling database to award points for clinical activity; educational, research, and administrative points systems were constructed in parallel. We analyzed monthly American Society of Anesthesiologist (ASA) unit data for operating room activity and physician compensation from 2000 through mid-2007, excluding the 1-yr implementation period (July 2004-June 2005) for the new model. Comparing 2005-2006 with 2000-2004, quarterly ASA units increased by 14% (P = 0.0001) and quarterly ASA units per full-time equivalent increased by 31% (P < 0.0001), while quarterly ASA units per anesthetizing location decreased by 10% (P = 0.046). Compared with a baseline year (2001), Instructor and Assistant Professor faculty compensation increased more than Associate Professor and Professor faculty (P < 0.001) in both pre- and postimplementation periods. There were larger compensation increases for the postimplementation period compared with preimplementation across faculty rank groupings (P < 0.0001). Academic and educational output was stable. Implementing a productivity-based faculty compensation model in an academic department was associated with increased mean supplemental pay with relatively fewer faculty. ASA units per month and ASA units per operating room full-time equivalent increased, and these metrics are the most likely drivers of the increased compensation. This occurred despite a slight decrease in clinical productivity as measured by ASA units per anesthetizing location. Academic and educational output was stable.
Less water: How will agriculture in Southern Mountain states adapt?
NASA Astrophysics Data System (ADS)
Frisvold, George B.; Konyar, Kazim
2012-05-01
This study examined how agriculture in six southwestern states might adapt to large reductions in water supplies, using the U.S. Agricultural Resource Model (USARM), a multiregion, multicommodity agricultural sector model. In the simulation, irrigation water supplies were reduced 25% in five Southern Mountain (SM) states and by 5% in California. USARM results were compared to those from a "rationing" model, which assumes no input substitution or changes in water use intensity, relying on land fallowing as the only means of adapting to water scarcity. The rationing model also ignores changes in output prices. Results quantify the importance of economic adjustment mechanisms and changes in output prices. Under the rationing model, SM irrigators lose $65 million in net income. Compared to this price-exogenous, "land-fallowing only" response, allowing irrigators to change cropping patterns, practice deficit irrigation, and adjust use of other inputs reduced irrigator costs of water shortages to $22 million. Allowing irrigators to pass on price increases to purchasers reduced income losses further, to $15 million. Higher crop prices from reduced production imposed direct losses of $130 million on first purchasers of crops, which include livestock and dairy producers, and cotton gins. SM agriculture, as a whole, was resilient to the water supply shock, with production of high-value specialty crops along the Lower Colorado River little affected. Particular crops were vulnerable, however. Cotton production and net returns fell substantially, while reductions in water devoted to alfalfa accounted for 57% of regional water reduction.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for different sources of error, such as those in the inputs and in the structure of the model.
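A minimal sketch of the first-order autoregressive (AR(1)) output-error description mentioned above, with invented coefficient and noise values, showing how it yields prediction intervals whose empirical coverage can be checked against the nominal 95%:

```python
import numpy as np

rng = np.random.default_rng(1)

phi, sigma = 0.8, 0.3   # assumed AR(1) coefficient and innovation std dev
n = 500
model_output = 2.0 + np.sin(np.linspace(0.0, 10.0, n))  # stand-in hydrograph

# Correlated output errors: e_t = phi * e_{t-1} + w_t, w_t ~ N(0, sigma^2);
# this captures the "downstream" persistence of input/structural inaccuracies
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = phi * errors[t - 1] + rng.normal(0.0, sigma)
observed = model_output + errors  # synthetic "observations"

# Stationary standard deviation of an AR(1) process
sd_stat = sigma / np.sqrt(1.0 - phi ** 2)

# 95% prediction interval around the deterministic model output
lower = model_output - 1.96 * sd_stat
upper = model_output + 1.96 * sd_stat

# Reliability: fraction of observations inside the nominal 95% interval
coverage = float(np.mean((observed >= lower) & (observed <= upper)))
```

Reliability (coverage close to nominal) and precision (narrow intervals) are exactly the two criteria the abstract uses to judge the error description.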
Approximate Optimal Control as a Model for Motor Learning
ERIC Educational Resources Information Center
Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.
2005-01-01
Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…
Establishment and analysis of High-Resolution Assimilation Dataset of water-energy cycle over China
NASA Astrophysics Data System (ADS)
Wen, Xiaohang; Liao, Xiaohan; Dong, Wenjie; Yuan, Wenping
2015-04-01
For better prediction and understanding of the water-energy exchange process and land-atmosphere interaction, in-situ meteorological observations acquired from the China Meteorological Administration (CMA) were assimilated into the Weather Research and Forecasting (WRF) model over China. Monthly Green Vegetation Coverage (GVF) data, calculated from the Normalized Difference Vegetation Index (NDVI) of the Earth Observing System Moderate-Resolution Imaging Spectroradiometer (EOS-MODIS), and Digital Elevation Model (DEM) data from the Shuttle Radar Topography Mission (SRTM) were also integrated into the WRF model. From these runs, the High-Resolution Assimilation Dataset of the water-energy cycle over China (HRADC) was produced. The dataset includes, at 25 km horizontal resolution and 3-hour intervals, near-surface meteorological data such as air temperature, humidity, ground temperature, and pressure at 19 levels, soil temperature and soil moisture at 4 levels, green vegetation coverage, latent heat flux, sensible heat flux, and ground heat flux. In this study, we 1) briefly introduce the cycling 3D-Var assimilation method; and 2) compare meteorological elements such as 2 m temperature, precipitation and ground temperature generated by the HRADC with gridded observation data from the CMA and with Global Land Data Assimilation System (GLDAS) output data from the National Aeronautics and Space Administration (NASA). The 2 m temperature results were improved relative to the control simulation and effectively reproduced the observed patterns, and the simulated ground temperature, 0-10 cm soil temperature and specific humidity were much closer to the GLDAS outputs. Root mean square errors were lower in the assimilation run than in the control run, and the assimilated ground temperature, 0-10 cm soil temperature, radiation and surface fluxes agreed well with the GLDAS outputs over China.
The HRADC could be used in further research on long-period climatic effects and the characteristics of the water-energy cycle over China.
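The verification step described above (assimilation run versus control run, each scored against observations) reduces to a root-mean-square error comparison; a minimal sketch with invented temperature values:

```python
import numpy as np

# Invented example values (deg C); not HRADC or GLDAS data
obs     = np.array([21.0, 22.5, 20.1, 19.8])  # gridded "observations"
control = np.array([23.0, 24.0, 18.0, 21.5])  # control run (no assimilation)
assim   = np.array([21.5, 22.0, 19.5, 20.2])  # assimilation run

def rmse(pred, truth):
    """Root-mean-square error between a model run and observations."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

rmse_control = rmse(control, obs)
rmse_assim = rmse(assim, obs)   # smaller: assimilation improves the fit
```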
Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping
2011-06-01
It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients could be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of the USA. Meteorological parameters from North American Regional Reanalysis (NARR) model outputs were evaluated with comparisons to observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results indicated that the net flux from NARR can be further improved with the additional information from real-time observations.
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models of physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
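The core step, replacing a fixed parameter with a probability distribution and propagating it through the model, can be sketched with a toy level model and Monte Carlo sampling (the model form and all numbers are invented for illustration, not the shock-noise model):

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_level_db(strength):
    """Toy noise model: output level (dB) from an uncertain source strength."""
    return 10.0 * np.log10(strength)

# Deterministic prediction with the nominal (fixed) parameter value
nominal = spectral_level_db(100.0)

# Nondeterministic prediction: parameter drawn from an assumed distribution
samples = rng.normal(loc=100.0, scale=10.0, size=50_000)
levels = spectral_level_db(samples)

mean_level = float(np.mean(levels))   # close to, but not equal to, nominal
sd_level = float(np.std(levels))      # the induced output uncertainty
```

The spread `sd_level` is the output uncertainty the abstract refers to; a global sensitivity analysis would repeat this with several uncertain parameters and apportion the output variance among them.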
System identification of an unmanned quadcopter system using MRAN neural
NASA Astrophysics Data System (ADS)
Pairan, M. F.; Shamsudin, S. S.
2017-12-01
This project presents a performance analysis of the radial basis function (RBF) neural network trained with the Minimal Resource Allocating Network (MRAN) algorithm for real-time identification of a quadcopter. MRAN's performance is compared with that of an RBF network trained with the Constant Trace algorithm on 2500 sampled input-output data pairs. MRAN uses a hidden-neuron adding and pruning strategy to obtain an optimal RBF structure, increase prediction accuracy and reduce training time. The results indicate that the MRAN algorithm produces faster training and more accurate prediction compared with the standard RBF network. The model proposed in this paper is capable of identifying and modelling a nonlinear representation of the quadcopter flight dynamics.
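A much-simplified sketch of the growing-RBF idea behind MRAN (the thresholds, width, and update rule here are assumptions, not the paper's implementation): a hidden unit is added only when a new sample is both poorly predicted and far from every existing centre.

```python
import numpy as np

class GrowingRBF:
    """Toy growing radial-basis-function network (invented thresholds/width)."""

    def __init__(self, width=0.5, err_thresh=0.1, dist_thresh=0.3):
        self.centers, self.weights = [], []
        self.width = width
        self.err_thresh = err_thresh
        self.dist_thresh = dist_thresh

    def predict(self, x):
        if not self.centers:
            return 0.0
        c = np.array(self.centers)
        # Gaussian activations of all hidden units for input x
        phi = np.exp(-np.sum((c - x) ** 2, axis=1) / (2 * self.width ** 2))
        return float(np.array(self.weights) @ phi)

    def update(self, x, y):
        err = y - self.predict(x)
        dist = min((float(np.linalg.norm(c - x)) for c in self.centers),
                   default=np.inf)
        # MRAN-style novelty criteria: grow only if BOTH thresholds are exceeded
        if abs(err) > self.err_thresh and dist > self.dist_thresh:
            self.centers.append(np.array(x, dtype=float))
            self.weights.append(err)  # the new unit absorbs the residual

net = GrowingRBF()
for x, y in [([0.0, 0.0], 1.0), ([1.0, 1.0], -1.0), ([0.01, 0.0], 1.0)]:
    net.update(np.array(x), y)

n_units = len(net.centers)  # third sample is near the first centre: no growth
```

Full MRAN additionally prunes units whose contribution stays negligible and adapts existing parameters (e.g. with an extended Kalman filter) when no unit is added; both refinements are omitted here.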
Computational Investigation of Helical Traveling Wave Tube Transverse RF Field Forces
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A.
1998-01-01
In a previous study using a fully three-dimensional (3D) helical slow-wave circuit cold-test model, it was found, contrary to classical helical circuit analyses, that transverse RF electric fields have significant amplitudes compared with the longitudinal component. The RF fields obtained using this helical cold-test model have been scaled to correspond to those of an actual TWT. At the output of the tube, RF field forces reach 61%, 26% and 132% for the radial, azimuthal and longitudinal components, respectively, compared to radial space charge forces, indicating the importance of considering them in the design of electron beam focusing.
Using quantum theory to simplify input-output processes
NASA Astrophysics Data System (ADS)
Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile
2017-02-01
All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems: algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency, storing past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
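The Input-Output mechanics underlying such impact estimation is the Leontief inverse, which propagates a final-demand shock through inter-industry linkages; a two-sector sketch with invented coefficients (not REAcct data):

```python
import numpy as np

# Technical coefficients: A[i, j] = input from sector i per unit output of j
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Leontief inverse: total (direct + indirect) output per unit of final demand
L = np.linalg.inv(np.eye(2) - A)

# A disruption removing final demand of [10, 5] (e.g. an exclusion zone)
delta_demand = np.array([10.0, 5.0])
total_output_loss = L @ delta_demand   # losses ripple through supply chains
```

The total loss exceeds the direct demand shock because each sector's cutback reduces its purchases from the other, which is the indirect effect the Leontief inverse captures.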
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
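A minimal sketch of the overall strategy described above, dimension reduction of the output fields (here PCA via SVD) followed by per-component Gaussian process emulation, using an invented rank-2 "simulator" so that two components capture the fields exactly; nothing below is the paper's groundwater model:

```python
import numpy as np

def simulator(x):
    """Stand-in 'expensive' simulator: scalar input -> 100-point spatial field.
    Chosen rank-2 so two principal components capture it exactly."""
    grid = np.linspace(0.0, 1.0, 100)
    return x * np.exp(-grid) + (x ** 2) * grid

# Training design and output fields
X = np.linspace(0.5, 1.5, 20)
Y = np.array([simulator(x) for x in X])        # shape (20, 100)

# Output dimension reduction: PCA via SVD, keeping k components
Y_mean = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = 2
scores = (Y - Y_mean) @ Vt[:k].T               # (20, k) reduced outputs

def gp_predict(x_star, X_train, y_train, ell=0.2, noise=1e-6):
    """One-dimensional Gaussian process regression with an RBF kernel."""
    K = np.exp(-0.5 * (X_train[:, None] - X_train[None, :]) ** 2 / ell ** 2)
    k_star = np.exp(-0.5 * (x_star - X_train) ** 2 / ell ** 2)
    alpha = np.linalg.solve(K + noise * np.eye(len(X_train)), y_train)
    return float(k_star @ alpha)

def emulate(x_star):
    """Emulate each PCA score with a GP, then reconstruct the full field."""
    s = np.array([gp_predict(x_star, X, scores[:, j]) for j in range(k)])
    return Y_mean + s @ Vt[:k]

field = emulate(1.05)   # cheap surrogate prediction of the spatial field
rms_error = float(np.sqrt(np.mean((field - simulator(1.05)) ** 2)))
```

In the paper's setting both the input and output spaces are high-dimensional, so the input side is reduced as well; the sketch keeps a scalar input to stay short.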
Extended range radiation dose-rate monitor
Valentine, Kenneth H.
1988-01-01
An extended range dose-rate monitor is provided which utilizes the pulse pileup phenomenon that occurs in conventional counting systems to alter the dynamic response of the system to extend the dose-rate counting range. The current pulses from a solid-state detector generated by radiation events are amplified and shaped prior to applying the pulses to the input of a comparator. The comparator generates one logic pulse for each input pulse which exceeds the comparator reference threshold. These pulses are integrated and applied to a meter calibrated to indicate the measured dose-rate in response to the integrator output. A portion of the output signal from the integrator is fed back to vary the comparator reference threshold in proportion to the output count rate to extend the sensitive dynamic detection range by delaying the asymptotic approach of the integrator output toward full scale as measured by the meter.
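The feedback principle can be sketched numerically. With invented circuit values (not the patent's) and pulse amplitudes assumed exponentially distributed, feeding the integrator output back into the comparator threshold compresses the response at high rates and so delays full-scale saturation:

```python
import math

def meter_output(rate, v0=1.0, gain=0.5, mean_amp=2.0, iters=200):
    """Solve the feedback fixed point out = rate * exp(-(v0 + gain*out)/mean_amp)
    by damped iteration. With exponentially distributed pulse amplitudes, a
    threshold v_th passes a fraction exp(-v_th/mean_amp) of pulses."""
    out = 0.0
    for _ in range(iters):
        target = rate * math.exp(-(v0 + gain * out) / mean_amp)
        out = 0.5 * out + 0.5 * target  # damping keeps the iteration stable
    return out

def fixed_threshold_output(rate, v0=1.0, mean_amp=2.0):
    """No feedback: the counted rate simply scales with the input rate."""
    return rate * math.exp(-v0 / mean_amp)

# A 100x increase in input rate produces a much smaller increase in meter
# reading with feedback than with a fixed threshold.
ratio_feedback = meter_output(100.0) / meter_output(1.0)
ratio_fixed = fixed_threshold_output(100.0) / fixed_threshold_output(1.0)
```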