Towards process-informed bias correction of climate change simulations
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.
2017-11-01
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
Mosquito population dynamics from cellular automata-based simulation
NASA Astrophysics Data System (ADS)
Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning
2016-02-01
In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal of the vector, we use a cellular automata-based model in which each individual is able to move to other grid cells via a random walk. Our model is also capable of representing an immunity factor for each grid cell. We simulate the model to evaluate its correctness. Based on the simulations, we conclude that our model is correct. However, the model still needs to be improved by finding realistic parameters that match real data.
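A minimal sketch of the dispersal stage described above, assuming an unbiased random walk on a square grid with clipped (reflecting) boundaries; the grid size, population, and move set are illustrative stand-ins, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

GRID = 50     # illustrative grid size
N_MOSQ = 200  # illustrative vector population
STEPS = 100   # dispersal steps to simulate

# Each row holds one mosquito's (row, col) cell index.
pos = rng.integers(0, GRID, size=(N_MOSQ, 2))

# Von Neumann neighbourhood moves plus "stay put".
moves = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])

for _ in range(STEPS):
    step = moves[rng.integers(0, len(moves), size=N_MOSQ)]
    pos = np.clip(pos + step, 0, GRID - 1)  # random walk, clipped at edges

# Per-cell density after dispersal (population is conserved).
density = np.zeros((GRID, GRID), dtype=int)
np.add.at(density, (pos[:, 0], pos[:, 1]), 1)
print(density.sum())  # 200
```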
Torres, Jaume; Briggs, John A G; Arkin, Isaiah T
2002-01-01
Molecular interactions between transmembrane alpha-helices can be explored using global searching molecular dynamics simulations (GSMDS), a method that produces a group of probable low energy structures. We have shown previously that the correct model in various homooligomers is always located at the bottom of one of various possible energy basins. Unfortunately, the correct model is not necessarily the one with the lowest energy according to the computational protocol, which has resulted in overlooking of this parameter in favor of experimental data. In an attempt to use energetic considerations in the aforementioned analysis, we used global searching molecular dynamics simulations on three homooligomers of different sizes, the structures of which are known. As expected, our results show that even when the conformational space searched includes the correct structure, taking together simulations using both left and right handedness, the correct model does not necessarily have the lowest energy. However, for the models derived from the simulation that uses the correct handedness, the lowest energy model is always at, or very close to, the correct orientation. We hypothesize that this should also be true when simulations are performed using homologous sequences, and consequently lowest energy models with the right handedness should produce a cluster around a certain orientation. In contrast, using the wrong handedness the lowest energy structures for each sequence should appear at many different orientations. The rationale behind this is that, although more than one energy basin may exist, basins that do not contain the correct model will shift or disappear because they will be destabilized by at least one conservative (i.e. silent) mutation, whereas the basin containing the correct model will remain. This not only allows one to point to the possible handedness of the bundle, but can be used to overcome ambiguities arising from the use of homologous sequences in the analysis of global searching molecular dynamics simulations. In addition, because clustering of lowest energy models arising from homologous sequences only happens when the estimation of the helix tilt is correct, it may provide a validation for the helix tilt estimate. PMID:12023229
Characterizing bias correction uncertainty in wheat yield predictions
NASA Astrophysics Data System (ADS)
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex global and regional climate models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too uncertain for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps used in impact studies to adjust climate model simulations to be more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input to impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from regional climate models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany, in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a general linear model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are able to isolate the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
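A minimal sketch of the kind of statistical crop model described above, fitted as a Gaussian general linear model (ordinary least squares) of yield on seasonal maximum temperature and precipitation; all data and coefficients below are synthetic illustrations, not results from the study:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # years of synthetic hindcast data

tmax = rng.normal(24.0, 1.5, n)      # seasonal maximum temperature (degC)
precip = rng.normal(200.0, 40.0, n)  # seasonal precipitation (mm)
# Synthetic yields: heat stress hurts, rainfall helps (illustrative only).
wheat_yield = (8.0 - 0.3 * (tmax - 24.0)
               + 0.005 * (precip - 200.0)
               + rng.normal(0.0, 0.3, n))

X = sm.add_constant(np.column_stack([tmax, precip]))
fit = sm.OLS(wheat_yield, X).fit()  # Gaussian GLM == ordinary least squares
print(fit.params)  # intercept, temperature and precipitation sensitivities
```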
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in (i) climatological variables and (ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
Winterhalter, Wade E.
2011-09-01
Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
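For concreteness, a hedged sketch of empirical quantile mapping, one of the best-performing methods named above; this is a generic textbook implementation, not the authors' WRF-specific setup or the two-state gamma variant:

```python
import numpy as np

def quantile_map(model_hist, obs, model_new):
    """Map each model value through the model-historical CDF onto the
    observed quantile function (empirical quantile mapping)."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs, q)
    p = np.interp(model_new, model_q, q)  # non-exceedance probability
    return np.interp(p, q, obs_q)         # observed value at that quantile

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 4.0, 3000)  # synthetic observed daily precipitation
mod = rng.gamma(2.0, 6.0, 3000)  # synthetic, wet-biased model precipitation
corrected = quantile_map(mod, obs, mod)
print(round(obs.mean(), 2), round(mod.mean(), 2), round(corrected.mean(), 2))
```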
Simulating an underwater vehicle self-correcting guidance system with Simulink
NASA Astrophysics Data System (ADS)
Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe
2008-09-01
Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that deals with both issues is proposed here. Initially, a mathematical model of relative motion between the vehicle and the target was developed and encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment, giving more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggests that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that do not consider wet-day frequency adjustment. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced by the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
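A minimal sketch of the sliding-window idea, assuming a 31-day window centred on each calendar day and simple multiplicative (linear-scaling-style) factors; the study itself applies the windowed factor derivation to five different correction methods:

```python
import numpy as np

def daily_scaling_factors(obs, mod, window=31):
    """One multiplicative factor per calendar day, estimated from a
    window of +/- window//2 days pooled over all years.
    obs, mod: arrays of shape (n_years, 365)."""
    half = window // 2
    factors = np.empty(365)
    for d in range(365):
        idx = np.arange(d - half, d + half + 1) % 365  # wrap around the year
        m = mod[:, idx].mean()
        factors[d] = obs[:, idx].mean() / m if m > 0 else 1.0
    return factors

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, (30, 365))
mod = 1.4 * rng.gamma(2.0, 3.0, (30, 365))  # synthetic wet-biased model
corrected = mod * daily_scaling_factors(obs, mod)[None, :]
print(round(corrected.mean() / obs.mean(), 2))  # ~1.0
```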
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
Hay, L.E.; Clark, M.P.
2003-01-01
This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a regional climate model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. The need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for regional impact studies of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show that (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased relative to observed meteorological data, their use for streamflow simulations results in large biases relative to observed streamflow, and all bias correction methods effectively improve these simulations; (3) for precipitation, the PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have a more significant influence than temperature correction methods, and the performance of the streamflow simulations is consistent with that of the corrected precipitation; i.e., the PT and QM methods performed equally best in correcting the flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
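As a generic illustration of two of the simpler methods compared above, linear scaling (LS) for precipitation and variance scaling (VARI) for temperature, here is a hedged sketch with synthetic data (not the authors' implementation):

```python
import numpy as np

def linear_scaling_precip(obs, mod_hist, mod):
    """LS: scale precipitation so the long-term model mean matches obs."""
    return mod * (obs.mean() / mod_hist.mean())

def variance_scaling_temp(obs, mod_hist, mod):
    """VARI: shift and rescale temperature to match mean and variance."""
    z = (mod - mod_hist.mean()) / mod_hist.std()
    return obs.mean() + z * obs.std()

rng = np.random.default_rng(3)
obs_t = rng.normal(12.0, 8.0, 3650)  # synthetic observed daily temperature
mod_t = rng.normal(10.0, 6.0, 3650)  # cold-biased model, too little spread
corr_t = variance_scaling_temp(obs_t, mod_t, mod_t)
print(round(corr_t.mean(), 1), round(corr_t.std(), 1))  # ~12.0, ~8.0
```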
Adjustment of spatio-temporal precipitation patterns in a high Alpine environment
NASA Astrophysics Data System (ADS)
Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter
2018-01-01
This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits a large spatio-temporal variability in Alpine areas. Additionally, the density of the monitoring network is low, and measurements are subject to major errors. This can lead to significant deficits in water balance estimation and streamflow simulations, e.g. for flood forecasting models. Therefore, precipitation correction factors are frequently applied. In the presented study, a multiplicative, stepwise linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for the local meteorological conditions, the correction model is derived for two elevation zones: (1) valley floors to 2000 m a.s.l. and (2) above 2000 m a.s.l. to the mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snowfall. Therefore, separate correction factors for winter and summer months are additionally estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model
ERIC Educational Resources Information Center
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo
2017-01-01
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
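For reference, a small sketch computing the four criteria from a fitted model's maximized log-likelihood, using the standard formulas (with Sclove's (n+2)/24 sample-size adjustment for SABIC); the numbers in the demo are made up:

```python
import numpy as np

def information_criteria(loglik, k, n):
    """loglik: maximized log-likelihood; k: free parameters; n: sample size."""
    aic = -2.0 * loglik + 2.0 * k
    aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)        # small-sample correction
    bic = -2.0 * loglik + k * np.log(n)
    sabic = -2.0 * loglik + k * np.log((n + 2) / 24.0)  # sample-size adjusted
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "SABIC": sabic}

# Compare, say, a 2-class against a 3-class mixture solution:
print(information_criteria(loglik=-5210.4, k=25, n=1000))
print(information_criteria(loglik=-5195.1, k=38, n=1000))
```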
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Addor, Nans; Seibert, Jan
2017-04-01
Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating using the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data; in this way, it can be used to correct all time scales, a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties: the ability, or the lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty to the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.
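As a worked illustration of the calibration objective named above, a minimal sketch of the Lindström measure in its common form, model efficiency (NSE) minus a weighted relative volume error; the weight w = 0.1 is a conventional choice assumed here, not a value taken from the abstract:

```python
import numpy as np

def lindstrom_measure(sim, obs, w=0.1):
    """NSE penalized by the relative volume error (Lindstrom, 1997)."""
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rel_vol_err = abs(np.sum(sim - obs)) / np.sum(obs)
    return nse - w * rel_vol_err

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 365)               # synthetic daily streamflow
sim = obs * 1.1 + rng.normal(0.0, 1.0, 365)  # wet-biased simulation
print(round(lindstrom_measure(sim, obs), 3))
```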
VizieR Online Data Catalog: STAGGER-grid of 3D stellar models. V. (Chiavassa+, 2018)
NASA Astrophysics Data System (ADS)
Chiavassa, A.; Casagrande, L.; Collet, R.; Magic, Z.; Bigot, L.; Thevenin, F.; Asplund, M.
2018-01-01
Table B0: RHD simulations' stellar parameters, bolometric magnitude, and bolometric correction for the Johnson-Cousins, 2MASS, SDSS (columns 13 to 17), and Gaia systems.
Table 4: RHD simulations' stellar parameters, bolometric magnitude, and bolometric correction for the SkyMapper photometric system and the Stroemgren indices b-y, m1=(v-b)-(b-y), and c1=(u-v)-(v-b).
Table 5: RHD simulations' stellar parameters, bolometric magnitude, and bolometric correction for the HST-WFC3 in the VEGA system.
Table 6: RHD simulations' stellar parameters, bolometric magnitude, and bolometric correction for the HST-WFC3 in the ST system.
Table 7: RHD simulations' stellar parameters, bolometric magnitude, and bolometric correction for the HST-WFC3 in the AB system.
(5 data files)
An integrated modeling approach to predict flooding on urban basin.
Dey, Ashis Kumar; Kamioka, Seiji
2007-01-01
Correct prediction of flood extents in urban catchments has become a challenging issue. Traditional urban drainage models that consider only the sewerage network are able to simulate the drainage system correctly as long as there is no overflow from the network inlets or manholes. When such overflows exist due to insufficient drainage capacity of downstream pipes or channels, it becomes difficult to reproduce the actual flood extents using these traditional one-phase simulation techniques. On the other hand, traditional 2D models that simulate surface flooding resulting from rainfall and/or levee breaks do not consider the sewerage network. As a result, the correct flooding situation is rarely captured by the available traditional 1D and 2D models. This paper presents an integrated model that simultaneously simulates the sewerage network, the river network and a 2D mesh network to obtain correct flood extents. The model has been successfully applied to the Tenpaku basin (Nagoya, Japan), which experienced severe flooding with a maximum flood depth of more than 1.5 m on September 11, 2000, when heavy rainfall, 580 mm in 28 hrs (return period > 100 yr), occurred over the catchments. Close agreement between the simulated flood depths and observed data shows that the present integrated modeling approach is able to reproduce the urban flooding situation accurately, which can rarely be achieved with the traditional 1D and 2D modeling approaches.
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach relies on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can complete an effective correction of a modal aberration by applying just one disturbance to the deformable mirror (one correction per disturbance); the mode is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction after applying N disturbances to the deformable mirror (one correction per N disturbances).
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
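The abstract does not spell out the JBC algorithm, so the sketch below is only a hedged illustration of why the P-T dependence matters: after independent univariate corrections, it re-pairs the corrected temperatures with the corrected precipitation so that the observed rank structure (and hence the Spearman P-T correlation) is restored, a Schaake-shuffle-like device rather than the authors' method:

```python
import numpy as np

def repair_pt_ranks(p_corr, t_corr, p_obs, t_obs):
    """Reorder corrected temperature so the (P, T) rank pairing
    matches the observed one; the marginals are left untouched."""
    # Observed T ranks, listed in order of increasing observed P.
    t_ranks_obs = np.argsort(np.argsort(t_obs))[np.argsort(p_obs)]
    t_new = np.empty_like(t_corr)
    t_new[np.argsort(p_corr)] = np.sort(t_corr)[t_ranks_obs]
    return t_new

rng = np.random.default_rng(5)
n = 1000
p_obs = rng.gamma(2.0, 3.0, n)
t_obs = 15.0 - 0.5 * p_obs + rng.normal(0.0, 2.0, n)  # observed P-T link
p_c = rng.gamma(2.0, 3.0, n)    # independently corrected precipitation
t_c = rng.normal(10.0, 3.0, n)  # independently corrected temperature
t_fixed = repair_pt_ranks(p_c, t_c, p_obs, t_obs)
print(round(np.corrcoef(p_c, t_fixed)[0, 1], 2))  # now negative, as observed
```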
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error correction, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
Pea, Rany; Dansereau, Jean; Caouette, Christiane; Cobetto, Nikita; Aubin, Carl-Éric
2018-05-01
Orthopedic braces made by computer-aided design and manufacturing with numerical simulation were shown to improve the correction of spinal deformities in adolescent idiopathic scoliosis while using less material. Simulations with BraceSim (Rodin4D, Groupe Lagarrigue, Bordeaux, France) require a sagittal radiograph, which is not always available. The objective was to develop an innovative modeling method based on a single coronal radiograph and surface topography, and to assess the effectiveness of braces designed with this approach. From a patient's coronal radiograph and surface topography, the developed method performed a 3D reconstruction of the spine, rib cage and pelvis using geometric models from a database and a free-form deformation technique. The resulting 3D reconstruction, converted into a finite element model, was used to design and simulate the correction of a brace. The developed method was tested with data from ten scoliosis cases. The simulated correction was compared to analogous simulations performed with a 3D reconstruction built using two radiographs and surface topography (validated gold standard reference). There was an average difference of 1.4°/1.7° for the thoracic/lumbar Cobb angle, and 2.6°/5.5° for the kyphosis/lordosis, between the developed reconstruction method and the reference. The average difference in the simulated correction was 2.8°/2.4° for the thoracic/lumbar Cobb angles and 3.5°/5.4° for the kyphosis/lordosis. This study showed the feasibility of designing and simulating brace corrections based on a new modeling method using a single coronal radiograph and surface topography. This innovative method could be used to improve brace designs at a lower radiation dose for the patient.
NASA Astrophysics Data System (ADS)
Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei
2018-04-01
We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
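Halley's optimisation referred to above is the classic third-order root-finding iteration; the generic sketch below uses an illustrative function in place of the paper's whole-space transient electric field expression:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: x <- x - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Illustrative use: invert x**3 - 2 = 0 (a stand-in for the field equation).
root = halley(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, lambda x: 6 * x, 1.0)
print(root)  # ~1.2599210498948732
```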
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Hsu, K. L.
2017-12-01
A new satellite-based precipitation dataset, Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a long-term time series dating back to 1983, can be a valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance on precipitation projection and streamflow simulation, particularly for extreme precipitation and streamflow events. The results show PERSIANN-CDR is a valuable dataset for climate studies, even for extreme precipitation events. The precipitation estimates and their extreme events from CMIP5 models are improved significantly relative to rain gauge observations after bias correction against the PERSIANN-CDR precipitation estimates. Of the streamflows simulated with raw and bias-corrected precipitation estimates from the 16 CMIP5 models, 10 out of 16 are improved after bias correction. The impact of bias correction on extreme streamflow events is unstable: only eight out of 16 models are clearly improved after bias correction. Concerning the performance of raw CMIP5 models on precipitation, IPSL-CM5A-MR outperforms the other CMIP5 models, while MRI-CGCM3 stands out for extreme events with its better performance on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5, and MRI-CGCM3 outperform the other models on streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM and IPSL-CM5A-MR behave better than the other models after bias correction.
Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data
NASA Technical Reports Server (NTRS)
Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.
2002-01-01
This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate the ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.
Energy considerations in the Community Atmosphere Model (CAM)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...
2015-06-30
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
NASA Technical Reports Server (NTRS)
Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.
2013-01-01
The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean model, have led to an Earth System Model that will produce more accurate projections of future climate.
NASA Astrophysics Data System (ADS)
da Silva, Felipe das Neves Roque; Alves, José Luis Drummond; Cataldi, Marcio
2018-03-01
This paper aims to validate inflow simulations for the present-day climate at the Água Vermelha Hydroelectric Plant (AVHP, located in the Grande River basin) based on the Soil Moisture Accounting Procedure (SMAP) hydrological model. In order to provide rainfall data to the SMAP model, the RegCM regional climate model was used with boundary conditions from the MIROC model. Initially, the present-day climate simulation performed by the RegCM model was analyzed. It was found that, in terms of rainfall, the model was able to simulate the main patterns observed over South America. A bias correction technique was also used and was essential to reduce errors in the rainfall simulation. Comparison between rainfall simulations from RegCM and MIROC showed improvements when the dynamical downscaling was performed. Then, SMAP, a rainfall-runoff hydrological model, was used to simulate inflows at the Água Vermelha Hydroelectric Plant. After calibration with observed rainfall, SMAP simulations were evaluated over two periods different from the one used in calibration. During calibration, SMAP captures the inflow variability observed at AVHP. During the validation periods, the hydrological model obtained better results and statistics with observed rainfall. In spite of some discrepancies, the use of simulated rainfall without bias correction still captured the interannual flow variability. However, removing the bias in the rainfall simulated by RegCM brought significant improvements to the simulation of natural inflows performed by SMAP: not only did the simulated inflow curve become more similar to the observed inflow, but the statistics also improved. Improvements were also noticed in the inflow simulation when the rainfall was provided by the regional climate model rather than the global model. In general, the results obtained so far show that there was added value in rainfall when the regional climate model was compared to the global climate model, and that data from regional models must be bias-corrected to improve their results.
Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations
NASA Astrophysics Data System (ADS)
Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan
2017-08-01
Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer-resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations, thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts ERA-Interim (ERA-I) reanalysis atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered on southern Africa between the years 2003 and 2012. The sample mean, as well as the mean and standard deviation at each grid cell for each variable, are used for bias correction. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performance, however, differs when assessing different attributes of the downscaled field. This raises questions about the efficacy of the correction procedures adopted.
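A minimal sketch of the per-grid-cell mean and standard deviation correction described above, with synthetic stand-ins for the AIRS reference and the ERA-I field; shapes and values are illustrative:

```python
import numpy as np

def mean_std_correct(field, ref):
    """field, ref: (time, lat, lon). Rescale each grid cell so its time
    mean and standard deviation match the reference data set."""
    mu_f, sd_f = field.mean(axis=0), field.std(axis=0)
    mu_r, sd_r = ref.mean(axis=0), ref.std(axis=0)
    sd_f = np.where(sd_f > 0, sd_f, 1.0)  # guard against constant cells
    return (field - mu_f) * (sd_r / sd_f) + mu_r

rng = np.random.default_rng(6)
ref = rng.normal(288.0, 5.0, (365, 20, 30))    # "satellite" reference (K)
field = rng.normal(290.0, 3.0, (365, 20, 30))  # biased "reanalysis" field
corrected = mean_std_correct(field, ref)
print(round(corrected.mean(), 1), round(corrected.std(), 1))  # ~288.0, ~5.0
```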
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases.
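As a hedged sketch of what a mean-and-variance adjustment involves, the function below implements the generic Satterthwaite-type scaling used in Satorra-Bentler-style adjusted tests: the statistic and its degrees of freedom are rescaled using the first two trace moments of the matrix UΓ. The exact correction proposed in the paper may differ, and UΓ is assumed to be supplied by the fitting machinery:

```python
import numpy as np
from scipy import stats

def mean_variance_adjusted_test(T, UG):
    """T: unadjusted chi-square fit statistic; UG: the U*Gamma matrix
    whose traces drive the adjustment (Satterthwaite-type)."""
    t1 = np.trace(UG)
    t2 = np.trace(UG @ UG)
    d = t1 ** 2 / t2    # adjusted degrees of freedom
    T_adj = d * T / t1  # adjusted statistic
    return T_adj, d, stats.chi2.sf(T_adj, d)

rng = np.random.default_rng(7)
G = rng.random((6, 6))
UG = G @ G.T  # illustrative positive-definite stand-in for U*Gamma
print(mean_variance_adjusted_test(T=28.3, UG=UG))
```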
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Nguyen, H. T.; Tran, H.
2018-03-01
In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For that, we use the partially-correlated speed-dependent Keilson-Storer (pcsdKS) model, whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account the collision-induced velocity-changes effects, the speed dependences of the collisional line width and shift, and the correlation between velocity and internal-state changes. For each considered transition, the model is corrected using a parameter deduced from its broadening coefficient measured at a single pressure. The corrected pcsdKS model is then used to simulate spectra over a wide pressure range. Direct comparisons of the corrected pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O at various pressures, from 0.1 to 1.2 atm, show very good agreement. The maximum differences are in most cases well below 1%, much smaller than the residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes for various pressure conditions and that the simulated spectra can be used to deduce refined line-shape parameters to complete spectroscopic databases, in the absence of relevant experimental values.
A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style
ERIC Educational Resources Information Center
Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman
2016-01-01
The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…
Correction of ultrasonic wave aberration with a time delay and amplitude filter.
Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn
2003-04-01
Two-dimensional simulations with propagation through two different heterogeneous human body wall models have been performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. The different models each produce most of the characteristic aberration effects such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed. Center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied, a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the arrival time fluctuations estimate, i.e., the time delay correction filter.
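As a hedged sketch of what a time delay and amplitude filter involves: estimate each element's arrival-time fluctuation (here via the cross-correlation peak against a reference element) and an amplitude factor (here an RMS ratio), then compensate both before beamforming. The estimators and parameters are illustrative choices, not the paper's:

```python
import numpy as np

def estimate_delay_amplitude(signals, fs, ref=0):
    """signals: (n_elements, n_samples) received wavefronts; fs in Hz.
    Returns per-element delay (s) and amplitude factor vs element ref."""
    n = signals.shape[1]
    delays, amps = [], []
    for s in signals:
        xc = np.correlate(s, signals[ref], mode="full")
        delays.append((np.argmax(xc) - (n - 1)) / fs)  # peak-lag delay
        amps.append(np.sqrt(np.sum(s ** 2) / np.sum(signals[ref] ** 2)))
    return np.array(delays), np.array(amps)

def apply_correction(signals, fs, delays, amps):
    """Advance each element by its delay and equalize its amplitude."""
    out = np.empty_like(signals)
    for i, s in enumerate(signals):
        out[i] = np.roll(s, -int(round(delays[i] * fs))) / amps[i]
    return out
```

A pure time delay filter corresponds to setting all amplitude factors to one, which matches the distinction between the two correction filters compared in the abstract.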
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
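A minimal sketch of the two-step idea, assuming scikit-learn's FastICA and a simple empirical quantile mapping as the univariate corrector (the paper's actual estimators may differ; model and observed fields must share the same grid):

```python
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(x, ref):
    """Map the empirical quantiles of x onto those of ref."""
    ranks = np.searchsorted(np.sort(x), x, side="right") / len(x)
    return np.quantile(ref, np.clip(ranks, 0.0, 1.0))

def ica_bias_correct(X_mod, X_obs, n_components=10):
    """X_mod, X_obs: (n_times, n_gridcells) model and observed fields."""
    ica = FastICA(n_components=n_components, random_state=0)
    S_obs = ica.fit_transform(X_obs)   # independent signals of observations
    S_mod = ica.transform(X_mod)       # model data projected onto same basis
    # Step 1: correct each statistically independent signal as a
    # univariate time series, then back-transform to restore the
    # spatial dependence structure.
    S_corr = np.column_stack([quantile_map(S_mod[:, k], S_obs[:, k])
                              for k in range(S_mod.shape[1])])
    X_spatial = ica.inverse_transform(S_corr)
    # Step 2: conventional grid-by-grid correction of the result.
    return np.column_stack([quantile_map(X_spatial[:, j], X_obs[:, j])
                            for j in range(X_spatial.shape[1])])
```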
Effect of attenuation correction on image quality in emission tomography
NASA Astrophysics Data System (ADS)
Denisova, N. V.; Ondar, M. M.
2017-10-01
In this paper, mathematical modeling and computer simulations of myocardial perfusion SPECT imaging are performed. The main factors affecting the quality of reconstructed images in SPECT are anatomical structures, the diastolic volume of the myocardium, and the attenuation of gamma rays. The purpose of the present work is to study the effect of attenuation correction on image quality in emission tomography. A basic 2D model describing the Tc-99m distribution in a transaxial slice of the thoracic part of a patient body was designed. This model was used to construct four phantoms simulating various anatomical shapes: two male and two female patients with normal, obese, and slight physiques were included in the study. A data acquisition model, which includes the effects of non-uniform attenuation, the collimator-detector response, and Poisson statistics, was developed. The projection data were calculated for 60 views in accordance with the standard myocardial perfusion SPECT imaging protocol. Image reconstructions were performed using the OSEM algorithm, which is widely used in modern SPECT systems. Two types of patient examination procedures were simulated: SPECT without attenuation correction and SPECT/CT with attenuation correction. The obtained results indicate a significant effect of attenuation correction on SPECT image quality.
A mass-conserving multiphase lattice Boltzmann model for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Niu, Xiao-Dong; Li, You; Ma, Yi-Ren; Chen, Mu-Feng; Li, Xiang; Li, Qiao-Zhong
2018-01-01
In this study, a mass-conserving multiphase lattice Boltzmann (LB) model is proposed for simulating multiphase flows. The proposed model improves the model of Shao et al. ["Free-energy-based lattice Boltzmann model for simulation of multiphase flows with density contrast," Phys. Rev. E 89, 033309 (2014)] by introducing a mass correction term into the lattice Boltzmann equation for the interface. The model of Shao et al. [the improved Zheng-Shu-Chew (Z-S-C) model] correctly considers the effect of the local density variation in the momentum equation and clearly improves on the Zheng-Shu-Chew (Z-S-C) model ["A lattice Boltzmann model for multiphase flows with large density ratio," J. Comput. Phys. 218(1), 353-371 (2006)] in terms of solution accuracy. However, due to physical diffusion and numerical dissipation, the total mass of each fluid phase is not conserved correctly. To solve this problem, a mass correction term, similar to the one proposed by Wang et al. ["A mass-conserved diffuse interface method and its application for incompressible multiphase flows with large density ratio," J. Comput. Phys. 290, 336-351 (2015)], is introduced into the lattice Boltzmann equation for the interface to compensate for mass loss or offset mass gain. Meanwhile, to implement the wetting boundary condition and the contact angle, a geometric formulation and a local force are incorporated into the present mass-conserving LB model. The proposed model is validated by verifying the Laplace law and by simulating one and two aligned droplets splashing onto a liquid film, droplets standing on an ideal wall, droplets with different wettability splashing onto smooth wax, and bubbles rising under buoyancy. Numerical results show that the proposed model correctly simulates multiphase flows and that mass is well conserved in all cases considered; in this respect the developed model performs better than the improved Z-S-C model.
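A crude post-step variant of such a mass correction (the paper adds the term inside the LB equation for the interface; this standalone sketch only illustrates the bookkeeping of restoring the conserved total on interface cells):

```python
import numpy as np

def mass_correction(phi, phi0_total, interface_mask):
    """Redistribute the global mass defect over interface cells.

    phi:            order-parameter field after streaming/collision
    phi0_total:     conserved total (sum of phi at t = 0)
    interface_mask: boolean field marking diffuse-interface cells
    """
    defect = phi0_total - phi.sum()
    n_if = interface_mask.sum()
    if n_if > 0:
        # Apply a uniform correction on interface cells only, so the
        # bulk phases are untouched and the global mass is restored.
        phi = phi.copy()
        phi[interface_mask] += defect / n_if
    return phi
```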
Extension of the Haseman-Elston regression model to longitudinal data.
Won, Sungho; Elston, Robert C; Park, Taesung
2006-01-01
We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variables, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene × time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Hink, R.
2015-09-01
The choice of materials for rocket chamber walls is limited by their thermal resistance. The thermal loads can be reduced substantially by blowing gases out through a porous surface. The k-ω-based turbulence models for computational fluid dynamics simulations are designed for smooth, non-permeable walls and have to be adjusted to account for the influence of injected fluids. Wilcox therefore proposed an extension of the k-ω turbulence model for the correct prediction of turbulent boundary layer velocity profiles. In this study, this extension is validated against experimental thermal boundary layer data from the Thermosciences Division of the Department of Mechanical Engineering at Stanford University. All simulations are performed with a finite volume-based in-house code of the German Aerospace Center. Several simulations with different blowing settings were conducted and discussed in comparison to the results of the original model and to an additional roughness implementation. This study showed that velocity profile corrections, rather than additional roughness corrections, are necessary to predict the correct thermal boundary layer profile of effusion-cooled walls. Finally, this approach is applied to a two-dimensional simulation of an effusion-cooled rocket chamber wall.
Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Yu, Zhiping; Dutton, Robert W.; Ancona, Mario G.; Saini, Subhash (Technical Monitor)
1998-01-01
We describe an electronic transport model and an implementation approach that respond to the challenges of device modeling for gigascale integration. We use the density-gradient (DG) transport model, which adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We present the current implementation of the DG model in PROPHET, a partial differential equation solver developed by Lucent Technologies. This implementation approach permits rapid development and enhancement of models, as well as run-time modifications and model switching. We show that even in typical bulk transport devices such as P-N diodes and BJTs, DG quantum effects can significantly modify the I-V characteristics. Quantum effects are shown to be even more significant in small, surface transport devices, such as sub-0.1 micron MOSFETs. In thin-oxide MOS capacitors, we find that quantum effects may reduce gate capacitance by 25% or more. The inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements. Significant quantum corrections also occur in the I-V characteristics of short-channel MOSFETs due to the gate capacitance correction.
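For reference, the density-gradient correction is often written as a quantum potential added to the drift-diffusion driving force; a commonly cited form (which may differ in detail from the PROPHET implementation described above) is

\[
\varphi_n^{\mathrm{DG}} \;=\; 2\,b_n\,\frac{\nabla^2\sqrt{n}}{\sqrt{n}},
\qquad
b_n \;=\; \frac{\hbar^2}{12\,q\,m_n^*},
\]

where \(n\) is the electron density, \(q\) the elementary charge, and \(m_n^*\) the electron effective mass; an analogous term applies to holes. This term produces the tunneling and quantum smoothing of carrier density profiles mentioned above.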
Fast Magnetotail Reconnection: Challenge to Global MHD Modeling
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.; Hesse, M.; Rastaetter, L.; Toth, G.; de Zeeuw, D.; Gombosi, T.
2005-05-01
Representation of fast magnetotail reconnection rates during substorm onset is one of the major challenges to global MHD modeling. Our previous comparative study of collisionless magnetic reconnection in the GEM Challenge geometry demonstrated that the reconnection rate is controlled by ion nongyrotropic behavior near the reconnection site and that it can be described in terms of nongyrotropic corrections to the magnetic induction equation. To further test the approach, we performed MHD simulations with nongyrotropic corrections of forced reconnection for the Newton Challenge setup. As a next step, we employ the global MHD code BATSRUS and test different methods to model fast magnetotail reconnection rates by introducing non-ideal corrections to the induction equation in terms of nongyrotropic corrections, spatially localized resistivity, or current-dependent resistivity. The BATSRUS adaptive grid structure allows us to perform global simulations with spatial resolution near the reconnection site comparable to that of local MHD simulations for the Newton Challenge. We select solar wind conditions which drive the accumulation of magnetic field in the tail lobes and subsequent magnetic reconnection and energy release. Testing the ability of global MHD models to describe magnetotail evolution during substorms is one of the elements of the science-based validation efforts at the Community Coordinated Modeling Center.
Entity Modeling and Immersive Decision Environments
2011-09-01
Simulation Technologies (REST). Lerman, D. J. (2010). Correct Weather Modeling of non-Standard Days (10F-SIW-004). In Proceedings of the 2010 Fall Simulation Interoperability Workshop (Fall SIW). Orlando, FL: SISO. Most flight simulators compute and fly in a weather environment that matches a…
O'Doherty, Jim; Chilcott, Anna; Dunn, Joel
2015-11-01
Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which increases the length of the tubing between the patient and the detector and thus worsens the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces using 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions based on experimental measurements, numerical simulations, and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed. The model could not dispersion-correct data acquired using 4.5 m arterial tubing.
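The transmission-dispersion model itself is not reproduced here; a widely used simpler alternative, which assumes monoexponential dispersion of the true input function, illustrates the principle of the correction:

```python
import numpy as np

def deconvolve_dispersion(measured, dt, tau):
    """Correct a sampled blood curve for monoexponential dispersion.

    Assumes measured = true convolved with (1/tau) * exp(-t/tau);
    the correction is then true(t) = measured(t) + tau * d(measured)/dt.
    dt is the sampling interval [s], tau the dispersion constant [s].
    """
    return measured + tau * np.gradient(measured, dt)
```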
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction: the quantification error relative to a dose calibrator-derived measurement was <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
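A sketch of the standard TEW scatter estimate, including the optional weighting factor mentioned above (variable names are assumptions, not the paper's notation):

```python
import numpy as np

def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak, k=1.0):
    """Triple-energy-window scatter estimate for the photopeak window.

    c_lower, c_upper: counts in the narrow windows flanking the photopeak
    w_lower, w_upper, w_peak: window widths [keV]
    k: optional weighting factor
    """
    scatter = k * (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return np.maximum(scatter, 0.0)

# Scatter-corrected photopeak counts would then be:
# primary = photopeak_counts - tew_scatter_estimate(...)
```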
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
Petition for the US EPA to correct information concerning motor vehicle fuel emissions represented in the Motor Vehicle Emissions Simulator model (MOVES2014) and the EPAct/V2/E-89 fuel effects study (EPAct study) on which it is based
Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan
2013-01-01
A large spectrum of medical devices exists to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated with patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426
Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models
ERIC Educational Resources Information Center
Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.
2016-01-01
This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
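The core of a symmetric (Löwdin-style) orthogonalisation can be sketched as follows; the full method of Colclough et al. additionally iterates to restore per-ROI magnitudes, which this minimal version omits:

```python
import numpy as np

def symmetric_orthogonalise(X):
    """Closest matrix with mutually orthogonal columns (Procrustes sense).

    X: (n_samples, n_rois) ROI time-courses. Returns U @ Vt, the
    orthonormal-column matrix nearest to X in the least-squares sense,
    so that no single ROI is privileged (the correction is symmetric).
    """
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt
```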
Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling
NASA Astrophysics Data System (ADS)
Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang
2018-04-01
Digital Elevation Model (DEM) data are one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data are subject to limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors, and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e., the Bare-Earth DEM), developed from the SRTM DEM by accounting for both vegetation height and the SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e., the Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM uses easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, the Bare-Earth DEM, the Multi-Error Removed DEM, and the hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four DEMs, and favorable results have been obtained on the corrected DEM.
NASA Astrophysics Data System (ADS)
Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.
2017-12-01
Taiwan encounters heavy rainfall frequently; three to four typhoons strike Taiwan every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tanshui River using time-series correction techniques. The predicted rainfall is used as input to the rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to the 1-D river routing model, which outputs simulated water stages at 487 cross sections for the next 48 h. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time-series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve predictive accuracy. The simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfall can also be shown on this platform. If the simulated water stage exceeds the embankments of the Tanshui River, the alerting lights of the FEWS flash on the screen. The platform runs periodically and automatically to generate graphic data of simulated flood water stages for flood disaster prevention and decision making.
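A minimal sketch of such an AR-based correction of forecast stages, assuming statsmodels and treating the observed-minus-simulated errors as the series to model (not the operational system's code; all names are hypothetical):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_corrected_forecast(sim_future, sim_past, obs_past, lags=3):
    """Correct simulated water stages with an AR model of past errors.

    sim_past, obs_past: aligned historical simulated and measured stages
    sim_future:         model forecast for the coming hours
    """
    errors = np.asarray(obs_past) - np.asarray(sim_past)
    fit = AutoReg(errors, lags=lags).fit()
    # Extrapolate the error series over the forecast horizon and add it
    # back to the raw model forecast.
    err_forecast = fit.predict(start=len(errors),
                               end=len(errors) + len(sim_future) - 1)
    return np.asarray(sim_future) + err_forecast
```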
Combining Statistics and Physics to Improve Climate Downscaling
NASA Astrophysics Data System (ADS)
Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.
2017-12-01
Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach that projects hydrology under climate change using super-ensemble experiments. The use of multiple ensemble members is essential for the estimation of extremes, which are a major issue in the impact assessment of climate change; hence, super-ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensemble members, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use super-ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to run hydrological model simulations, including land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only a river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so that a preprocessing step which corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly. The method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super-ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The validation using historical simulations by HAPPI indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super-ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
NASA Astrophysics Data System (ADS)
Liersch, Stefan; Tecklenburg, Julia; Rust, Henning; Dobler, Andreas; Fischer, Madlen; Kruschke, Tim; Koch, Hagen; Fokko Hattermann, Fred
2018-04-01
Climate simulations are the fuel to drive hydrological models that are used to assess the impacts of climate change and variability on hydrological parameters, such as river discharges, soil moisture, and evapotranspiration. Unlike with cars, where we know which fuel the engine requires, we never know in advance what unexpected side effects might be caused by the fuel we feed our models with. Sometimes we increase the fuel's octane number (bias correction) to achieve better performance and find out that the model behaves differently but not always as was expected or desired. This study investigates the impacts of projected climate change on the hydrology of the Upper Blue Nile catchment using two model ensembles consisting of five global CMIP5 Earth system models and 10 regional climate models (CORDEX Africa). WATCH forcing data were used to calibrate an eco-hydrological model and to bias-correct both model ensembles using slightly differing approaches. On the one hand it was found that the bias correction methods considerably improved the performance of average rainfall characteristics in the reference period (1970-1999) in most of the cases. This also holds true for non-extreme discharge conditions between Q20 and Q80. On the other hand, bias-corrected simulations tend to overemphasize magnitudes of projected change signals and extremes. A general weakness of both uncorrected and bias-corrected simulations is the rather poor representation of high and low flows and their extremes, which were often deteriorated by bias correction. This inaccuracy is a crucial deficiency for regional impact studies dealing with water management issues and it is therefore important to analyse model performance and characteristics and the effect of bias correction, and eventually to exclude some climate models from the ensemble. However, the multi-model means of all ensembles project increasing average annual discharges in the Upper Blue Nile catchment and a shift in seasonal patterns, with decreasing discharges in June and July and increasing discharges from August to November.
NASA Astrophysics Data System (ADS)
Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio
2010-05-01
Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
Modeling boundary measurements of scattered light using the corrected diffusion approximation
Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.
2012-01-01
We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102
Off-the-job training for VATS employing anatomically correct lung models.
Obuchi, Toshiro; Imakiire, Takayuki; Miyahara, Sou; Nakashima, Hiroyasu; Hamanaka, Wakako; Yanagisawa, Jun; Hamatake, Daisuke; Shiraishi, Takeshi; Moriyama, Shigeharu; Iwasaki, Akinori
2012-02-01
We evaluated our simulated major lung resection employing anatomically correct lung models as "off-the-job training" for video-assisted thoracic surgery trainees. A total of 76 surgeons voluntarily participated in our study. They performed video-assisted thoracic surgical lobectomy employing anatomically correct lung models, which are made of sponges so that vessels and bronchi can be cut using usual surgical techniques with typical forceps. After the simulation surgery, participants answered questionnaires on a visual analogue scale, in terms of their level of interest and the reality of our training method as off-the-job training for trainees. We considered that the closer a score was to 10, the more useful our method would be for training new surgeons. Regarding the appeal or level of interest in this simulation surgery, the mean score was 8.3 of 10, and regarding reality, it was 7.0. The participants could feel some of the real sensations of the surgery and seemed to be satisfied to perform the simulation lobectomy. Our training method is considered to be suitable as an appropriate type of surgical off-the-job training.
NASA Astrophysics Data System (ADS)
Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.
2018-01-01
In many developing countries and remote areas of important ecosystems, good-quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall estimates (SREs). Nevertheless, these products are prone to systematic errors and require extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) for simulation of the discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the coefficient of correlation. For both watersheds, the flow simulated using the bias-corrected MPEG SRE was comparable to that obtained with gauged rainfall. The study indicated the potential of MPEG SREs in water budget studies after applying a linear bias correction.
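A minimal sketch of a linear bias correction of this kind (array names are hypothetical; the exact form fitted in the paper is not reproduced here):

```python
import numpy as np

def linear_bias_correction(satellite, gauge):
    """Fit corrected = a * satellite + b against gauged rainfall and
    return a correction function for new satellite estimates."""
    a, b = np.polyfit(satellite, gauge, deg=1)
    # Clamp at zero: corrected rainfall cannot be negative.
    return lambda x: np.maximum(a * np.asarray(x) + b, 0.0)

# Usage (hypothetical arrays):
# correct = linear_bias_correction(mpeg_train, gauge_train)
# mpeg_corrected = correct(mpeg_all)
```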
NASA Astrophysics Data System (ADS)
Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik
2018-03-01
Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.
Fast ray-tracing of human eye optics on Graphics Processing Units.
Wei, Qi; Patkar, Saket; Pai, Dinesh K
2014-05-01
We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective for modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A density-adaptive SPH method with kernel gradient correction for modeling explosive welding
NASA Astrophysics Data System (ADS)
Liu, M. B.; Zhang, Z. L.; Feng, D. L.
2017-09-01
Explosive welding involves processes like the detonation of explosive, impact of metal structures and strong fluid-structure interaction, while the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density adapting technique which can effectively treat large density ratio is also proposed. The developed SPH model is firstly validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture typical physics in explosive welding including explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
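For context, a common form of kernel gradient correction (in the spirit of Bonet and Lok; the paper's exact variant may differ) replaces the kernel gradient by a renormalized one,

\[
\widetilde{\nabla}W_{ij} \;=\; \mathbf{L}_i\,\nabla W_{ij},
\qquad
\mathbf{L}_i \;=\; \Bigl(\sum_j V_j\,\nabla W_{ij}\otimes(\mathbf{x}_j-\mathbf{x}_i)\Bigr)^{-1},
\]

where \(V_j\) is the particle volume. The correction matrix \(\mathbf{L}_i\) restores exact gradients of linear fields even for irregular particle distributions, which is what improves the computational accuracy near shocks and material interfaces.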
Simple liquid models with corrected dielectric constants
Fennell, Christopher J.; Li, Libo; Dill, Ken A.
2012-01-01
Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
Can climate models be tuned to simulate the global mean absolute temperature correctly?
NASA Astrophysics Data System (ADS)
Duan, Q.; Shi, Y.; Gong, W.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) has already issued five assessment reports (ARs), which include simulations of the past climate and projections of the future climate under various scenarios. The participating models can simulate reasonably well the trend in global mean temperature change, especially over the last 150 years. However, there is a large, constant discrepancy in the simulated global mean absolute temperature over this period. This discrepancy remained in the same range between IPCC-AR4 and IPCC-AR5 and amounts to about 3 °C between the coldest and the warmest model. This discrepancy has great implications for land processes, particularly those related to the cryosphere, and casts doubt on whether land-atmosphere-ocean interactions are correctly represented in those models. This presentation aims to explore whether this discrepancy can be reduced through model tuning. We present an automatic model calibration strategy to tune the parameters of a climate model so that the simulated global mean absolute temperature matches the observed data over the last 150 years. An intermediate-complexity model known as LOVECLIM is used in the study. This presentation will show the preliminary results.
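A minimal stand-in for such an automatic calibration loop (the abstract does not specify the optimizer; `run_model` is a hypothetical wrapper around, e.g., a LOVECLIM run):

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(run_model, params0, obs_gmt):
    """Tune climate-model parameters to match observed global mean T.

    run_model(params) -> simulated global mean temperature series
    obs_gmt           -> observed series over the same 150 years
    """
    def cost(p):
        # Mean squared mismatch of absolute global mean temperature.
        return np.mean((run_model(p) - obs_gmt) ** 2)
    return minimize(cost, params0, method="Nelder-Mead")
```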
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before their output can be used in impact studies. Several bias correction methods have been developed over the years, relying on increasingly complex statistical methods. The aim of this work is to show the value of the CDFt (Cumulative Distribution Function transform (Michelangeli et al., 2009)) method for reducing the bias of 29 CMIP5 GCMs over Africa, and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and we correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared to the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data. Secondly, we validate the bias correction method with agronomic simulations (the SARRA-H model (Kouressy et al., 2008)) by comparison with FAO crop yield estimates over West Africa. The impact simulations show that the crop model is sensitive to input data. They also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy M, Dingkuhn M, Vaksmann M and Heinemann A B 2008: Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
Inter-model Diversity of ENSO simulation and its relation to basic states
NASA Astrophysics Data System (ADS)
Kug, J. S.; Ham, Y. G.
2016-12-01
In this study, a new methodology is developed to improve the climate simulation of state-of-the-art coupled global climate models (GCMs) by a postprocessing based on intermodel diversity. Based on the close connection between the interannual variability and climatological states, a distinctive relation between the intermodel diversity of the interannual variability and that of the basic state is found. Based on this relation, the simulated interannual variabilities can be improved by correcting their climatological bias. To test this methodology, the dominant intermodel difference in precipitation responses during El Niño-Southern Oscillation (ENSO) is investigated, together with its relationship with the climatological state. It is found that the dominant intermodel diversity of the ENSO precipitation in phase 5 of the Coupled Model Intercomparison Project (CMIP5) is associated with the zonal shift of the positive precipitation center during El Niño. This dominant intermodel difference is significantly correlated with the basic states. The models with wetter (dryer) climatology than the climatology of the multimodel ensemble (MME) over the central Pacific tend to shift positive ENSO precipitation anomalies to the east (west). Given the models' systematic errors in atmospheric ENSO response and bias, the models with better climatological states tend to simulate more realistic atmospheric ENSO responses. Therefore, the statistical method to correct the ENSO response mostly improves the ENSO response. After the statistical correction, the quality of the MME ENSO precipitation simulation is distinctly improved. These results suggest that the present methodology can also be applied to improving climate projections and seasonal climate prediction.
USDA-ARS?s Scientific Manuscript database
Accurately predicting phenology in crop simulation models is critical for correctly simulating crop production. While extensive work in modeling phenology has focused on the temperature response function (resulting in robust phenology models), limited work on quantifying the phenological responses t...
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
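A toy sketch of the contrast with piecewise-constant assignment: dynamic model generation amounts to evaluating a field-dependent model parameter continuously at each simulation region's position rather than snapping to the nearest pre-built static model (all names and values below are hypothetical):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical: one calibrated aberration coefficient at a coarse
# 5 x 5 grid of field positions [mm].
xs = ys = np.linspace(-13.0, 13.0, 5)
coeff = np.random.default_rng(0).normal(size=(5, 5))

interp = RegularGridInterpolator((xs, ys), coeff, method="linear")

def model_at(x_mm, y_mm):
    """Return a continuously varying model parameter for the simulation
    region at (x_mm, y_mm), avoiding the discontinuities of
    piecewise-constant static-model assignment."""
    return interp([[x_mm, y_mm]])[0]
```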
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten-year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding-sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core, the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
A Dynamical Downscaling Approach with GCM Bias Corrections and Spectral Nudging
NASA Astrophysics Data System (ADS)
Xu, Z.; Yang, Z.
2013-12-01
To reduce the biases in regional climate downscaling simulations, a dynamical downscaling approach with GCM bias corrections and spectral nudging is developed and assessed over North America. Regional climate simulations are performed with the Weather Research and Forecasting (WRF) model embedded in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). To reduce the GCM biases, the GCM climatological means and the variances of interannual variations are adjusted based on the National Centers for Environmental Prediction-NCAR global reanalysis products (NNRP) before using them to drive WRF, as in our previous method. In this study, we further introduce spectral nudging to reduce the RCM-based biases. Two sets of WRF experiments are performed, with and without spectral nudging. All WRF experiments are identical except that the initial and lateral boundary conditions are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The GCM-driven RCM simulations with bias corrections and spectral nudging (IDDng) are compared with those without spectral nudging (IDD) and with North American Regional Reanalysis (NARR) data to assess the additional reduction in RCM biases relative to the IDD approach. The results show that spectral nudging introduces the effect of GCM bias correction into the RCM domain, thereby minimizing the climate drift resulting from the RCM biases. The GCM bias corrections and spectral nudging significantly improve the downscaled mean climate and extreme temperature simulations. Our results suggest that both GCM bias corrections and spectral nudging are necessary to reduce the error of the downscaled climate; applying only one of them does not guarantee a better downscaling simulation. The new dynamical downscaling method can be applied to regional projections of future climate or to downscaling of GCM sensitivity simulations.
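A minimal sketch of the mean-and-variance adjustment described above (a sketch under the stated assumptions, not the authors' code; climatologies are assumed precomputed for matching calendar periods):

```python
import numpy as np

def adjust_gcm(gcm, gcm_clim_mean, gcm_clim_std, rea_mean, rea_std):
    """Replace the GCM climatological mean and interannual variance
    with reanalysis-based values while keeping the GCM's own anomalies:
    corrected = rea_mean + (gcm - gcm_mean) * (rea_std / gcm_std)."""
    return rea_mean + (gcm - gcm_clim_mean) * (rea_std / gcm_clim_std)
```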
The origin of consistent protein structure refinement from structural averaging.
Park, Hahnbeom; DiMaio, Frank; Baker, David
2015-06-02
Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state. Copyright © 2015 Elsevier Ltd. All rights reserved.
Structures and Intermittency in a Passive Scalar Model
NASA Astrophysics Data System (ADS)
Vergassola, M.; Mazzino, A.
1997-09-01
Perturbative expansions for intermittency scaling exponents in the Kraichnan passive scalar model [Phys. Rev. Lett. 72, 1016 (1994)] are investigated. A one-dimensional compressible model is considered for this purpose. High resolution Monte Carlo simulations using an Ito approach adapted to an advecting velocity field with a very short correlation time are performed and lead to clean scaling behavior for passive scalar structure functions. Perturbative predictions for the scaling exponents around the Gaussian limit of the model are derived as in the Kraichnan model. Their comparison with the simulations indicates that the scale-invariant perturbative scheme correctly captures the inertial range intermittency corrections associated with the intense localized structures observed in the dynamics.
Methodological challenges to bridge the gap between regional climate and hydrology models
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph; Felder, Guido
2017-04-01
The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that allows the phenomenon to be assessed in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so a number of methodological challenges need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, prominently the assessment of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes while disregarding the absolute values that are more affected by biases. However, this approach is not suitable in this scenario, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. Therefore, the bias has to be removed beforehand, a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions, and Quantile Mapping (QM). The target region is Switzerland, and the observational dataset is therefore provided by MeteoSwiss. The RCM is the Weather Research and Forecasting model (WRF), driven at the boundaries by the Community Earth System Model (CESM). The raw simulation driven by CESM exhibits prominent biases that stand out in the evolution of the annual cycle and demonstrate that the correction of biases is mandatory in this type of study, rather than a minor correction that might be neglected. The simulation spans the period 1976-2005, and the correction is applied on a daily basis. Both methods lead to a corrected field of precipitation that respects the temporal evolution of the simulated precipitation while mimicking the distribution of precipitation in the observations. Due to the nature of the two methodologies, there are important differences between their products, leading to datasets with different properties. FIT is generally more accurate at reproducing the tails of the distribution, i.e. extreme events, whereas the nature of QM renders it a general-purpose correction whose skill is equally distributed across the full distribution of precipitation, including central values.
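A minimal sketch of empirical quantile mapping as discussed above (the FIT variant would instead fit a linear relation between the sorted reference-period distributions; names are illustrative):

```python
import numpy as np

def quantile_map(sim, sim_ref, obs_ref, n_quantiles=101):
    """Empirical quantile mapping: transfer simulated values onto the
    observed distribution, using reference-period data to build the
    quantile-to-quantile transfer function."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(sim,
                     np.quantile(sim_ref, q),   # simulated quantiles
                     np.quantile(obs_ref, q))   # mapped to observed ones
```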
NASA Astrophysics Data System (ADS)
Kim, Go-Un; Seo, Kyong-Hwan
2018-01-01
A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The results show that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous phase relationship between convection and the large-scale circulation. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of the circulation response to convection needs to be corrected in the cumulus parameterization scheme.
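A schematic of the diagnostic described (a sketch under assumed conventions: `u` is the intraseasonally filtered zonal wind on pressure levels, `dp` the layer thicknesses in Pa, `dx` the zonal grid spacing in m):

```python
import numpy as np

def zonal_wind_convergence(u, dx, dp, g=9.81):
    """Vertically integrated zonal wind convergence, -(1/g) * int (du/dx) dp.
    u: (nlev, nlon) filtered zonal wind; returns one value per longitude."""
    dudx = np.gradient(u, dx, axis=1)          # du/dx on each level
    return -np.sum(dudx * dp[:, None], axis=0) / g
```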
Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method
NASA Astrophysics Data System (ADS)
Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil
2014-05-01
Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction, as it spans very diverse climates with pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to the WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and to the modification of the climate change signal by the DBS method.
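A stripped-down sketch of the parametric (gamma-based) quantile mapping at the core of DBS-type precipitation correction (the actual DBS method is more elaborate, with seasonally varying parameters and explicit dry-frequency handling; names and the wet-day threshold are assumptions):

```python
import numpy as np
from scipy import stats

def gamma_qm_precip(model_hist, obs, values, wet_threshold=0.1):
    """Map wet-day amounts through gamma distributions fitted to the
    reference period; days below the threshold are left dry."""
    g_obs = stats.gamma.fit(obs[obs > wet_threshold], floc=0)
    g_mod = stats.gamma.fit(model_hist[model_hist > wet_threshold], floc=0)
    out = np.zeros_like(values)
    wet = values > wet_threshold
    cdf = stats.gamma.cdf(values[wet], *g_mod)   # quantile in the model climate
    out[wet] = stats.gamma.ppf(cdf, *g_obs)      # same quantile in the observations
    return out
```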
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
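In compact form, the latent contextual model underlying this taxonomy can be written as follows (notation assumed here, following the standard multilevel latent covariate formulation rather than the article's exact symbols):

```latex
\begin{align}
  x_{ij} &= \xi_j + \delta_{ij}
      && \text{(L1 score = latent group mean + individual deviation)}\\
  y_{ij} &= \beta_0 + \beta_w\,(x_{ij} - \xi_j) + \beta_b\,\xi_j + u_j + \varepsilon_{ij}
      && \text{(within and between effects)}
\end{align}
% The contextual effect is \beta_b - \beta_w. The uncorrected approach
% replaces \xi_j by the observed group mean \bar{x}_{.j}, whose sampling
% error (shrinking with group size) and measurement error bias the estimate.
```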
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
NASA Astrophysics Data System (ADS)
Konik, Arda Bekir
Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrade the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods for smaller objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, the Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of scatter counts to total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and humans), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed across these reconstruction conditions and object sizes. Finally, real mouse data from the physical version of the same small animal PET scanner modeled in our simulations were analyzed under similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest). In the lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement from using the actual attenuation map instead of a uniform one (e.g., only ~0.5% for the largest size in PET studies). Scatter correction was not significant for smaller objects but became increasingly important for larger ones. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
Ion radial diffusion in an electrostatic impulse model for stormtime ring current formation
NASA Technical Reports Server (NTRS)
Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.; Gorney, David J.
1992-01-01
Two refinements to the quasi-linear theory of ion radial diffusion are proposed and examined analytically and with simulations of particle trajectories. The resonance-broadening correction by Dungey (1965) is applied to the quasi-linear diffusion theory by Faelthammar (1965) for an individual model storm. Quasi-linear theory is then applied to the mean diffusion coefficients resulting from simulations of particle trajectories in 20 model storms. The correction for drift-resonance broadening yields quasi-linear diffusion coefficients whose discrepancies from the corresponding simulated values are reduced by a factor of about 3. Further reductions in the discrepancies are noted after averaging the quasi-linear, simulated, and resonance-broadened coefficients over the 20 storms. Quasi-linear theory provides good descriptions of particle transport for a single storm but performs even better in conjunction with the present ensemble-averaging.
Solar Sail Spaceflight Simulation
NASA Technical Reports Server (NTRS)
Lisano, Michael; Evans, James; Ellis, Jordan; Schimmels, John; Roberts, Timothy; Rios-Reyes, Leonel; Scheeres, Daniel; Bladt, Jeff; Lawrence, Dale; Piggott, Scott
2007-01-01
The Solar Sail Spaceflight Simulation Software (S5) toolkit provides solar-sail designers with an integrated environment for designing optimal solar-sail trajectories, and then studying the attitude dynamics/control, navigation, and trajectory control/correction of sails during realistic mission simulations. Unique features include a high-fidelity solar radiation pressure model suitable for arbitrarily-shaped solar sails, a solar-sail trajectory optimizer, capability to develop solar-sail navigation filter simulations, solar-sail attitude control models, and solar-sail high-fidelity force models.
NASA Astrophysics Data System (ADS)
Mehan, S.; Gitau, M. W.
2017-12-01
Global circulation models (GCMs) are often used to simulate long-term climate data for use in hydrologic studies. However, some bias (difference between simulated values and observed data) has been observed, especially in the simulation of precipitation events. The bias is especially evident with respect to the simulation of dry and wet days, because GCMs tend to underestimate large precipitation events and distribute the associated precipitation amounts over some dry days, leading to a larger number of wet days each with some amount of rainfall. The accuracy of precipitation simulations affects the accuracy of other simulated components such as flow and water quality. It is thus very important to correct the bias associated with precipitation before it is used in any modeling application. This study aims to correct the bias specifically associated with precipitation events, with a focus on the Western Lake Erie Basin (WLEB). Analytical, statistical, and extreme event analyses for three different stations (Adrian, MI; Norwalk, OH; and Fort Wayne, IN) in the WLEB were carried out to quantify the bias. Findings indicated that the GCMs overestimated wet sequences and underestimated dry day probabilities. The numbers of wet sequences simulated by nine GCMs each from two different open sources were 310-678 (Fort Wayne, IN), 318-600 (Adrian, MI), and 346-638 (Norwalk, OH), compared with 166, 150, and 180, respectively, in the observations. Predicted conditional probabilities of a dry day followed by a wet day (P(D|W)) ranged between 0.16-0.42 (Fort Wayne, IN), 0.29-0.41 (Adrian, MI), and 0.13-0.40 (Norwalk, OH) from the different GCMs, compared to 0.52 (Fort Wayne, IN and Norwalk, OH) and 0.54 (Adrian, MI) from the observed climate data. There was a difference of 0-8.5% between the distributions of simulated and observed precipitation and temperature for all three stations (Cohen's d effect size < 0.2). Further work involves the use of stochastic weather generators to correct the conditional probabilities and better capture the dry and wet events for use in hydrologic and water resources modeling.
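For illustration, a minimal sketch of the wet/dry diagnostics described (the threshold, and the reading of the abstract's P(D|W) as the probability that a dry day is followed by a wet day, are assumptions):

```python
import numpy as np

def wet_dry_stats(precip, wet_threshold=0.254):
    """Number of wet sequences and the dry-to-wet transition probability
    from a daily precipitation series in mm."""
    wet = precip > wet_threshold
    dry_to_wet = wet[1:] & ~wet[:-1]                 # wet day right after a dry day
    n_wet_sequences = int(np.sum(dry_to_wet)) + int(wet[0])
    p_wet_after_dry = np.sum(dry_to_wet) / max(np.sum(~wet[:-1]), 1)
    return n_wet_sequences, p_wet_after_dry
```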
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2005-04-01
The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or from dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and in surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries, where the model was forced by the reanalysis.
Vertebral derotation in adolescent idiopathic scoliosis causes hypokyphosis of the thoracic spine
2012-01-01
Background The purpose of this study was to test the hypothesis that direct vertebral derotation by pedicle screws (PS) causes hypokyphosis of the thoracic spine in adolescent idiopathic scoliosis (AIS) patients, using computer simulation. Methods Twenty AIS patients with Lenke type 1 or 2 who underwent posterior correction surgeries using PS were included in this study. Simulated corrections of each patient’s scoliosis, as determined by the preoperative CT scan data, were performed on segmented 3D models of the whole spine. Two types of simulated extreme correction were performed: 1) complete coronal correction only (C method) and 2) complete coronal correction with complete derotation of vertebral bodies (C + D method). The kyphosis angle (T5-T12) and vertebral rotation angle at the apex were measured before and after the simulated corrections. Results The mean kyphosis angle after the C + D method was significantly smaller than that after the C method (2.7 ± 10.0° vs. 15.0 ± 7.1°, p < 0.01). The mean preoperative apical rotation angle of 15.2 ± 5.5° was completely corrected after the C + D method (0°) and was unchanged after the C method (17.6 ± 4.2°). Conclusions In the 3D simulation study, kyphosis was reduced after complete correction of the coronal and rotational deformity, but it was maintained after the coronal-only correction. These results proved the hypothesis that the vertebral derotation obtained by PS causes hypokyphosis of the thoracic spine. PMID:22691717
Visual Predictive Check in Models with Time-Varying Input Function.
Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio
2015-11-01
Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC that takes into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) to help associate the correct IF with each individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data, and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
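A minimal sketch of the matching idea (names are hypothetical; the paper's criterion is the Mahalanobis or normalized Euclidean distance, used here to pick which individual's IF accompanies each simulated parameter vector):

```python
import numpy as np

def assign_input_functions(sim_params, indiv_params, indiv_ifs):
    """For each simulated parameter vector, use the input function of the
    individual whose estimated parameters are nearest in normalized
    Euclidean distance."""
    sd = indiv_params.std(axis=0)                    # per-parameter scaling
    assigned = []
    for p in sim_params:
        d = np.sqrt((((indiv_params - p) / sd) ** 2).sum(axis=1))
        assigned.append(indiv_ifs[int(np.argmin(d))])
    return assigned
```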
Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model
NASA Technical Reports Server (NTRS)
Biegal, Bryan A.; Rafferty, Connor S.; Yu, Zhiping; Ancona, Mario G.; Dutton, Robert W.; Saini, Subhash (Technical Monitor)
1998-01-01
The continued down-scaling of electronic devices, in particular the commercially dominant MOSFET, will force a fundamental change in the process of new electronics technology development in the next five to ten years. The cost of developing new technology generations is soaring along with the price of new fabrication facilities, even as competitive pressure intensifies to bring this new technology to market faster than ever before. To reduce cost and time to market, device simulation must become a more fundamental, indeed dominant, part of the technology development cycle. In order to produce these benefits, simulation accuracy must improve markedly. At the same time, device physics will become more complex, with the rapid increase in various small-geometry and quantum effects. This work describes both an approach to device simulator development and a physical model which advance the effort to meet the tremendous electronic device simulation challenge described above. The device simulation approach is to specify the physical model at a high level to a general-purpose (but highly efficient) partial differential equation solver (in this case PROPHET, developed by Lucent Technologies), which then simulates the model in 1-D, 2-D, or 3-D for a specified device and test regime. This approach allows for the rapid investigation of a wide range of device models and effects, which is certainly essential for device simulation to catch up with, and then stay ahead of, electronic device technology of the present and future. The physical device model used in this work is the density-gradient (DG) quantum correction to the drift-diffusion model [Ancona, Phys. Rev. B 35(5), 7959 (1987)]. This model adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We used the DG model in 1-D and 2-D (for the first time) to simulate both bipolar and unipolar devices. Simulations of heavily-doped, short-base diodes indicated that the DG quantum corrections do not have a large effect on the I-V characteristics of electronic devices without heterojunctions. On the other hand, ultra-small MOSFETs certainly exhibit important quantum effects that the DG model will include: quantum repulsion of the inversion and gate charges from the oxide interfaces, and quantum tunneling through thin gate oxides. We present initial results of 2-D DG simulations of ultra-small MOSFETs. Subtle but important issues involving the specification of the model, boundary conditions, and interface constraints for DG simulation of MOSFETs will also be illuminated.
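Schematically, the DG correction leaves the drift-diffusion current expression intact but adds a gradient term to the potential driving the carriers (a sketch; conventions and prefactors vary across the DG literature and this is not necessarily the paper's exact form):

```latex
\begin{equation}
  \varphi_n \;=\; \varphi_n^{\mathrm{classical}}
      \;+\; 2\,b_n\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
  \qquad b_n \;\sim\; \frac{\hbar^{2}}{12\,q\,m_n^{*}},
\end{equation}
% b_n -> 0 recovers classical drift-diffusion; the gradient term smooths
% the carrier density at oxide interfaces and permits tunneling through
% thin barriers.
```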
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background Despite the wide adoption of TMS, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled, and empirical validation of such models is limited and subject to several limitations. Methods We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, in order of increasing complexity: a simple circular coil model, a coil with in-plane spiral winding turns, and finally one with stacked spiral winding turns. We assess the electric fields induced by all 3 coil models in the motor cortex using a computational FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field, the induced electric field, and, ultimately, neuronal activation. PMID:28640923
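A minimal sketch of the discretized-wire Biot-Savart evaluation used to approximate such coil models (the array layout and the segment-midpoint approximation are assumptions):

```python
import numpy as np

def biot_savart(segments, points, current=1.0):
    """B field (T) of a coil discretized into straight segments.
    segments: (N, 2, 3) start/end points in m; points: (M, 3) in m."""
    mu0 = 4e-7 * np.pi
    B = np.zeros_like(points, dtype=float)
    for a, b in segments:
        dl = b - a                                   # segment vector
        r = points - 0.5 * (a + b)                   # from segment midpoint
        r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
        B += mu0 * current / (4 * np.pi) * np.cross(dl, r) / r3
    return B
```

A figure-of-eight coil with spiral winding turns simply contributes more segments per turn; the paper's comparison amounts to how finely, and in which plane, those turns are represented.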
Phase aberration simulation study of MRgFUS breast treatments
Farrer, Alexis I.; Almquist, Scott; Dillon, Christopher R.; Neumayer, Leigh A.; Parker, Dennis L.; Christensen, Douglas A.; Payne, Allison
2016-01-01
Purpose: This simulation study evaluates the effects of phase aberration in breast MR-guided focused ultrasound (MRgFUS) ablation treatments performed with a phased-array transducer positioned laterally to the breast. A quantification of these effects in terms of thermal dose delivery and the potential benefits of phase correction is demonstrated in four heterogeneous breast numerical models. Methods: To evaluate the effects of varying breast tissue properties on the quality of the focus, four female volunteers with confirmed benign fibroadenomas were imaged using 3T MRI. These images were segmented into numerical models with six tissue types, with each tissue type assigned standard acoustic properties from the literature. Simulations for a single-plane 16-point raster-scan treatment trajectory centered in a fibroadenoma in each modeled breast were performed for a breast-specific MRgFUS system. At each of the 16 points, pressure patterns both with and without applying a phase correction technique were determined with the hybrid-angular spectrum method. Corrected phase patterns were obtained using a simulation-based phase aberration correction technique to adjust each element’s transmit phase to obtain maximized constructive interference at the desired focus. Thermal simulations were performed for both the corrected and uncorrected pressure patterns using a finite-difference implementation of the Pennes bioheat equation. The effect of phase correction was evaluated through comparison of thermal dose accumulation both within and outside a defined treatment volume. Treatment results using corrected and uncorrected phase aberration simulations were compared by evaluating the power required to achieve a 20 °C temperature rise at the first treatment location. The extent of the volumes that received a minimum thermal dose of 240 CEM at 43 °C inside the intended treatment volume as well as the volume in the remaining breast tissues was also evaluated in the form of a dose volume ratio (DVR), a DVR percent change between corrected and uncorrected phases, and an additional metric that measured phase spread. Results: With phase aberration correction applied, there was an improvement in the focus for all breast anatomies as quantified by a reduction in power required (13%–102%) to reach 20 °C when compared to uncorrected simulations. Also, the DVR percent change increased by 5%–77% in seven out of eight cases, indicating an improvement to the treatment as measured by a reduction in thermal dose deposited to the nontreatment tissues. Breast compositions with a higher degree of heterogeneity along the ultrasound beam path showed greater reductions in thermal dose delivered outside of the treatment volume with correction applied than beam trajectories that propagated through more homogeneous breast compositions. An increasing linear trend was observed between the DVR percent change and the phase-spread metric (R2 = 0.68). Conclusions: These results indicate that performing phase aberration correction for breast MRgFUS treatments is beneficial for the small-aperture transducer (14.4 × 9.8 cm) evaluated in this work. While all breast anatomies could benefit from phase aberration correction, greater benefits are observed in more heterogeneous anatomies. PMID:26936722
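The core of the simulation-based phase correction can be stated very compactly (a sketch; the study uses the hybrid-angular spectrum method to obtain each element's focal contribution):

```python
import numpy as np

def corrected_phases(p_focus):
    """p_focus: complex pressure each transducer element produces at the
    intended focus in a forward simulation. Driving each element with the
    conjugate of its arrival phase makes all contributions add in phase."""
    return -np.angle(p_focus)
```

With the correction applied, the focal pressure approaches the coherent sum of the element amplitudes rather than a partially cancelling sum, which is why less power is needed for the same temperature rise.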
Kyriakou, Adamos; Neufeld, Esra; Werner, Beat; Székely, Gábor; Kuster, Niels
2015-01-01
Transcranial focused ultrasound (tcFUS) is an attractive noninvasive modality for neurosurgical interventions. The presence of the skull, however, compromises the efficiency of tcFUS therapy, as its heterogeneous nature and acoustic characteristics induce significant distortion of the acoustic energy deposition, focal shifts, and a decrease in thermal gain. Phased-array transducers allow for partial compensation of skull-induced aberrations by the application of precalculated phase and amplitude corrections. An integrated numerical framework allowing for 3D full-wave, nonlinear acoustic and thermal simulations has been developed and applied to tcFUS. Simulations were performed to investigate the impact of skull aberrations, the possibility of extending the treatment envelope, and adverse secondary effects. The simulated setup comprised an idealized model of the ExAblate Neuro and a detailed MR-based anatomical head model. Four different approaches were employed to calculate aberration corrections: analytical calculation of the corrections disregarding tissue heterogeneities; a semi-analytical ray-tracing approach compensating for the presence of the skull; and two simulation-based time-reversal approaches, with and without pressure amplitude corrections, which account for the entire anatomy. The impact of these approaches on the pressure and temperature distributions was evaluated for 22 brain targets. While the (semi-)analytical approaches failed to induce high pressures or ablative temperatures in any but the targets in close vicinity to the geometric focus, the simulation-based approaches indicate the possibility of considerably extending the treatment envelope (including targets below the transducer level and locations several centimeters off the geometric focus), generating sharper foci, and increasing targeting accuracy. While the prediction of achievable aberration correction appears to be unaffected by the detailed bone structure, proper consideration of inhomogeneity is required to predict the pressure distribution for given steering parameters. Simulation-based approaches to calculating aberration corrections may aid in extending the tcFUS treatment envelope as well as in predicting and avoiding secondary effects (standing waves, skull heating). Due to their superior performance, simulation-based techniques may prove invaluable in the amelioration of skull-induced aberration effects in tcFUS therapy. The next steps are to investigate shear-wave-induced effects in order to reliably exclude secondary hot-spots, and to develop comprehensive uncertainty assessment and validation procedures.
Theoretical prediction of crystallization kinetics of a supercooled Lennard-Jones fluid
NASA Astrophysics Data System (ADS)
Gunawardana, K. G. S. H.; Song, Xueyu
2018-05-01
The first order curvature correction to the crystal-liquid interfacial free energy is calculated using a theoretical model based on the interfacial excess thermodynamic properties. The correction parameter (δ), which is analogous to the Tolman length at a liquid-vapor interface, is found to be 0.48 ± 0.05 for a Lennard-Jones (LJ) fluid. We show that this curvature correction is crucial in predicting the nucleation barrier when the size of the crystal nucleus is small. The thermodynamic driving force (Δμ) corresponding to available simulated nucleation conditions is also calculated by combining the simulated data with a classical density functional theory. In this paper, we show that the classical nucleation theory is capable of predicting the nucleation barrier with excellent agreement to the simulated results when the curvature correction to the interfacial free energy is accounted for.
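In equation form, the quantities involved are (schematic; spherical-nucleus classical nucleation theory with rho_s the solid number density):

```latex
\begin{equation}
  \gamma(R) \;=\; \gamma_\infty\!\left(1 - \frac{2\delta}{R}\right),
  \qquad
  \Delta G^{*} \;=\; \frac{16\pi\,\gamma^{3}}{3\,\rho_s^{2}\,\Delta\mu^{2}},
\end{equation}
% Using \gamma(R) evaluated at the critical radius, rather than the planar
% \gamma_\infty, is what makes the barrier prediction accurate for small nuclei.
```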
NASA Astrophysics Data System (ADS)
Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman
2013-07-01
In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
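In generic maximum-entropy form, such an update looks like the following (schematic; the paper's state- and trajectory-based constraints are richer than this single-observable version):

```latex
\begin{equation}
  p_{ij} \;\propto\; q_{ij}\; e^{\lambda\, w_{ij}},
\end{equation}
% q_{ij}: prior MSM transition probabilities; w_{ij}: the path observable
% being constrained; \lambda and the row normalization are chosen so the
% corrected model reproduces the measured rate while remaining as close as
% possible to the prior in relative path entropy.
```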
Rapid Automated Aircraft Simulation Model Updating from Flight Data
NASA Technical Reports Server (NTRS)
Brian, Geoff; Morelli, Eugene A.
2011-01-01
Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few studies apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. Experiments on the background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
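A minimal sketch of a spline-interpolation background correction of this kind (anchor-point selection via local minima is an assumption about the implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def spline_background_correct(wavelength, intensity, order=50):
    """Estimate the smooth continuous background through local minima of
    the spectrum and subtract it; wavelength must be increasing."""
    idx = argrelextrema(intensity, np.less, order=order)[0]     # baseline anchors
    idx = np.concatenate(([0], idx, [len(intensity) - 1]))      # pin the endpoints
    background = CubicSpline(wavelength[idx], intensity[idx])(wavelength)
    return intensity - background, background
```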
Research of laser echo signal simulator
NASA Astrophysics Data System (ADS)
Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou
2015-11-01
The laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Factors that can induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m, and that the system can simulate distances from 50 m to 20 km.
NASA Astrophysics Data System (ADS)
Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.
2016-01-01
In proton therapy, the relative biological effectiveness (RBE) depends on various parameters such as the linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in it. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distributions of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems, and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model to account for secondary protons and deuterons along the pencil beam axis.
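In schematic notation, the proposed correction is (as described, the ratio of dose-averaged LET including secondaries to that of primary protons alone):

```latex
\begin{equation}
  L_{\mathrm{sec}}(z) \;=\;
  \frac{\mathrm{LET}_d^{\,p+d}(z)}{\mathrm{LET}_d^{\,p\ \mathrm{primary}}(z)},
  \qquad
  \mathrm{LET}_d^{\mathrm{corrected}}(z) \;=\; L_{\mathrm{sec}}(z)\;
  \mathrm{LET}_d^{\mathrm{Wilkens}}(z),
\end{equation}
% applied multiplicatively along the beam axis z, with L_sec parametrized
% as a polynomial in beam energy, spot size, and depth.
```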
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C
Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Friedberg, Germany) for providing the PTW microDiamond detector for this research.
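The Alfonso formalism referred to defines the output correction factor as a double ratio (standard notation from that formalism):

```latex
\begin{equation}
  k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
  \;=\;
  \frac{D_{w,Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}}\big/ M_{Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}}}
       {D_{w,Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}}\big/ M_{Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}}},
\end{equation}
% the ratio of (dose to water / detector reading) in the clinical field to
% the same ratio in the machine-specific reference field; here the dose
% terms come from the validated Monte Carlo model.
```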
Modelling geomorphic responses to human perturbations: Application to the Kander river, Switzerland
NASA Astrophysics Data System (ADS)
Ramirez, Jorge; Zischg, Andreas; Schürmann, Stefan; Zimmermann, Markus; Weingartner, Rolf; Coulthard, Tom; Keiler, Margreth
2017-04-01
Before 1714 the Kander river (Switzerland) flowed into the Aare river, causing massive flooding, and for this reason the Kander was diverted (the Kander correction) into Lake Thun. The Kander correction was a pioneering hydrological project and induced a major human change to the landscape, but it had unintended hydrological and geomorphic impacts that cascaded upstream and downstream: it doubled the catchment area of Lake Thun, which gave rise to major flood problems; it cut off direct sediment delivery to the Aare; and it directed sediment flux into Lake Thun, forming the Kander delta. More importantly, the Kander correction shortened the Kander river and substantially increased the slope and bed shear of the Kander upstream of the correction. Consequently, the impacts of the correction cascaded upstream as a migrating knickpoint and eroded the river channel at unprecedented rates. Today we may have at our disposal the theoretical and empirical foundations to foresee the consequences of human intervention in natural systems. One way to investigate such geomorphic changes is with numerical models that estimate the evolution of rivers by simulating the movement of water and sediment. Although much progress has been made in the development of these geomorphic models, few have been tested under rare perturbations and extreme forcings. As such, it remains uncertain whether geomorphic models are useful and stable in extreme situations that include large movements of sediment and water. Here, in this study, we use historic maps and documents to develop a detailed geomorphic model of the Kander river starting in the year 1714. We use this model to simulate the extreme geomorphic events that followed the diversion of the Kander river into Lake Thun, and we simulate changes to the river until conditions become relatively stable. We test our model by replicating long-term impacts on the river, including 1) rates of incision within the correction, 2) knickpoint migration, and 3) delta formation in Lake Thun. In doing this we build confidence in the model and gain understanding of how the river system responded to anthropogenic perturbations.
NASA Astrophysics Data System (ADS)
Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona
2018-01-01
Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that a change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance to the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach that accounts for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) classified using self-organizing maps. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information from SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
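A minimal sketch of SP-conditional quantile mapping (assumes per-day SP labels aligned with both the historical model/observation days and the future days, and that every future SP class also occurs historically):

```python
import numpy as np

def sp_conditional_qm(model_hist, obs, model_fut, sp_hist, sp_fut):
    """Empirical quantile mapping applied separately within each synoptic
    pattern class, so corrections track changes in the mix of rainfall types."""
    q = np.linspace(0.0, 1.0, 101)
    corrected = np.empty_like(model_fut)
    for sp in np.unique(sp_hist):
        h, o = model_hist[sp_hist == sp], obs[sp_hist == sp]
        sel = sp_fut == sp
        ranks = np.interp(model_fut[sel], np.quantile(h, q), q)
        corrected[sel] = np.interp(ranks, q, np.quantile(o, q))
    return corrected
```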
Ho, Cheng-Ting; Lin, Hsiu-Hsia; Liou, Eric J. W.; Lo, Lun-Jou
2017-01-01
The traditional planning method for orthognathic surgery has the limitations of cephalometric analysis, especially for patients with asymmetry. The aim of this study was to assess surgical plan modification after 3-dimensional (3D) simulation. The procedure comprised traditional surgical planning, construction of a 3D model for the initial surgical plan (P1), a 3D model of the altered surgical plan after simulation (P2), comparison between the P1 and P2 models, surgical execution, and postoperative validation using superimposition and the root-mean-square difference (RMSD) between the postoperative 3D image and the P2 simulation model. The surgical plan was modified after 3D simulation in 93% of the cases. Absolute linear changes of landmarks in the mediolateral direction (x-axis) were significant, ranging between 1.11 and 1.62 mm. The pitch, yaw, and roll rotations as well as the ramus inclination correction also showed significant changes after 3D planning. Yaw rotation of the maxillomandibular complex (1.88 ± 0.32°) and change of ramus inclination (3.37 ± 3.21°) were most frequently performed for correction of facial asymmetry. Errors between the postsurgical image and the 3D simulation were acceptable, with an RMSD of 0.63 ± 0.25 mm for the maxilla and 0.85 ± 0.41 mm for the mandible. The information from this study could be used to augment clinical planning and surgical execution when a conventional approach is applied. PMID:28071714
Global MHD Simulation of the Coronal Mass Ejection on 2011 March 7: from Chromosphere to 1 AU
NASA Astrophysics Data System (ADS)
Jin, M.; Manchester, W.; van der Holst, B.; Oran, R.; Sokolov, I.; Toth, G.; Vourlidas, A.; Liu, Y.; Sun, X.; Gombosi, T. I.
2013-12-01
In this study, we present magnetohydrodynamic (MHD) simulation results for the fast CME event of 2011 March 7, using the newly developed Alfven Wave Solar Model (AWSoM) in the Space Weather Modeling Framework (SWMF). The background solar wind is driven by Alfven-wave pressure and heated by Alfven-wave dissipation, in which we have incorporated balanced turbulence at the top of the closed field lines. The magnetic field at the inner boundary is specified with a synoptic magnetogram from SDO/HMI. In order to produce physically correct CME structures and CME-driven shocks, the electron and proton temperatures are separated so that electron heat conduction is explicitly treated in conjunction with proton shock heating. Also, collisionless heat conduction is implemented to obtain the correct electron temperature at 1 AU. We initiate the CME by using the Gibson-Low flux rope model and simulate the CME propagation to 1 AU. A comprehensive validation study is performed using remote as well as in-situ observations from SOHO, STEREO A/B, ACE, and WIND. Our results show that the new model can reproduce most of the observed features, and the arrival time of the CME is correctly estimated, which suggests the forecasting capability of the new model. We also examine the simulated CME-driven shock structures, which are important for modeling the associated solar energetic particle (SEP) events with diffusive shock acceleration.
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
A Study of Two-Equation Turbulence Models on the Elliptic Streamline Flow
NASA Technical Reports Server (NTRS)
Blaisdell, Gregory A.; Qin, Jim H.; Shariff, Karim; Rai, Man Mohan (Technical Monitor)
1995-01-01
Several two-equation turbulence models are compared to data from direct numerical simulations (DNS) of the homogeneous elliptic streamline flow, which combines rotation and strain. The models considered include standard two-equation models and models with corrections for rotational effects. Most of the rotational corrections modify the dissipation rate equation to account for the reduced dissipation rate in rotating turbulent flows; however, the DNS data show that the production term in the turbulent kinetic energy equation is not modeled correctly by these models. Nonlinear relations for the Reynolds stresses are considered as a means of modifying the production term. Implications for the modeling of turbulent vortices will be discussed.
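For orientation, the baseline two-equation (k-epsilon) model to which such rotational corrections are applied reads, for homogeneous flow (standard form; P is the turbulence production):

```latex
\begin{equation}
  \frac{dk}{dt} = P - \varepsilon, \qquad
  \frac{d\varepsilon}{dt} = \frac{\varepsilon}{k}
      \left(C_{\varepsilon 1}\,P - C_{\varepsilon 2}\,\varepsilon\right).
\end{equation}
% Rotational corrections typically adjust the dissipation equation (e.g.
% make C_{\varepsilon 2} rotation-dependent); the DNS comparison here shows
% that P, closed through the modeled Reynolds stresses, is also in error,
% motivating nonlinear stress-strain relations.
```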
NASA Astrophysics Data System (ADS)
Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank
2017-03-01
Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, in the 15 years since the inception of MSMs, it has been controversially discussed, and never fully answered, how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA version 2.3.
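For context, the standard maximum-likelihood MSM estimator whose out-of-equilibrium bias is analyzed (a generic sketch, not the OOM-corrected estimator, which is available in PyEMMA):

```python
import numpy as np

def msm_transition_matrix(dtrajs, n_states, lag):
    """Count transitions at the chosen lag time in discretized trajectories
    and row-normalize. Rows without counts would need regularization."""
    C = np.zeros((n_states, n_states))
    for traj in dtrajs:                      # each traj: 1-D array of state indices
        for i, j in zip(traj[:-lag], traj[lag:]):
            C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)
```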
High-resolution dynamical downscaling of the future Alpine climate
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph
2017-04-01
The Alpine region, and Switzerland in particular, is a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of very complex topography and the still rather coarse horizontal resolution of current GCMs, which cannot resolve all of the multi-scale processes that drive the local weather and climate. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 under the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice simulated for 2080-2099. We compare the control WRF-CESM simulation to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to the future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our special interest focuses on the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
Aeroelastic modeling for the FIT team F/A-18 simulation
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Wieseman, Carol D.
1989-01-01
Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.
NASA Astrophysics Data System (ADS)
Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertania, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2017-06-01
The charged particle densities obtained from CORSIKA-simulated EAS, using the QGSJet-II.04 hadronic interaction model, are used for primary energy reconstruction. Simulated data are reconstructed using Lateral Energy Correction Functions computed with a new realistic model of the Grande stations implemented in Geant4.10.
NASA Astrophysics Data System (ADS)
Stefanova, L. B.
2013-12-01
Climate model evaluation is frequently performed as a first step in analyzing climate change simulations. Atmospheric scientists are accustomed to evaluating climate models through the assessment of model climatology and biases, the models' representation of large-scale modes of variability (such as ENSO, PDO, and AMO) and the relationship between these modes and local variability (e.g. the connection between ENSO and the wintertime precipitation in the Southeast US). While these provide valuable information about the fidelity of historical and projected climate model simulations from an atmospheric scientist's point of view, the application of climate model data to fields such as agriculture, ecology and biology may require additional analyses focused on the particular application's requirements and sensitivities. Typically, historical climate simulations are used to determine a mapping between the model and observed climate, either through a simple (additive for temperature or multiplicative for precipitation) or a more sophisticated (such as quantile matching) bias correction on a monthly or seasonal time scale. However, plants, animals and humans are not directly affected by monthly or seasonal means. To assess the impact of projected climate change on living organisms and related industries (e.g. agriculture, forestry, conservation, utilities, etc.), derivative measures such as heating degree-days (HDD), cooling degree-days (CDD), growing degree-days (GDD), accumulated chill hours (ACH), and wet season onset (WSO) and duration (WSD), among others, are frequently useful. We will present a comparison of the projected changes in such derivative measures calculated by applying: (a) the traditional temperature/precipitation bias correction described above versus (b) a bias correction based on the mapping between the historical model and observed derivative measures themselves. In addition, we will present and discuss examples of various application-based climate model evaluations, such as: (a) agricultural crop yield estimates and (b) species population viability estimates modeled using observed climate data vs. historical climate simulations.
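The derivative measures named above are simple accumulations over daily temperature series. A short sketch of HDD/CDD and a capped-average GDD follows; the base and cap temperatures are illustrative defaults only, since the abstract does not specify thresholds:

```python
import numpy as np

def degree_days(tmean, base=18.0):
    """Heating and cooling degree-days from daily mean temperature (deg C)."""
    hdd = np.clip(base - tmean, 0, None).sum()   # heating degree-days
    cdd = np.clip(tmean - base, 0, None).sum()   # cooling degree-days
    return hdd, cdd

def growing_degree_days(tmin, tmax, t_base=10.0, t_cap=30.0):
    """Simple averaging GDD with an upper cap applied to daily maxima."""
    tavg = (tmin + np.minimum(tmax, t_cap)) / 2.0
    return np.clip(tavg - t_base, 0, None).sum()
```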
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e., the correction of model output towards observations in a post-processing step for subsequent application in climate change impact studies, has become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead end users and decision makers to avoidable misjudgments. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of Circulation Models in the short term and how to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, William; Hanson, Paul J; Swanston, Chris
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
a New Framework for Characterising Simulated Droughts for Future Climates
NASA Astrophysics Data System (ADS)
Sharma, A.; Rashid, M.; Johnson, F.
2017-12-01
Significant attention has been focussed on metrics for quantifying drought. Less attention has been given to the unsuitability of current metrics for quantifying drought in a changing climate, due to the clear non-stationarity in potential and actual evapotranspiration well into the future (Asadi-Zarch et al., 2015). This talk presents a new basis for simulating drought designed specifically for use with climate model simulations. Given the known uncertainty of climate model rainfall simulations, along with their inability to represent low-frequency variability attributes, the approach here adopts a predictive model for drought using selected atmospheric indicators. This model is based on a wavelet decomposition of relevant atmospheric predictors to filter out less relevant frequencies and formulate a better characterisation of the drought metric chosen as response. Once ascertained using observed precipitation and associated atmospheric variables, these can be formulated from GCM simulations using a multivariate bias correction tool (Mehrotra and Sharma, 2016) that accounts for low-frequency variability, and a regression tool that accounts for nonlinear dependence (Sharma and Mehrotra, 2014). Use of only the relevant frequencies, as well as the corrected representation of cross-variable dependence, allows greater accuracy in characterising observed drought from GCM simulations. Using simulations from a range of GCMs across Australia, we show here that this new method offers considerable advantages in representing drought compared to traditionally followed alternatives that rely on modelled rainfall instead. References: Asadi Zarch, M. A., B. Sivakumar, and A. Sharma (2015), Droughts in a warming climate: A global assessment of Standardized Precipitation Index (SPI) and Reconnaissance Drought Index (RDI), Journal of Hydrology, 526, 183-195. Mehrotra, R., and A. Sharma (2016), A Multivariate Quantile-Matching Bias Correction Approach with Auto- and Cross-Dependence across Multiple Time Scales: Implications for Downscaling, Journal of Climate, 29(10), 3519-3539. Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 50, 650-660, doi:10.1002/2013WR013845.
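The wavelet filtering step described above can be illustrated with a discrete wavelet transform: decompose a predictor series, zero the detail levels deemed irrelevant, and reconstruct. A sketch using PyWavelets, where the wavelet family, decomposition level, and retained levels are illustrative choices only:

```python
import numpy as np
import pywt

def wavelet_filter(signal, wavelet="db4", level=4, keep_levels=(0, 1)):
    """Keep only selected (coarse) components of a predictor series.

    Index 0 of the coefficient list is the approximation (lowest
    frequency); higher indices are progressively finer details.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    filtered = [c if i in keep_levels else np.zeros_like(c)
                for i, c in enumerate(coeffs)]
    return pywt.waverec(filtered, wavelet)[: len(signal)]
```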
Use of advanced modeling techniques to optimize thermal packaging designs.
Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar
2010-01-01
Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed during its validation. Thermal packaging is routinely used by the pharmaceutical industry to provide passive and active temperature control of their thermally sensitive products from manufacture through end use (termed the cold chain). In this study, the authors focus on passive temperature control (passive control does not require any external energy source and is entirely based on specific and/or latent heat of shipper components). As temperature-sensitive pharmaceuticals are being transported over longer distances, cold chain reliability is essential. To achieve reliability, a significant amount of time and resources must be invested in design, test, and production of optimized temperature-controlled packaging solutions. To shorten the cumbersome trial and error approach (design/test/design/test …), computer simulation (virtual prototyping and testing of thermal shippers) is a promising method. Although several companies have attempted to develop such a tool, there has been limited success to date. Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a coupled conductive/convective-based thermal shipper. A modeling technique capable of correctly capturing shipper thermal behavior can be used to develop packaging designs more quickly, reducing up-front costs while also improving shipper performance.
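The conduction-only control case described above is easy to prototype: a one-dimensional explicit finite-difference slab with fixed face temperatures gives the transient wall response. This is a toy sketch with an assumed polyurethane thermal diffusivity of order 1e-7 m2/s and illustrative boundary values, not the authors' coupled conduction-convection model:

```python
import numpy as np

def wall_temperature(t_end, dx=0.005, L=0.05, alpha=6e-7,
                     T_inside=5.0, T_ambient=35.0):
    """Explicit 1-D transient conduction through a PUR wall slab.

    Both faces are held at fixed temperatures; returns the final
    temperature profile across the wall (deg C).
    """
    n = int(L / dx) + 1
    T = np.full(n, T_inside)
    T[-1] = T_ambient                      # hot ambient face
    dt = 0.4 * dx**2 / alpha               # satisfies explicit stability limit
    for _ in range(int(t_end / dt)):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# e.g. profile after a 96-hour exposure: wall_temperature(96 * 3600.0)
```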
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
A Realization of Bias Correction Method in the GMAO Coupled System
NASA Technical Reports Server (NTRS)
Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max
2018-01-01
Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach in CGCM modeling to correct model biases. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approaches and findings are applicable to these other models as well.
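A schematic of the tendency-correction idea, assuming archived 3-hour forecast-minus-analysis errors are available: build a seasonally varying drift climatology, then subtract it from the model tendency at run time. All names are illustrative; the GEOS-5 implementation details are not given in the abstract.

```python
import numpy as np

def drift_climatology(errors, days):
    """Mean short-term forecast drift for each day of year.

    errors: (n_samples, ...) forecast-minus-analysis tendency fields
    days:   (n_samples,) day-of-year index for each sample
    """
    clim = np.zeros((366,) + errors.shape[1:])
    for d in range(366):
        sel = days == d
        if sel.any():
            clim[d] = errors[sel].mean(axis=0)
    return clim

def corrected_tendency(model_tendency, drift_clim, day_of_year):
    """Remove the seasonally varying mean drift from the model tendency."""
    return model_tendency - drift_clim[day_of_year]
```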
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves on it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
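Bias kriging of model-minus-observation residuals can be sketched with ordinary kriging under an assumed variogram; the kriged bias field is then subtracted from the gridded model output before deriving the LBCs. The exponential variogram and its parameters below are placeholders, not the paper's calibrated choices:

```python
import numpy as np

def krige_bias(xy_obs, residuals, xy_grid, range_=50.0, sill=1.0):
    """Ordinary kriging of model-minus-observation residuals.

    xy_obs:  (n, 2) station coordinates; residuals: (n,) model - obs
    xy_grid: (m, 2) grid-cell coordinates; returns (m,) kriged bias
    """
    def gamma(h):                     # exponential variogram model
        return sill * (1.0 - np.exp(-h / range_))

    n = len(xy_obs)
    d_obs = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))       # kriging system with Lagrange row
    A[:n, :n] = gamma(d_obs)
    A[-1, -1] = 0.0
    bias = np.empty(len(xy_grid))
    for k, p in enumerate(xy_grid):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_obs - p, axis=-1))
        w = np.linalg.solve(A, b)
        bias[k] = w[:n] @ residuals   # weights sum to 1 by construction
    return bias

# fused field = gridded model output minus the kriged bias at each grid cell
```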
NASA Astrophysics Data System (ADS)
Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon
2007-05-01
Simulation quality is determined by the knowledge of the parameters of the model. Yet rheological models for polymers are often not very accurate, since viscosity measurements are made under approximations such as homogeneous temperature and empirical corrections such as the Bagley correction. Furthermore, rheological behaviors are often described by mathematical laws such as the Cross or Carreau-Yasuda models, whose parameters are fitted from viscosity values obtained with corrected experimental data and are not appropriate for every polymer. To address these shortcomings, a table-based rheological model is proposed. This choice simplifies the estimation of model parameters, since each parameter has the same order of magnitude. As the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated, in order to provide temperature along the extrusion channel, together with pressure and flow references. These data allow us to characterize the thermal transfer and flow phenomena in which the viscosity is implied. Furthermore, the different natures of the data allow viscosity to be estimated over a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for non-Newtonian rheology. This method couples measurements and simulation, constitutes a very accurate means of rheology determination, and improves the predictive ability of the model.
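A table-based rheological model can be parameterized by viscosity values at fixed shear-rate nodes and fitted by minimizing the quadratic misfit between simulated and measured quantities. The sketch below assumes a user-supplied forward model of the die (here only referenced in a comment as `die_simulation`, a hypothetical placeholder); node positions and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import interp1d

# shear-rate nodes of the tabulated model (log-spaced, illustrative)
nodes = np.logspace(-1, 3, 8)

def viscosity(log_eta_nodes, shear_rate):
    """Tabulated viscosity: log-linear interpolation between node values."""
    f = interp1d(np.log(nodes), log_eta_nodes, fill_value="extrapolate")
    return np.exp(f(np.log(shear_rate)))

def misfit(log_eta_nodes, forward_model, measured):
    """Quadratic norm between measured data (temperatures, pressure, flow)
    and the same quantities computed by the die simulation."""
    simulated = forward_model(lambda g: viscosity(log_eta_nodes, g))
    return np.sum((simulated - measured) ** 2)

# hypothetical usage, with die_simulation and measured_data supplied elsewhere:
# result = minimize(misfit, x0=np.full(8, np.log(1e3)),
#                   args=(die_simulation, measured_data), method="Nelder-Mead")
```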
Spherical aberration correction with an in-lens N-fold symmetric line currents model.
Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji
2018-04-01
In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose an "in-lens N-SYLC" model, in which the N-SYLC overlaps a rotationally symmetric lens. Such an overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slopes less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.
Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results and experimental electron-probe microanalysis (EPMA) measurements, as well as phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
TERRA Battery Thermal Control Anomaly - Simulation and Corrective Actions
NASA Technical Reports Server (NTRS)
Grob, Eric W.
2010-01-01
The TERRA spacecraft was launched in December 1999 from Vandenberg Air Force Base, becoming the flagship of NASA's Earth Observing System program to gather data on how the planet's processes create climate. Originally planned as a 5 year mission, it still provides valuable science data after nearly 10 years on orbit. On October 13th, 2009 at 16:23z following a routine inclination maneuver, TERRA experienced a battery cell failure and a simultaneous failure of several battery heater control circuits used to maintain cell temperatures and gradients within the battery. With several cells nearing the minimum survival temperature, preventing the electrolyte from freezing was the first priority. After several reset attempts and power cycling of the control electronics failed to reestablish control authority on the primary side of the controller, it was switched to the redundant side, but anomalous performance again prevented full heater control of the battery cells. As the investigation into the cause of the anomaly and corrective action continued, a battery thermal model was developed to be used in determining the control ability remaining and to simulate and assess corrective actions. Although no thermal model or detailed reference data of the battery was available, sufficient information was found to allow a simplified model to be constructed, correlated against pre-anomaly telemetry, and used to simulate the thermal behavior at several points after the anomaly. It was then used to simulate subsequent corrective actions to assess their impact on cell temperatures. This paper describes the rapid development of this thermal model, including correlation to flight data before and after the anomaly, along with a comparative assessment of the analysis results used to interpret the telemetry to determine the extent of damage to the thermal control hardware, and the near-term corrective actions and long-term operations plan to overcome the anomaly.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2013-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2009-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
Large Eddy Simulation of a Film Cooling Technique with a Plenum
NASA Astrophysics Data System (ADS)
Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund
2012-11-01
Factors that affect film cooling performance have been categorized into three main groups: (i) coolant & mainstream conditions, (ii) hole geometry & configuration, and (iii) airfoil geometry (Bogard et al., 2006). The present study focuses on the second group of factors, namely, the modeling of the coolant hole and the plenum. Simulating the correct physics of the problem is required to achieve realistic numerical results; in this regard, modeling of the cooling-jet hole and the plenum chamber is highly important (Iourokina et al., 2006), and substituting artificial boundary conditions for a correct plenum design would yield unrealistic results (Iourokina et al., 2006). This study models the film cooling technique with a plenum using Large Eddy Simulation. An incompressible coolant jet ejects at the surface of the plate at an angle of 30°, where it meets a compressible turbulent boundary layer that simulates the turbine inflow conditions. A dynamic multi-scale approach (Araya, 2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios, and film cooling effectiveness is calculated for both cases. Results obtained from LES will be compared with experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Michael D.; Olsen, Brett N.; Schlesinger, Paul H.
In mammalian cells cholesterol is essential for membrane function, but in excess can be cytotoxic. The cellular response to acute cholesterol loading involves biophysical-based mechanisms that regulate cholesterol levels, through modulation of the “activity” or accessibility of cholesterol to extra-membrane acceptors. Experiments and united atom (UA) simulations show that at high concentrations of cholesterol, lipid bilayers thin significantly and cholesterol availability to external acceptors increases substantially. Such cholesterol activation is critical to its trafficking within cells. Here we aim to reduce the computational cost to enable simulation of large and complex systems involved in cholesterol regulation, such as those including oxysterols and cholesterol-sensing proteins. To accomplish this, we have modified the published MARTINI coarse-grained force field to improve its predictions of cholesterol-induced changes in both macroscopic and microscopic properties of membranes. Most notably, MARTINI fails to capture both the (macroscopic) area condensation and membrane thickening seen at less than 30% cholesterol and the thinning seen above 40% cholesterol. The thinning at high concentration is critical to cholesterol activation. Microscopic properties of interest include cholesterol-cholesterol radial distribution functions (RDFs), tilt angle, and accessible surface area. First, we develop an “angle-corrected” model wherein we modify the coarse-grained bond angle potentials based on atomistic simulations. This modification significantly improves prediction of macroscopic properties, most notably the thickening/thinning behavior, and also slightly improves microscopic property prediction relative to MARTINI. Second, we add to the angle correction a “volume correction” by also adjusting phospholipid bond lengths to achieve a more accurate volume per molecule. The angle + volume correction substantially further improves the quantitative agreement of the macroscopic properties (area per molecule and thickness) with united atom simulations. However, this improvement also reduces the accuracy of microscopic predictions like radial distribution functions and cholesterol tilt below that of either MARTINI or the angle-corrected model. Thus, while both of our forcefield corrections improve MARTINI, the combined angle and volume correction should be used for problems involving sterol effects on the overall structure of the membrane, while our angle-corrected model should be used in cases where the properties of individual lipid and sterol models are critically important.
Photometric Modeling of Simulated Surface-Resolved Bennu Images
NASA Astrophysics Data System (ADS)
Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.
2017-12-01
The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.
2012-07-01
Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. Image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
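The core of the PV correction, convolving a parameterized LV model with the system point spread function and fitting it to the measured image, can be sketched as follows. This toy version uses a spherical geometry and an isotropic Gaussian PSF, a simplification of the regional five-parameter profile described above; all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import least_squares

def lv_model(params, shape, center):
    """Five-parameter LV model: myocardial, blood and background activities
    separated by an endocardial radius and a myocardial wall thickness."""
    a_myo, a_blood, a_bg, r_endo, wall = params
    z, y, x = np.indices(shape)
    r = np.sqrt((x - center[2])**2 + (y - center[1])**2 + (z - center[0])**2)
    img = np.where(r < r_endo, a_blood,
                   np.where(r < r_endo + wall, a_myo, a_bg))
    return img.astype(float)

def pv_correct(measured, center, psf_sigma):
    """Fit the PSF-blurred LV model to the measured image; the fitted
    a_myo is the partial-volume-corrected myocardial activity."""
    def residual(p):
        blurred = gaussian_filter(lv_model(p, measured.shape, center),
                                  psf_sigma)
        return (blurred - measured).ravel()
    p0 = [measured.max(), measured.mean(), 0.0, 3.0, 2.0]  # initial guess
    return least_squares(residual, p0).x
```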
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
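As context, the density-gradient correction is commonly written as an added quantum potential proportional to the curvature of the square root of the carrier density. A frequently quoted form from the literature, not necessarily the exact formulation used here, with r an empirical fit parameter:

```latex
% density-gradient quantum correction to the electron potential
\Lambda_n \;=\; \frac{2\, b_n \,\nabla^2 \sqrt{n}}{\sqrt{n}},
\qquad
b_n \;=\; \frac{\hbar^2}{12\, q\, m_n^{*}\, r}
```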
Shraiki, Mario; Arba-Mosquera, Samuel
2011-06-01
To evaluate ablation algorithms and temperature changes in laser refractive surgery. The model (virtual laser system [VLS]) simulates different physical effects of an entire surgical process, simulating the shot-by-shot ablation process based on a modeled beam profile. The model is comprehensive and directly considers applied correction; corneal geometry, including astigmatism; laser beam characteristics; and ablative spot properties. Pulse lists collected from actual treatments were used to simulate the temperature increase during the ablation process. Ablation efficiency reduction in the periphery resulted in a lower peripheral temperature increase. Steep corneas had lesser temperature increases than flat ones. The maximum rise in temperature depends on the spatial density of the ablation pulses. For the same number of ablative pulses, myopic corrections showed the highest temperature increase, followed by myopic astigmatism, mixed astigmatism, phototherapeutic keratectomy (PTK), hyperopic astigmatism, and hyperopic treatments. The proposed model can be used, at relatively low cost, for calibration, verification, and validation of the laser systems used for ablation processes and would directly improve the quality of the results.
Assessing the Added Value of Dynamical Downscaling in the Context of Hydrologic Implication
NASA Astrophysics Data System (ADS)
Lu, M.; IM, E. S.; Lee, M. H.
2017-12-01
There is a scientific consensus that high-resolution climate simulations downscaled by Regional Climate Models (RCMs) can provide valuable refined information over the target region. However, a significant body of hydrologic impact assessment has been performed using the climate information provided by Global Climate Models (GCMs) in spite of a fundamental spatial scale gap. This practice is probably based on the assumption that the substantial biases and spatial scale gap in GCM raw data can simply be removed by applying statistical bias correction and spatial disaggregation. Indeed, many previous studies argue that the benefit of dynamical downscaling using RCMs is minimal when linking climate data with a hydrological model, based on comparisons of the impacts of bias-corrected GCM and bias-corrected RCM output on hydrologic simulations. This may be true for long-term averaged climatological patterns, but it is not necessarily the case when looking into variability across the temporal spectrum. In this study, we investigate the added value of dynamical downscaling, focusing on the performance in capturing climate variability. To do this, we evaluate the performance of a distributed hydrological model over a Korean river basin using the raw output from a GCM and an RCM, and bias-corrected output from the GCM and RCM. The impacts of climate input data on streamflow simulation are comprehensively analyzed. [Acknowledgements] This research is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 17AWMP-B083066-04).
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
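One way to read the rank-based anomaly modelling is as follows: instead of one time-mean multiplicative anomaly field, historical anomalies are indexed by the rank of the disaggregated precipitation, and a future value receives the anomaly of the matching rank. A per-grid-cell sketch under that reading, with illustrative names and float inputs assumed (the paper's exact scheme may differ):

```python
import numpy as np

def rank_anomalies(fine_hist, coarse_disagg_hist):
    """Historical multiplicative anomalies (fine / disaggregated), ordered
    by the rank of the disaggregated value. Inputs are float arrays."""
    ratio = np.divide(fine_hist, coarse_disagg_hist,
                      out=np.ones_like(fine_hist),
                      where=coarse_disagg_hist > 0)
    order = np.argsort(coarse_disagg_hist)
    return ratio[order]                       # anomaly as a function of rank

def apply_rank_anomaly(disagg_future, anomalies_by_rank):
    """Apply the anomaly whose historical rank matches each future value's
    rank, rather than a single time-mean anomaly field."""
    ranks = np.argsort(np.argsort(disagg_future))
    idx = np.round(ranks / max(len(disagg_future) - 1, 1)
                   * (len(anomalies_by_rank) - 1)).astype(int)
    return disagg_future * anomalies_by_rank[idx]
```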
NASA Astrophysics Data System (ADS)
Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan
2017-10-01
Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation should be the main reason that the land model ensembles do not improve precipitation simulation. However, if there are big biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.
Turbulent flow in a 180 deg bend: Modeling and computations
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
1989-01-01
A low Reynolds number k-epsilon turbulence model was presented which yields accurate predictions of the kinetic energy near the wall. The model is validated with the experimental channel flow data of Kreplin and Eckelmann. The predictions are also compared with earlier results from direct simulation of turbulent channel flow. The model is especially useful for internal flows where the inflow boundary condition of epsilon is not easily prescribed. The model partly derives from some observations based on earlier direct simulation results of near-wall turbulence. The low Reynolds number turbulence model together with an existing curvature correction appropriate to spinning cylinder flows was used to simulate the flow in a U-bend with the same radius of curvature as the Space Shuttle Main Engine (SSME) Turn-Around Duct (TAD). The present computations indicate a space varying curvature correction parameter as opposed to a constant parameter as used in the spinning cylinder flows. Comparison with limited available experimental data is made. The comparison is favorable, but detailed experimental data is needed to further improve the curvature model.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least-squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least-squares estimation algorithm, respectively. Simulation results for both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator. PMID:23737730
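A fixed-forgetting-factor recursive least-squares identifier gives the flavour of the weight-identification step; the paper's variable step-size scheme would adapt the forgetting factor online, which is not reproduced here. Names are illustrative:

```python
import numpy as np

def rls_identify(phi, y, lam=0.98, delta=1e3):
    """Recursive least-squares estimate of weights theta with y[k] ~ phi[k].theta.

    phi: (N, n) regressor matrix (e.g. KP operator outputs per sample)
    y:   (N,) measured actuator output; lam: forgetting factor
    """
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                  # large initial covariance
    for x, d in zip(phi, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        theta += k * (d - x @ theta)       # update with prediction error
        P = (P - np.outer(k, x @ P)) / lam # covariance update
    return theta
```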
A Multidimensional B-Spline Correction for Accurate Modeling Sugar Puckering in QM/MM Simulations.
Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M
2017-09-12
The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers and thus limit their predictive value in the study of RNA and DNA systems. A method has been introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable, efficient, and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
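The BMAP idea, tabulating a correction energy on a grid of coupled torsions and evaluating it by spline interpolation, can be sketched with a 2-D spline. A production implementation would use periodic B-splines and supply gradients for forces; the grid values below are random placeholders standing in for high-level-minus-semiempirical energy differences:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# grid over two coupled torsion angles (degrees) and correction energies
t1 = np.linspace(0.0, 360.0, 25)
t2 = np.linspace(0.0, 360.0, 25)
dE = np.random.rand(25, 25)   # placeholder for ab initio - semiempirical data

bmap = RectBivariateSpline(t1, t2, dE)

def corrected_energy(e_semiempirical, torsion1, torsion2):
    """Semiempirical energy plus the spline correction evaluated at the
    current values of the two coupled torsions."""
    return e_semiempirical + bmap.ev(torsion1, torsion2)
```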
Wafer hotspot prevention using etch aware OPC correction
NASA Astrophysics Data System (ADS)
Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao
2016-03-01
As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming more essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) technology has introduced simultaneous correction of multiple mask patterns to enable multiple-patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch-shrink response for all patterns independent of their proximity, which is not sufficient for the full prevention of inter-exposure hot-spots, for example, different-color space violations post-etch or via coverage/enclosure issues post-etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach for etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet our approach more accurately corrects post-etch defects as well.
NASA Astrophysics Data System (ADS)
Abitew, T. A.; Roy, T.; Serrat-Capdevila, A.; van Griensven, A.; Bauwens, W.; Valdes, J. B.
2016-12-01
The Tekeze Basin in northern Ethiopia hosts one of Africa's largest arch dams, which plays a vital role in hydropower generation. However, little has been done on the hydrology of the basin due to limited in situ hydroclimatological data. Therefore, the main objective of this research is to simulate streamflow upstream of the Tekeze Dam using the Soil and Water Assessment Tool (SWAT) forced by bias-corrected multiple satellite rainfall products (CMORPH, TMPA and PERSIANN-CCS). This talk will present the potential as well as the skill of bias-corrected satellite rainfall products for streamflow prediction in tropical Africa. Additionally, the SWAT model results will be compared with previous conceptual hydrological models (HyMOD and HBV) from the SERVIR streamflow forecasting in African basins project (http://www.swaat.arizona.edu/index.html).
NASA Astrophysics Data System (ADS)
Val Martin, M.; Heald, C. L.; Arnold, S. R.
2014-04-01
Dry deposition is an important removal process controlling surface ozone. We examine the representation of this ozone loss mechanism in the Community Earth System Model. We first correct the dry deposition parameterization by coupling the leaf and stomatal vegetation resistances to the leaf area index, an omission which has adversely impacted over a decade of ozone simulations using both the Model for Ozone and Related chemical Tracers (MOZART) and Community Atmospheric Model-Chem (CAM-Chem) global models. We show that this correction increases O3 dry deposition velocities over vegetated regions and improves the simulated seasonality in this loss process. This enhanced removal reduces the previously reported bias in summertime surface O3 simulated over eastern U.S. and Europe. We further optimize the parameterization by scaling down the stomatal resistance used in the Community Land Model to observed values. This in turn further improves the simulation of dry deposition velocity of O3, particularly over broadleaf forested regions. The summertime surface O3 bias is reduced from 30 ppb to 14 ppb over eastern U.S. and 13 ppb to 5 ppb over Europe from the standard to the optimized scheme, respectively. O3 deposition processes must therefore be accurately coupled to vegetation phenology within 3-D atmospheric models, as a first step toward improving surface O3 and simulating O3 responses to future and past vegetation changes.
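The resistance-in-series picture behind this correction can be sketched as follows: the deposition velocity is the inverse of the aerodynamic, quasi-laminar, and canopy resistances in series, with the stomatal term scaled by leaf area index so that bulk canopy uptake tracks vegetation phenology. This is a schematic big-leaf form with illustrative numbers, not the CLM formulation itself:

```python
def deposition_velocity(ra, rb, r_stom, r_other, lai):
    """Big-leaf dry deposition velocity (m/s) from resistances in series.

    ra: aerodynamic resistance; rb: quasi-laminar boundary-layer resistance;
    r_stom: per-leaf stomatal resistance, scaled by leaf area index (lai);
    r_other: lumped non-stomatal uptake resistance (all in s/m).
    """
    r_canopy = 1.0 / (lai / r_stom + 1.0 / r_other)  # parallel uptake paths
    return 1.0 / (ra + rb + r_canopy)

# e.g. deposition_velocity(ra=30.0, rb=10.0, r_stom=200.0,
#                          r_other=1000.0, lai=4.0)
```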
NASA MUST Paper: Infrared Thermography of Graphite/Epoxy
NASA Technical Reports Server (NTRS)
Comeaux, Kayla; Koshti, Ajay
2010-01-01
The focus of this project is to use Infrared Thermography, a non-destructive test, to detect detrimental cracks and voids beneath the surface of materials used in the space program. The project consists of developing a simulation model of the Infrared Thermography inspection of a Graphite/Epoxy specimen. The simulation entails finding the correct physical properties for this specimen as well as programming the model for thick voids or flat bottom holes. After the simulation is completed, an Infrared Thermography inspection of the actual specimen will be made. Upon acquiring the experimental test data, the data from the actual experiment will be analyzed, including image analysis, graphical analysis, and analysis of the numerical data received from the infrared camera. The simulation will then be corrected for any discrepancies between it and the actual experiment. The optimized simulation material property inputs can then be used in a new simulation for thin voids. The comparison of the two simulations, the simulation for the thick void and the simulation for the thin void, provides a correlation between the peak contrast ratio and peak time ratio. This correlation is used when evaluating flash thermography data for delaminations.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
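A minimal tau-leap sketch with a crude adaptive step gives the flavour of the path-wise time-stepping idea described above: tau is chosen so the expected relative change of every species stays of order eps. This is a simplification, not the paper's multi-level estimator or its exact step-selection rule:

```python
import numpy as np

def tau_leap(x0, stoich, rates, t_end, eps=0.03):
    """Adaptive tau-leaping for a reaction network.

    stoich: (n_reactions, n_species) state-change matrix
    rates:  function mapping the state vector to reaction propensities
    """
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_end:
        a = rates(x)
        if a.sum() <= 0.0:
            break                                   # no reaction can fire
        drift = stoich.T @ a                        # mean change per unit time
        rel = np.abs(drift) / np.maximum(x, 1.0)    # relative change rates
        tau = min(eps / max(rel.max(), 1e-12), t_end - t)
        fires = np.random.poisson(a * tau)          # Poisson reaction counts
        x = np.maximum(x + stoich.T @ fires, 0.0)   # forbid negative counts
        t += tau
    return x

# usage sketch: birth-death process, birth rate 5, death rate 0.1 per molecule
# stoich = np.array([[1], [-1]])
# rates = lambda x: np.array([5.0, 0.1 * x[0]])
# tau_leap([10], stoich, rates, t_end=100.0)
```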
NASA Technical Reports Server (NTRS)
Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James
2017-01-01
Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide the location and strength of burning aerosol sources. Our previous work (Petrenko et al., 2012) shows that satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the assumed source strength. We now refine the satellite-snapshot method and investigate whether applying simple multiplicative emission correction factors to the widely used Global Fire Emission Database version 3 (GFEDv3) emission inventory can achieve regional-scale consistency between MODIS AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared over a set of more than 900 BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. The AOD comparison presented here shows that regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. Additional analysis including a small-fire emission correction shows the complementary nature of correcting for source strength and adding missing sources, and also indicates that in some regions other factors may be significant in explaining model-satellite discrepancies. This work sets the stage for a larger intercomparison within the Aerosol Inter-comparisons between Observations and Models (AeroCom) multi-model biomass burning experiment. We discuss here some of the other possible factors affecting the remaining discrepancies between model simulations and observations, but await comparisons with other AeroCom models to draw further conclusions.
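In the spirit of the snapshot method, a per-region multiplicative factor can be estimated from collocated satellite and model AOD pairs. The sketch below is a hedged illustration under simple assumptions (ratio statistics over hypothetical arrays); the actual method involves plume masking and background subtraction not shown here.

```python
import numpy as np

def emission_correction_factor(aod_satellite, aod_model, eps=1e-3):
    """Per-case multiplicative correction factors for the assumed source
    strength, summarized by the regional median ratio (a schematic sketch,
    not the exact screening used in the satellite-snapshot method)."""
    aod_satellite = np.asarray(aod_satellite, dtype=float)
    aod_model = np.asarray(aod_model, dtype=float)
    ok = aod_model > eps                    # guard against near-zero model AOD
    ratios = aod_satellite[ok] / aod_model[ok]
    return np.median(ratios)

# hypothetical collocated AOD values for one region's BB cases
print(emission_correction_factor([0.8, 0.5, 1.1], [0.4, 0.3, 0.5]))
```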
Simulation of Oxygen Disintegration and Mixing With Hydrogen or Helium at Supercritical Pressure
NASA Technical Reports Server (NTRS)
Bellan, Josette; Taskinoglu, Ezgi
2012-01-01
The simulation of high-pressure turbulent flows, where the pressure, p, is larger than the critical value, p(sub c), for the species under consideration, is relevant to a wide array of propulsion systems, e.g. gas turbine, diesel, and liquid rocket engines. Most turbulence models, however, have been developed for atmospheric-p turbulent flows. The difference between atmospheric-p and supercritical-p turbulence is that, in the former situation, the coupling between dynamics and thermodynamics is moderate to negligible, but for the latter it is very significant, and can dominate the flow characteristics. The reason for this stems from the mathematical form of the equation of state (EOS), which is the perfect-gas EOS in the former case, and the real-gas EOS in the latter case. For flows at supercritical pressure, p, the large eddy simulation (LES) equations consist of the differential conservation equations coupled with a real-gas EOS. The equations use transport properties that depend on the thermodynamic variables. Compared to previous LES models, the differential equations contain not only the subgrid scale (SGS) fluxes, but also new SGS terms, each denoted as a correction. These additional terms, typically assumed null for atmospheric pressure flows, stem from filtering the differential governing equations, and represent differences between a filtered term and the same term computed as a function of the filtered flow field. In particular, the energy equation contains a heat-flux correction (q-correction) that is the difference between the filtered divergence of the heat flux and the divergence of the heat flux computed as a function of the filtered flow field. In a previous study, there was only partial success in modeling the q-correction term, but in this innovation, success has been achieved by using a different modeling approach. This analysis, based on a temporal mixing layer Direct Numerical Simulation database, shows that the focus in modeling the q-correction should be on reconstructing the primitive variable gradients rather than their coefficients, and proposes the approximate deconvolution model (ADM) as an effective means of flow field reconstruction for LES heat flux calculation. Further, results for a study conducted for temporal mixing layers initially containing oxygen in the lower stream, and hydrogen or helium in the upper stream, show that, for any LES, including SGS-flux models (constant-coefficient Gradient or Scale-Similarity models, dynamic-coefficient Smagorinsky/Yoshizawa or mixed Smagorinsky/Yoshizawa/Gradient models), the inclusion of the q-correction in the LES leads to the theoretical maximum reduction of the SGS heat-flux difference. The remaining error in modeling this new subgrid term is thus irreducible.
Best opening face system for sweepy, eccentric logs : a user’s guide
David W. Lewis
1985-01-01
Log breakdown simulation models have gained rapid acceptance within the sawmill industry in the last 15 years. Although they have many advantages over traditional decision making tools, the existing models do not calculate yield correctly when used to simulate the breakdown of eccentric, sweepy logs in North American sawmills producing softwood dimension lumber. In an...
Analysis about modeling MEC7000 excitation system of nuclear power unit
NASA Astrophysics Data System (ADS)
Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming
2018-02-01
Motivated by the importance of accurately modeling excitation systems in stability calculations for inland nuclear power plants, and by the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also resolves the key issues of computing the I/O interface parameters and converting the measured excitation-system model into a BPA simulation model. Finally, simulation modeling of the MEC7000 excitation system is completed for the first time domestically. A no-load small-disturbance check demonstrates that the proposed model and algorithm are correct and efficient.
Simulating the electrohydrodynamics of a viscous droplet
NASA Astrophysics Data System (ADS)
Theillard, Maxime; Saintillan, David
2016-11-01
We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we were able to alleviate the standard time step restriction incurred by capillary forces. In weak electric fields, our results match remarkably well with the predictions of the Taylor-Melcher leaky dielectric model. In strong electric fields, the so-called Quincke rotation is correctly reproduced.
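A projection method enforces incompressibility by solving a Poisson problem for a pressure-like field and subtracting its gradient from a provisional velocity. The sketch below shows this core step on a uniform periodic grid with a spectral Poisson solve; it is a schematic stand-in, assuming periodicity, and not the adaptive Quad/Octree discretization or the modified pressure correction of the paper.

```python
import numpy as np

def project_divergence_free(u, v, dx):
    """One pressure-correction step on a periodic 2D grid: solve the
    Poisson equation for a pressure-like field in Fourier space and
    subtract its gradient, projecting (u, v) onto a divergence-free
    velocity field."""
    ny, nx = u.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)                      # shape (ny, nx)
    div = KX * np.fft.fft2(u) + KY * np.fft.fft2(v)   # spectral divergence
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                    # avoid division by zero (mean mode)
    phi = div / k2                                    # spectral Poisson solve
    u_proj = np.real(np.fft.ifft2(np.fft.fft2(u) - KX * phi))
    v_proj = np.real(np.fft.ifft2(np.fft.fft2(v) - KY * phi))
    return u_proj, v_proj
```

After this step the discrete divergence of (u_proj, v_proj) is zero to machine precision, which is the property the projection is designed to guarantee.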
Detached-Eddy Simulations of Attached and Detached Boundary Layers
NASA Astrophysics Data System (ADS)
Caruelle, B.; Ducros, F.
2003-12-01
This article presents Detached-Eddy Simulations (DESs) of attached and detached turbulent boundary layers. This hybrid Reynolds-Averaged Navier-Stokes (RANS) / Large Eddy Simulation (LES) model transitions continuously from RANS to LES according to the mesh definition. We propose a parametric study of the model over two "academic" configurations, in order to assess the influence of the mesh on the ability to correctly treat complex flows with attached and detached boundary layers.
Coarse-grained modeling of polyethylene melts: Effect on dynamics
Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya; ...
2017-05-23
The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: the pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and a slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.
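IBI iterates on the pair potential until the CG model reproduces a target radial distribution function, and a weak linear tail is a common way to correct the resulting pressure afterwards. The snippet below is a generic sketch of both updates, assuming reduced units with k_B T = 1; the damping factor and function names are illustrative, not taken from the paper.

```python
import numpy as np

kB_T = 1.0  # energy in units of k_B * T (assumed reduced units)

def ibi_update(V, g_current, g_target, damping=0.2, eps=1e-10):
    """One iterative Boltzmann inversion step: nudge the tabulated pair
    potential by k_B T * ln(g_current / g_target), damped for stability."""
    return V + damping * kB_T * np.log((g_current + eps) / (g_target + eps))

def linear_pressure_correction(r, r_cut, A):
    """Weak linear tail correction dV(r) = A * (1 - r / r_cut), a common
    post-IBI adjustment that steers the CG pressure toward the atomistic
    reference while perturbing the pair structure only slightly."""
    return np.where(r < r_cut, A * (1.0 - r / r_cut), 0.0)
```

In practice A is tuned (positive to raise the potential tail and lower the pressure) until the CG melt pressure matches the atomistic reference.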
Use of a computer model in the understanding of erythropoietic control mechanisms
NASA Technical Reports Server (NTRS)
Dunn, C. D. R.
1978-01-01
During an eight-week visit, approximately 200 simulations using the computer model for the regulation of erythropoiesis were carried out in four general areas, including simulation of hypoxia and dehydration with the human model and evaluation of the simulation of dehydration using the mouse model. The experiments led to two considerations for the models: first, a direct relationship between erythropoietin concentration and bone marrow sensitivity to the hormone and, second, a partial correction of tissue hypoxia prior to compensation by an increased hematocrit. This latter change in particular produced a better simulation of the effects of hypoxia on plasma erythropoietin concentrations.
Explosive response model evaluation using the explosive H6
NASA Astrophysics Data System (ADS)
Sutherland, Gerrit T.; Burns, Joseph
2000-04-01
Reactive rate model parameters for a two-term Lee-Tarver [simplified ignition and growth (SIG)] model were obtained for the explosive H6 from modified gap test data. This model was used to perform simulations of the underwater sensitivity test (UST) using the CTH hydrocode. Reaction was predicted in the simulations for the same water gaps at which reaction was observed in the UST. The expansions observed for the UST samples were not simulated correctly; this is attributed to the density equilibrium conditions imposed between unreacted and reacted components in CTH for the Lee-Tarver model.
2D Quantum Transport Modeling in Nanoscale MOSFETs
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, the behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions, oxide tunneling, and phase-breaking scattering are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Quantum simulations are focused on the MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compared to classical and quantum-corrected models. An important feature of the quantum model is the smaller slope of the Id-Vg curve and, consequently, the higher threshold voltage. These results are quantitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and sub-threshold current has been studied. The shorter-gate-length device has an order of magnitude smaller current at zero gate bias than the longer-gate-length device, without a significant trade-off in on-current. This should be a device design consideration.
Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects.
Yang, Huizhen; Soloviev, Oleg; Verhaegen, Michel
2015-09-21
A model-based wavefront sensorless (WFSless) adaptive optics (AO) system with a 61-element deformable mirror (DM) is simulated to correct the imaging of a turbulence-degraded extended object. A fast closed-loop control algorithm, based on the linear relation between the mean square of the aberration gradients and the second moment of the image intensity distribution, is used to generate the control signals for the actuators of the DM. The restoration capability and the convergence rate of the AO system are investigated with wavefront aberrations of different turbulence strengths. Simulation results show that the model-based WFSless AO system can successfully restore images degraded by different turbulence strengths and achieve correction very close to the achievable capability of the given DM. Compared with the ideal correction of the 61-element DM, the averaged relative error of the RMS value is 6%. The convergence rate of the AO system is independent of the turbulence strength and depends only on the number of actuators of the DM.
Interactive computer simulations of knee-replacement surgery.
Gunther, Stephen B; Soto, Gabriel E; Colman, William W
2002-07-01
Current surgical training programs in the United States are based on an apprenticeship model. This model is outdated because it does not provide conceptual scaffolding, promote collaborative learning, or offer constructive reinforcement. Our objective was to create a more useful approach by preparing students and residents for operative cases using interactive computer simulations of surgery. Total-knee-replacement surgery (TKR) is an ideal procedure to model on the computer because there is a systematic protocol for the procedure. Also, this protocol is difficult to learn by the apprenticeship model because of the multiple instruments that must be used in a specific order. We designed an interactive computer tutorial to teach medical students and residents how to perform knee-replacement surgery. We also aimed to reinforce the specific protocol of the operative procedure. Our final goal was to provide immediate, constructive feedback. We created the tutorial by generating three-dimensional wire-frame models of the surgical instruments. Next, we applied a surface to the wire-frame models using three-dimensional modeling. Finally, the three-dimensional models were animated to simulate the motions of an actual TKR. The step-by-step tutorial teaches and tests the correct sequence of steps in a TKR. The student or resident must select the correct instruments in the correct order. The learner is encouraged to learn the stepwise surgical protocol through repetitive use of the computer simulation. Constructive feedback is provided through a grading system, which rates the student's or resident's ability to perform the task in the correct order. The grading system also accounts for the time required to perform the simulated procedure. We evaluated the efficacy of this teaching technique by testing medical students who learned by the computer simulation and those who learned by reading the surgical protocol manual. Both groups then performed TKR on manufactured bone models using real instruments. Their technique was graded with the standard protocol. The students who learned on the computer simulation performed the task in a shorter time and with fewer errors than the control group. They were also more engaged in the learning process. Surgical training programs generally lack a consistent approach to preoperative education related to surgical procedures. This interactive computer tutorial has allowed us to make a quantum leap in medical student and resident teaching in our orthopedic department because the students actually participate in the entire process. Our technique provides a linear, sequential method of skill acquisition and direct feedback, which is ideally suited for learning stepwise surgical protocols. Since our initial evaluation has shown the efficacy of this program, we have implemented this teaching tool into our orthopedic curriculum. Our plans for future work with this simulator include modeling procedures involving other anatomic areas of interest, such as the hip and shoulder.
Use of Airborne Hyperspectral Data in the Simulation of Satellite Images
NASA Astrophysics Data System (ADS)
de Miguel, Eduardo; Jimenez, Marcos; Ruiz, Elena; Salido, Elena; Gutierrez de la Camara, Oscar
2016-08-01
The simulation of future images is part of the development phase of most Earth Observation missions. Such simulations frequently use images acquired from airborne instruments as a starting point. These instruments provide the required flexibility in acquisition parameters (time, date, illumination and observation geometry, etc.) and high spectral and spatial resolution, well above the target values (as required by simulation tools). However, there are a number of important problems hampering the use of airborne imagery. One of these problems is that observation zenith angles (OZA) are far from those that the missions to be simulated would use. We examine this problem by evaluating the difference in ground reflectance estimated from airborne images for different observation/illumination geometries. Next, we analyze a solution for simulation purposes, in which a Bidirectional Reflectance Distribution Function (BRDF) model is attached to an image of the isotropic surface reflectance. The results obtained confirm the need for reflectance anisotropy correction when using airborne images to create a reflectance map for simulation purposes. However, this correction should not be applied without providing the corresponding estimate of the BRDF, in the form of model parameters, to the simulation teams.
Dasari, Paul K. R.; Könik, Arda; Pretorius, P. Hendrik; Johnson, Karen L.; Segars, William P.; Shazeeb, Mohammed S.; King, Michael A.
2017-01-01
Purpose: Amplitude-based respiratory gating is known to capture the extent of respiratory motion (RM) accurately but results in residual motion in the presence of respiratory hysteresis. In our previous study, we proposed and developed a novel approach to account for respiratory hysteresis by applying the Bouc-Wen (BW) model of hysteresis to external surrogate signals of anterior/posterior motion of the abdomen and chest with respiration. In this work, using simulated and clinical SPECT myocardial perfusion imaging (MPI) studies, we investigate the effects of respiratory hysteresis and evaluate the benefit of correcting it using the proposed BW model in comparison with the abdomen signal typically employed clinically. Methods: MRI navigator data acquired in free-breathing human volunteers were used in specially modified 4-D NCAT phantoms to simulate three types of respiratory patterns: monotonic, mild-hysteresis, and strong-hysteresis, with normal myocardial uptake and with perfusion defects in the anterior, lateral, inferior, and septal locations of the mid-ventricular wall. Clinical scans were performed using a 99mTc-Sestamibi MPI protocol while recording respiratory signals from the thoracic and abdominal regions using a Visual Tracking System (VTS). The performance of the correction using the respiratory signals was assessed through polar map analysis in the phantom studies and in ten clinical studies selected on the basis of having substantial RM. Results: In phantom studies, simulations illustrating normal myocardial uptake showed significant differences (p<0.001) in the uniformity of the polar maps between the RM-uncorrected and -corrected data. No significant differences were seen in polar map uniformity across the RM corrections. Studies simulating perfusion defects showed significantly decreased errors (p<0.001) in defect severity and extent for the RM-corrected compared to the uncorrected data. Only for the strong-hysteretic pattern was there a significant difference (p<0.001) among the RM corrections. The errors in defect severity and extent for the RM correction using the abdomen signal were significantly higher compared to those of the BW (severity=-4.0%, p<0.001; extent=-65.4%, p<0.01) and chest (severity=-4.1%, p<0.001; extent=-52.5%, p<0.01) signals. In clinical studies, quantitative analysis of the polar maps demonstrated qualitative and quantitative, but not statistically significant (p=0.73), differences between the correction methods that used the BW signal and the abdominal signal. Conclusions: This study shows that hysteresis in respiration affects the extent of residual motion left in the RM-binned data, which can impact wall uniformity and the visualization of defects. Thus there appears to be potential for improved reconstruction accuracy in the presence of hysteretic RM, with the BW model method providing a possible step in the direction of improvement. PMID:28032913
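The Bouc-Wen model represents hysteresis with a first-order differential equation for an internal state driven by the input signal. The sketch below integrates that equation with forward Euler; the parameter values and the hypothetical surrogate signal are illustrative, not the values fitted to respiratory data in the study.

```python
import numpy as np

def bouc_wen_hysteresis(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate the Bouc-Wen hysteretic state z driven by an input x(t):
        dz/dt = A*dx/dt - beta*|dx/dt|*|z|**(n-1)*z - gamma*(dx/dt)*|z|**n
    using forward Euler (illustrative parameters, schematic sketch)."""
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        dx = (x[i] - x[i - 1]) / dt
        dz = (A * dx
              - beta * abs(dx) * abs(z[i - 1])**(n - 1) * z[i - 1]
              - gamma * dx * abs(z[i - 1])**n)
        z[i] = z[i - 1] + dz * dt
    return z

t = np.linspace(0, 10, 2000)
chest = np.sin(2 * np.pi * 0.25 * t)            # hypothetical surrogate signal
z = bouc_wen_hysteresis(chest, dt=t[1] - t[0])  # hysteretic internal state
```

Plotting z against the input traces out a hysteresis loop, which is the property exploited to model the lag between chest and abdomen motion.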
2012-03-01
such as FASCODE is accomplished. The assessment is limited by the correctness of the models used; validating the models is beyond the scope of this... comparisons with other models and validation against data sets (Snell et al. 2000). 2.3.2 Previous Research: Several LADAR simulations have been produced... performance models would better capture the atmospheric physics and climatological effects on these systems. Also, further validation needs to be performed
Transient Spectra in TDDFT: Corrections and Correlations
NASA Astrophysics Data System (ADS)
Parkhill, John; Nguyen, Triet
We introduce an atomistic, all-electron, black-box electronic structure code to simulate transient absorption (TA) spectra and apply it to pyrazole and a GFP chromophore derivative. The method is an application of OSCF2, our dissipative extension of time-dependent density functional theory. We compare our simulated spectra directly with recent ultra-fast spectroscopic experiments, showing that they are usefully predicted. We also relate bleaches in the TA signal to Fermi blocking, which would be missed in a simplified model. An important ingredient in the method is the stationary-TDDFT correction scheme recently put forward by Fischer, Govind, and Cramer, which allows us to overcome a limitation of adiabatic TDDFT. We demonstrate that OSCF2 is able to predict both the energies of bleaches and induced absorptions, as well as the decay of the transient spectrum, with only the molecular structure as input. With the remaining time we will discuss corrections that resolve the non-resonant behavior of driven TDDFT, and correlated corrections to mean-field dynamics.
Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging
NASA Astrophysics Data System (ADS)
Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.
1997-02-01
Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.
NASA Astrophysics Data System (ADS)
Zhao, Shaorong; Takemoto, Shuzo
2000-08-01
The interseismic deformation associated with plate coupling at a subduction zone is commonly simulated by the steady-slip model in which a reverse dip-slip is imposed on the down-dip extension of the locked plate interface, or by the backslip model in which a normal slip is imposed on the locked plate interface. It is found that these two models, although totally different in principle, produce similar patterns for the vertical deformation at a subduction zone. This suggests that it is almost impossible to distinguish between these two models by analysing only the interseismic vertical deformation observed at a subduction zone. The steady-slip model cannot correctly predict the horizontal deformation associated with plate coupling at a subduction zone, a fact that is proved by both the numerical modelling in this study and the GPS (Global Positioning System) observations near the Nankai trough, southwest Japan. It is therefore inadequate to simulate the effect of the plate coupling at a subduction zone by the steady-slip model. It is also revealed that the unphysical assumption inherent in the backslip model of imposing a normal slip on the locked plate interface makes it impossible to predict correctly the horizontal motion of the subducted plate and the stress change within the overthrust zone associated with the plate coupling during interseismic stages. If the analysis made in this work is proved to be correct, some of the previous studies on interpreting the interseismic deformation observed at several subduction zones based on these two models might need substantial revision. On the basis of the investigations on plate interaction at subduction zones made using the finite element method and the kinematic/mechanical conditions of the plate coupling implied by the present plate tectonics, a synthesized model is proposed to simulate the kinematic effect of the plate interaction during interseismic stages. A numerical analysis shows that the proposed model, designed to simulate the motion of a subducted slab, can correctly produce the deformation and the main pattern of stress concentration associated with plate coupling at a subduction zone. The validity of the synthesized model is examined and partially verified by analysing the horizontal deformation observed by GPS near the Nankai trough, southwest Japan.
Use of regional climate model output for hydrologic simulations
Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.
2002-01-01
Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2) configured using the continental United States as a domain and run on a 52-km (approximately) spatial resolution were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
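The bias correction described here is essentially a magnitude correction applied per calendar month. Below is a hedged sketch of one common multiplicative variant, suited to positive variables like precipitation (temperature would typically use an additive shift); the array names and the exact form are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def monthly_bias_correction(sim, obs, months):
    """Rescale a simulated daily series so each calendar month's mean
    matches the observed climatology. `months` holds the calendar month
    (1-12) of each day; multiplicative form, schematic sketch."""
    corrected = sim.astype(float).copy()
    for m in range(1, 13):
        idx = months == m
        sim_mean = sim[idx].mean()
        if sim_mean > 0:
            corrected[idx] *= obs[idx].mean() / sim_mean
    return corrected
```

Note that such a correction adjusts monthly magnitudes only; as the abstract stresses, it cannot restore missing day-to-day variability.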
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Weeks, John D.
2009-12-01
Coulomb interactions are present in a wide variety of all-atom force fields. Spherical truncations of these interactions permit fast simulations but are problematic due to their incorrect thermodynamics. Herein we demonstrate that simple analytical corrections for the thermodynamics of uniform truncated systems are possible. In particular, results for the simple point charge/extended (SPC/E) water model treated with spherically truncated Coulomb interactions suggested by local molecular field theory [J. M. Rodgers and J. D. Weeks, Proc. Natl. Acad. Sci. U.S.A. 105, 19136 (2008)] are presented. We extend the results developed by Chandler [J. Chem. Phys. 65, 2925 (1976)] so that we may treat the thermodynamics of mixtures of flexible charged and uncharged molecules simulated with spherical truncations. We show that the energy and pressure of spherically truncated bulk SPC/E water are easily corrected using exact second-moment-like conditions on long-ranged structure. Furthermore, applying the pressure correction as an external pressure removes the density errors observed by other research groups in NPT simulations of spherically truncated bulk species.
Improving material removal determinacy based on the compensation of tool influence function
NASA Astrophysics Data System (ADS)
Zhong, Bo; Chen, Xian-hua; Deng, Wen-hui; Zhao, Shi-jie; Zheng, Nan
2018-03-01
In the process of computer-controlled optical surfacing (CCOS), the key to correcting the surface error of optical components is to ensure consistency between the simulated tool influence function (TIF) and the actual TIF. The existing removal model usually adopts a fixed-point TIF to remove material along the planned path at the planned velocity, and assumes that the polishing process is linear and time-invariant. However, in the actual polishing process, the TIF is a function of the feed speed. In this paper, the relationship between the actual TIF and the feed speed (i.e., the compensation relationship between static removal and dynamic removal) is determined experimentally. The existing removal model is then modified based on this compensation relationship, to improve the conformity between simulated and actual processing. Finally, surface-error correction tests are carried out. The results show that the fitting degree of the simulated surface and the experimental surface is better than 88%, and the surface correction accuracy can be better than λ/10 (λ = 632.8 nm).
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here, CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios for six variables that are critical for agricultural purposes: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yield from a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that used the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and also EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.
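CDF-t builds on quantile mapping: each model value is mapped to the observed value at the same cumulative probability, with an additional transform that carries the calibration CDFs into future periods. The sketch below shows only the core empirical quantile mapping, as a hedged illustration with hypothetical array names; it omits the CDF transform that distinguishes CDF-t proper.

```python
import numpy as np

def quantile_map(model_hist, obs_ref, model_to_correct):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same cumulative probability in the calibration period."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_ref)
    # empirical CDF position of each value within the model calibration sample
    probs = np.searchsorted(model_sorted, model_to_correct) / len(model_sorted)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(obs_sorted, probs)

rng = np.random.default_rng(0)
model_hist = rng.gamma(2.0, 3.0, 5000)   # hypothetical biased GCM sample
obs_ref = rng.gamma(2.0, 2.0, 5000)      # hypothetical reference sample
print(quantile_map(model_hist, obs_ref, model_hist[:5]))
```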
van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E
2015-11-11
The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal models. Overall results suggest that adaptation and anticipation mechanisms both play an important role during sensorimotor synchronization with tempo-changing sequences. This article is part of a Special Issue entitled SI: Prediction and Attention.
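The adaptation-only variant can be sketched as linear phase correction: each produced interval copies the last observed pacing interval and corrects a fraction of the previous asynchrony. The code below is a much-simplified illustration of that one module, assuming hypothetical parameter values; it is not the full ADAM, and its lagging behavior under tempo change is exactly what the anticipation module is meant to fix.

```python
import numpy as np

def simulate_adaptation_only(stimulus_intervals, alpha=0.6, noise_sd=0.01,
                             seed=1):
    """Adaptation-only sensorimotor synchronization: the produced
    inter-tap interval tracks the previously observed pacing interval
    and corrects a fraction alpha of the last asynchrony."""
    rng = np.random.default_rng(seed)
    e = 0.0                      # current asynchrony (tap time - stimulus time)
    asynchronies = []
    prev_s = stimulus_intervals[0]
    for s in stimulus_intervals:
        produced = prev_s - alpha * e + rng.normal(0.0, noise_sd)
        e = e + produced - s     # asynchrony accumulates interval mismatch
        asynchronies.append(e)
        prev_s = s
    return np.array(asynchronies)

# hypothetical pacing sequence with a linear tempo change (seconds)
intervals = np.linspace(0.6, 0.4, 40)
print(np.std(simulate_adaptation_only(intervals)))
```

Because the model only ever tracks the previous interval, it systematically lags an accelerating sequence, mirroring the paper's finding that adaptation alone cannot fully explain the observed synchronization precision.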
Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches
NASA Astrophysics Data System (ADS)
Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael
2011-09-01
Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of 3D DDD simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
BPM Calibration Independent LHC Optics Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calaga, R.; Tomas, R.; Giovannozzi, M.
2007-06-25
The tight mechanical aperture of the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for the operation of high-intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. The correction studies reveal that BPM-calibration-independent and model-independent observables are key ingredients for accomplishing optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.
NASA Astrophysics Data System (ADS)
Esrael, D.; Kacem, M.; Benadda, B.
2017-07-01
We investigate how the simulation of the venting/soil vapour extraction (SVE) process is affected by the mass transfer coefficient, using a model comprising five partial differential equations describing gas flow and mass conservation of phases and including an expression accounting for soil saturation conditions. In doing so, we test five previously reported equations for estimating the non-aqueous phase liquid (NAPL)/gas initial mass transfer coefficient and evaluate an expression that uses a reference NAPL saturation. Four venting/SVE experiments utilizing a sand column are performed with dry and non-saturated sand at low and high flow rates, and the obtained experimental results are subsequently simulated, revealing that hydrodynamic dispersion cannot be neglected in the estimation of the mass transfer coefficient, particularly in the case of low velocities. Among the tested models, only the analytical solution of a convection-dispersion equation and the equation proposed herein are suitable for correctly modelling the experimental results, with the developed model representing the best choice for correctly simulating the experimental results and the tailing part of the extracted gas concentration curve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. M. Sexton,; A. M. Sadeghi,; X. Zhang,
The value of watershed-scale, hydrologic and water quality models to ecosystem management is increasingly evident as more programs adopt these tools to evaluate the effectiveness of different management scenarios and their impact on the environment. The quality of precipitation data is critical for appropriate application of watershed models. In small watersheds, where no dense rain gauge network is available, modelers are faced with a dilemma in choosing between different data sets. In this study, we used the German Branch (GB) watershed (~50 km²), which is included in the USDA Conservation Effects Assessment Project (CEAP), to examine the implications of using surface rain gauge and next-generation radar (NEXRAD) precipitation data sets on the performance of the Soil and Water Assessment Tool (SWAT). The GB watershed is located in the Coastal Plain of Maryland on the eastern shore of Chesapeake Bay. Stream flow estimation results using surface rain gauge data seem to indicate the importance of using rain gauges within the same direction as the storm pattern with respect to the watershed. In the absence of a spatially representative network of rain gauges within the watershed, NEXRAD data produced good estimates of stream flow at the outlet of the watershed. Three NEXRAD datasets were produced: (1) non-corrected (NC), (2) bias-corrected (BC), and (3) inverse distance weighted (IDW) corrected NEXRAD data. Nash-Sutcliffe efficiency coefficients for daily stream flow simulation using these three NEXRAD datasets ranged from 0.46 to 0.58 during calibration and from 0.68 to 0.76 during validation. Overall, correcting NEXRAD with rain gauge data is promising for producing better hydrologic modeling results. Given the multiple precipitation datasets and corresponding simulations, we explored the combination of the multiple simulations using Bayesian model averaging.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency.
Flanders, W Dana; Strickland, Matthew J; Klein, Mitchel
2017-05-15
Methods exist to detect residual confounding in epidemiologic studies. One requires a negative control exposure with 2 key properties: 1) conditional independence of the negative control and the outcome (given modeled variables) absent confounding and other model misspecification, and 2) associations of the negative control with uncontrolled confounders and the outcome. We present a new method to partially correct for residual confounding: When confounding is present and our assumptions hold, we argue that estimators from models that include a negative control exposure with these 2 properties tend to be less biased than those from models without it. Using regression theory, we provide theoretical arguments that support our claims. In simulations, we empirically evaluated the approach using a time-series study of ozone effects on asthma emergency department visits. In simulations, effect estimators from models that included the negative control exposure (ozone concentrations 1 day after the emergency department visit) had slightly or modestly less residual confounding than those from models without it. Theory and simulations show that including the negative control can reduce residual confounding, if our assumptions hold. Our method differs from available methods because it uses a regression approach involving an exposure-based indicator rather than a negative control outcome to partially correct for confounding.
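The mechanism is easy to demonstrate in a toy regression: a negative control exposure correlated with an unmeasured confounder soaks up part of the confounder's effect, pulling the exposure coefficient toward its true value. The simulation below is a hedged illustration with invented coefficients, not the paper's ozone study design.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
u = rng.normal(size=n)                  # unmeasured confounder
x = 0.8 * u + rng.normal(size=n)        # exposure of interest
z = 0.8 * u + rng.normal(size=n)        # negative control exposure (no effect on y)
y = 1.0 * u + rng.normal(size=n)        # true causal effect of x on y is zero

def ols(design, outcome):
    coef, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coef

X_without = np.column_stack([np.ones(n), x])      # model omitting the negative control
X_with = np.column_stack([np.ones(n), x, z])      # model including it
print("confounded estimate: ", ols(X_without, y)[1])  # ~0.49, biased away from 0
print("partially corrected: ", ols(X_with, y)[1])     # ~0.35, closer to 0
```

As the paper's assumptions require, z has no direct effect on y and is associated with u; the bias shrinks but is not eliminated, which matches the "partial correction" framing.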
Estimating non-circular motions in barred galaxies using numerical N-body simulations
NASA Astrophysics Data System (ADS)
Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.
2015-12-01
The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.
Modeling skull's acoustic attenuation and dispersion on photoacoustic signal
NASA Astrophysics Data System (ADS)
Mohammadi, L.; Behnam, H.; Nasiriavanaki, M. R.
2017-03-01
Despite the promising results of a recent transcranial photoacoustic brain imaging technology, it has been shown that the presence of the skull severely affects the performance of this imaging modality. In this paper, we investigate the effect of the skull on generated photoacoustic signals with a mathematical model. The developed model takes into account the frequency-dependent attenuation and the acoustic dispersion effects that occur with wave reflection and refraction at the skull surface. Numerical simulations based on the developed model are performed to calculate the propagation of photoacoustic waves through the skull. From the simulation results, it was found that the skull-induced distortion is very important and that the reconstructed image would be strongly distorted without correcting for these effects. In this regard, it is anticipated that an accurate quantification and modeling of the skull transmission effects would ultimately allow for skull aberration correction in transcranial photoacoustic brain imaging.
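Frequency-dependent attenuation of this kind is often modeled as a power law applied in the frequency domain. The sketch below applies such a filter to a photoacoustic time trace; the coefficient and exponent are illustrative assumptions, not measured skull properties, and the dispersion (frequency-dependent phase) term that a physically consistent model would add is omitted.

```python
import numpy as np

def apply_power_law_attenuation(signal, fs, thickness_m, a0=60.0, y=1.3):
    """Apply power-law acoustic attenuation alpha(f) = a0 * f**y
    (Np/m, with f in MHz) to a time trace over a given propagation
    thickness; amplitude-only filter, dispersion omitted."""
    spectrum = np.fft.rfft(signal)
    freqs_mhz = np.fft.rfftfreq(len(signal), d=1.0 / fs) / 1e6
    attenuation = np.exp(-a0 * freqs_mhz**y * thickness_m)
    return np.fft.irfft(spectrum * attenuation, n=len(signal))

fs = 50e6                                   # hypothetical 50 MHz sampling rate
t = np.arange(2048) / fs
trace = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 10e-6) / 2e-6)**2)
filtered = apply_power_law_attenuation(trace, fs, thickness_m=0.005)
```

The high-frequency content is preferentially removed, which is the main signal distortion the abstract attributes to the skull.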
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100 nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thicknesses down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short-channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
NASA Technical Reports Server (NTRS)
Shackelford, John H.; Saugen, John D.; Wurst, Michael J.; Adler, James
1991-01-01
A generic planar 3-degree-of-freedom simulation was developed that supports hardware-in-the-loop simulations and guidance and control analysis, and can directly generate flight software. This simulation was developed in a small amount of time utilizing rapid prototyping techniques. The approach taken to develop this simulation tool, the benefits seen using this approach, and ongoing efforts to improve and extend this capability are described. The simulation is composed of three major elements: (1) the docker dynamics model, (2) the dockee dynamics model, and (3) the docker control system. The docker and dockee models are based on simple planar orbital dynamics equations using a spherical Earth gravity model. The docker control system is based on a phase-plane approach to error correction.
Shao, J Y; Shu, C; Huang, H B; Chew, Y T
2014-03-01
A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method is to improve the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] for correct consideration of density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To correctly consider it, the particle distribution function in the LBE must be for the local density and momentum. However, when the LBE of such distribution function is solved, it will encounter a severe numerical instability. To overcome this difficulty, a transformation, which is similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into that for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum, and in the meantime, considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model has an obvious improvement over the ZSC model in terms of solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but is much more efficient than the LL model.
Modified social force model based on information transmission toward crowd evacuation simulation
NASA Astrophysics Data System (ADS)
Han, Yanbin; Liu, Hong
2017-03-01
In this paper, an information transmission mechanism is introduced into the social force model to simulate pedestrian behavior in an emergency, especially when most pedestrians are unfamiliar with the evacuation environment. The modified model includes a collision avoidance strategy and an information transmission model that considers information loss. The former is used to avoid collisions among pedestrians in a simulation, whereas the latter mainly describes how pedestrians obtain and choose directions appropriate to them. Simulation results show that pedestrians can obtain the correct moving direction through the information transmission mechanism and that the modified model can reproduce actual pedestrian behavior during an emergency evacuation. Moreover, we have drawn four conclusions for improving evacuation based on the simulation results, and these conclusions contribute greatly to optimizing efficient emergency evacuation schemes for large public places.
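The underlying social force model drives each pedestrian toward a desired velocity while repelling them from neighbors. The sketch below integrates one explicit step of a basic Helbing-style formulation; the parameters are generic textbook-scale values, and the paper's information-transmission and collision-avoidance extensions are deliberately omitted.

```python
import numpy as np

def social_force_step(pos, vel, goal_dir, dt=0.05, v0=1.3, tau=0.5,
                      A=2.0, B=0.3):
    """One explicit integration step of a basic social force model:
    relaxation toward the desired velocity v0 * goal_dir plus
    exponential repulsion from other pedestrians."""
    n = len(pos)
    force = (v0 * goal_dir - vel) / tau                    # driving force
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            force[i] += A * np.exp(-dist / B) * d / dist   # pairwise repulsion
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

# two pedestrians heading right along x, slightly offset in y
pos = np.array([[0.0, 0.0], [0.5, 0.2]])
vel = np.zeros((2, 2))
goal = np.tile([1.0, 0.0], (2, 1))
pos, vel = social_force_step(pos, vel, goal)
```

In the paper's extension, goal_dir would itself be updated through the information transmission mechanism rather than fixed in advance.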
NASA Technical Reports Server (NTRS)
Shykoff, Barbara E.; Swanson, Harvey T.
1987-01-01
A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
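The two-sided correction filter can be reproduced schematically as a regression problem: choose FIR taps so that the measured step response, once filtered, best matches an ideal step. The sketch below is a hedged reconstruction under that reading; the tap count, ridge regularization, and edge handling are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def design_deconvolution_filter(measured_step, ideal_step, half_len=10,
                                ridge=1e-6):
    """Fit a two-sided FIR correction filter by least-squares regression
    so that the filtered measured step response approximates an ideal step."""
    n = len(measured_step)
    taps = 2 * half_len + 1
    # design matrix: column k holds the measured response shifted by (k - half_len)
    X = np.zeros((n, taps))
    for k in range(taps):
        shift = k - half_len
        col = np.roll(measured_step, shift)
        if shift > 0:
            col[:shift] = measured_step[0]    # hold initial value at the edge
        elif shift < 0:
            col[shift:] = measured_step[-1]   # hold final value at the edge
        X[:, k] = col
    # ridge-regularized normal equations for the filter coefficients
    coef = np.linalg.solve(X.T @ X + ridge * np.eye(taps), X.T @ ideal_step)
    return coef  # apply with np.convolve(signal, coef, mode='same')
```

Applying the resulting kernel to new data, after shifting by the cross-correlation-estimated delay, mirrors the correction pipeline the abstract describes.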
Tropical Indian Ocean warming contributions to China winter climate trends since 1960
NASA Astrophysics Data System (ADS)
Wu, Qigang; Yao, Yonghong; Liu, Shizuo; Cao, DanDan; Cheng, Luyao; Hu, Haibo; Sun, Leng; Yao, Ying; Yang, Zhiqi; Gao, Xuxu; Schroeder, Steven R.
2018-01-01
This study investigates observed and modeled contributions of global sea surface temperature (SST) to China winter climate trends in 1960-2014, including increased precipitation, warming through about 1997, and cooling since then. Observations and Atmospheric Model Intercomparison Project (AMIP) simulations with prescribed historical SST and sea ice show that tropical Indian Ocean (TIO) warming and increasing rainfall cause diabatic heating that generates a tropospheric wave train with anticyclonic 500-hPa height anomaly centers in the TIO or equatorial western Pacific (TIWP) and northeastern Eurasia (EA) and a cyclonic anomaly over China, referred to as the TIWP-EA wave train. The cyclonic anomaly causes Indochina moisture convergence and southwesterly moist flow that enhances South China precipitation, while the northern anticyclone enhances cold surges, sometimes causing severe ice storms. AMIP simulations show a 1960-1997 China cooling trend, rather than the observed warming, because they simulate increasing instead of decreasing Arctic 500-hPa heights that move the northern anticyclone into Siberia; the enlarged cyclonic anomaly, however, still yields realistic China precipitation trend patterns. A separate idealized TIO SST warming experiment simulates the TIWP-EA feature more realistically, with correct precipitation patterns, and supports the TIWP-EA teleconnection as the primary mechanism for the long-term increase in South China precipitation since 1960. Coupled Model Intercomparison Project (CMIP) experiments simulate a reduced TIO SST warming trend and weak precipitation trends, so the TIWP-EA feature is absent and strong drying is simulated in South China for 1960-1997. These simulations highlight the need for accurately modeled SST to correctly attribute regional climate trends.
NASA Astrophysics Data System (ADS)
Mehrotra, Rajeshwar; Sharma, Ashish
2012-12-01
The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for the low- as well as higher-order moment biases in the GCM-derived variables across selected multiple timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, starting from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBCs with three to five iterations are the most effective in removing distributional and persistence-related biases across the timescales considered.
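The nesting idea can be sketched as follows. This is a simplified one-pass, daily-then-monthly version with additive adjustments (the RNBC of the paper iterates such passes and also treats higher-order and persistence properties across more timescales):

```python
import numpy as np

def moment_correct(x, obs):
    """Match the mean and standard deviation of x to those of obs."""
    return (x - x.mean()) / x.std() * obs.std() + obs.mean()

def nested_bias_correction(gcm, obs, month, year):
    """One pass of a simplified NBC: correct daily moments, then shift each
    month so the series of monthly means is also moment-corrected. A
    multiplicative shift would be the natural choice for precipitation."""
    x = moment_correct(gcm.astype(float), obs)
    yrs = np.unique(year)
    for m in np.unique(month):
        sel_m = month == m
        mod_mean = np.array([x[sel_m & (year == y)].mean() for y in yrs])
        obs_mean = np.array([obs[sel_m & (year == y)].mean() for y in yrs])
        corr = moment_correct(mod_mean, obs_mean)
        for y, c, raw in zip(yrs, corr, mod_mean):
            x[sel_m & (year == y)] += c - raw      # additive monthly adjustment
    return x
```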
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Nakajima, Teruyuki; Khain, Alexander P.; Saito, Kazuo; Takemura, Toshihiko; Okamoto, Hajime; Nishizawa, Tomoaki; Tao, Wei-Kuo
2012-01-01
Numerical weather prediction (NWP) simulations using the Japan Meteorological Agency Nonhydrostatic Model (JMA-NHM) are conducted for three precipitation events observed by shipborne or spaceborne W-band cloud radars. Spectral bin and single-moment bulk cloud microphysics schemes are employed separately for an intercomparative study. A radar product simulator that is compatible with both microphysics schemes is developed to enable a direct comparison between simulation and observation with respect to the equivalent radar reflectivity factor Ze, Doppler velocity (DV), and path-integrated attenuation (PIA). In general, the bin model simulation shows better agreement with the observed data than the bulk model simulation. The correction of the terminal fall velocities of snowflakes using those of hail further improves the result of the bin model simulation. The results indicate that there are substantial uncertainties in the mass-size and size-terminal fall velocity relations of snowflakes or in the calculation of terminal fall velocity of snow aloft. For the bulk microphysics, the overestimation of Ze is observed as a result of a significant predominance of snow over cloud ice due to substantial deposition growth directly to snow. The DV comparison shows that a correction for the fall velocity of hydrometeors considering a change of particle size should be introduced even in single-moment bulk cloud microphysics.
Simulation-based artifact correction (SBAC) for metrological computed tomography
NASA Astrophysics Data System (ADS)
Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc
2017-06-01
Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To that end, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts, using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement, i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate of the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to significantly increase the accuracy of the metrological assessment.
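In sketch form, the correction reduces to the difference of two simulations of the same object model. The simulator callables below are hypothetical placeholders for the dedicated analytic and Monte Carlo models described above:

```python
def sbac_correct(measured_proj, object_model,
                 simulate_full_physics, simulate_ideal):
    """Simulation-based artifact correction: the difference between a
    full-physics simulation (beam hardening, scatter, off-focal radiation,
    cone beam) and an ideal simulation (monochromatic, point-like source,
    parallel beam, no scatter) of the same object model estimates the
    artifact contribution, which is subtracted from the measured data."""
    artifact_estimate = (simulate_full_physics(object_model)
                         - simulate_ideal(object_model))
    return measured_proj - artifact_estimate

# The object model (e.g. an initial reconstruction or CAD model) can be
# refined from the corrected reconstruction and the correction iterated.
```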
Geant4 Modifications for Accurate Fission Simulations
NASA Astrophysics Data System (ADS)
Tan, Jiawei; Bendahan, Joseph
Monte Carlo is one of the methods used to simulate the generation and transport of radiation through matter. The most widely used radiation simulation codes are MCNP and Geant4. The simulation of fission production and transport by MCNP has been thoroughly benchmarked. An increasing number of users prefer Geant4 due to the flexibility of adding features. However, it has been found that Geant4 does not have the proper fission-production cross sections and does not produce the correct fission products. To achieve accurate results for studies in fissionable material applications, Geant4 was modified to correct these inaccuracies and to add new capabilities. The fission model developed by the Lawrence Livermore National Laboratory was integrated into the neutron-fission modeling package. The photofission simulation capability was enabled using the same neutron-fission library under the assumption that nuclei fission in the same way, independent of the excitation source. The modified fission code provides the correct multiplicity of prompt neutrons and gamma rays, and produces delayed gamma rays and neutrons with time and energy dependencies that are consistent with ENDF/B-VII. The delayed neutrons are now directly produced by a custom package that bypasses the fragment cascade model. The modifications were made for the U-235, U-238 and Pu-239 isotopes; however, the new framework allows new isotopes to be added easily. The SLAC nuclear data library is used to simulate isotopes with an atomic number above 92, for which data are not available in Geant4. Results of the modified Geant4.10.1 package for neutron-fission and photofission prompt and delayed radiation are compared with ENDF/B-VII and with results produced with the original package.
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Zhao, Xiaohui
2017-02-01
Sensor-less adaptive optics (AO) is one of the most promising methods to compensate strong wavefront disturbance in free-space optical communication (FSO). In this study, a back propagation (BP) artificial neural network is applied to the sensor-less AO system to design a distortion correction scheme. Compared with other model-based approaches, this method needs only one or a few online measurements to correct the wavefront distortion, by which the real-time capability of the system is enhanced and the Strehl ratio (SR) is largely improved. Necessary comparisons in numerical simulation with other model-based and model-free correction methods proposed in Refs. [6,8,9,10] are given to show the validity and advantage of the proposed method.
Reduced atomic pair-interaction design (RAPID) model for simulations of proteins.
Ni, Boris; Baumketner, Andrij
2013-02-14
Increasingly, theoretical studies of proteins focus on large systems. This trend demands the development of computational models that are fast, to overcome the growing complexity, and accurate, to capture the physically relevant features. To address this demand, we introduce a protein model that uses all-atom architecture to ensure the highest level of chemical detail while employing effective pair potentials to represent the effect of solvent to achieve the maximum speed. The effective potentials are derived for amino acid residues based on the condition that the solvent-free model matches the relevant pair-distribution functions observed in explicit solvent simulations. As a test, the model is applied to alanine polypeptides. For the chain with 10 amino acid residues, the model is found to reproduce properly the native state and its population. Small discrepancies are observed for other folding properties and can be attributed to the approximations inherent in the model. The transferability of the generated effective potentials is investigated in simulations of a longer peptide with 25 residues. A minimal set of potentials is identified that leads to qualitatively correct results in comparison with the explicit solvent simulations. Further tests, conducted for multiple peptide chains, show that the transferable model correctly reproduces the experimentally observed tendency of polyalanines to aggregate into β-sheets more strongly with the growing length of the peptide chain. Taken together, the reported results suggest that the proposed model could be used to successfully simulate folding and aggregation of small peptides in atomic detail. Further tests are needed to assess the strengths and limitations of the model more thoroughly.
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives of improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
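The core rank-resampling step of such a method can be sketched as follows. This simplified, Schaake-shuffle-like illustration reorders each statistical dimension of a univariate-bias-corrected array to match the rank sequence of a reference; R2D2 itself additionally lets model-driven pivot dimensions control the temporal properties, which is not shown here:

```python
import numpy as np

def rank_reorder(bc, ref):
    """Reorder each column (one statistical dimension: a site x variable
    combination) of the univariate-bias-corrected array bc in time so that
    its rank sequence matches that of the reference ref, restoring the
    reference's inter-site and inter-variable rank-dependence structure.
    bc and ref have shape (time, dimensions)."""
    out = np.empty_like(bc)
    for j in range(bc.shape[1]):
        ranks = np.argsort(np.argsort(ref[:, j]))   # rank of ref at each time
        out[:, j] = np.sort(bc[:, j])[ranks]        # bc values in ref's rank order
    return out
```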
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that the error depends on vibration mode and amplitude. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Interfacial ion solvation: Obtaining the thermodynamic limit from molecular simulations
NASA Astrophysics Data System (ADS)
Cox, Stephen J.; Geissler, Phillip L.
2018-06-01
Inferring properties of macroscopic solutions from molecular simulations is complicated by the limited size of systems that can be feasibly examined with a computer. When long-ranged electrostatic interactions are involved, the resulting finite size effects can be substantial and may attenuate very slowly with increasing system size, as shown by previous work on dilute ions in bulk aqueous solution. Here we examine corrections for such effects, with an emphasis on solvation near interfaces. Our central assumption follows the perspective of Hünenberger and McCammon [J. Chem. Phys. 110, 1856 (1999)]: Long-wavelength solvent response underlying finite size effects should be well described by reduced models like dielectric continuum theory, whose size dependence can be calculated straightforwardly. Applied to an ion in a periodic slab of liquid coexisting with vapor, this approach yields a finite size correction for solvation free energies that differs in important ways from results previously derived for bulk solution. For a model polar solvent, we show that this new correction quantitatively accounts for the variation of solvation free energy with volume and aspect ratio of the simulation cell. Correcting periodic slab results for an aqueous system requires an additional accounting for the solvent's intrinsic charge asymmetry, which shifts electric potentials in a size-dependent manner. The accuracy of these finite size corrections establishes a simple method for a posteriori extrapolation to the thermodynamic limit and also underscores the realism of dielectric continuum theory down to the nanometer scale.
King, Matthew D; Buchanan, William D; Korter, Timothy M
2011-03-14
The effects of applying an empirical dispersion correction to solid-state density functional theory methods were evaluated in the simulation of the crystal structure and low-frequency (10 to 90 cm⁻¹) terahertz spectrum of the non-steroidal anti-inflammatory drug naproxen. The naproxen molecular crystal is bound largely by weak London force interactions, as well as by more prominent interactions such as hydrogen bonding, and thus serves as a good model for the assessment of the pairwise dispersion correction term in systems influenced by intermolecular interactions of various strengths. Modifications to the dispersion parameters were tested in both fully optimized unit cell dimensions and those determined by X-ray crystallography, with subsequent simulations of the THz spectrum being performed. Use of the unmodified PBE density functional leads to an unrealistic expansion of the unit cell volume and a poor representation of the THz spectrum. Inclusion of a modified dispersion correction enabled a high-quality simulation of the THz spectrum and crystal structure of naproxen to be achieved without the need for artificially constraining the unit cell dimensions.
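For context, empirical corrections of this kind are typically of the Grimme type; assuming the standard damped -C6/r^6 form (an assumption, since the abstract does not spell it out), the term added to the DFT energy is

```latex
E_{\mathrm{disp}} = -\,s_6 \sum_{i<j} \frac{C_6^{ij}}{r_{ij}^{6}}\, f_{\mathrm{dmp}}(r_{ij}),
\qquad
f_{\mathrm{dmp}}(r_{ij}) = \frac{1}{1 + e^{-d\,\left(r_{ij}/R_{0}^{ij} - 1\right)}},
```

where the r_ij are interatomic distances, C6^ij and R0^ij are tabulated dispersion coefficients and van der Waals radii, and modifications such as those tested in the paper amount to adjusting parameters like the global scaling s6 and the damping steepness d.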
ForCent model development and testing using the Enriched Background Isotope Study experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, W.J.; Hanson, P. J.; Swanston, C.
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool ¹⁴C signature (Δ¹⁴C) data from the Enriched Background Isotope Study ¹⁴C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the ¹⁴C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ¹⁴C data, and with soil respiration Δ¹⁴C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study ¹⁴C experimental treatments on soil respiration Δ¹⁴C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
Appliance of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models for each server, and action models for clients and the intruder. We applied ICA to analyze the audit trail of the simulated information system, and we report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly, and related an attack to the anomalies that normal applications produced under the influence of that attack.
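A minimal sketch of this kind of analysis with scikit-learn's FastICA follows; the Laplacian sources and mixing matrix are illustrative stand-ins for the simulated audit-trail features, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import FastICA

# X: audit-trail feature matrix (time windows x features), e.g. counts of
# alerts, connections, and service calls per interval from the simulator.
rng = np.random.default_rng(0)
S = rng.laplace(size=(500, 2))                  # two latent activity signals
A = np.array([[1.0, 0.4], [0.6, 1.0]])          # mixing into observed features
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                    # recovered independent components
# Each column of S_est is a candidate independent activity (an attack or
# background application behaviour) to be inspected separately.
```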
Observation of the pressure effect in simulations of droplets splashing on a dry surface
NASA Astrophysics Data System (ADS)
Boelens, A. M. P.; Latka, A.; de Pablo, J. J.
2018-06-01
At atmospheric pressure, a drop of ethanol impacting on a solid surface produces a splash. Reducing the ambient pressure below its atmospheric value suppresses this splash. The origin of this so-called pressure effect is not well understood, and this study presents an in-depth comparison between simulations and various theoretical models that aim to predict splashing. In this paper, the pressure effect is explored numerically by resolving the Navier-Stokes equations at a 3-nm resolution. In addition to reproducing numerous experimental observations, it is found that the different models each provide elements of what is observed in the simulations. The skating droplet model correctly predicts the existence and scaling of a gas film under the droplet; the lamella formation theory correctly predicts the scaling of the lamella ejection velocity as a function of the impact velocity for liquids with different viscosities; and lastly, the dewetting theory's hypothesis of a lift force acting on the liquid sheet after ejection is consistent with our results.
Enríquez, Diego; Lamborizio, María J; Firenze, Lorena; Jaureguizar, María de la P; Díaz Pumará, Estanislao; Szyld, Edgardo
2017-08-01
To evaluate the performance of resident physicians in diagnosing and treating a case of anaphylaxis, six months after participating in simulation training exercises. Initially, a group of pediatric residents was trained using simulation techniques in the management of critical pediatric cases. Based on their performance in this exercise, participants were assigned to one of 3 groups. At six months post-training, 4 residents were randomly chosen from each group to be re-tested, using the same performance measure as previously used. During the initial training session, 56 of 72 participants (78%) correctly identified and treated the case. Six months after the initial training, all 12 (100%) resident physicians who were re-tested successfully diagnosed and treated the simulated anaphylaxis case. Training through simulation techniques allowed correction or optimization of the treatment of simulated anaphylaxis cases by resident physicians evaluated 6 months after the initial training.
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and mean absolute error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
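For reference, the QMα baseline that the new methods extend is essentially empirical quantile mapping with constant extrapolation beyond the calibrated range, which is exactly where the instability for new extremes arises. A minimal sketch, with illustrative quantile levels:

```python
import numpy as np

def empirical_qm(model_hist, obs_hist, model_fut):
    """Standard empirical quantile mapping: map each future model value
    through the calibration-period quantile relation. np.interp clamps
    outside the calibrated range, i.e. the correction of the highest
    (lowest) calibrated quantile is applied unchanged to new extremes."""
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, q)
    oq = np.quantile(obs_hist, q)
    correction = np.interp(model_fut, mq, oq - mq)
    return model_fut + correction
```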
McKenzie, J.M.; Voss, C.I.; Siegel, D.I.
2007-01-01
In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison.
Simulation validation of the XV-15 tilt-rotor research aircraft
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Hanson, G. D.; Churchill, G. B.
1984-01-01
The results of a simulation validation program of the XV-15 tilt-rotor research aircraft are detailed, covering such simulation aspects as the mathematical model, visual system, motion system, cab aural system, cab control loader system, pilot perceptual fidelity, and generic tilt rotor applications. Simulation validation was performed for the hover, low-speed, and sideward flight modes, with consideration of the in-ground rotor effect. Several deficiencies of the mathematical model and the simulation systems were identified in the course of the simulation validation project, and some were corrected. It is noted that NASA's Vertical Motion Simulator used in the program is an excellent tool for tilt-rotor and rotorcraft design, development, and pilot training.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
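The SIMEX idea itself is compact. A minimal sketch for a linear-regression slope is given below (the MSM applications above instead target outcome-model parameters or the inverse probability weights); the lambda grid, quadratic extrapolant, and replicate count are conventional illustrative choices:

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX for a regression slope: add extra measurement error of variance
    lambda * sigma_u**2 to the error-prone covariate, refit, then extrapolate
    the slope-vs-lambda curve back to lambda = -1 (no measurement error)."""
    rng = np.random.default_rng(seed)
    lam = np.concatenate(([0.0], lambdas))
    mean_slopes = []
    for l in lam:
        fits = [np.polyfit(x_obs + rng.normal(0, np.sqrt(l) * sigma_u, len(x_obs)),
                           y, 1)[0]
                for _ in range(B)]
        mean_slopes.append(np.mean(fits))
    coef = np.polyfit(lam, mean_slopes, 2)      # quadratic extrapolant
    return np.polyval(coef, -1.0)               # SIMEX-corrected slope
```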
NASA Astrophysics Data System (ADS)
Hagemann, Alexander; Rohr, Karl; Stiehl, H. Siegfried
2000-06-01
In order to improve the accuracy of image-guided neurosurgery, different biomechanical models have been developed to correct preoperative images with respect to intraoperative changes like brain shift or tumor resection. All existing biomechanical models simulate different anatomical structures by using either appropriate boundary conditions or spatially varying material parameter values, while assuming the same physical model for all anatomical structures. In general, this leads to physically implausible results, especially in the case of adjacent elastic and fluid structures. Therefore, we propose a new approach that allows different physical models to be coupled. In our case, we simulate rigid, elastic, and fluid regions by using the appropriate physical description for each material, namely either the Navier equation or the Stokes equation. To solve the resulting differential equations, we derive a linear matrix system for each region by applying the finite element method (FEM). Thereafter, the linear matrix systems are linked together, ending up with one overall linear matrix system. Our approach has been tested using synthetic as well as tomographic images. The experiments show that the integrated treatment of rigid, elastic, and fluid regions significantly improves the prediction results in comparison to a pure linear elastic model.
Climate model biases and statistical downscaling for application in hydrologic model
USDA-ARS?s Scientific Manuscript database
Climate change impact studies use global climate model (GCM) simulations to define future temperature and precipitation. The best available bias-corrected GCM output was obtained from Coupled Model Intercomparison Project phase 5 (CMIP5). CMIP5 data (temperature and precipitation) are available in d...
Quantitative Evaluation of PET Respiratory Motion Correction Using MR Derived Simulated Data
NASA Astrophysics Data System (ADS)
Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.
2015-12-01
The impact of respiratory motion correction on quantitative accuracy in PET imaging is evaluated using simulations for variable patient-specific characteristics such as tumor uptake and respiratory pattern. Respiratory patterns from real patients were acquired, with long quiescent motion periods (type-1) as commonly observed in most patients and with long-term amplitude variability as is expected under conditions of difficult breathing (type-2). The respiratory patterns were combined with an MR-derived motion model to simulate real-time 4-D PET-MR datasets. Lung and liver tumors were simulated with diameters of 10 and 12 mm and tumor-to-background ratios ranging from 3:1 to 6:1. Projection data for 6- and 3-mm PET resolution were generated for the Philips Gemini scanner and reconstructed without and with motion correction using OSEM (2 iterations, 23 subsets). Motion correction was incorporated into the reconstruction process based on MR-derived motion fields. Tumor peak standardized uptake values (SUVpeak) were calculated from 30 noise realizations. Respiratory motion correction improves the quantitative performance, with the greatest benefit observed for patients of breathing type-2. For breathing type-1, after applying motion correction the SUVpeak of a 12-mm liver tumor with 6:1 contrast was increased by 46% for a current PET resolution (i.e., 6 mm) and by 47% for a higher PET resolution (i.e., 3 mm). Furthermore, the results of this study indicate that the benefit of higher scanner resolution is small unless motion correction is applied. In particular, for a large liver tumor (12 mm) with low contrast (3:1), after motion correction the SUVpeak was increased by 34% for 6-mm resolution and by 50% for a higher PET resolution (i.e., 3-mm resolution). This investigation indicates that there is a high impact of respiratory motion correction on tumor quantitative accuracy and that motion correction is important in order to benefit from the increased resolution of future PET scanners.
Cylinder-averaged histories of nitrogen oxide in a DI diesel with simulated turbocharging
NASA Astrophysics Data System (ADS)
Donahue, Ronald J.; Borman, Gary L.; Bower, Glenn R.
1994-10-01
An experimental study was conducted using the dumping technique (total cylinder sampling) to produce cylinder mass-averaged nitric oxide histories. Data were taken using a four-stroke diesel research engine employing a quiescent chamber, a high pressure direct injection fuel system, and simulated turbocharging. Two fuels were used to determine fuel cetane number effects. Two loads were run, one at an equivalence ratio of 0.5 and the other at a ratio of 0.3. The engine speed was held constant at 1500 rpm. Under the turbocharged and retarded timing conditions of this study, nitric oxide was produced up to the point of about 85% mass burned. Two different models were used to simulate the engine run conditions: the phenomenological Hiroyasu spray-combustion model, and the three-dimensional, U.W.-ERC modified KIVA-2 computational fluid dynamic code. Both of the models predicted the correct nitric oxide trend. Although the modified KIVA-2 combustion model using Zeldovich kinetics correctly predicted the shapes of the nitric oxide histories, it did not predict the exhaust concentrations without arbitrary adjustment based on experimental values.
Toward transient finite element simulation of thermal deformation of machine tools in real-time
NASA Astrophysics Data System (ADS)
Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg
2018-01-01
Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this allows correction of displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and the restriction to generic algorithms in commercial tools like ANSYS prevent their operational use, since simulations have to run faster than real time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real time for a machine consisting of a stock sliding up and down on rails attached to a stand.
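The multirate splitting can be illustrated with a first-order IMEX sketch on a 1D heat equation: implicit Euler for the stiff diffusion, several explicit substeps for a fast source term. The paper's method replaces this with a higher-order spectral-deferred-correction variant; all names and parameters below are illustrative:

```python
import numpy as np

def imex_multirate_step(T, dt, kappa, dx, source, t, m=10):
    """One coarse step: advance the stiff diffusion term implicitly once,
    then integrate the rapidly varying source explicitly with m substeps
    (the multirate idea). Homogeneous Dirichlet boundaries are implied by
    the matrix; source(t) returns an array of the same length as T."""
    n = len(T)
    r = kappa * dt / dx**2
    # implicit Euler for diffusion: (I - dt*kappa*L) T_new = T
    A = (np.diag((1 + 2*r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    T = np.linalg.solve(A, T)
    for k in range(m):                          # explicit substeps for source
        T = T + (dt / m) * source(t + k * dt / m)
    return T
```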
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data.
Models for twistable elastic polymers in Brownian dynamics, and their implementation for LAMMPS.
Brackley, C A; Morozov, A N; Marenduzzo, D
2014-04-07
An elastic rod model for semi-flexible polymers is presented. Theory for a continuum rod is reviewed, and it is shown that a popular discretised model used in numerical simulations gives the correct continuum limit. Correlation functions relating to both bending and twisting of the rod are derived for both continuous and discrete cases, and results are compared with numerical simulations. Finally, two possible implementations of the discretised model in the multi-purpose molecular dynamics software package LAMMPS are described.
Luis, Daniel Porfirio; García-González, Alcione; Saint-Martin, Humberto
2016-01-01
Monte Carlo and molecular dynamics simulations were done with three recent water models TIP4P/2005 (Transferable Intermolecular Potential with 4 Points/2005), TIP4P/Ice (Transferable Intermolecular Potential with 4 Points/Ice) and TIP4Q (Transferable Intermolecular Potential with 4 charges) combined with two models for methane: an all-atom one, OPLS-AA (Optimized Potentials for Liquid Simulations-All Atom), and a united-atom one (UA); a correction for the C–O interaction was applied to the latter and used in a third set of simulations. The models were validated by comparison to experimental values of the free energy of hydration at 280, 300, 330 and 370 K, all under a pressure of 1 bar, and to the experimental radial distribution functions at 277, 283 and 291 K, under a pressure of 145 bar. Regardless of the combination rules used for σC,O, good agreement was found, except when the correction to the UA model was applied. Thus, further simulations of the sI hydrate were performed with the united-atom model to compare the thermal expansivity to experiment. A final set of simulations was done with the UA methane model and the three water models, to study the sI hydrate-liquid water-gas coexistence at 80, 230 and 400 bar. The melting temperatures were compared to the experimental values. The results show the need to perform simulations with various different models to attain a reliable and robust molecular image of the systems of interest. PMID:27240339
Simulation gravity modeling to spacecraft-tracking data - Analysis and application
NASA Technical Reports Server (NTRS)
Phillips, R. J.; Sjogren, W. L.; Abbott, E. A.; Zisk, S. H.
1978-01-01
It is proposed that line-of-sight gravity measurements derived from spacecraft-tracking data can be used for quantitative subsurface density modeling by suitable orbit simulation procedures. Such an approach avoids complex dynamic reductions and is analogous to the modeling of conventional surface gravity data. This procedure utilizes the vector calculations of a given gravity model in a simplified trajectory integration program that simulates the line-of-sight gravity. Solutions from an orbit simulation inversion and a dynamic inversion on Doppler observables compare well (within 1% in mass and size), and the error sources in the simulation approximation are shown to be quite small. An application of this technique is made to lunar crater gravity anomalies by simulating the complete Bouguer correction to several large young lunar craters. It is shown that the craters all have negative Bouguer anomalies.
Solares, Santiago D
2015-01-01
This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other controls schemes in order to aid in the interpretation of AFM experiments.
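The SLS element at the heart of the Q3D model is easy to integrate directly. A minimal sketch of its stress-relaxation behaviour follows (parameter values and the parallel-spring/Maxwell-arm parameterization are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def sls_stress(strain, t, E1=1.0, E2=0.5, eta=2.0):
    """Stress response of a standard linear solid: spring E1 in parallel
    with a Maxwell arm (spring E2 in series with dashpot eta), integrated
    with explicit Euler. A step strain produces the stress relaxation from
    (E1+E2)*eps0 toward E1*eps0 that motivates the SLS as a tip-sample
    element."""
    eps_m = 0.0                                  # strain in the Maxwell spring
    sigma = np.zeros_like(strain)
    sigma[0] = E1 * strain[0] + E2 * eps_m
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        deps = strain[i] - strain[i - 1]
        # Maxwell arm: spring and dashpot in series carry the same stress
        eps_m += deps - (E2 * eps_m / eta) * dt
        sigma[i] = E1 * strain[i] + E2 * eps_m
    return sigma
```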
Axial geometrical aberration correction up to 5th order with N-SYLC.
Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji
2017-11-01
We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by a 3-SYLC doublet. After that, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models also including octupole and dodecapole fields for correcting these higher order aberrations, without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. Also by computer simulations, we show that for a beam energy of 5 keV and an initial angle of 10 mrad at the corrector object plane, a beam size of less than 0.5 nm is achieved at the corrector image plane.
Bidirectional reflectance function in coastal waters: modeling and validation
NASA Astrophysics Data System (ADS)
Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir
2011-11-01
The current operational algorithm for the correction of bidirectional effects from satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a remote sensing reflectance model focused on case 2 waters is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers with different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.
Walters, Daniel; Stringer, Simon; Rolls, Edmund
2013-01-01
The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a “look-up” table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976
NASA Astrophysics Data System (ADS)
Demirel, Mehmet; Moradkhani, Hamid
2015-04-01
Changes in two climate elasticity indices, i.e. the temperature and precipitation elasticity of streamflow, were investigated using an ensemble of bias-corrected CMIP5 data as forcing to two hydrologic models. The Variable Infiltration Capacity (VIC) and the Sacramento Soil Moisture Accounting (SAC-SMA) hydrologic models were calibrated at 1/16 degree resolution and the simulated streamflow was routed to the basin outlet of interest. We estimated the precipitation and temperature elasticity of streamflow from: (1) observed streamflow; (2) streamflow simulated by the VIC and SAC-SMA models using observed climate for the current climate (1963-2003); (3) streamflow simulated using climate from a 10-GCM CMIP5 dataset for the future climate (2010-2099), including two concentration pathways (RCP4.5 and RCP8.5) and two downscaled climate products (BCSD and MACA). The streamflow sensitivity to long-term (e.g., 30-year) average annual changes in temperature and precipitation is estimated for three periods, i.e. 2010-40, 2040-70 and 2070-99. We compared the results of the three cases to reflect on the value of precipitation and temperature indices for assessing the climate change impacts on Columbia River streamflow. Moreover, these three cases for the two models are used to assess the effects of different uncertainty sources (model forcing, model structure and different pathways) on the two climate elasticity indices.
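A nonparametric estimator in the style commonly used for such elasticity indices (an assumption here, since the abstract does not specify the estimator) is:

```python
import numpy as np

def precip_elasticity(Q, P):
    """Nonparametric precipitation elasticity of streamflow: median of the
    ratio of relative streamflow anomalies to relative precipitation
    anomalies over a window of annual values (e.g. 30 years). For the
    temperature elasticity, absolute temperature anomalies would typically
    replace the relative anomalies in the denominator."""
    dQ = (Q - Q.mean()) / Q.mean()
    dP = (P - P.mean()) / P.mean()
    return np.median(dQ / dP)
```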
NASA Technical Reports Server (NTRS)
Liu, Hong-Yu; Jacob, Daniel J.; Bey, Isabelle; Yantosca, Robert M.
2001-01-01
The atmospheric distributions of the aerosol tracers Pb-210 and Be-7 are simulated with a global three-dimensional model driven by assimilated meteorological observations for 1991-1996 from the NASA Goddard Earth Observing System (GEOS-1). The combination of terrigenic Pb-210 and cosmogenic Be-7 provides a sensitive test of wet deposition and vertical transport in the model. Our simulation of moist transport and removal includes scavenging in wet convective updrafts (40% scavenging efficiency per kilometer of updraft), midlevel entrainment and detrainment, first-order rainout and washout from both convective anvils and large-scale precipitation, and cirrus precipitation. Observations from surface sites in specific years are compared to model results for the corresponding meteorological years, and observations from aircraft missions over the Pacific are compared to model results for the days of the flights. Initial simulation of Be-7 showed that cross-tropopause transport in the GEOS-1 meteorological fields is too fast by a factor of 3-4. We adjusted the stratospheric Be-7 source to correct the tropospheric simulation. Including this correction, we find that the model gives a good simulation of observed Pb-210 and Be-7 concentrations and deposition fluxes at surface sites worldwide, with no significant global bias and with significant success in reproducing the observed latitudinal and seasonal distributions. We achieve several improvements over previous models; in particular, we reproduce the observed Be-7 minimum in the tropics and show that its simulation is sensitive to rainout from convective anvils. Comparisons with aircraft observations up to 12-km altitude suggest that cirrus precipitation could be important for explaining the low concentrations in the middle and upper troposphere.
Dzyubak, Oleksandr; Kincaid, Russell; Hertanto, Agung; Hu, Yu-Chi; Pham, Hai; Rimner, Andreas; Yorke, Ellen; Zhang, Qinghui; Mageras, Gig S
2014-10-01
Target localization accuracy of cone-beam CT (CBCT) images used in radiation treatment of respiratory disease sites is affected by motion artifacts (blurring and streaking). The authors have previously reported on a method of respiratory motion correction in thoracic CBCT at end expiration (EE). The previous retrospective study was limited to examination of reducing motion artifacts in a small number of patient cases. They report here on a prospective study in a larger group of lung cancer patients to evaluate respiratory motion-corrected (RMC)-CBCT ability to improve lung tumor localization accuracy and reduce motion artifacts in Linac-mounted CBCT images. A second study goal examines whether the motion correction derived from a respiration-correlated CT (RCCT) at simulation yields similar tumor localization accuracy at treatment. In an IRB-approved study, 19 lung cancer patients (22 tumors) received a RCCT at simulation, and on one treatment day received a RCCT, a respiratory-gated CBCT at end expiration, and a 1-min CBCT. A respiration monitor of abdominal displacement was used during all scans. In addition to a CBCT reconstruction without motion correction, the motion correction method was applied to the same 1-min scan. Projection images were sorted into ten bins based on abdominal displacement, and each bin was reconstructed to produce ten intermediate CBCT images. Each intermediate CBCT was deformed to the end expiration state using a motion model derived from RCCT. The deformed intermediate CBCT images were then added to produce a final RMC-CBCT. In order to evaluate the second study goal, the CBCT was corrected in two ways, one using a model derived from the RCCT at simulation [RMC-CBCT(sim)], the other from the RCCT at treatment [RMC-CBCT(tx)]. Image evaluation compared uncorrected CBCT, RMC-CBCT(sim), and RMC-CBCT(tx). The gated CBCT at end expiration served as the criterion standard for comparison. Using automatic rigid image registration, each CBCT was registered twice to the gated CBCT, first aligned to spine, second to tumor in lung. Localization discrepancy was defined as the difference between tumor and spine registration. Agreement in tumor localization with the gated CBCT was further evaluated by calculating a normalized cross correlation (NCC) of pixel intensities within a volume-of-interest enclosing the tumor in lung. Tumor localization discrepancy was reduced with RMC-CBCT(tx) in 17 out of 22 cases relative to no correction. If one considers cases in which tumor motion is 5 mm or more in the RCCT, tumor localization discrepancy is reduced with RMC-CBCT(tx) in 14 out of 17 cases (p = 0.04), and with RMC-CBCT(sim) in 13 out of 17 cases (p = 0.05). Differences in localization discrepancy between correction models [RMC-CBCT(sim) vs RMC-CBCT(tx)] were less than 2 mm. In 21 out of 22 cases, improvement in NCC was higher with RMC-CBCT(tx) relative to no correction (p < 0.0001). Differences in NCC between RMC-CBCT(sim) and RMC-CBCT(tx) were small. Motion-corrected CBCT improves lung tumor localization accuracy and reduces motion artifacts in nearly all cases. Motion correction at end expiration using RCCT acquired at simulation yields similar results to that using a RCCT on the treatment day (2-3 weeks after simulation).
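The motion-corrected reconstruction described above (bin the projections by respiratory displacement, reconstruct each bin, deform each intermediate image to end expiration, and sum) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation; `reconstruct` and `deform_to_ee` are hypothetical callables standing in for the study's reconstruction and deformation tools.

```python
import numpy as np

def rmc_cbct(projections, displacement, reconstruct, deform_to_ee, n_bins=10):
    """Respiratory motion-corrected CBCT from a single 1-min scan.

    projections  : sequence of 2D projection images
    displacement : abdominal displacement recorded for each projection
    reconstruct  : callable mapping a projection subset to a 3D image
    deform_to_ee : callable warping the image of bin b to end expiration
                   using the RCCT-derived motion model (placeholder)
    """
    # Sort projections into bins based on abdominal displacement.
    edges = np.quantile(displacement, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(displacement, edges[1:-1]), 0, n_bins - 1)

    rmc = None
    for b in range(n_bins):
        subset = [p for p, k in zip(projections, bins) if k == b]
        intermediate = reconstruct(subset)      # one intermediate CBCT per bin
        warped = deform_to_ee(intermediate, b)  # deform to end expiration
        rmc = warped if rmc is None else rmc + warped
    return rmc  # sum of the deformed intermediate images
```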
Research on numerical simulation technology about regional important pollutant diffusion of haze
NASA Astrophysics Data System (ADS)
Du, Boying; Ma, Yunfeng; Li, Qiangqiang; Wang, Qi; Hu, Qiongqiong; Bian, Yushan
2018-02-01
In order to analyze the formation of haze in Shenyang and the factors that affect pollutant diffusion, the simulation experiments in this paper are based on a coupled WRF/CALPUFF numerical model. PM10 over Shenyang City was simulated for the regional haze episode of March 1-8, with more than 120 surveyed enterprises providing the point emission sources for the experiment. The simulation results were compared against data from 11 air quality monitoring points, and the contribution rate of each typical enterprise to air quality was analyzed to verify the correctness of the simulation; the model was then used to establish a prediction model.
ERIC Educational Resources Information Center
Chen, Yu-Lung; Pan, Pei-Rong; Sung, Yao-Ting; Chang, Kuo-En
2013-01-01
Computer simulation has significant potential as a supplementary tool for effective conceptual-change learning based on the integration of technology and appropriate instructional strategies. This study elucidates misconceptions in learning on diodes and constructs a conceptual-change learning system that incorporates…
Some Fundamental Issues of Mathematical Simulation in Biology
NASA Astrophysics Data System (ADS)
Razzhevaikin, V. N.
2018-02-01
Several directions of simulation in biology that lead to original formulations of mathematical problems are reviewed. Two of them are discussed in detail: the correct solvability of first-order linear equations with unbounded coefficients and the construction of a reaction-diffusion equation with nonlinear diffusion for a model of genetic wave propagation.
NASA Astrophysics Data System (ADS)
Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2016-03-01
An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype was mounted using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, a mirrored prism, a movable mirror, a wavefront sensor and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel through the beam splitter to the optical system; some rays fall on the CCD camera while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations were constructed using the optical design software Zemax. The computer-aided HS images for each case were acquired and processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to ensure that the optical system is constructed precisely enough to match the simulated system. Afterwards, a simulated version of retinal images is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered according to the new parameters. Optical images of how that eye would see with or without correction of certain aberrations are generated in order to determine which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
A wall interference assessment/correction system
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.
1992-01-01
A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid-wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn to estimate wall interference at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered, and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the Wall Signature WIAC method. The performance of the WIAC method may then be evaluated by comparing the corrected parameters with those for the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation has been initiated to validate the WIAC methods developed in the project. In the present reporting period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.
An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.
2016-11-01
The transported probability density function (TPDF) method offers generality across all combustion regimes, which makes it attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparing the results against both direct numerical simulation (DNS) and the conventional constant mechanical-to-scalar mixing rate model. This work was supported by NSFC grants 51476087 and 91441202.
Moroz, Tracy; Hapuarachchi, Tharindi; Bainbridge, Alan; Price, David; Cady, Ernest; Baer, Ether; Broad, Kevin; Ezzati, Mojgan; Thomas, David; Golay, Xavier; Robertson, Nicola J; Cooper, Chris E; Tachtsidis, Ilias
2013-01-01
We have developed a computational model to simulate hypoxia-ischaemia (HI) in the neonatal piglet brain. It has been extended from a previous model by adding the simulation of carotid artery occlusion and including pH changes in the cytoplasm. Here, simulations from the model are compared with near-infrared spectroscopy (NIRS) and phosphorus magnetic resonance spectroscopy (MRS) measurements from two piglets during HI and short-term recovery. One of these piglets showed incomplete recovery after HI, and this is modelled by considering some of the cells to be dead. This is consistent with the results from MRS and the redox state of cytochrome-c-oxidase as measured by NIRS. However, the simulations do not match the NIRS haemoglobin measurements. The model therefore predicts that further physiological changes must also be taking place if the hypothesis of dead cells is correct.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Calculation of Coincidence Summing Correction Factors for an HPGe detector using GEANT4.
Giubrone, G; Ortiz, J; Gallardo, S; Martorell, S; Bas, M C
2016-07-01
The aim of this paper was to calculate the True Coincidence Summing Correction Factors (TSCFs) for an HPGe coaxial detector in order to correct the summing effect caused by the presence of Y-88 and Co-60 in a multigamma source used to obtain a calibration efficiency curve. Results were obtained for three volumetric sources using the Monte Carlo toolkit GEANT4. The first part of this paper deals with modeling the detector in order to obtain a simulated full energy peak efficiency curve. A quantitative comparison between the measured and simulated values was made across the entire energy range under study. The True Coincidence Summing Correction Factors were calculated for Y-88 and Co-60 using the full peak efficiencies obtained with GEANT4. The methodology was subsequently applied to Cs-134, which presents a complex decay scheme.
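For intuition about the correction being computed, the textbook result for an idealized two-step cascade (100% branching, no angular correlation) is that summing-out attenuates a full-energy peak by the probability of detecting the coincident gamma anywhere in the spectrum, so the correction involves only the total efficiency of the coincident line. The GEANT4-based factors in the paper generalize this to real decay schemes. A minimal sketch under that idealized assumption:

```python
def summing_out_correction(total_eff_coincident):
    """Multiplicative correction for a full-energy peak whose gamma is
    emitted in prompt coincidence with one other gamma.

    total_eff_coincident : detector TOTAL efficiency for the coincident
                           gamma (peak plus Compton continuum).
    Valid only for an idealized two-gamma cascade.
    """
    return 1.0 / (1.0 - total_eff_coincident)

# Example: a coincident gamma detected with 5% total efficiency means
# the measured peak area must be scaled up by about 5.3%.
print(summing_out_correction(0.05))  # 1.0526...
```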
Le Navéaux, Franck; Larson, A Noelle; Labelle, Hubert; Wang, Xiaoyu; Aubin, Carl-Éric
2016-11-01
Optimal implant densities and configurations for thoracic spine instrumentation to treat adolescent idiopathic scoliosis remain unknown. The objective was to computationally assess the biomechanical effects of implant distribution on 3D curve correction and bone-implant forces. 3D patient-specific biomechanical spine models based on a multibody dynamic approach were created for 9 Lenke 1 patients who underwent posterior instrumentation (main thoracic Cobb: 43°-70°). For each case, a factorial design of experiments was used to generate 128 virtual implant configurations representative of existing implant patterns used in clinical practice. All simulation inputs except the implant configuration were identical across surgical scenarios. Simulation of the 128 implant configuration scenarios (mean implant density = 1.32, range: 0.73-2) revealed differences of 2° to 10° in Cobb angle correction, 2° to 7° in thoracic kyphosis and 2° to 7° in apical vertebral rotation. The use of more implants, at the concave side only, was associated with higher Cobb angle correction (r = -0.41 to -0.90). Increased implant density was associated with higher apical vertebral rotation correction for seven cases (r = -0.20 to -0.48). It was also associated with higher bone-screw forces (r = 0.22 to 0.64), with an average difference between the least and most constrained instrumentation constructs of 107 N per implant at the end of simulated instrumentation. Low-density constructs, with implants mainly placed on the concave side, resulted in similar simulated curve correction as the higher-density patterns. Increasing the number of implants allows only limited improvement of 3D correction and overconstrains the instrumentation construct, resulting in increased forces on the implants.
Chao, Tian-Jy; Kim, Younghun
2015-02-03
Automatically translating a building architecture file format (Industry Foundation Classes) to a simulation file may, in one aspect, extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data stored in them. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to the correct geometric values needed for the target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may then be generated.
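A minimal sketch of the translation flow described in this abstract, with hypothetical `mvd_template` and `transform_fns` mappings standing in for the Model View Definition template and the translation and transformation functions:

```python
from dataclasses import dataclass, field

@dataclass
class InteropObject:
    """Interoperability data object holding data extracted from IFC."""
    entity: str
    attributes: dict = field(default_factory=dict)

def translate(ifc_entities, mvd_template, transform_fns):
    """Translate extracted IFC entities into simulation-tool records.

    mvd_template  : dict mapping an IFC entity type to a target type
    transform_fns : dict mapping a target type to a function converting
                    extracted attributes into the geometry the target
                    simulation tool expects (hypothetical interfaces)
    """
    interop = [InteropObject(e["type"], e["attrs"]) for e in ifc_entities]
    records = []
    for obj in interop:
        target = mvd_template.get(obj.entity)
        if target is None:  # entity not covered by the Model View Definition
            continue
        records.append(transform_fns[target](obj.attributes))
    return records
```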
NASA Astrophysics Data System (ADS)
Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.
2016-05-01
The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
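The core of the volcano-scan procedure (Beer-Lambert scaling of an empirical transmission spectrum until the CO2 band near 2000 nm vanishes, followed by division) can be sketched as below; the band and continuum windows and the bisection bracket are illustrative assumptions, not the CRISM/OMEGA pipeline values.

```python
import numpy as np

def volcano_scan_correct(wl_nm, spectrum, transmission,
                         band=(1980.0, 2030.0), cont=(1850.0, 2150.0)):
    """Divide out a transmission spectrum raised to a Beer-Lambert
    exponent s, with s chosen so the CO2 band near 2000 nm disappears."""
    in_band = (wl_nm > band[0]) & (wl_nm < band[1])
    in_cont = (wl_nm > cont[0]) & (wl_nm < cont[1]) & ~in_band

    def band_depth(s):
        corrected = spectrum / transmission**s
        # positive while the band is still absorbing relative to continuum
        return np.mean(corrected[in_cont]) - np.mean(corrected[in_band])

    lo, hi = 0.0, 5.0              # bracket for the scaling exponent
    for _ in range(60):            # bisection: depth decreases as s grows
        mid = 0.5 * (lo + hi)
        if band_depth(mid) > 0.0:  # band still too deep, scale more
            lo = mid
        else:
            hi = mid
    return spectrum / transmission**(0.5 * (lo + hi))
```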
NASA Astrophysics Data System (ADS)
Hauptmann, S.; Bülk, M.; Schön, L.; Erbslöh, S.; Boorsma, K.; Grasso, F.; Kühn, M.; Cheng, P. W.
2014-12-01
Design load simulations for wind turbines are traditionally based on the blade-element-momentum theory (BEM). The BEM approach is derived from a simplified representation of the rotor aerodynamics and several semi-empirical correction models. A more sophisticated approach to account for the complex flow phenomena on wind turbine rotors can be found in the lifting-line free vortex wake method. This approach is based on a more physics-based representation, especially for global flow effects. This theory relies on empirical correction models only for the local flow effects, which are associated with the boundary layer of the rotor blades. In this paper the lifting-line free vortex wake method is compared to a state-of-the-art BEM formulation with regard to aerodynamic and aeroelastic load simulations of the 5MW UpWind reference wind turbine. Different aerodynamic load situations as well as standardised design load cases that are sensitive to the aeroelastic modelling are evaluated in detail. This benchmark makes use of the AeroModule developed by ECN, which has been coupled to the multibody simulation code SIMPACK.
NASA Astrophysics Data System (ADS)
Diokhane, Aminata Mbow; Jenkins, Gregory S.; Manga, Noel; Drame, Mamadou S.; Mbodji, Boubacar
2016-04-01
The Sahara desert transports large quantities of dust over the Sahelian region during the Northern Hemisphere winter and spring seasons (December-April). In episodic events, high dust concentrations are found at the surface, negatively impacting respiratory health. Bacterial meningitis in particular is known to affect populations living in the Sahelian zone, otherwise known as the meningitis belt. During the winter and spring of 2012, suspected meningitis cases (SMCs) were three times higher than in 2013. We show higher surface particulate matter concentrations at Dakar, Senegal and elevated atmospheric dust loading over Senegal for the period 1 January-31 May during 2012 relative to 2013. We analyze simulated particulate matter over Senegal from the Weather Research and Forecasting (WRF) model during 2012 and 2013. The results show higher simulated dust concentrations during the winter season of 2012 for Senegal. The WRF model correctly captures the large dust events from 1 January-31 March but shows less skill during April and May for simulated dust concentrations. The results also show that the boundary conditions are the key factor for correctly simulating large dust events, while initial conditions are less important.
NASA Astrophysics Data System (ADS)
Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.
2014-02-01
The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future PET scanners.
2010-01-01
We model the response of nanoscale Ag prolate spheroids to an external uniform static electric field using simulations based on the discrete dipole approximation, in which the spheroid is represented as a collection of polarizable subunits. We compare the results of simulations that employ subunit polarizabilities derived from the Clausius–Mossotti relation with those of simulations that employ polarizabilities that include a local environmental correction for subunits near the spheroid’s surface [Rahmani et al. Opt Lett 27: 2118 (2002)]. The simulations that employ corrected polarizabilities give predictions in very good agreement with exact results obtained by solving Laplace’s equation. In contrast, simulations that employ uncorrected Clausius–Mossotti polarizabilities substantially underestimate the extent of the electric field “hot spot” near the spheroid’s sharp tip, and give predictions for the field enhancement factor near the tip that are 30 to 50% too small. PMID:20672062
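For reference, the uncorrected per-dipole polarizability used in such simulations is the standard Clausius-Mossotti prescription for a cubic lattice of spacing d (CGS units); the surface-corrected scheme of Rahmani et al. replaces this value for subunits near the boundary. A minimal sketch, with an illustrative silver-like permittivity:

```python
import numpy as np

def clausius_mossotti_polarizability(eps, d):
    """Per-dipole Clausius-Mossotti polarizability (CGS) for a cubic
    DDA lattice with spacing d and complex relative permittivity eps."""
    return (3.0 * d**3 / (4.0 * np.pi)) * (eps - 1.0) / (eps + 2.0)

# Example: a 1 nm lattice spacing and a permittivity in the range
# reported for silver at optical frequencies (illustrative value).
alpha = clausius_mossotti_polarizability(-15.0 + 1.0j, 1.0)
print(alpha)
```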
Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V
2012-07-01
The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as a molecular dynamics (MD) simulation, can involve the use of a variety of different tools for: correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit solvent MD.
Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas
2014-01-01
A two-layer model was developed in our earlier studies to estimate the clear sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, or large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.
Pagès, Loïc; Picon-Cochard, Catherine
2014-10-01
Our objective was to calibrate a model of the root system architecture on several Poaceae species and to assess its ability to simulate several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models and showed species effects on the parameter values. The simulations of integrated traits were reasonably accurate for SRL and good for root depth and root mass at the two dates. We obtained some systematic discrepancies that were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions on a large set of Poaceae species without global fitting, we consider it a suitable tool for linking root traits at different organisation levels.
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
NASA Astrophysics Data System (ADS)
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method based on recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a single CdTe PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevailing (>50 keV).
2D Quantum Mechanical Study of Nanoscale MOSFETs
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, B.; Kwak, Dochan (Technical Monitor)
2000-01-01
With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, the behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. We present the results of our simulations of MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compare them to those of classical and quantum-corrected models. The important feature of the quantum model is a smaller slope of the Id-Vg curve and consequently a higher threshold voltage. Surprisingly, the self-consistent potential profile shows a lower injection barrier in the channel in the quantum case. These results are qualitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and subthreshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device without a significant trade-off in on-current. This should be a device design consideration.
Correction of aeroheating-induced intensity nonuniformity in infrared images
NASA Astrophysics Data System (ADS)
Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu
2016-05-01
Aeroheating-induced intensity nonuniformity severely degrades the effective performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
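As a simplified stand-in for the TV-regularized estimate used in the paper, a smooth bivariate-polynomial bias surface can be fit by ordinary least squares and subtracted; the sketch below makes the additive, smoothly varying bias assumption explicit (the polynomial order is an illustrative choice, and the paper's actual estimator differs).

```python
import numpy as np

def polynomial_bias(image, order=2):
    """Least-squares fit of a bivariate polynomial to an image; the
    smooth fit approximates the additive low-frequency intensity bias."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    # design matrix with all monomials x**i * y**j for i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

# corrected = image - polynomial_bias(image)
```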
NASA Astrophysics Data System (ADS)
Sangelantoni, Lorenzo; Russo, Aniello; Gennaretti, Fabio
2018-02-01
Quantile mapping (QM) represents a common post-processing technique used to connect climate simulations to impact studies at different spatial scales. Depending on the simulation-observation spatial scale mismatch, QM can be used for two different applications. The first application uses only the bias correction component, establishing transfer functions between observations and simulations at similar spatial scales. The second application includes a statistical downscaling component when point-scale observations are considered. However, knowledge of alterations to the climate change signal (CCS) resulting from these two applications is limited. This study investigates QM impacts on the original temperature and precipitation CCSs when applied according to a bias correction only (BC-only) and a bias correction plus downscaling (BC + DS) application over reference stations in Central Italy. The BC-only application is used to adjust regional climate model (RCM) simulations having the same resolution as the observation grid. The QM BC + DS application adjusts the same simulations to point-wise observations. QM applications alter the CCS mainly for temperature. The BC-only application produces a CCS of the median ∼1 °C lower than the original (∼4.5 °C). The BC + DS application produces a CCS closer to the original, except for the summer 95th percentile, where a substantial amplification of the original CCS results. The impacts of the two applications are connected to the ratio between the observed and simulated standard deviation (STD) of the calibration period. For precipitation, the original CCS is essentially preserved in both applications. Yet the calibration period STD ratio cannot predict the QM impact on the precipitation CCS when the simulated STD and mean are similarly misrepresented.
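A minimal sketch of empirical quantile mapping as described above: a transfer function is built between simulated and observed quantiles over a calibration period and then applied to the series being corrected. The quantile grid and the constant extrapolation outside it are common simple choices, not this paper's exact setup.

```python
import numpy as np

def quantile_map(obs_cal, sim_cal, sim_to_correct, n_q=99):
    """Empirical quantile mapping.

    obs_cal        : observations over the calibration period
    sim_cal        : simulation over the calibration period
    sim_to_correct : simulated series to be bias-corrected/downscaled
    """
    probs = np.linspace(0.01, 0.99, n_q)
    sim_q = np.quantile(sim_cal, probs)   # simulated quantiles
    obs_q = np.quantile(obs_cal, probs)   # observed quantiles
    # Map each simulated value through the quantile transfer function;
    # values outside the calibration range are clamped to the outermost
    # quantiles (constant extrapolation).
    return np.interp(sim_to_correct, sim_q, obs_q)
```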
Analysis and modeling of leakage current sensor under pulsating direct current
NASA Astrophysics Data System (ADS)
Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo
2017-05-01
In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary side current and excitation current. The transformation process of the current sensor is illustrated in detail and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.
Impact of chlorophyll bias on the tropical Pacific mean climate in an earth system model
NASA Astrophysics Data System (ADS)
Lim, Hyung-Gyu; Park, Jong-Yeon; Kug, Jong-Seong
2017-12-01
Climate modeling groups nowadays develop earth system models (ESMs) by incorporating biogeochemical processes in their climate models. The ESMs, however, often show substantial bias in simulated marine biogeochemistry which can potentially introduce an undesirable bias in physical ocean fields through biogeophysical interactions. This study examines how and how much the chlorophyll bias in a state-of-the-art ESM affects the mean and seasonal cycle of tropical Pacific sea-surface temperature (SST). The ESM used in the present study shows a sizeable positive bias in the simulated tropical chlorophyll. We found that the correction of the chlorophyll bias can reduce the ESM's intrinsic cold SST mean bias in the equatorial Pacific. The biologically-induced cold SST bias is strongly affected by seasonally-dependent air-sea coupling strength. In addition, the correction of chlorophyll bias can improve the annual cycle of SST by up to 25%. This result suggests a possible modeling approach in understanding the two-way interactions between physical and chlorophyll biases by biogeophysical effects.
Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.
Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold
2002-02-01
The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction, a mechanism which implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor system shows nonlinear behavior.
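An illustrative simulation of such a model is easy to write down; the saturating tanh correction term and the parameter values below are assumptions made for the sketch, not the paper's exact equations.

```python
import numpy as np

def simulate_asynchronies(n=1000, alpha=0.8, beta=5.0, sigma=0.01, seed=0):
    """Iterate a stochastic map for the tap-metronome asynchrony e with
    a saturating (nonlinear) error-correction term plus timing noise."""
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e[0] = 0.0
    for k in range(n - 1):
        correction = alpha * np.tanh(beta * e[k]) / beta  # nonlinear gain
        e[k + 1] = e[k] - correction + sigma * rng.standard_normal()
    return e

print(simulate_asynchronies()[:5])
```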
Chiral symmetry constraints on resonant amplitudes
NASA Astrophysics Data System (ADS)
Bruns, Peter C.; Mai, Maxim
2018-03-01
We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.
Coordinate Conversion Technique for OTH Backscatter Radar
1977-05-01
obliquity of the earth's equator (≈23.0°), A is the mean longitude of the sun measured in the ecliptic counterclockwise from the first point of... MODEL FOR F2-LAYER CORRECTION FACTORS - VERTICAL IONOGRAM; 11. MODEL FOR F2-LAYER CORRECTION FACTORS - OBLIQUE IONOGRAM; 12. ELEMENTS OF COMMON BLOCK... simulation in (1) applied to a given oblique ionogram, generate range gradient factors to apply to foF2 and M(3000)F2 to force agreement; (3) from the
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deteriorated image quality and loss of quantitative accuracy owing to the enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (Kovar) contributed the most scatter events, while the scatter contribution of the lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments in single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved in multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment was chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
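The posterior update at the heart of the sequential procedure in the two Sidik abstracts above is plain Bayes' rule over the candidate models; a minimal sketch:

```python
import numpy as np

def update_model_posteriors(priors, likelihoods):
    """Bayes' rule over a finite set of candidate linear models: combine
    the prior probability that each model is correct with the likelihood
    of the newest observation under each model."""
    post = np.asarray(priors, dtype=float) * np.asarray(likelihoods, dtype=float)
    return post / post.sum()

# Example: three candidate models, a flat prior, and one observation
# that is most probable under the second model. Sampling would stop
# once the largest posterior exceeds a chosen threshold.
print(update_model_posteriors([1/3, 1/3, 1/3], [0.2, 0.7, 0.1]))
```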
NASA Astrophysics Data System (ADS)
Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.
2012-12-01
Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM outputs through bias correction using novel techniques; for example, the Climate Impact Research group at the Potsdam Institute developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HadGEM2-ES model output (daily at 0.5 x 0.5 degree) to the 3' x 3' (longitude x latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation, 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011), 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data, and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out to assess the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially the monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide an important complement to the statistical approaches tested.
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulations of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems but still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
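For comparison with the declarative ASP approach, the brute-force attractor search for a synchronous Boolean network takes a few lines of imperative code: iterate the update functions from an initial state until a state repeats, and return the cycle. The two-gene update rules below are illustrative assumptions.

```python
def find_attractor(update_fns, state):
    """Attractor reached from `state` in a synchronous Boolean network:
    iterate until a state repeats; the cycle between the two occurrences
    of that state is the attractor."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = tuple(f(state) for f in update_fns)
    return trajectory[seen[state]:]

# Two genes repressing each other (illustrative update rules).
fns = (lambda s: 1 - s[1], lambda s: 1 - s[0])
print(find_attractor(fns, (1, 0)))  # [(1, 0)]         : fixed point
print(find_attractor(fns, (1, 1)))  # [(1, 1), (0, 0)] : 2-cycle
```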
A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Yuska, J. A.
1972-01-01
The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.
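A multiple linear regression prediction model of the kind described can be fit by ordinary least squares; the regressors and data below are synthetic and purely illustrative.

```python
import numpy as np

def fit_prediction_model(X, y):
    """Ordinary least-squares fit of y ~ b0 + b1*x1 + ... + bk*xk."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic example: predict a loading coefficient from two regressors
# (say, crossflow velocity ratio and corrected tip speed).
rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 2))
y = 0.3 + 1.2 * X[:, 0] - 0.7 * X[:, 1] + 0.01 * rng.standard_normal(50)
print(fit_prediction_model(X, y))  # approx [0.3, 1.2, -0.7]
```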
2014-01-01
Background: Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results: We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions: An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
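The long-range term being added is the screened Debye-Hückel interaction; below is a minimal sketch of its pairwise form with an illustrative inverse Debye length (the SDA implementation applies such a correction beyond the gridded region, and its exact interface differs).

```python
import numpy as np

def debye_huckel_energy(q1, q2, r, kappa, eps_r=78.5):
    """Screened electrostatic energy (kJ/mol) between point charges q1,
    q2 (units of e) at distance r (nm), with inverse Debye length kappa
    (1/nm) and relative permittivity eps_r."""
    ke = 138.935  # e^2 / (4 pi eps0) in kJ mol^-1 nm
    return ke * q1 * q2 * np.exp(-kappa * r) / (eps_r * r)

# Two opposite unit charges 2 nm apart; kappa ~ 1 nm^-1 corresponds
# roughly to ~100 mM ionic strength at room temperature (illustrative).
print(debye_huckel_energy(1.0, -1.0, 2.0, 1.0))
```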
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
2017-10-19
Power systems are vulnerable to extreme contingencies (like the outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages occur within minutes of a major contingency and may not be captured using dynamic simulation of the power system alone. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes is to mimic operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting corrective actions that utility operators can take during a major contingency, and thus automates the hybrid dynamic-steady state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.
NASA Astrophysics Data System (ADS)
Louarn, K.; Claveau, Y.; Hapiuk, D.; Fontaine, C.; Arnoult, A.; Taliercio, T.; Licitra, C.; Piquemal, F.; Bounouh, A.; Cavassilas, N.; Almuneau, G.
2017-09-01
The aim of this study is to investigate the impact of multiband corrections on the current density in GaAs tunnel junctions (TJs) calculated with a refined yet simple semi-classical interband tunneling model (SCITM). The non-parabolicity of the relevant bands and spin-orbit effects are taken into account by using a recently revisited SCITM available in the literature. The model is confronted with experimental results from a series of molecular beam epitaxy grown GaAs TJs and with numerical results obtained with a full quantum model based on the non-equilibrium Green's function formalism and a 6-band k.p Hamiltonian. We emphasize the importance of considering the non-parabolicity of the conduction band through two different measurements of the energy-dependent electron effective mass in N-doped GaAs. We also propose an innovative method to compute the non-uniform electric field in the TJ for the SCITM simulations, which is of prime importance for a successful operation of the model. We demonstrate that, when the multiband corrections and this new computation of the non-uniform electric field are considered, the SCITM succeeds in predicting the electrical characteristics of GaAs TJs and is in agreement with the quantum model. Beyond the fundamental study of the tunneling phenomenon in TJs, the main benefit of this SCITM is that it can easily be embedded into drift-diffusion software, the most widely used class of simulation tools for electronic and opto-electronic devices such as multi-junction solar cells, tunnel field-effect transistors, and vertical-cavity surface-emitting lasers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.
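The indicator-transform step that precedes sequential indicator simulation is easy to sketch: threshold a log into classes and estimate an experimental semivariogram of the indicators. The synthetic data and threshold below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
depth = np.arange(0.0, 100.0, 1.0)               # regularly spaced samples
log = np.cumsum(rng.normal(size=depth.size))     # synthetic geophysical signal

indicator = (log > np.median(log)).astype(float) # indicator transform

def semivariogram(z, max_lag):
    # Experimental semivariogram gamma(h) for regularly spaced data.
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

print(semivariogram(indicator, max_lag=10))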
NASA Astrophysics Data System (ADS)
Chadburn, Sarah E.; Krinner, Gerhard; Porada, Philipp; Bartsch, Annett; Beer, Christian; Belelli Marchesini, Luca; Boike, Julia; Ekici, Altug; Elberling, Bo; Friborg, Thomas; Hugelius, Gustaf; Johansson, Margareta; Kuhry, Peter; Kutzbach, Lars; Langer, Moritz; Lund, Magnus; Parmentier, Frans-Jan W.; Peng, Shushi; Van Huissteden, Ko; Wang, Tao; Westermann, Sebastian; Zhu, Dan; Burke, Eleanor J.
2017-11-01
It is important that climate models can accurately simulate the terrestrial carbon cycle in the Arctic due to the large and potentially labile carbon stocks found in permafrost-affected environments, which can lead to a positive climate feedback, along with the possibility of future carbon sinks from northward expansion of vegetation under climate warming. Here we evaluate the simulation of tundra carbon stocks and fluxes in three land surface schemes that each form part of major Earth system models (JSBACH, Germany; JULES, UK; ORCHIDEE, France). We use a site-level approach in which comprehensive, high-frequency datasets allow us to disentangle the importance of different processes. The models have improved physical permafrost processes and there is a reasonable correspondence between the simulated and measured physical variables, including soil temperature, soil moisture and snow. We show that if the models simulate the correct leaf area index (LAI), the standard C3 photosynthesis schemes produce the correct order of magnitude of carbon fluxes. Therefore, simulating the correct LAI is one of the first priorities. LAI depends quite strongly on climatic variables alone, as we see by the fact that the dynamic vegetation model can simulate most of the differences in LAI between sites, based almost entirely on climate inputs. However, we also identify an influence from nutrient limitation as the LAI becomes too large at some of the more nutrient-limited sites. We conclude that including moss as well as vascular plants is of primary importance to the carbon budget, as moss contributes a large fraction to the seasonal CO2 flux in nutrient-limited conditions. Moss photosynthetic activity can be strongly influenced by its moisture content, and the carbon uptake can be significantly different from vascular plants with a similar LAI. The soil carbon stocks depend strongly on the rate of input of carbon from the vegetation to the soil, and our analysis suggests that an improved simulation of photosynthesis would also lead to an improved simulation of soil carbon stocks. However, the stocks are also influenced by soil carbon burial (e.g. through cryoturbation) and the rate of heterotrophic respiration, which depends on the soil physical state. More detailed below-ground measurements are needed to fully evaluate biological and physical soil processes. Furthermore, even if these processes are well modelled, the soil carbon profiles cannot resemble peat layers as peat accumulation processes are not represented in the models. Thus, we identify three priority areas for model development: (1) dynamic vegetation including (a) climate and (b) nutrient limitation effects; (2) adding moss as a plant functional type; and (3) an improved vertical profile of soil carbon including peat processes.
Time Advice and Learning Questions in Computer Simulations
ERIC Educational Resources Information Center
Rey, Gunter Daniel
2011-01-01
Students (N = 101) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without time advice) x 3 (with learning questions and corrective feedback, with…
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method that can correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
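A software reference model of the (7,4) Hamming encode/correct function that such a circuit implements in hardware is short enough to show in full; the systematic generator below is one conventional choice, not necessarily the paper's.

import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # 4x7 generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])    # 3x7 parity-check matrix

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(word7):
    s = (H @ word7) % 2                       # syndrome
    if s.any():                               # nonzero syndrome: locate the bit
        err = next(i for i in range(7) if np.array_equal(H[:, i], s))
        word7 = word7.copy()
        word7[err] ^= 1                       # flip the erroneous bit
    return word7

cw = encode([1, 0, 1, 1])
rx = cw.copy()
rx[2] ^= 1                                    # inject a single-bit error
assert np.array_equal(correct(rx), cw)
print("corrected:", correct(rx))

The paper's grouping method extends this to 16- and 32-bit words, plausibly by protecting sub-words with independent codes so that one error per group remains correctable.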
Thread scheduling for GPU-based OPC simulation on multi-thread
NASA Astrophysics Data System (ADS)
Lee, Heejun; Kim, Sangwook; Hong, Jisuk; Lee, Sooryong; Han, Hwansoo
2018-03-01
As semiconductor product development based on shrinkage continues, the accuracy and difficulty required of model-based optical proximity correction (MBOPC) are increasing. OPC simulation time, the most time-consuming part of MBOPC, is rapidly increasing due to high pattern density in a layout and complex OPC models. To reduce OPC simulation time, we apply graphics processing units (GPUs) to MBOPC, because the OPC process lends itself well to parallel execution. We address some issues that typically arise during GPU-based OPC simulation in a multi-threaded system, such as out-of-memory conditions and GPU idle time. To overcome these problems, we propose a thread scheduling method that manages OPC jobs in multiple threads in such a way that simulation jobs from multiple threads are executed alternately on the GPU while correction jobs are executed at the same time on individual CPU cores. We observed that peak GPU memory usage decreases by up to 35%, and MBOPC runtime also decreases by 4%. In cases where out-of-memory issues occurred in a multi-threaded environment, the thread scheduler improved MBOPC runtime by up to 23%.
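A conceptual sketch of such a scheduler (not the authors' implementation): worker threads run correction on CPU cores while their simulation requests are funneled through a single queue, so only one simulation occupies GPU memory at a time.

import threading, queue, time

gpu_jobs = queue.Queue()

def gpu_server():
    # Serializes GPU work: one simulation in flight keeps peak memory bounded.
    while True:
        tile, done = gpu_jobs.get()
        if tile is None:
            break
        time.sleep(0.01)             # stand-in for a GPU OPC simulation
        done.set()                   # wake the requesting CPU thread

def cpu_worker(tiles):
    for tile in tiles:
        done = threading.Event()
        gpu_jobs.put((tile, done))   # hand the simulation job to the GPU
        done.wait()
        # ...run the correction step for `tile` on this CPU core here.

server = threading.Thread(target=gpu_server)
server.start()
workers = [threading.Thread(target=cpu_worker,
                            args=([f"tile-{i}-{j}" for j in range(3)],))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
gpu_jobs.put((None, None))           # shut the GPU server down
server.join()
print("all tiles processed")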
NASA Astrophysics Data System (ADS)
Tang, Xian-Zhu; McDevitt, C. J.; Guo, Zehua; Berk, H. L.
2014-03-01
Inertial confinement fusion requires an imploded target in which a central hot spot is surrounded by a cold and dense pusher. The hot spot/pusher interface can take a complicated shape in three dimensions due to hydrodynamic mix. It is also a transition region where the Knudsen and inverse Knudsen layer effects can significantly modify the fusion reactivity in comparison with the commonly used value evaluated with background Maxwellians. Here, we describe a hybrid model that couples the kinetic correction of fusion reactivity to global hydrodynamic implosion simulations. The key ingredient is a non-perturbative treatment of the tail ions in the interface region, where the Gamow ion Knudsen number approaches or surpasses order unity. The accuracy of the coupling scheme is controlled by precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space.
NASA Astrophysics Data System (ADS)
Macedo, M.; Panday, P. K.; Coe, M. T.; Lefebvre, P.; Castello, L.
2015-12-01
The Amazonian floodplains and wetlands cover one fifth of the basin and are highly productive, promoting diverse biological communities and sustaining human populations through fisheries. Seasonal inundation of the floodplains fluctuates in response to drought or extreme rainfall, as observed in the recent droughts of 2005 and 2010, when river levels dropped to among the lowest recorded. We model and evaluate the historical (1940-2010) and projected future (2010-2100) impacts of droughts and floods on floodplain hydrology and inundation dynamics in the central Amazon using the Integrated Biosphere Simulator (IBIS) and the Terrestrial Hydrology Model with Biogeochemistry (THMB). Simulated discharge correlates well with observations for tributaries originating in Brazil but is underestimated by more than 30% for basins draining the non-Brazilian Amazon (Solimões, Japurá, Madeira, and Negro). A volume bias correction derived from the simulated and observed runoff was used to correct the input precipitation across the major tributaries of the Amazon basin that drain the Andes. Simulated hydrological parameters (discharge, inundated area and river height) using corrected precipitation correlate strongly with field-measured discharge at gauging stations, surface water extent data (Global Inundation Extent from Multi-Satellites (GIEMS) and NASA Earth System Data Records (ESDRs) for inundation), and satellite radar altimetry (TOPEX/POSEIDON altimeter data for 1992-1998 and ENVISAT data for 2002-2010). We also used an ensemble of model outputs participating in the IPCC AR5 to drive two sets of simulations, with and without carbon dioxide fertilization, for the 2006-2100 period, and evaluated the potential scale and variability of future changes in discharge and inundation dynamics due to the influences of climate change and the vegetation response to carbon dioxide fertilization. Preliminary modeled results for future scenarios using Representative Concentration Pathway (RCP) 4.5 indicate decreases in projected discharge and extent of inundated area on the mainstem Amazon by the late 21st century owing to the influence of future climate change alone.
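The volume bias-correction step can be sketched in a few lines (an illustrative form; the study's exact procedure may differ): scale the input precipitation over a sub-basin by the ratio of observed to simulated runoff volume over the calibration record. All numbers are invented.

import numpy as np

def bias_correct_precip(precip, q_obs, q_sim):
    # Volume ratio over the record rescales the precipitation forcing.
    factor = np.sum(q_obs) / np.sum(q_sim)
    return precip * factor

q_obs = np.array([120.0, 180.0, 240.0])   # observed discharge volumes
q_sim = np.array([80.0, 120.0, 170.0])    # simulated volumes (biased low)
precip = np.array([5.0, 9.0, 12.0])       # raw input precipitation
print(bias_correct_precip(precip, q_obs, q_sim))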
A Multi-Wavelength Thermal Infrared and Reflectance Scene Simulation Model
NASA Technical Reports Server (NTRS)
Ballard, J. R., Jr.; Smith, J. A.; Smith, David E. (Technical Monitor)
2002-01-01
Several theoretical calculations are presented and our approach is discussed for simulating the overall composite scene thermal infrared exitance and canopy bidirectional reflectance of a forest canopy. Calculations are performed for selected wavelength bands of the DOE Multispectral Thermal Imager (MTI), and comparisons with atmospherically corrected MTI imagery are underway. NASA EO-1 Hyperion observations are also available, and the favorable comparison of our reflective model results with these data is reported elsewhere.
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm, and the printed patterns of those features on masks written by VSB eBeam writers start to show large deviations from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and been adopted, such as rule-based mask process correction (MPC), model-based MPC and, eventually, model-based MDP. The new MDP methods may place shot edges slightly differently from the target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose-margin-related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data. GPU acceleration is necessary to make simulation-based MDP verification acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Tian-Jy; Kim, Younghun
Automatically translating a building architecture file format (Industry Foundation Classes) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to the correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
Voidage correction algorithm for unresolved Euler-Lagrange simulations
NASA Astrophysics Data System (ADS)
Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan
2018-04-01
The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error in the total exchange rate of up to 30%.
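The shape of such a correction can be sketched as follows (an illustrative blend; the paper's exact sigmoidal function is not reproduced here): near steep voidage gradients, the voidage a particle experiences is shifted from the host-cell value toward a value interpolated to the particle position, with a sigmoidal weight.

import numpy as np

def sigmoid(x, x0=0.5, k=10.0):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def particle_voidage(eps_cell, eps_interp, grad_eps, dp):
    # eps_cell: voidage of the host grid cell
    # eps_interp: voidage interpolated to the particle position
    # grad_eps: local voidage gradient magnitude; dp: particle diameter
    w = sigmoid(abs(grad_eps) * dp)   # steeper gradient -> trust interpolation
    return (1.0 - w) * eps_cell + w * eps_interp

print(particle_voidage(0.45, 0.62, grad_eps=30.0, dp=1e-2))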
Temporal integration property of stereopsis after higher-order aberration correction
Kang, Jian; Dai, Yun; Zhang, Yudong
2015-01-01
Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. Stereo threshold for line stimuli, viewed in 550nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected or monocularly corrected. Under all optical conditions, stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined by a quadratic summation model and the high goodness of fit suggested this model was reasonable. For normal subjects, the slope for stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010
OMV: A simplified mathematical model of the orbital maneuvering vehicle
NASA Technical Reports Server (NTRS)
Teoh, W.
1984-01-01
A model of the orbital maneuvering vehicle (OMV) is presented which contains several simplifications. A set of hand controller signals may be used to control the motion of the OMV. Model verification is carried out using a sequence of tests. The dynamic variables generated by the model are compared, whenever possible, with the corresponding analytical variables. The results of the tests show conclusively that the present model behaves correctly. Further, this model interfaces properly with the state vector transformation module (SVX) developed previously. Correct command sentence sequences are generated by the OMV and SVX system, and these command sequences can be used to drive the flat floor simulation system at MSFC.
NASA Astrophysics Data System (ADS)
Takhsha, Maryam; Nikiéma, Oumarou; Lucas-Picher, Philippe; Laprise, René; Hernández-Díaz, Leticia; Winger, Katja
2017-10-01
As part of the CORDEX project, the fifth-generation Canadian Regional Climate Model (CRCM5) is used over the Arctic for climate simulations driven by reanalyses and by the MPI-ESM-MR coupled global climate model (CGCM) under the RCP8.5 scenario. The CRCM5 shows adequate skill in capturing the general features of mean sea level pressure (MSLP) for all seasons. Evaluating 2-m temperature (T2m) and precipitation is more problematic because of inconsistencies between observational reference datasets over the Arctic, which suffer from a sparse distribution of weather stations. In our study, we additionally investigated the effect of large-scale spectral nudging (SN) on the hindcast simulation driven by reanalyses. The analysis shows that SN is effective in reducing the spring MSLP bias, but otherwise it has little impact. We have also conducted another experiment in which the CGCM-simulated sea-surface temperature (SST) is empirically corrected and used as the lower boundary condition over the ocean for an atmosphere-only global simulation (AGCM), which in turn provides the atmospheric lateral boundary conditions to drive the CRCM5 simulation. This so-called 3-step approach of dynamical downscaling (CGCM-AGCM-RCM), which had considerably improved the CRCM5 historical simulations over Africa, has less impact over the Arctic domain. The most notable positive effect over the Arctic is a reduction of the T2m bias over the North Pacific Ocean and the North Atlantic Ocean in all seasons. Future projections using this method are compared with the results obtained with the traditional 2-step dynamical downscaling (CGCM-RCM) to assess the impact of correcting systematic SST biases upon future-climate projections. The future projections are mostly similar for the two methods, except for precipitation.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
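The bias-correction core of BCSD is essentially empirical quantile mapping, sketched below with placeholder data: each simulated value is replaced by the observed value at the same quantile of the training-period distributions.

import numpy as np

def quantile_map(sim_future, sim_hist, obs_hist):
    # Quantile of each value within the historical simulation...
    q = np.searchsorted(np.sort(sim_hist), sim_future) / float(len(sim_hist))
    q = np.clip(q, 0.0, 1.0)
    # ...mapped onto the observed historical distribution.
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=1000)      # "observed" precipitation
sim = rng.gamma(2.0, 3.0, size=1000)      # wetter, biased "model" output
print(quantile_map(sim[:5], sim, obs))    # bias-corrected sample values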
The NASA Lewis integrated propulsion and flight control simulator
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.
1991-01-01
A new flight simulation facility has been developed at NASA Lewis to allow integrated propulsion-control and flight-control algorithm development and evaluation in real time. As a preliminary check of the simulator facility and the correct integration of its components, the control design and physics models for an STOVL fighter aircraft model have been demonstrated, with their associated system integration and architecture, pilot vehicle interfaces, and display symbology. The results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated systems and testing of control design methodologies and cockpit mechanizations.
Finite element simulation of crack depth measurements in concrete using diffuse ultrasound
NASA Astrophysics Data System (ADS)
Seher, Matthias; Kim, Jin-Yeon; Jacobs, Laurence J.
2012-05-01
This research simulates the measurement of crack depth in concrete using diffuse ultrasound. The finite element method is employed to simulate the ultrasonic diffusion process around cracks with different geometrical shapes, with the goal of gaining physical insight into the data obtained from experimental measurements. The commercial finite element software Ansys is used to implement the two-dimensional concrete model. The model is validated with an analytical solution and experimental results. It is found from the simulation results that prior knowledge of the crack geometry is required to interpret the energy evolution curves from measurements and to correctly determine the crack depth.
O'Mahony, James F; Newall, Anthony T; van Rosmalen, Joost
2015-12-01
Time is an important aspect of health economic evaluation, as the timing and duration of clinical events, healthcare interventions and their consequences all affect estimated costs and effects. These issues should be reflected in the design of health economic models. This article considers three important aspects of time in modelling: (1) which cohorts to simulate and how far into the future to extend the analysis; (2) the simulation of time, including the difference between discrete-time and continuous-time models, cycle lengths, and converting rates and probabilities; and (3) discounting future costs and effects to their present values. We provide a methodological overview of these issues and make recommendations to help inform both the conduct of cost-effectiveness analyses and the interpretation of their results. For choosing which cohorts to simulate and how many, we suggest analysts carefully assess potential reasons for variation in cost effectiveness between cohorts and the feasibility of subgroup-specific recommendations. For the simulation of time, we recommend using short cycles or continuous-time models to avoid biases and the need for half-cycle corrections, and provide advice on the correct conversion of transition probabilities in state transition models. Finally, for discounting, analysts should not only follow current guidance and report how discounting was conducted, especially in the case of differential discounting, but also seek to develop an understanding of its rationale. Our overall recommendations are that analysts explicitly state and justify their modelling choices regarding time and consider how alternative choices may impact on results.
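The rate-probability conversions the authors refer to are worth making concrete (a sketch, not taken from the article): a constant rate r converts to a per-cycle probability via p = 1 - exp(-r t), and a probability must be rescaled through its underlying rate, never by naive division.

import math

def prob_from_rate(rate, cycle_length):
    return 1.0 - math.exp(-rate * cycle_length)

def rescale_prob(p_annual, cycle_years):
    rate = -math.log(1.0 - p_annual)      # back out the annual rate
    return prob_from_rate(rate, cycle_years)

p_annual = 0.20
print(rescale_prob(p_annual, 1.0 / 12.0))   # correct monthly probability (~0.0184)
print(p_annual / 12.0)                      # common shortcut (~0.0167) -- biased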
Analytic model for ring pattern formation by bacterial swarmers
NASA Astrophysics Data System (ADS)
Arouh, Scott
2001-03-01
We analyze a model proposed by Medvedev, Kaper, and Kopell (the MKK model) for ring formation in two-dimensional bacterial colonies of Proteus mirabilis. We correct the model to formally include a feature crucial to the ring-generation mechanism: a bacterial density threshold in the nonlinear diffusivity of the MKK model. We numerically integrate the model equations and observe the logarithmic profiles of the bacterial densities near the front. These lead us to define a consolidation front distinct from the colony radius. We find that this consolidation front propagates outward toward the colony radius with a nearly constant velocity. We then implement the corrected MKK equations in two dimensions and compare our results with biological experiment. Our numerical results indicate that the two-dimensional corrected MKK model yields smooth (rather than branched) rings, and that colliding colonies merge if grown in phase but not if grown out of phase. We also introduce a model, based on coupling the MKK model to a nutrient field, for simulating experimentally observed branched rings.
Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin
2016-09-02
For high-speed aircraft, a conformal window is used to optimize aerodynamic performance. However, the local shape of the conformal window introduces large dynamic aberrations that vary with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10⁻⁵ with optimized correction and 1.427 × 10⁻⁵ with unoptimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
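The metric function lends itself to a compact sketch (the radius and normalization below are illustrative choices, not the paper's settings): sum the image power spectral density inside a small region around DC and normalize by the total.

import numpy as np

def low_freq_metric(img, radius_frac=0.1):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2   # power spectrum
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(y - ny / 2, x - nx / 2)                    # distance from DC
    mask = r <= radius_frac * min(ny, nx)
    return spec[mask].sum() / spec.sum()

img = np.random.default_rng(2).random((128, 128))
print(low_freq_metric(img))

In model-based WSLAO, the coefficient of each Lukosz mode can be estimated from a handful of such metric evaluations, which is what makes the approach converge faster than SPGD.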
NASA Astrophysics Data System (ADS)
Krishnamurthy, Lakshmi; Muñoz, Ángel G.; Vecchi, Gabriel A.; Msadek, Rym; Wittenberg, Andrew T.; Stern, Bill; Gudgel, Rich; Zeng, Fanrong
2018-05-01
The Caribbean low-level jet (CLLJ) is an important component of the atmospheric circulation over the Intra-Americas Sea (IAS), which impacts weather and climate both locally and remotely. It influences rainfall variability in the Caribbean, Central America, northern South America, the tropical Pacific and the continental United States through the transport of moisture. We make use of high-resolution coupled and uncoupled models from the Geophysical Fluid Dynamics Laboratory (GFDL) to investigate the simulation of the CLLJ and its teleconnections, and we compare them with low-resolution models. The high-resolution coupled model FLOR shows improvements in the simulation of the CLLJ and its teleconnections with rainfall and SST over the IAS compared to the low-resolution coupled model CM2.1. The CLLJ is better represented in uncoupled models (AM2.1 and AM2.5) forced with observed sea-surface temperatures (SSTs), emphasizing the role of SSTs in the simulation of the CLLJ. Further, we determine the forecast skill for observed rainfall using both high- and low-resolution predictions of rainfall and SSTs for the July-August-September season. We determine the roles of statistical correction of model biases, coupling and horizontal resolution in the forecast skill. Statistical correction dramatically improves area-averaged forecast skill, but the analysis of the spatial distribution of skill indicates that the improvement after statistical correction is region dependent. Forecast skill is sensitive to coupling in parts of the Caribbean, Central America and northern South America, and it is mostly insensitive over North America. Comparison of forecast skill between the high- and low-resolution coupled models does not show any dramatic difference. However, the uncoupled models show an improvement in area-averaged skill at high resolution compared to the lower-resolution model. Understanding and improving forecast skill over the IAS has important implications for highly vulnerable nations in the region.
Computation of misalignment and primary mirror astigmatism figure error of two-mirror telescopes
NASA Astrophysics Data System (ADS)
Gu, Zhiyuan; Wang, Yang; Ju, Guohao; Yan, Changxiang
2018-01-01
At present, active optics usually uses computational models based on numerical methods to correct misalignments and figure errors. These methods can hardly lead to any insight into the aberration field dependencies that arise in the presence of misalignments. An analytical alignment model based on third-order nodal aberration theory is presented for this problem, which can be used to compute the primary mirror astigmatic figure error and the misalignments of two-mirror telescopes. Alignment simulations are conducted for an R-C telescope based on this analytical alignment model. It is shown that, in the absence of wavefront measurement errors, wavefront measurements at only two field points are enough, and the correction process can be completed with only one alignment action. In the presence of wavefront measurement errors, increasing the number of field points for wavefront measurements can enhance the robustness of the alignment model. Monte Carlo simulation shows that, when -2 mm ≤ linear misalignment ≤ 2 mm, -0.1 deg ≤ angular misalignment ≤ 0.1 deg, and -0.2 λ ≤ astigmatism figure error (expressed as fringe Zernike coefficients C5/C6, λ = 632.8 nm) ≤ 0.2 λ, the misaligned systems can be corrected to be close to the nominal state in the absence of wavefront testing errors. In addition, the root mean square deviation of the RMS wavefront error of all the misaligned samples after correction is linearly related to the wavefront testing error.
Atmospheric icing of structures: Observations and simulations
NASA Astrophysics Data System (ADS)
Ágústsson, H.; Elíasson, Á. J.; Thorsteins, E.; Rögnvaldsson, Ó.; Ólafsson, H.
2012-04-01
This study compares observed icing on a test span in complex orography at Hallormsstaðaháls (575 m) in East Iceland with parameterized icing based on an icing model and dynamically downscaled weather at high horizontal resolution. Four icing events have been selected from an extensive dataset of observed atmospheric icing in Iceland: a total of 86 test spans have been erected since 1972 at 56 locations in complex terrain, with more than 1000 icing events documented. The events used here have peak observed ice loads between 4 and 36 kg/m. Most of the ice accretion is in-cloud icing, but it may partly be mixed with freezing drizzle and wet snow icing. The calculation of atmospheric icing is made in two steps. First, the atmospheric data are created by dynamically downscaling the ECMWF analysis to high resolution using the non-hydrostatic mesoscale Advanced Research WRF model. Horizontal resolutions of 9, 3, 1 and 0.33 km are necessary to allow the atmospheric model to correctly reproduce local weather in the complex terrain of Iceland. Second, the Makkonen model is used to calculate the ice accretion rate on the conductors based on the simulated temperature, wind, cloud and precipitation variables from the atmospheric data. In general, the atmospheric model correctly simulates the atmospheric variables, and icing calculations based on them correctly identify the observed icing events but underestimate the load because the simulated ice accretion is too slow. This is most obvious when the temperature is slightly below 0°C and the observed icing is most intense. The model results improve significantly when additional weather observations from an upstream weather station are used to nudge the atmospheric model. However, the large variability in the simulated atmospheric variables results in high temporal and spatial variability in the calculated ice accretion. Furthermore, the icing model is highly sensitive to the droplet size, and some of the icing may be due to freezing drizzle or wet snow rather than in-cloud icing of super-cooled droplets. In addition, the icing model (Makkonen) may not be accurate for the highest icing loads observed.
An oil-based model of inhalation anesthetic uptake and elimination.
Loughlin, P J; Bowes, W A; Westenskow, D R
1989-08-01
An oil-based model was developed as a physical simulation of inhalation anesthetic uptake and elimination. It provides an alternative to animal models in testing the performance of anesthesia equipment. A 7.5-l water-filled manometer simulates pulmonary mechanics. Nitrogen and carbon dioxide flowing into the manometer simulate oxygen consumption and carbon dioxide production. Oil-filled chambers (180 ml and 900 ml) simulate the uptake and washout of halothane by the vessel-rich and muscle tissue groups. A 17.2-l air-filled chamber simulates uptake by the lung group. Gas circulates through the chambers (3.7, 13.8, and 25 l/min) to simulate the transport of anesthetic to the tissues by the circulatory system. Results show that during induction and washout, the rate of rise in end-tidal halothane fraction simulated by the model parallels that measured in patients. The model's end-tidal fraction changes correctly with changes in cardiac output and alveolar ventilation. The model has been used to test anesthetic controllers and to evaluate gas sensors, and should be useful in teaching principles underlying volatile anesthetic uptake.
Modeling of Adaptive Optics-Based Free-Space Communications Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilks, S C; Morris, J R; Brase, J M
2002-08-06
We introduce a wave-optics-based simulation code written for air-optic laser communication links that includes a detailed model of an adaptive optics compensation system. We present the results obtained with this model, in which the phase of a communications laser beam is corrected after it propagates through a turbulent atmosphere. The phase of the received laser beam is measured using a Shack-Hartmann wavefront sensor, and the correction method utilizes a MEMS mirror. Strehl improvement and the amount of power coupled into the receiving fiber for both 1 km horizontal and 28 km slant paths are presented.
Remotely sensed soil moisture input to a hydrologic model
NASA Technical Reports Server (NTRS)
Engman, E. T.; Kustas, W. P.; Wang, J. R.
1989-01-01
The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.
A physical-based gas-surface interaction model for rarefied gas flow simulation
NASA Astrophysics Data System (ADS)
Liang, Tengfei; Li, Qi; Ye, Wenjing
2018-01-01
Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics simulations can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict accurate macroscopic behavior. They are, however, too computationally expensive to be applied in real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed based on the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations and trends that the classic empirical models fail to reproduce. As such, the new model is much more accurate than the classic models, while being far more efficient than MD simulations. It can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method is proposed based on an active learning kriging model that aims only at correctly predicting the sign of the performance function. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of the failure probability from the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
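The paper's central observation is easy to demonstrate in miniature (a sketch under invented settings, using a Gaussian process as the kriging surrogate and plain Monte Carlo rather than the paper's interval methods): the failure-probability estimate depends only on the predicted sign of the performance function.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                   # toy performance function;
    return 3.0 - x[:, 0] ** 2 - x[:, 1]     # failure when g(x) < 0

rng = np.random.default_rng(3)
X_train = rng.normal(size=(30, 2))          # small design of experiments
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_train, g(X_train))

X_mc = rng.normal(size=(100_000, 2))        # Monte Carlo population
pf_surrogate = np.mean(gp.predict(X_mc) < 0.0)
pf_true = np.mean(g(X_mc) < 0.0)
print(pf_surrogate, pf_true)                # agree wherever the signs agree

Active learning would then add training points where the sign prediction is least certain (near g = 0), rather than where the magnitude is poorly fit.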
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solares, Santiago D.
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model, the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
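A single SLS (Zener) element, the building block of the Q3D array, has a closed-form step-strain stress relaxation that a few lines can evaluate; the parameter values are arbitrary.

import numpy as np

def sls_relaxation(t, eps0, k1, k2, eta):
    # Spring k1 in parallel with a Maxwell arm (spring k2 + dashpot eta);
    # stress under a step strain eps0 relaxes with time constant eta/k2.
    tau = eta / k2
    return eps0 * (k1 + k2 * np.exp(-t / tau))

t = np.linspace(0.0, 1.0, 5)
print(sls_relaxation(t, eps0=0.01, k1=1.0, k2=4.0, eta=0.4))
# Decays from eps0*(k1 + k2) at t = 0 toward the equilibrium value eps0*k1.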
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) that the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) that this is achieved without requiring extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme changes in parameter values. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for the regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Evaluating CMIP5 Simulations of Historical Continental Climate with Koeppen Bioclimatic Metrics
NASA Astrophysics Data System (ADS)
Phillips, T. J.; Bonfils, C.
2013-12-01
The classic Koeppen bioclimatic classification scheme associates generic vegetation types (e.g. grassland, tundra, broadleaf or evergreen forests, etc.) with regional climate zones defined by their annual cycles of continental temperature (T) and precipitation (P), considered together. The locations or areas of Koeppen vegetation types derived from observational data thus can provide concise metrical standards for simultaneously evaluating climate simulations of T and P in naturally defined regions. The CMIP5 models' collective ability to correctly represent two variables that are critically important for living organisms at regional scales is therefore central to this evaluation. For this study, 14 Koeppen vegetation types are derived from annual-cycle climatologies of T and P in some 3 dozen CMIP5 simulations of the 1980-1999 period. Metrics for evaluating the ability of the CMIP5 models to simulate the correct locations and areas of each vegetation type, as well as measures of overall model performance, also are developed. It is found that the CMIP5 models are generally most deficient in simulating: 1) climates of drier Koeppen zones (e.g. desert, savanna, grassland, steppe vegetation types) located in the southwestern U.S. and Mexico, eastern Europe, southern Africa, and central Australia; 2) climates of regions such as central Asia and western South America where topography plays a key role. Details of regional T or P biases in selected simulations that exemplify general model performance problems also will be presented. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. [Figure: Map of Koeppen vegetation types derived from observed T and P.]
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing them to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed with the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement, and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...
2017-08-12
For this study, gas-solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution but under-predicts the macro-scale slip velocity, even with a grid size as small as twice the particle diameter. The TFM approach predicts a larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification of the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio in DNS is smaller than in CFD-DEM and TFM. Since a simple correction to the drag model can predict the correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider the voidage gradient and particle fluctuations may improve the current predictions of cluster distribution.
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
Simulations of low-velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low-velocity impact in structures of interest.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans and animals, where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, much larger and more complicated models can be built and executed with Poisson Simulation than is possible with the Markov approach.
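The core of the technique fits in a few lines (a generic illustration, not from the paper): each flow between state variables during a step dt is drawn as a Poisson variate with mean rate*dt, shown here for a simple SIR-type epidemic macro-model with invented parameters.

import numpy as np

rng = np.random.default_rng(4)
S, I, R = 990, 10, 0                    # susceptible, infected, recovered
beta, gamma, dt = 0.3, 0.1, 0.1

for _ in range(1000):
    n = S + I + R
    infections = rng.poisson(beta * S * I / n * dt)   # Po(rate * dt)
    recoveries = rng.poisson(gamma * I * dt)
    infections = min(infections, S)                   # keep states non-negative
    recoveries = min(recoveries, I + infections)
    S -= infections
    I += infections - recoveries
    R += recoveries

print(S, I, R)

Replacing each Poisson draw by its mean recovers the deterministic macro-model; keeping the draws preserves the micro-model's demographic stochasticity at macro-model cost.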
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models for investigating the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals, and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient; those obtained for subsequent reactions are effectively equal and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
Underestimated interannual variability of East Asian summer rainfall under climate change
NASA Astrophysics Data System (ADS)
Ren, Yongjian; Song, Lianchun; Xiao, Ying; Du, Liangmin
2018-02-01
This study evaluates the performance of climate models in simulating the climatological mean and interannual variability of East Asian summer rainfall (EASR) using the Coupled Model Intercomparison Project Phase 5 (CMIP5). Compared to the observations, the interannual variability of EASR during 1979-2005 is underestimated by the CMIP5 models by 0.86-16.08%. Bias-correcting the CMIP5 simulations against historical data enhances the reliability of the future projections. The corrected EASR under representative concentration pathways (RCPs) 4.5 and 8.5 increases by 5.6 and 7.5%, respectively, during 2081-2100 relative to the 1986-2005 baseline. After correction, the areas with negative and positive anomalies both decrease; they are mainly located in the South China Sea and central China, and in southern China and west of the Philippines, respectively. In comparison to the baseline, the interannual variability of EASR over 2006-2100 increases by 20.8% under RCP4.5 and by 26.2% under RCP8.5, which the original CMIP5 simulations underestimate by 10.7 and 11.1% under the two RCPs. Compared with the mean precipitation, the interannual variability of EASR grows notably larger under global warming. Thus, the probabilities of floods and droughts may increase in the future.
Modeling, Simulation, and Analysis of a Decoy State Enabled Quantum Key Distribution System
2015-03-26
through the fiber, we assume Alice and Bob have correct basis alignment and timing control for reference frame correction and precise photon detection... optical components (laser, polarization modulator, electronic variable optical attenuator, fixed optical attenuator, fiber channel, beamsplitter... generated by the laser in the CPG propagate through multiple optical components, each with a unique propagation delay before reaching the OPM. Timing
NASA Astrophysics Data System (ADS)
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.
2017-07-01
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, 'direct reconstruction' incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with the tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.
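For reference, the 1-tissue compartment model referred to above relates the tissue curve to the plasma input through a single-exponential impulse response, C_T(t) = K1 exp(-k2 t) convolved with C_p(t), with VT = K1/k2. The minimal Python sketch below evaluates that model on a uniform time grid; the input function and parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def one_tissue_tac(t, cp, K1, k2):
    """Tissue time-activity curve for the 1-tissue compartment model:
    C_T(t) = K1 * exp(-k2*t) convolved with C_p(t), on a uniform time grid."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)                 # impulse response function
    return np.convolve(cp, irf)[: len(t)] * dt

t = np.linspace(0, 90, 901)                    # minutes
cp = np.exp(-t / 20) * (1 - np.exp(-t))        # hypothetical plasma input
ct = one_tissue_tac(t, cp, K1=0.3, k2=0.06)    # VT = K1/k2 = 5 mL/cm^3
```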
NASA Astrophysics Data System (ADS)
Marchese, Linda E.; Munger, Rejean; Priest, David
2005-08-01
Wavefront-guided laser eye surgery has been recently introduced and holds the promise of correcting not only defocus and astigmatism in patients but also higher-order aberrations. Research is just beginning on the implementation of wavefront-guided methods in optical solutions, such as phase-plate-based spectacles, as alternatives to surgery. We investigate the theoretical differences between the implementation of wavefront-guided surgical and phase plate corrections. The residual aberrations of 43 model eyes are calculated after simulated refractive surgery and also after a phase plate is placed in front of the untreated eye. In each case, the current wavefront-guided paradigm that applies a direct map of the ocular aberrations to the correction zone is used. The simulation results demonstrate that an ablation map that is a Zernike fit of a direct transform of the ocular wavefront phase error is not as efficient in correcting refractive errors of sphere, cylinder, spherical aberration, and coma as when the same Zernike coefficients are applied to a phase plate, with statistically significant improvements from 2% to 6%.
A Retrieval Model for Both Recognition and Recall.
ERIC Educational Resources Information Center
Gillund, Gary; Shiffrin, Richard M.
1984-01-01
The Search of Associative Memory (SAM) model for recall is extended by assuming that a familiarity process is used for recognition. The model, formalized in a computer simulation program, correctly predicts a number of findings in the literature as well as results from an experiment on the word-frequency effect. (Author/BW)
NASA Astrophysics Data System (ADS)
Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.
2017-12-01
An important aspect of evaluating Arctic cloud representation in a general circulation model (GCM) consists of using observational benchmarks which are as equivalent as possible to model output, in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity and a stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter-wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed and effective radius are forward simulated to vertically resolved profiles of radar reflectivity, Doppler velocity and spectrum width, as well as lidar backscatter and depolarization ratio. These forward-simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evaluation of Arctic mixed-phase cloud would also benefit from hydrometeor phase evaluation. While phase retrieval from synergetic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM outputs and observations. The use of a comprehensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground, where low-level supercooled liquid layers are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should assist in choosing between several possible diagnostic ice nucleation schemes for ModelE stratiform cloud.
ExoMars Entry Demonstrator Module Dynamic Stability
NASA Astrophysics Data System (ADS)
Dormieux, Marc; Gulhan, Ali; Berner, Claude
2011-05-01
In the frame of ExoMars DM aerodynamics characterization, determination of the pitch damping derivatives is required, as it drives the parachute deployment conditions. Series of free-flight and free-oscillation tests (captive model) have been conducted with particular attention to data reduction. 6-Degrees-of-Freedom (DoF) analysis tools require knowledge of local damping derivatives. In general, ground tests do not provide them directly but only effective damping derivatives. Free-flight (ballistic range) tests with full oscillations around the trim angle have been performed at ISL for 0.5
Simulation of absolute amplitudes of ultrasound signals using equivalent circuits.
Johansson, Jonny; Martinsson, Pär-Erik; Delsing, Jerker
2007-10-01
Equivalent circuits for piezoelectric devices and ultrasonic transmission media can be used to cosimulate electronics and ultrasound parts in simulators originally intended for electronics. To achieve efficient system-level optimization, it is important to simulate the correct, absolute amplitude of the ultrasound signal in the system, as this determines the requirements on the electronics regarding dynamic range, circuit noise, and power consumption. This paper presents methods to achieve the correct, absolute amplitude of an ultrasound signal in a simulation of a pulse-echo system using equivalent circuits. This is achieved by taking into consideration the loss due to diffraction and the effect of the cable that connects the electronics and the piezoelectric transducer. The conductive loss in the transmission line that models the propagation medium of the ultrasound pulse is used to model the loss due to diffraction. Results show that the simulated amplitude of the echo follows measured values well in both near and far fields, with an offset of about 10%. The use of a coaxial cable introduces inductance and capacitance that affect the amplitude of a received echo. Amplitude variations of 60% were observed when the cable length was varied between 0.07 m and 2.3 m, with simulations predicting similar variations. The high precision of the achieved results shows that electronic design and system optimization can rely on system simulations alone. This will simplify the development of integrated electronics aimed at ultrasound systems.
NASA Astrophysics Data System (ADS)
Krauze, A.; Virbulis, J.; Kravtsov, A.
2018-05-01
A beam glow discharge based electron gun can be applied as a heater for silicon crystal growth systems in which silicon rods are pulled from the melt. Impacts of high-energy charged particles cause wear and tear of the gun and generate an additional source of silicon contamination. A steady-state model for electron beam formation has been developed to model the electron gun and optimize its design. A description of the model and first simulation results are presented. It has been shown that the model can simulate the dimensions of particle impact areas on the cathode and anode, but further improvements of the model are needed to correctly simulate the electron trajectory distribution in the beam and the dependence of the beam current on the applied gas pressure.
Liu, Kai; Kokubo, Hironori
2017-10-23
Docking has become an indispensable approach in drug discovery research to predict the binding mode of a ligand. One great challenge in docking is to efficiently refine the correct pose from various putative docking poses through scoring functions. We recently examined the stability of self-docking poses under molecular dynamics (MD) simulations and showed that equilibrium MD simulations have some capability to discriminate between correct and decoy poses. Here, we have extended our previous work to cross-docking studies for practical applications. Three target proteins (thrombin, heat shock protein 90-alpha, and cyclin-dependent kinase 2) of pharmaceutical interest were selected. Three comparable poses (one correct pose and two decoys) for each ligand were then selected from the docking poses. To obtain the docking poses for the three target proteins, we used three different protocols, namely: normal docking, induced fit docking (IFD), and IFD against the homology model. Finally, five parallel MD equilibrium runs were performed on each pose for the statistical analysis. The results showed that the correct poses were generally more stable than the decoy poses under MD. The discrimination capability of MD depends on the strategy. The safest way was to judge a pose as being stable if any one run among five parallel runs was stable under MD. In this case, 95% of the correct poses were retained under MD, and about 25-44% of the decoys could be excluded by the simulations for all cases. On the other hand, if we judge a pose as being stable when any two or three runs were stable, with the risk of incorrectly excluding some correct poses, approximately 31-53% or 39-56% of the two decoys could be excluded by MD, respectively. Our results suggest that simple equilibrium simulations can serve as an effective filter to exclude decoy poses that cannot be distinguished by docking scores from the computationally expensive free-energy calculations.
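One way to make the "stable under MD" criterion concrete is a simple RMSD filter over the parallel runs. The sketch below is our illustration, not the authors' code: the 2 Å threshold, the requirement that the ligand RMSD stay below it for the whole equilibrium trajectory, and the one-run-out-of-five rule are assumptions mirroring the strategies discussed in the abstract.

```python
import numpy as np

def stable_pose(rmsd_runs, threshold=2.0, min_stable_runs=1):
    """Judge a docking pose as stable if at least `min_stable_runs` of the
    parallel MD runs keep the ligand RMSD (vs. the starting pose) below
    `threshold` (angstrom) for the whole equilibrium trajectory."""
    rmsd_runs = np.asarray(rmsd_runs)            # shape: (n_runs, n_frames)
    run_is_stable = (rmsd_runs < threshold).all(axis=1)
    return run_is_stable.sum() >= min_stable_runs

# five parallel runs, each a hypothetical ligand-RMSD trace over time
rmsd = np.random.default_rng(0).uniform(0.5, 3.5, size=(5, 200))
print(stable_pose(rmsd, threshold=2.0, min_stable_runs=1))
```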
Numerical Analysis of Plasma Transport in Tandem Volume Magnetic Multicusp Ion Sources
1992-03-01
the results of the model are qualitatively correct. Keywords: Boltzmann Equation, Ion Sources, Plasma Simulation, Electron Temperature, Plasma Density, Ion Temperature, Hydrogen Ions, Magnetic Filters, Hydrogen Plasma Chemistry.
Monte Carlo simulation of ion-neutral charge exchange collisions and grid erosion in an ion thruster
NASA Technical Reports Server (NTRS)
Peng, Xiaohang; Ruyten, Wilhelmus M.; Keefer, Dennis
1991-01-01
A combined particle-in-cell (PIC)/Monte Carlo simulation model has been developed in which the Monte Carlo method is used to simulate the charge exchange collisions. It is noted that a number of features were reproduced correctly by this code, but that its assumption of two-dimensional axisymmetry for a single set of grid apertures precluded the reproduction of the most characteristic feature of actual test data, namely the concentrated grid erosion at the geometric center of the hexagonal aperture array. The first results of a three-dimensional code, which takes into account the hexagonal symmetry of the grid, are presented. It is shown that, with this code, the experimentally observed erosion patterns are reproduced correctly, demonstrating explicitly the concentration of sputtering between apertures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.
Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
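The correction rests on the screened Coulomb form of Debye-Hückel theory, U(r) proportional to q1 q2 exp(-r/lambda_D)/r, which can take over where the potential grid ends. A self-contained sketch of the screened interaction energy follows; the ionic strength, temperature and dielectric constant are illustrative values, and the functions are ours rather than SDA's API.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23       # Boltzmann constant, J/K
E    = 1.602176634e-19    # elementary charge, C
NA   = 6.02214076e23      # Avogadro constant, 1/mol

def debye_length(ionic_strength_molar, eps_r=78.5, T=298.15):
    """Debye screening length (m) for a 1:1 electrolyte of given ionic
    strength (mol/L)."""
    n = ionic_strength_molar * 1e3 * NA            # ions per m^3
    kappa2 = 2 * n * E**2 / (EPS0 * eps_r * KB * T)
    return 1.0 / np.sqrt(kappa2)

def dh_energy(q1, q2, r, ionic_strength_molar=0.05, eps_r=78.5, T=298.15):
    """Screened (Debye-Hückel) electrostatic energy, in joules, between two
    net charges q1, q2 (in units of e) at separation r (m)."""
    lam = debye_length(ionic_strength_molar, eps_r, T)
    return (q1 * E) * (q2 * E) * np.exp(-r / lam) / (4 * np.pi * EPS0 * eps_r * r)

print(dh_energy(1.0, -1.0, 1e-9))   # two unit charges 1 nm apart
```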
The Interplay of Opacities and Rotation in Promoting the Explosion of Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Vartanyan, David; Burrows, Adam; Radice, David
2018-01-01
For over five decades, the mechanism of explosion in core-collapse supernovae has been a central unsolved problem in astrophysics, challenging both our computational capabilities and our understanding of relevant physics. Current simulations often produce explosions, but they are at times underenergetic. The neutrino mechanism, wherein a fraction of emitted neutrinos is absorbed in the mantle of the star to reignite the stalled shock, remains the dominant model for reviving explosions in massive stars undergoing core collapse. We present here a diverse suite of 2D axisymmetric simulations produced by FORNAX, a highly parallelizable multidimensional supernova simulation code. We explore the effects of various corrections, including the many-body correction, to neutrino-matter opacities and the possible role of rotation in promoting explosion amongst various core-collapse progenitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, L; Lin, A; Ahn, P
Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected in this study. Deformable image registration was applied to map the simulation CT, target volumes, and organs at risk (OARs) contours onto each weekly CBCT scan. Intensity modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty-six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over- and under-ranges at the proximal and distal surfaces of PTV volumes, and geometrical and water equivalent distances between the PTV and each OAR. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of the variance in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components with DVH metrics (dose to 95% and 5% of PTV, mean dose or max dose to OARs) from the forward calculated dose on each corrected CBCT. The accuracy of this model was verified on the dataset from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with the actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for larynx and constrictor mean dose. However, a large spread of the differences was observed, indicating additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.
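The modeling pipeline described (36 features, PCA down to three components, a regression fit, verification on a held-out patient) can be sketched in a few lines of scikit-learn. Everything below is a hypothetical stand-in: the arrays, the shapes, and the choice of linear regression as the "machine learning model" are our assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hypothetical training data: 8 patients x several weekly CBCTs = 40 rows,
# 36 range/distance features per corrected CBCT, one DVH metric as target
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 36))
y_train = rng.normal(loc=60.0, scale=2.0, size=40)   # e.g. PTV D95 in Gy

# three principal components reportedly explain >90% of the feature variance
model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
model.fit(X_train, y_train)

X_test = rng.normal(size=(5, 36))    # weekly CBCTs of the held-out patient
print(model.predict(X_test))
```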
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data obliges developers to work with simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy of temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
NASA Astrophysics Data System (ADS)
Liu, J.; Lu, W. Q.
2010-03-01
This paper presents detailed MD simulations of the properties of quantum fluid helium, including thermal conductivities and viscosities, at different state points. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach, and the properties are calculated using the Green-Kubo equations. A comparison among the numerical results using the LJ and QFH potentials and the existing database shows that the LJ model is not quantitatively correct for supercritical liquid helium; the quantum effect must therefore be taken into account when quantum fluid helium is studied. Thermal conductivity is also compared as a function of temperature and pressure, and the results show that the quantum-effect correction is an efficient tool for obtaining thermal conductivities.
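The quadratic Feynman-Hibbs correction adds a curvature term to the classical pair potential, U_QFH(r) = U(r) + hbar^2/(24 mu kB T) [U''(r) + 2U'(r)/r], with mu the reduced mass of the pair. A minimal implementation for a Lennard-Jones fluid is sketched below; the helium-4 parameters (eps/kB of about 10.22 K, sigma of about 2.556 angstrom) are standard literature values used here only for illustration, and may differ from those of the paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K

def lj(r, eps, sigma):
    """Classical Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)

def qfh_potential(r, eps, sigma, mass, T):
    """Lennard-Jones potential with the quadratic Feynman-Hibbs quantum
    correction: U_QFH(r) = U(r) + hbar^2/(24*mu*kB*T) * (U'' + 2U'/r),
    where mu = mass/2 is the reduced mass of a like-atom pair."""
    sr6  = (sigma / r) ** 6
    sr12 = sr6**2
    u1 = 4 * eps * (-12 * sr12 + 6 * sr6) / r          # dU/dr
    u2 = 4 * eps * (156 * sr12 - 42 * sr6) / r**2      # d2U/dr2
    mu = mass / 2.0
    return lj(r, eps, sigma) + HBAR**2 / (24 * mu * KB * T) * (u2 + 2 * u1 / r)

# helium-4: eps/kB ~ 10.22 K, sigma ~ 2.556 angstrom, m = 6.6465e-27 kg
r = np.linspace(2.2e-10, 8e-10, 200)
u = qfh_potential(r, eps=10.22 * KB, sigma=2.556e-10, mass=6.6465e-27, T=10.0)
```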
Francescon, P; Kilby, W; Noll, J M; Masi, L; Satariano, N; Russo, S
2017-02-07
Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > dmax were <2% for all detectors except the IBA Razor, where a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously. We recommend the PDD and OAR corrections are used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.
Quantum Corrections to the 'Atomistic' MOSFET Simulations
NASA Technical Reports Server (NTRS)
Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.
2000-01-01
We have introduced in a simple and efficient manner quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied in comparison with classical simulations the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of the single charge trapping on interface states and the effect of the oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not affect significantly the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.
Carnegie, Nicole Bohme
2011-04-15
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
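The correction described is a parametric (model-based) bootstrap: re-simulate the infection and surveillance process from the fitted model, re-estimate incidence on each replicate, and use the replicate distribution both to de-bias the point estimate and to form confidence limits. The sketch below shows the generic recipe with a deliberately toy surveillance model (Poisson counts over hypothetical person-years); none of it is the authors' released software.

```python
import numpy as np

rng = np.random.default_rng(42)

def parametric_bootstrap_ci(estimate_fn, simulate_fn, theta_hat, n_boot=2000, alpha=0.05):
    """Model-based bootstrap: re-simulate the surveillance process from the
    fitted model, re-estimate on each replicate, and use the replicate
    distribution for bias correction and confidence limits."""
    boots = np.array([estimate_fn(simulate_fn(theta_hat)) for _ in range(n_boot)])
    bias = boots.mean() - theta_hat
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return theta_hat - bias, (lo - bias, hi - bias)

# toy stand-in for the incidence model: Poisson counts of recent infections
def simulate(theta):                     # theta = true incidence rate
    return rng.poisson(theta * 250)      # 250 person-years observed, hypothetical

def estimate(counts):
    return counts / 250

corrected, ci = parametric_bootstrap_ci(estimate, simulate, theta_hat=0.02)
print(corrected, ci)
```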
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble's mutual error covariance. Here we propose a new technique for obtaining such information using an inst...
Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.
2018-01-01
The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. Detection and correction of the data use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method, we obtain an ARIMA model fit to the data containing AO; this model adds to the original ARIMA model the coefficients obtained from the iteration process using regression methods. For the simulated data, the initial model for the data containing AO is ARIMA(2,0,0) with MSE = 36,780; after detection and correction of the data, the iteration yields the model ARIMA(2,0,0) with coefficients obtained from the regression Zt = 0,106 + 0,204 Z(t-1) + 0,401 Z(t-2) - 329 X1(t) + 115 X2(t) + 35,9 X3(t) and MSE = 19,365. This shows an improvement in the forecasting error rate of the data.
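A compact version of that iterative detect-and-refit loop can be written with statsmodels: fit the ARIMA model, flag standardized residuals beyond a cutoff as candidate AOs, add a pulse dummy regressor per flagged point, and refit until no new outliers appear. This is a simplified paraphrase of the Box-Jenkins-Reinsel procedure, not the paper's code; the model order, the cutoff c = 3, and the iteration cap are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def detect_and_correct_ao(y, order=(2, 0, 0), c=3.0, max_iter=5):
    """Iterative additive-outlier handling in the spirit of
    Box-Jenkins-Reinsel: fit ARIMA, flag residuals beyond c standard
    deviations as AOs, add a pulse dummy regressor for each, and refit."""
    y = np.asarray(y, dtype=float)
    exog = None
    flagged = set()
    for _ in range(max_iter):
        res = ARIMA(y, exog=exog, order=order).fit()
        z = res.resid / res.resid.std()
        new = set(np.flatnonzero(np.abs(z) > c)) - flagged
        if not new:
            break
        flagged |= new
        exog = np.zeros((len(y), len(flagged)))
        for j, i in enumerate(sorted(flagged)):
            exog[i, j] = 1.0          # pulse dummy X_j(t) for each AO
    return res, sorted(flagged)
```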
Dynamic simulation of Static Var Compensators in distribution systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koessler, R.J.
1992-08-01
This paper is a system study guide for the correction of voltage dips due to large motor startups with Static Var Compensators (SVCs). The method utilizes time simulations, which are an important aid in equipment design and specification. The paper illustrates the process of setting up a computer model and performing time simulations. The study process is demonstrated through an example, the Shawnee feeder in the Niagara Mohawk Power Corporation service area.
Reply to "Comment on `Simple improvements to classical bubble nucleation models'"
NASA Astrophysics Data System (ADS)
Tanaka, Kyoko K.; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg
2016-08-01
We reply to the Comment by Schmelzer and Baidakov [Phys. Rev. E 94, 026801 (2016), 10.1103/PhysRevE.94.026801]. They suggest that a more modern approach than the classic description by Tolman is necessary to model the surface tension of curved interfaces. Therefore we now consider the higher-order Helfrich correction, rather than the simpler first-order Tolman correction. Using a recent parametrization of the Helfrich correction provided by Wilhelmsen et al. [J. Chem. Phys. 142, 064706 (2015), 10.1063/1.4907588], we test this description against measurements from our simulations, and find an agreement stronger than what the pure Tolman description offers. Our analyses suggest that a correction of order higher than the second is necessary for small bubbles with radius ≲1 nm. In addition, we respond to other minor criticism of our results.
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of the response of a wind tunnel model inertial angle-of-attack sensor to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a bias error in the model attitude measurement. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating the attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows one to determine the corresponding dark-field artifacts for a wide range of setup parameters, like tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify further image-based diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.
Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
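For readers unfamiliar with the overlap assumption being tested: under maximum-random overlap, vertically adjacent cloudy layers overlap maximally, while cloudy layers separated by clear air combine randomly. A standard closed form (the Geleyn-Hollingsworth product) is easy to implement; the sketch below is generic and not tied to the simulator code discussed in the abstract.

```python
import numpy as np

def total_cloud_cover_max_random(cf):
    """Total cloud cover from a top-to-bottom profile of layer cloud
    fractions under the maximum-random overlap assumption: adjacent
    cloudy layers overlap maximally, layers separated by clear air
    overlap randomly (Geleyn-Hollingsworth form)."""
    cf = np.asarray(cf, dtype=float)
    clear = 1.0 - cf[0]
    for k in range(1, len(cf)):
        clear *= (1.0 - max(cf[k], cf[k - 1])) / (1.0 - min(cf[k - 1], 0.999999))
    return 1.0 - clear

print(total_cloud_cover_max_random([0.2, 0.2, 0.0, 0.3]))  # -> 0.44
```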
Towards improving software security by using simulation to inform requirements and conceptual design
Nutaro, James J.; Allgood, Glenn O.; Kuruganti, Teja
2015-06-17
We illustrate the use of modeling and simulation early in the system life cycle to improve security and reduce costs. The models that we develop for this illustration are inspired by problems in reliability analysis and supervisory control, for which similar models are used to quantify failure probabilities and rates. In the context of security, we propose that models of this general type can be used to understand trades between risk and cost while writing system requirements and during conceptual design, and thereby significantly reduce the need for expensive security corrections after a system enters operation.
Sensitivity of atmospheric correction to loading and model of the aerosol
NASA Astrophysics Data System (ADS)
Bassani, Cristiana; Braga, Federica; Bresciani, Mariano; Giardino, Claudia; Adamo, Maria; Ananasso, Cristina; Alberotanza, Luigi
2013-04-01
The physically-based atmospheric correction requires knowledge of the atmospheric conditions during remote data acquisition [Guanter et al., 2007; Gao et al., 2009; Kotchenova et al., 2009; Bassani et al., 2010]. The propagation of solar radiation in the atmospheric window of the visible and near-infrared spectral domain depends on aerosol scattering. The effects of solar beam extinction are related to the aerosol loading, through the aerosol optical thickness at 550 nm (AOT) [Kaufman et al., 1997; Vermote et al., 1997; Kotchenova et al., 2008; Kokhanovsky et al., 2010], and also to the aerosol model. Recently, the atmospheric correction of hyperspectral data has been shown to be sensitive to the micro-physical and optical characteristics of the aerosol, as reported in [Bassani et al., 2012]. Within the framework of the CLAM-PHYM (Coasts and Lake Assessment and Monitoring by PRISMA HYperspectral Mission) project, funded by the Italian Space Agency (ASI), the role of the aerosol model in the accuracy of the atmospheric correction of hyperspectral images acquired over water targets is investigated. In this work, the results of the atmospheric correction of HICO (Hyperspectral Imager for the Coastal Ocean) images acquired over the Northern Adriatic Sea in the Mediterranean are presented. The atmospheric correction has been performed by an algorithm specifically developed for the HICO sensor. The algorithm is based on the equation presented in [Vermote et al., 1997; Bassani et al., 2010], using the latest generation of the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code [Kotchenova et al., 2008; Vermote et al., 2009]. The sensitivity analysis of the atmospheric correction of HICO data is performed with respect to the aerosol optical and micro-physical properties used to define the aerosol model. In particular, a variable mixture of the four basic components (dust-like, oceanic, water-soluble, and soot) has been considered. The water reflectance obtained from the atmospheric correction with variable model and fixed loading of the aerosol has been compared. The results highlight the need to define the aerosol characteristics, loading and model, when simulating the radiative field of the atmospheric system for an accurate atmospheric correction of hyperspectral data, improving the accuracy of the retrieved surface reflectance over water, a dark target. In conclusion, the aerosol model plays a crucial role in an accurate physically-based atmospheric correction of hyperspectral data over water. Currently, the PRISMA mission provides valuable opportunities to study aerosols and their radiative effects on hyperspectral data.
Bibliography:
Guanter, L.; Estellés, V.; Moreno, J. Spectral calibration and atmospheric correction of ultra-fine spectral and spatial resolution remote sensing data. Application to CASI-1500 data. Remote Sens. Environ. 2007, 109, 54-65.
Gao, B.-C.; Montes, M.J.; Davis, C.O.; Goetz, A.F.H. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens. Environ. 2009, 113, S17-S24.
Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2009, 113, D23.
Bassani, C.; Cavalli, R.M.; Pignatti, S. Aerosol optical retrieval and surface reflectance from airborne remote sensing data over land. Sensors 2010, 10, 6421-6438.
Kaufman, Y.J.; Tanré, D.; Gordon, H.R.; Nakajima, T.; Lenoble, J.; Frouin, R.; Grassl, H.; Herman, B.M.; King, M.; Teillet, P.M. Operational remote sensing of tropospheric aerosol over land from EOS moderate resolution imaging spectroradiometer. J. Geophys. Res. 1997, 102(D14), 17051-17067.
Vermote, E.F.; Tanré, D.; Deuzé, J.L.; Herman, M.; Morcrette, J.J. Second simulation of the satellite signal in the solar spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675-686.
Kotchenova, S.Y.; Vermote, E.F.; Levy, R.; Lyapustin, A. Radiative transfer codes for atmospheric correction and aerosol retrieval: Intercomparison study. Appl. Optics 2008, 47, 2215-2226.
Kokhanovsky, A.A.; Deuzé, J.L.; Diner, D.J.; Dubovik, O.; Ducos, F.; Emde, C.; Garay, M.J.; Grainger, R.G.; Heckel, A.; Herman, M.; Katsev, I.L.; Keller, J.; Levy, R.; North, P.R.J.; Prikhach, A.S.; Rozanov, V.V.; Sayer, A.M.; Ota, Y.; Tanré, D.; Thomas, G.E.; Zege, E.P. The inter-comparison of major satellite aerosol retrieval algorithms using simulated intensity and polarization characteristics of reflected light. Atmos. Meas. Tech. 2010, 3, 909-932.
Bassani, C.; Cavalli, R.M.; Antonelli, P. Influence of aerosol and surface reflectance variability on hyperspectral observed radiance. Atmos. Meas. Tech. 2012, 5, 1193-1203.
Vermote, E.F.; Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2009, 113, D23.
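The Lambertian correction equation underlying 6S-type algorithms can be inverted in closed form for the surface reflectance once the path reflectance, the two-way transmittances, and the atmospheric spherical albedo are known for a band. The sketch below shows that inversion; the coefficient values are hypothetical and would normally come from a 6S run for the actual acquisition geometry and aerosol model.

```python
def surface_reflectance(rho_toa, rho_path, t_down, t_up, s_alb, t_gas=1.0):
    """Invert the standard Lambertian atmospheric-correction equation
    (as used in 6S-type codes):
        rho_toa = t_gas * (rho_path + t_down*t_up*rho_s / (1 - s_alb*rho_s))
    for the surface reflectance rho_s, given the path reflectance, the
    downward/upward scattering transmittances and the spherical albedo."""
    y = (rho_toa / t_gas - rho_path) / (t_down * t_up)
    return y / (1.0 + s_alb * y)

# hypothetical coefficients for one HICO band, e.g. from a 6S run
print(surface_reflectance(rho_toa=0.12, rho_path=0.08, t_down=0.85, t_up=0.88, s_alb=0.15))
```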
Revisions to some parameters used in stochastic-method simulations of ground motion
Boore, David; Thompson, Eric M.
2015-01-01
The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physical‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.
Attribution of Extreme Rainfall Events in the South of France Using EURO-CORDEX Simulations
NASA Astrophysics Data System (ADS)
Luu, L. N.; Vautard, R.; Yiou, P.
2017-12-01
The Mediterranean region regularly undergoes episodes of intense precipitation in the fall season that exceed 300 mm a day. This study focuses on the role of climate change in the dynamics of the events that occur in the South of France. We used an ensemble of 10 EURO-CORDEX model simulations with two horizontal resolutions (EUR-11: 0.11° and EUR-44: 0.44°) for the attribution of extreme fall rainfall in the Cévennes mountain range (South of France). The biases of the simulations were corrected with a simple scaling adjustment and a quantile correction (CDFt). This produces five datasets, including EUR-44 and EUR-11 with and without scaling adjustment and CDFt-EUR-11, on which we test the impact of resolution and bias correction on the extremes. Those datasets, after pooling all models together, are fitted by a stationary Generalized Extreme Value distribution for several periods to estimate a climate change signal in the tail of the distribution of extreme rainfall in the Cévennes region. Those changes are then interpreted with a scaling model that links extreme rainfall to mean and maximum daily temperature. The results show that higher-resolution simulations with bias adjustment indicate a robust and confident increase in the intensity and likelihood of occurrence of autumn extreme rainfall in the area under the current climate in comparison with the historical climate. The exceedance probability of a 1-in-1000-year event in the historical climate may increase by a factor of 1.8 under the current climate, with a confidence interval of 0.4 to 5.3, following the CDFt bias-adjusted EUR-11. The change in magnitude appears to follow the Clausius-Clapeyron relation, which indicates a 7% increase in rainfall per 1°C increase in temperature.
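The core statistical step, fitting a stationary GEV to pooled maxima and comparing return levels between periods, can be reproduced with scipy. The sketch below uses synthetic Gumbel-distributed stand-ins for the bias-adjusted seasonal maxima; the probability-ratio calculation at the end mirrors the kind of attribution statement quoted above, but every number here is illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# stand-ins for pooled, bias-adjusted seasonal maxima of daily rainfall (mm)
hist = rng.gumbel(loc=120, scale=40, size=500)    # "historical" period
curr = rng.gumbel(loc=135, scale=45, size=500)    # "current" period

def fit_return_level(x, return_period=1000):
    """GEV return level: the value exceeded with probability 1/return_period."""
    params = genextreme.fit(x)                    # shape, loc, scale
    return genextreme.isf(1.0 / return_period, *params)

rl_hist = fit_return_level(hist)

# probability ratio: how much more likely the historical 1-in-1000-yr
# event has become in the current climate
params_curr = genextreme.fit(curr)
pr = genextreme.sf(rl_hist, *params_curr) / (1.0 / 1000)
print(rl_hist, pr)
```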
A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion
NASA Astrophysics Data System (ADS)
Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.
2016-07-01
A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid-correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight times with respect to an equivalent single-grid method, and by two times with respect to an artificially-stabilized MG method.
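The positivity-preserving prolongation idea can be stated compactly: transfer the coarse-grid correction in logarithmic variables, so the fine-grid update becomes multiplicative and a positive field stays positive by construction. The 1D sketch below uses simple injection as the prolongation operator, which is a simplification of the paper's scheme; shapes and values are illustrative.

```python
import numpy as np

def positivity_preserving_prolongation(q_fine, q_coarse_old, q_coarse_new):
    """Coarse-grid correction applied in logarithmic variables: the fine-grid
    update is q_f <- q_f * exp(P[ln q_c_new - ln q_c_old]), so a positive
    quantity (turbulence variable, species mass fraction) can never be
    driven negative, unlike the additive FAS correction."""
    delta_log = np.log(q_coarse_new) - np.log(q_coarse_old)
    # P = simple injection prolongation: each coarse cell covers 2 fine cells
    delta_log_fine = np.repeat(delta_log, 2)
    return q_fine * np.exp(delta_log_fine)

q_f = np.array([1e-6, 2e-6, 3e-6, 4e-6])          # positive fine-grid field
q_c_old = np.array([1.5e-6, 3.5e-6])
q_c_new = np.array([1.0e-6, 5.0e-6])
print(positivity_preserving_prolongation(q_f, q_c_old, q_c_new))
```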
Two-compartment modeling of tissue microcirculation revisited.
Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen
2017-05-01
Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies, although it is well known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution spaces) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4)% (vs. (105 ± 21)% without correction) for the flow, (3.6 ± 6.1)% for the permeability-surface area product, (5.8 ± 4.9)% for the vascular volume and (2.5 ± 4.1)% for the interstitial volume, with individual deviations of more than 20% being exceptional and only marginal. Increasing the duration of CA administration had a statistically significant but opposite effect only on the accuracy of the estimated flow (which declined) and the intravascular volume (which improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.
Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?
NASA Astrophysics Data System (ADS)
Nijssen, B.; Chegwidden, O.
2017-12-01
Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.
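A minimal empirical quantile-mapping sketch makes both the method and its failure mode concrete: flows are mapped through the paired historical model and observation CDFs, and anything outside the calibration range gets clamped at the edges, exactly the kind of behaviour that can distort a climate change signal in future flows. This is a generic illustration, not any specific operational implementation.

```python
import numpy as np

def quantile_map(q_sim_future, q_sim_hist, q_obs_hist):
    """Empirical quantile mapping: for each simulated flow, find its
    non-exceedance probability in the historical simulation and replace it
    with the observed flow at the same probability. Note that np.interp
    clamps values beyond the calibration range at the edges, one source
    of artifacts in future (out-of-range) flows."""
    sim_sorted = np.sort(q_sim_hist)
    obs_sorted = np.sort(q_obs_hist)
    p_sim = np.linspace(0, 1, len(sim_sorted))
    p_obs = np.linspace(0, 1, len(obs_sorted))
    prob = np.interp(q_sim_future, sim_sorted, p_sim)   # model CDF
    return np.interp(prob, p_obs, obs_sorted)           # observed quantiles

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, 3650)             # observed daily flows, m^3/s
sim_h = 0.8 * obs + rng.normal(0, 5, 3650)   # biased historical simulation
sim_f = 1.1 * sim_h                          # hypothetical future simulation
corrected = quantile_map(sim_f, sim_h, obs)
```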
NASA Technical Reports Server (NTRS)
Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.
2007-01-01
This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
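Two quantities from this methodology lend themselves to a numerical sketch: a closed-form cohesive penalty stiffness of the form K = alpha*E3/t, and a reduced interfacial strength that guarantees a minimum number of decohesion elements inside the cohesive zone on coarse meshes. The constants below (alpha of about 50, M = 1, Ne = 3) follow one common statement of the procedure and should be treated as assumptions, as should the material values.

```python
import numpy as np

def interface_stiffness(e3, t_sub, alpha=50.0):
    """Closed-form cohesive penalty stiffness, K = alpha * E3 / t, where E3
    is the through-thickness modulus and t the adjacent sublaminate
    thickness (alpha ~ 50 is a typical choice)."""
    return alpha * e3 / t_sub

def adjusted_strength(e3, gc, le, ne=3, m=1.0):
    """Reduced interfacial strength for coarse meshes: with cohesive zone
    length l_cz = M*E3*Gc/tau^2, requiring Ne elements of size le inside
    the zone gives tau_bar = sqrt(M*E3*Gc/(Ne*le)). Constants are
    assumptions following one common form of the procedure."""
    return np.sqrt(m * e3 * gc / (ne * le))

# illustrative carbon/epoxy-like values
K = interface_stiffness(e3=10e9, t_sub=1e-3)          # N/m^3
tau = adjusted_strength(e3=10e9, gc=280.0, le=1e-3)   # Pa
print(K, tau)
```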
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran
2014-04-23
The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses of fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is provided by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.
Reynolds-Averaged Navier-Stokes Simulation of a 2D Circulation Control Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Jones, Greg; Lin, John C.
2011-01-01
Numerical simulations are performed using a Reynolds-averaged Navier-Stokes (RANS) flow solver for a circulation control airfoil. 2D and 3D simulation results are compared to a circulation control wind tunnel test conducted at the NASA Langley Basic Aerodynamics Research Tunnel (BART). The RANS simulations are compared to a low blowing case with a jet momentum coefficient, Cμ, of 0.047 and a higher blowing case of 0.115. Three-dimensional simulations of the model and tunnel walls show wall effects on the lift and airfoil surface pressures. These wall effects include a 4% decrease of the midspan sectional lift for the Cμ = 0.115 blowing condition. Simulations comparing the performance of the Spalart-Allmaras (SA) and Shear Stress Transport (SST) turbulence models are also made, showing that the SST model compares best to the experimental data. A Rotational/Curvature Correction (RCC) to the turbulence model is also evaluated, demonstrating an improvement in the CFD predictions.
Numerical simulations of crystal growth in a transdermal drug delivery system
NASA Astrophysics Data System (ADS)
Zeng, Jianming; Jacob, Karl I.; Tikare, Veena
2004-02-01
Grain growth by precipitation and Ostwald ripening in an unstressed matrix of a dissolved crystallizable component was simulated using a kinetic Monte Carlo model. This model was used previously to study Ostwald ripening in the high crystallizable component regime and was shown to correctly simulate solution, diffusion and precipitation. In this study, the same model with modifications was applied to the low crystallizable regime of interest to the transdermal drug delivery system (TDS) community. We demonstrate the model's utility by simulating precipitation and grain growth during isothermal storage at different supersaturation conditions. The simulation results provide a first approximation for the crystallization occurring in TDS. It has been reported that, at relatively high temperatures, growth of drug crystals in TDS occurs only in the middle third of the polymer layer. The simulation results support these findings: at relatively high temperature, cluster growth is limited to the middle third of the region, where the availability of crystallizable components is highest.
Reliability of analog quantum simulation
Sarovar, Mohan; Zhang, Jun; Zeng, Lishan
2017-01-03
Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this paper we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. Finally, to demonstrate the approach, we characterize the robust features of a variety of quantum many-body models.
Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.
Research on simulation of supercritical steam turbine system in large thermal power station
NASA Astrophysics Data System (ADS)
Zhou, Qiongyang
2018-04-01
In order to improve the stability and safety of supercritical steam turbine system operation in large thermal power stations, the body of the steam turbine is modeled in this paper. In accordance with the hierarchical modeling idea, the steam turbine body model, condensing system model, deaeration system model and regenerative system model are combined, according to the connection relationships of the turbine subsystems, to build a simulation model of the steam turbine system. Finally, the correctness of the model is verified against design and operation data of a 600 MW supercritical unit. The results show that the maximum simulation error of the model is 2.15%, which meets engineering requirements. This research provides a platform for studying variable operating conditions of the turbine system, and lays a foundation for the construction of a whole-plant model of the thermal power plant.
Some issues in the simulation of two-phase flows: The relative velocity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräbel, J.; Hensel, S.; Ueberholz, P.
In this paper we compare numerical approximations for solving the Riemann problem for a hyperbolic two-phase flow model in two-dimensional space. The model is based on mixture parameters of state where the relative velocity between the two-phase systems is taken into account. This relative velocity appears as a main discontinuous flow variable through the complete wave structure and cannot be recovered correctly by some numerical techniques when simulating the associated Riemann problem. Simulations are validated by comparing the results of the numerical calculation qualitatively with OpenFOAM software. Simulations also indicate that OpenFOAM is unable to resolve the relative velocity associated with the Riemann problem.
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Simulation of modern climate with the new version of the INM RAS climate model
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.
2017-03-01
The INMCM5.0 numerical model of the Earth's climate system is presented, which is an evolution from the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on reducing systematic errors as compared to the previous version, on reproducing phenomena that could not be simulated correctly in the previous version, and on problems that remain unresolved.
Global Magnetosphere Modeling With Kinetic Treatment of Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Toth, G.; Chen, Y.; Gombosi, T. I.; Cassak, P.; Markidis, S.; Peng, B.; Henderson, M. G.
2017-12-01
Global magnetosphere simulations with a kinetic treatment of magnetic reconnection are very challenging because of the large separation of global and kinetic scales. We have developed two algorithms that can overcome these difficulties: 1) the two-way coupling of the global magnetohydrodynamic code with an embedded particle-in-cell model (MHD-EPIC) and 2) the artificial increase of the ion and electron kinetic scales. Both of these techniques improve the efficiency of the simulations by many orders of magnitude. We will describe the techniques and show that they provide correct and meaningful results. Using the coupled model and the increased kinetic scales, we will present global magnetosphere simulations with the PIC domains covering the dayside and/or tail reconnection sites. The simulation results will be compared to and validated with MMS observations.
Conceptualization of preferential flow for hillslope stability assessment
NASA Astrophysics Data System (ADS)
Kukemilks, Karlis; Wagner, Jean-Frank; Saks, Tomas; Brunner, Philip
2018-03-01
This study uses two approaches to conceptualize preferential flow with the goal to investigate their influence on hillslope stability. Synthetic three-dimensional hydrogeological models using dual-permeability and discrete-fracture conceptualization were subsequently integrated into slope stability simulations. The slope stability simulations reveal significant differences in slope stability depending on the preferential flow conceptualization applied, despite similar small-scale hydrogeological responses of the system. This can be explained by a local-scale increase of pore-water pressures observed in the scenario with discrete fractures. The study illustrates the critical importance of correctly conceptualizing preferential flow for slope stability simulations. It further demonstrates that the combination of the latest generation of physically based hydrogeological models with slope stability simulations allows for improvement to current modeling approaches through more complex consideration of preferential flow paths.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
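To make the Firth approach concrete, the following is a minimal sketch (in Python, on synthetic data with hypothetical variable names, not the study's code) of a Firth-penalized Poisson log-linear fit: the penalty adds half the log-determinant of the expected information to the log-likelihood, which is what shrinks the small-sample bias discussed above.

```python
# Minimal sketch of Firth-penalized Poisson regression (synthetic data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.integers(0, 2, 30)])  # intercept + risk-period indicator
y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))           # simulated event counts

def neg_penalized_loglik(beta):
    mu = np.exp(X @ beta)
    loglik = np.sum(y * (X @ beta) - mu)      # Poisson log-likelihood (constant dropped)
    fisher = X.T @ (mu[:, None] * X)          # expected information matrix
    penalty = 0.5 * np.linalg.slogdet(fisher)[1]  # Firth penalty: 0.5*log|I(beta)|
    return -(loglik + penalty)

beta_firth = minimize(neg_penalized_loglik, x0=np.zeros(2), method="BFGS").x
print("Firth-corrected estimates:", beta_firth)
```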
Observer-based monitoring of heat exchangers.
Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente
2008-01-01
The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.
Dong, Bing; Li, Yan; Han, Xin-li; Hu, Bin
2016-01-01
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberration varying with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of the conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with optimized correction and 1.427 × 10^-5 with un-optimized correction. We also demonstrated that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method. PMID:27598161
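As a rough illustration of the metric function named above, the sketch below computes the low spatial frequency content of an image's spectral density; the plain FFT-based estimate and the cutoff radius are our choices for illustration, not the paper's exact implementation.

```python
# Low-spatial-frequency content of the image spectral density (sketch).
import numpy as np

def low_freq_metric(image, radius=8):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2  # spectral density
    ny, nx = image.shape
    yy, xx = np.mgrid[:ny, :nx]
    mask = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2 <= radius ** 2
    mask[ny // 2, nx // 2] = False      # exclude the DC term
    return spectrum[mask].sum() / spectrum.sum()

img = np.random.default_rng(1).random((128, 128))
print(low_freq_metric(img))
```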
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
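The iterative framework can be summarized in a few lines. In this hedged sketch, `estimate_scatter` and `reconstruct` are hypothetical placeholders standing in for the analytic physics model and the CT reconstruction step; only the subtract-and-repeat structure reflects the description above.

```python
# Generic iterative scatter-correction loop (sketch, placeholder physics).
import numpy as np

def iterative_scatter_correction(measured, estimate_scatter, reconstruct, n_iter=3):
    corrected = measured.copy()
    for _ in range(n_iter):
        image = reconstruct(corrected)                     # reconstruct from current projections
        scatter = estimate_scatter(image)                  # physics-model scatter prediction
        corrected = np.clip(measured - scatter, 0, None)   # subtract, keep counts non-negative
    return corrected

# Toy demo: identity "reconstruction" and a flat 5% scatter estimate.
proj = np.full((4, 4), 100.0)
print(iterative_scatter_correction(proj, lambda im: 0.05 * im, lambda p: p))
```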
Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji
2012-01-01
Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall-runoff processes. Traditionally, depression storage is determined through model calibration, lumped with soil storage components, or assigned on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...
ERIC Educational Resources Information Center
Leth-Steensen, Craig; Gallitto, Elena
2016-01-01
A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
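For concreteness, the two posterior metrics can be computed from binary observed/simulated inundation grids as simple predictive values; the sketch below uses made-up grids.

```python
# P(A|B): fraction of simulated-inundated cells that were actually inundated;
# P(notA|notB): the analogue for cells simulated as not inundated.
import numpy as np

def predictive_values(observed, simulated):
    obs, sim = observed.astype(bool), simulated.astype(bool)
    ppv = (obs & sim).sum() / sim.sum()         # positive predictive value
    npv = (~obs & ~sim).sum() / (~sim).sum()    # negative predictive value
    return ppv, npv

obs = np.array([[1, 1, 0], [0, 1, 0]])
sim = np.array([[1, 0, 0], [0, 1, 1]])
print(predictive_values(obs, sim))
```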
Malkyarenko, Dariya I; Chenevert, Thomas L
2014-12-01
To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic media. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
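A minimal sketch of the corrected-b-value route mentioned above (illustrative numbers only, not the study's data): the nominal b-value is scaled voxel-wise by a gradient-nonlinearity map c(r) before the ADC is computed, so a voxel with a 10% lower effective b recovers the same ADC as one at isocenter.

```python
# ADC with a spatially varying b-value correction (sketch, made-up values).
import numpy as np

def adc_corrected(s0, s_dwi, b_nominal, c_map):
    b_eff = b_nominal * c_map           # corrected b-value per voxel
    return np.log(s0 / s_dwi) / b_eff   # ADC = ln(S0/S)/b_eff

s0 = np.array([1000.0, 1000.0])
s_dwi = np.array([368.0, 406.0])        # signals at isocenter and off-center
c_map = np.array([1.00, 0.90])          # 10% lower effective b off-isocenter
print(adc_corrected(s0, s_dwi, b_nominal=1000.0, c_map=c_map))  # ~1e-3 for both
```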
A simulation of GPS and differential GPS sensors
NASA Technical Reports Server (NTRS)
Rankin, James M.
1993-01-01
The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
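The proportionality described above can be illustrated with a toy error budget (all numbers illustrative): the position error is the DOP times the range error, and differential correction removes the error components common to both receivers while leaving the local ones.

```python
# Toy GPS vs. DGPS error budget (illustrative numbers only).
hdop = 1.5                    # horizontal dilution of precision
sigma_common = 10.0           # clock/ephemeris/atmospheric errors, m (shared)
sigma_local = 1.0             # receiver noise + multipath, m (not shared)

sigma_gps = hdop * (sigma_common**2 + sigma_local**2) ** 0.5
sigma_dgps = hdop * sigma_local     # common errors cancel after correction
print(f"GPS: {sigma_gps:.1f} m, DGPS: {sigma_dgps:.1f} m")
```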
NASA Astrophysics Data System (ADS)
Kamikubo, Takashi; Ohnishi, Takayuki; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi; Bai, Shufeng; Wang, Jen-Shiang; Howell, Rafael; Chen, George; Li, Jiangwei; Tao, Jun; Wiley, Jim; Kurosawa, Terunobu; Saito, Yasuko; Takigawa, Tadahiro
2010-09-01
In electron beam writing on EUV masks, it has been reported that CD linearity does not show the simple signatures observed with conventional COG (Cr on Glass) masks, because of scattered electrons from the EUV mask itself, which comprises stacked heavy metals and thick multi-layers. To resolve this issue, Mask Process Correction (MPC) is ideally applicable. Every pattern is reshaped in MPC; therefore, the number of shots does not increase and writing time is kept within a reasonable range. In this paper, MPC is extended to modeling for correction of CD linearity errors on EUV masks, and its effectiveness is verified with simulations and experiments through actual writing tests.
A Posteriori Study of a DNS Database Describing Supercritical Binary-Species Mixing
NASA Technical Reports Server (NTRS)
Bellan, Josette; Taskinoglu, Ezgi
2012-01-01
Currently, the modeling of supercritical-pressure flows through Large Eddy Simulation (LES) uses models derived for atmospheric-pressure flows. Those atmospheric-pressure flows do not exhibit the high density-gradient magnitude features observed both in experiments and simulations of supercritical-pressure flows in the case of two-species mixing. To assess whether the current LES modeling is appropriate, and if found not appropriate to propose higher-fidelity models, an LES a posteriori study has been conducted for a mixing layer that initially contains different species in the lower and upper streams, and where the initial pressure is larger than the critical pressure of either species. An initially imposed vorticity perturbation promotes roll-up and a double pairing of four initial span-wise vortices into an ultimate vortex that reaches a transitional state. The LES equations consist of the differential conservation equations coupled with a real-gas equation of state, and the equation set uses transport properties depending on the thermodynamic variables. Unlike all LES models to date, the differential equations contain, in addition to the subgrid-scale (SGS) fluxes, a new SGS term that is a pressure correction in the momentum equation. This additional term results from filtering the Direct Numerical Simulation (DNS) equations, and represents the gradient of the difference between the filtered pressure and the pressure computed from the filtered flow field. A previous a priori analysis, using a DNS database for the same configuration, found this term to be of leading order in the momentum equation, a fact traced to the existence of high density-gradient magnitude regions that populated the entire flow; in that study, models were proposed for the SGS fluxes as well as for this new term. In the present study, the previously proposed constant-coefficient SGS-flux models of the a priori investigation are tested a posteriori in LES, devoid of, or including, the SGS pressure-correction term. The present pressure-correction model is different from, and more accurate as well as less computationally intensive than, that of the a priori study. The constant-coefficient SGS-flux models encompass the Smagorinsky (SMC) model, in conjunction with the Yoshizawa (YO) model for the trace, the Gradient (GRC) model and the Scale Similarity (SSC) model, all exercised with the a priori study constant coefficients calibrated at the transitional state. The LES comparison is performed with the filtered-and-coarsened (FC) DNS, which represents an ideal LES solution. As expected, an LES model devoid of SGS terms is shown to be considerably inferior to models containing SGS effects. Among models containing SGS effects, those including the pressure-correction term are substantially superior to those devoid of it. The sensitivity of the predictions to the initial conditions and grid size is also investigated. Thus, it has been discovered that, in addition to the atmospheric-pressure models currently used, a new model is necessary to simulate supercritical-pressure flows. This model depends on the thermodynamic characteristics of the chemical species involved.
NASA Astrophysics Data System (ADS)
Zhang, Mingyang
2018-06-01
To further study the bidirectional power flow of a V2G (Vehicle to Grid) charging and discharging unit, mathematical models of the AC/DC converter and the bidirectional DC/DC converter were established. A lithium battery was chosen as the electric vehicle battery and its mathematical model was established. In order to improve the service life of the lithium battery, the bidirectional DC/DC converter adopted a constant-current/constant-voltage control strategy. In the initial stage of charging, constant-current charging with a single current closed loop is used; after the voltage reaches a certain value, the system switches to constant-voltage charging controlled by voltage and current loops. Subsequently, a V2G system simulation model was built in MATLAB/Simulink. The simulation results verified the correctness of the control strategy and showed that, when charging, constant-current and constant-voltage charging was achieved, the grid-side voltage and current were in phase, and the power factor was about 1. When discharging, constant-current discharge was applied, and the grid voltage and current phase difference was π. Overall, the simulation results confirm the validity of the control strategy.
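A bare-bones sketch of the constant-current/constant-voltage switching logic described above, applied to a crude linear battery model; all parameter values and the OCV curve are invented for illustration and do not come from the paper.

```python
# CC-CV charging sketch: constant current until the pack voltage hits the
# CV setpoint, then the current decays until it falls below the cutoff.
def cc_cv_charge(i_cc=20.0, v_cv=4.2 * 96, i_cutoff=2.0, dt=1.0):
    soc, r_int = 0.2, 0.05                       # initial state of charge, internal resistance
    ocv = lambda s: 3.0 * 96 + 1.2 * 96 * s      # toy open-circuit voltage vs. SOC (96 cells)
    capacity_as = 50 * 3600                      # pack capacity, ampere-seconds
    i = i_cc
    while i > i_cutoff and soc < 1.0:
        v = ocv(soc) + i * r_int
        if v >= v_cv:                            # switch to constant-voltage mode
            i = (v_cv - ocv(soc)) / r_int        # current decays as OCV rises
        soc += i * dt / capacity_as
    return soc

print(f"final state of charge: {cc_cv_charge():.3f}")
```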
One-way coupling of an atmospheric and a hydrologic model in Colorado
Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.
2006-01-01
This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.
NASA Astrophysics Data System (ADS)
Ooi, Seng-Keat
2005-11-01
Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8*10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2D simulations are also discussed; in particular, their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
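A quick numerical check of the quoted scaling: if the front speed decays as t^(-1/3) in the inviscid phase, the front position grows as t^(2/3), so an eight-fold increase in time halves the speed and quadruples the distance travelled.

```python
# Check of the inviscid-phase scaling u_f ~ t^(-1/3), x_f ~ t^(2/3).
import numpy as np

t = np.array([10.0, 80.0])        # two times in the inviscid phase (arbitrary units)
u = t ** (-1.0 / 3.0)             # front speed
x = t ** (2.0 / 3.0)              # front position (integral of u, prefactor dropped)
print(u[1] / u[0])                # 0.5: speed halves over an 8-fold time increase
print(x[1] / x[0])                # 4.0: front position quadruples
```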
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
NASA Astrophysics Data System (ADS)
Curci, Gabriele; Falasca, Serena
2017-04-01
Deterministic air quality forecast is routinely carried out at many local Environmental Agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the required quality level for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observations from the European Environmental Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
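The three approaches can each be sketched in a few lines. The snippet below (synthetic series, made-up numbers, not the CETEMPS implementation) shows a ratio correction from recent days, an empirical quantile mapping, and a simple MOS regression.

```python
# Sketches of the three bias-correction approaches on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 20.0, 500)                     # observed concentrations
fcst = 1.3 * obs + rng.normal(0, 10, 500)           # biased model forecasts

# (1) multiplicative factor: rescale by the observed-to-forecast ratio
#     averaged over recent days (here, the last 30 values)
factor = obs[-30:].mean() / fcst[-30:].mean()
corr1 = factor * fcst

# (2) quantile mapping: impose the observed distribution on the forecasts
ranks = np.searchsorted(np.sort(fcst), fcst) / len(fcst)
corr2 = np.quantile(obs, np.clip(ranks, 0, 1))

# (3) MOS: linear regression of observations on forecasts
slope, intercept = np.polyfit(fcst, obs, 1)
corr3 = slope * fcst + intercept
```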
Stimulation artifact correction method for estimation of early cortico-cortical evoked potentials.
Trebaul, Lena; Rudrauf, David; Job, Anne-Sophie; Mălîia, Mihai Dragos; Popa, Irina; Barborica, Andrei; Minotti, Lorella; Mîndruţă, Ioana; Kahane, Philippe; David, Olivier
2016-05-01
Effective connectivity can be explored using direct electrical stimulations in patients suffering from drug-resistant focal epilepsies and investigated with intracranial electrodes. Responses to brief electrical pulses mimic the physiological propagation of signals and manifest as cortico-cortical evoked potentials (CCEP). The first CCEP component is believed to reflect direct connectivity with the stimulated region, but the stimulation artifact, a sharp deflection occurring during a few milliseconds, frequently contaminates it. In order to recover the characteristics of early CCEP responses, we developed an artifact correction method based on electrical modeling of the electrode-tissue interface. The biophysically motivated artifact templates are then regressed out of the recorded data as in any classical template-matching artifact removal method. Our approach is able to make the distinction between the physiological responses time-locked to the stimulation pulses and the non-physiological component. We tested the correction on simulated CCEP data in order to quantify its efficiency for different stimulation and recording parameters. We demonstrated the efficiency of the new correction method on simulations of single-trial recordings for early responses contaminated with the stimulation artifact. The results highlight the importance of sampling frequency for an accurate analysis of CCEP. We then applied the approach to experimental data. The model-based template removal was compared to a correction based on the subtraction of the averaged artifact. This new correction method of stimulation artifact will enable investigators to better analyze early CCEP components and infer direct effective connectivity in future CCEP studies.
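Once a biophysical artifact template is available, the regression step is a one-line least-squares projection; the sketch below uses an invented exponential template and an invented 40 Hz "response" purely for illustration.

```python
# Template regression: scale the modeled artifact by least squares and
# subtract it from each trial (synthetic template and trial).
import numpy as np

def remove_artifact(trial, template):
    amp = np.dot(trial, template) / np.dot(template, template)  # LS amplitude
    return trial - amp * template

t = np.linspace(0, 0.05, 500)
template = np.exp(-t / 0.002)                                # artifact shape (illustrative)
trial = 3.0 * template + 5e-3 * np.sin(2 * np.pi * 40 * t)   # artifact + physiological response
cleaned = remove_artifact(trial, template)
```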
NASA Astrophysics Data System (ADS)
Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.
2008-09-01
The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program, in conjunction with SolidWorks® and COSMOSFloWorks® fluid dynamics software, to model a complete commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rate study assessing the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluating the accuracy in predicting the resolution; (3) a gate width study comparing the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
The Real-Time Wall Interference Correction System of the NASA Ames 12-Foot Pressure Wind Tunnel
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
1998-01-01
An improved version of the Wall Signature Method was developed to compute wall interference effects in three-dimensional subsonic wind tunnel testing of aircraft models in real-time. The method may be applied to a full-span or a semispan model. A simplified singularity representation of the aircraft model is used. Fuselage, support system, propulsion simulator, and separation wake volume blockage effects are represented by point sources and sinks. Lifting effects are represented by semi-infinite line doublets. The singularity representation of the test article is combined with the measurement of wind tunnel test reference conditions, wall pressure, lift force, thrust force, pitching moment, rolling moment, and pre-computed solutions of the subsonic potential equation to determine first order wall interference corrections. Second order wall interference corrections for pitching and rolling moment coefficient are also determined. A new procedure is presented that estimates a rolling moment coefficient correction for wings with non-symmetric lift distribution. Experimental data obtained during the calibration of the Ames Bipod model support system and during tests of two semispan models mounted on an image plane in the NASA Ames 12 ft. Pressure Wind Tunnel are used to demonstrate the application of the wall interference correction method.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
NASA Technical Reports Server (NTRS)
Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.
2008-01-01
Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.
Predicting translational deformity following opening-wedge osteotomy for lower limb realignment.
Barksfield, Richard C; Monsell, Fergal P
2015-11-01
An opening-wedge osteotomy is well recognised for the management of limb deformity and requires an understanding of the principles of geometry. Translation at the osteotomy is needed when the osteotomy is performed away from the centre of rotation of angulation (CORA), but the amount of translation varies with the distance from the CORA. This translation enables proximal and distal axes on either side of the proposed osteotomy to realign. We have developed two experimental models to establish whether the amount of translation required (based on the translation deformity created) can be predicted based upon simple trigonometry. A predictive algorithm was derived where translational deformity was predicted as 2(tan α × d), where α represents 50 % of the desired angular correction, and d is the distance of the desired osteotomy site from the CORA. A simulated model was developed using TraumaCad online digital software suite (Brainlab AG, Germany). Osteotomies were simulated in the distal femur, proximal tibia and distal tibia for nine sets of lower limb scanograms at incremental distances from the CORA and the resulting translational deformity recorded. There was strong correlation between the distance of the osteotomy from the CORA and simulated translation deformity for distal femoral deformities (correlation coefficient 0.99, p < 0.0001), proximal tibial deformities (correlation coefficient 0.93-0.99, p < 0.0001) and distal tibial deformities (correlation coefficient 0.99, p < 0.0001). There was excellent agreement between the predictive algorithm and simulated translational deformity for all nine simulations (correlation coefficient 0.93-0.99, p < 0.0001). Translational deformity following corrective osteotomy for lower limb deformity can be anticipated and predicted based upon the angular correction and the distance between the planned osteotomy site and the CORA.
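As a worked example of the predictive algorithm quoted above (illustrative numbers): for a 20° desired correction performed 30 mm from the CORA, the predicted translation is about 10.6 mm.

```python
# Worked example of: translation = 2 * (tan(alpha) * d),
# where alpha is half the desired angular correction.
import math

angular_correction_deg = 20.0        # desired correction
d_mm = 30.0                          # osteotomy distance from the CORA
alpha = math.radians(angular_correction_deg / 2)
translation_mm = 2 * math.tan(alpha) * d_mm
print(f"predicted translation: {translation_mm:.1f} mm")   # about 10.6 mm
```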
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied. PMID:25285917
NASA Astrophysics Data System (ADS)
Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao
2017-02-01
Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique, which employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieve good recovery quality. Furthermore, given the wide field-of-view (FOV) in FPM, different regions in the FOV have different sensitivity to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations for several images with low illumination NAs are first performed to estimate the initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, and it is demonstrated that this method can both improve the quality of the recovered object image and relax the position accuracy requirement for the LED elements when aligning FPM imaging platforms.
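A bare-bones simulated annealing loop of the kind used for such a position search might look as follows; here the FPM recovery error is replaced by a toy quadratic cost, so everything beyond the SA skeleton (step size, schedule, seed) is our illustrative choice.

```python
# Generic simulated annealing for a 2-D position offset (sketch).
import numpy as np

def anneal(cost, x0, steps=2000, t0=1.0, seed=3):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    best = x.copy()
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = x + rng.normal(0, 0.1, size=x.shape) # random perturbation
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            x = cand                                # accept downhill, or uphill with prob.
        if cost(x) < cost(best):
            best = x.copy()
    return best

true_shift = np.array([0.35, -0.2])                 # "unknown" LED offset (toy)
print(anneal(lambda p: np.sum((p - true_shift) ** 2), x0=[0.0, 0.0]))
```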
Modelling Black Carbon concentrations in two busy street canyons in Brussels using CANSBC
NASA Astrophysics Data System (ADS)
Brasseur, O.; Declerck, P.; Heene, B.; Vanderstraeten, P.
2015-01-01
This paper focuses on modelling Black Carbon (BC) concentrations in two busy street canyons, the Crown and Belliard Streets in Brussels. The original Operational Street Pollution Model was adapted to BC by eliminating the chemical module and is referred to here as CANSBC. Model validations were performed using temporal BC data from the fixed measurement network in Brussels. Subsequently, BC emissions were adjusted so that simulated BC concentrations equalled the observed ones, averaged over the whole period of simulation. Direct validations were performed for the Crown Street, while BC model calculations for the Belliard Street were validated indirectly using the linear relationship between BC and NOx. Concerning the Crown Street, simulated and observed half-hourly BC concentrations correlated well (r = 0.74) for the period from July 1st, 2011 till June 30th, 2013. In particular, CANSBC performed very well in simulating the monthly and diurnal evolution of averaged BC concentrations, as well as the difference between weekdays and weekends. This means that the model correctly handled the meteorological conditions as well as the variation in traffic emissions. Considering dispersion, it should however be noted that BC concentrations are better simulated under stable than under unstable conditions. Even if the correlation for half-hourly NOx concentrations was slightly lower (r = 0.60) than that for BC, indirect validations of CANSBC for the Belliard Street yielded comparable results and conclusions as described above for the Crown Street. Based on our results, it can be stated that CANSBC is suitable to accurately simulate BC concentrations in the street canyons of Brussels, under the following conditions: (i) accurate vehicle counting data are available to correctly estimate traffic emissions, and (ii) vehicle speeds are measured in order to improve emission estimates and to take into account the impact of the turbulence generated by moving vehicles on the local dispersion of BC.
Experimental Flow Models for SSME Flowfield Characterization
NASA Technical Reports Server (NTRS)
Abel, L. C.; Ramsey, P. E.
1989-01-01
Full scale flow models with extensive instrumentation were designed and manufactured to provide data necessary for flow field characterization in rocket engines of the Space Shuttle Main Engine (SSME) type. These models include accurate flow path geometries from the pre-burner outlet through the throat of the main combustion chamber. The turbines are simulated with static models designed to provide the correct pressure drop and swirl for specific power levels. The correct turbopump-hot gas manifold interfaces were designed into the flow models to permit parametric/integration studies for new turbine designs. These experimental flow models provide a vehicle for understanding the fluid dynamics associated with specific engine issues and also fill the more general need for establishing a more detailed fluid dynamic base to support development and verification of advanced math models.
Modulation of Soil Initial State on WRF Model Performance Over China
NASA Astrophysics Data System (ADS)
Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun
2017-11-01
The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical simulations, a control run with the CFSR soil state and a disturbed run with the corrected soil state, were conducted using the Weather Research and Forecasting model. All the simulations are initiated 4 times per day and run for 48 h. Model results show that the corrected soil state, for example a warmer and drier surface over most parts of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The changes of the lifting condensation level, level of free convection, and water transport due to the corrected soil state favor precipitation over wet regions, while prohibiting precipitation over dry regions. Moreover, diagnoses indicate that the remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
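For readers unfamiliar with the test under discussion, a compact sketch of the bias-corrected bootstrap for the mediated effect a*b follows; simple OLS regressions are assumed and the names are ours, not the authors'. A confidence interval excluding zero declares mediation significant, which is where the elevated Type I error rates arise.

```python
import numpy as np
from scipy.stats import norm

def bc_bootstrap_mediation(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Bias-corrected bootstrap CI for the mediated effect a*b."""
    rng = np.random.default_rng(seed)
    def ab(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                  # slope of M ~ X
        X = np.column_stack([np.ones_like(xi), xi, mi])
        b = np.linalg.lstsq(X, yi, rcond=None)[0][2]  # slope of Y ~ X + M
        return a * b
    theta = ab(x, m, y)
    n = len(x)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases
        boot[i] = ab(x[idx], m[idx], y[idx])
    z0 = norm.ppf((boot < theta).mean())              # bias-correction term
    lo, hi = norm.cdf(2 * z0 + norm.ppf([alpha / 2, 1 - alpha / 2]))
    return theta, np.quantile(boot, [lo, hi])
```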
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Vulnerability in the security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented, agent-based model (ABM) is constructed in a simulator application. The game ABM represents security activity in the information system. It has two opposing participants, an attacker and a defender, probabilistic game rules, and allowable game states. A specified number of simulations is run, and a probabilistic subset of the allowable game states is reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states is collected to determine a probability of one or more aspects of security in the information system.
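A toy version of such a game-oriented ABM can be stated in a few lines; the states, transition rules, and parameter names below are our invention for illustration, not the authors' model.

```python
import numpy as np

def estimate_state_probability(p_attack, p_restore, target, n_states=5,
                               n_steps=200, n_runs=10000, seed=0):
    """Monte Carlo estimate of the probability that the attacker drives the
    system into a given security state within n_steps moves. States
    0..n_states-1 are ordered by increasing compromise; each turn the
    attacker advances with p_attack and the defender restores with p_restore."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_runs):
        s = 0
        for _ in range(n_steps):
            u = rng.random()
            if u < p_attack:
                s = min(s + 1, n_states - 1)
            elif u < p_attack + p_restore:
                s = max(s - 1, 0)
            if s == target:
                hits += 1
                break
    return hits / n_runs
```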
Chen, Li; Zhang, Lei; Kang, Qinjun; Viswanathan, Hari S.; Yao, Jun; Tao, Wenquan
2015-01-01
Porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from the Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. For the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, we found that most values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed. PMID:25627247
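One common way to express a DGM-style correction factor is to add the Knudsen-diffusion flux to the viscous flux, as in the following sketch; the defaults are methane-like and the exact formulation in the paper may differ.

```python
import numpy as np

def correction_factor(k_int, pore_d, p, T=350.0, M=0.016, mu=1.2e-5):
    """Correction factor f = k_app / k_int combining viscous flow with
    Knudsen diffusion (SI units: k_int in m^2, pore_d in m, p in Pa,
    M in kg/mol). f > 1 always, as the abstract reports."""
    R = 8.314
    D_k = (pore_d / 3.0) * np.sqrt(8.0 * R * T / (np.pi * M))  # Knudsen diffusivity
    return 1.0 + mu * D_k / (k_int * p)
```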
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical exercise targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn comprises code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
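The solution-verification half of this methodology reduces to a short computation; a sketch of Richardson extrapolation and the observed order of accuracy follows, in standard textbook form rather than as code from GBS.

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from solutions on three grids refined by r."""
    return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

def richardson(f_medium, f_fine, p, r=2.0):
    """Richardson-extrapolated estimate of the exact solution, and the
    numerical-error estimate for the fine-grid solution."""
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return f_exact, f_fine - f_exact
```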
Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov
We present a semi-analytical correction to the seminal solution of Heppenheimer for the secular motion of a planet's orbit under the gravitational influence of an external perturber. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order estimates of the forced eccentricity and secular frequency. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
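For orientation, the classical first-order Heppenheimer estimates that the paper corrects can be sketched as below; the factors k_e and k_g stand in for the paper's polynomial corrections, which are not reproduced here, and setting them to 1 recovers the uncorrected solution.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI

def heppenheimer_first_order(a, a_b, e_b, m_a, m_b, k_e=1.0, k_g=1.0):
    """First-order secular frequency g and forced eccentricity e_f for a
    coplanar circumprimary planet (semimajor axis a, primary mass m_a) with
    an external perturber (a_b, e_b, m_b); k_e, k_g are placeholder
    correction factors."""
    n = np.sqrt(G * m_a / a**3)  # planet mean motion
    g = 0.75 * n * (m_b / m_a) * (a / a_b)**3 / (1.0 - e_b**2)**1.5
    e_f = 1.25 * (a / a_b) * e_b / (1.0 - e_b**2)
    return k_g * g, k_e * e_f
```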
NASA Technical Reports Server (NTRS)
Hayne, G. S.; Hancock, D. W., III
1990-01-01
Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.
Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il
2014-08-14
We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, not finite-size corrections. We do this by introducing a new numerical algorithm which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site, and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
A Caveat Note on Tuning in the Development of Coupled Climate Models
NASA Astrophysics Data System (ADS)
Dommenget, Dietmar; Rezny, Michael
2018-01-01
State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of the development process. It is not clear whether such tuning actually improves models. The tuning process is, in general, neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear whether such methods would perform better. In this study, ensembles of perturbed-physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations available to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics, nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that hamper further model development. In contrast, flux corrections perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered a pilot study to motivate further CGCM studies that address the issues of model tuning.
Impact of Adsorption on Gas Transport in Nanopores.
Wu, Tianhao; Zhang, Dongxiao
2016-03-29
Given the complex nature of the interaction between gas and solid atoms, the development of nanoscale science and technology has engendered a need for further understanding of gas transport behavior through nanopores and more tractable models for large-scale simulations. In the present paper, we utilize molecular dynamic simulations to demonstrate the behavior of gas flow under the influence of adsorption in nano-channels consisting of illite and graphene, respectively. The results indicate that velocity oscillation exists along the cross-section of the nano-channel, and the total mass flow could be either enhanced or reduced depending on variations in adsorption under different conditions. The mechanisms can be explained by the extra average perturbation stress arising from density oscillation via the novel perturbation model for micro-scale simulation, and approximated via the novel dual-region model for macro-scale simulation, which leads to a more accurate permeability correction model for industrial applications than is currently available.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.
Moderate forest disturbance as a stringent test for gap and big-leaf models
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B.; Fisk, J.; Holm, J. A.; Bailey, V.; Gough, C. M.
2014-07-01
Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. In particular, it is unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models - Biome-BGC, a classic big-leaf model, and the ED and ZELIG gap-oriented models - could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols, and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ED and ZELIG correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes. Biome-BGC net primary production (NPP) was correctly resilient, but for the wrong reasons, while ED and ZELIG exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. As a result we expect that most ecosystem models, developed to simulate processes following stand-replacing disturbances, will not simulate well the gradual and less extensive tree mortality characteristic of moderate disturbances.
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
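For context, an RPC model maps normalized ground coordinates to image coordinates through ratios of cubic polynomials; the sketch below evaluates one common 20-term ordering (RPC00B-style). The ordering and names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rpc_ground_to_image(lat, lon, h, num_l, den_l, num_s, den_s):
    """Evaluate an RPC model: normalized image (line, sample) as ratios of
    cubic polynomials in normalized ground coordinates. Each coefficient
    vector holds 20 terms; inputs are assumed already normalized to [-1, 1]."""
    P, L, H = lat, lon, h
    t = np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                  P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                  P*H*H, L*L*H, P*P*H, H**3])
    return (num_l @ t) / (den_l @ t), (num_s @ t) / (den_s @ t)
```

Refinement as described in the abstract then amounts to estimating the range delay and azimuth timing error via real-to-simulated image matching, correcting the RD model, and regenerating the RPC coefficients from it.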
The NASA Lewis integrated propulsion and flight control simulator
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.
1991-01-01
A new flight simulation facility was developed at NASA-Lewis. The purpose of this flight simulator is to allow integrated propulsion control and flight control algorithm development and evaluation in real time. As a preliminary check of the simulator facility's capabilities and the correct integration of its components, the control design and physics models for a short take-off and vertical landing fighter aircraft were implemented, along with their associated system integration and architecture, pilot-vehicle interfaces, and display symbology. The initial testing and evaluation results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated flight and propulsion control systems. Additionally, through the use of this flight simulator, various control design methodologies and cockpit mechanizations can be tested and evaluated in a real-time environment.
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
Spatially coupled low-density parity-check (SC-LDPC) codes were considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates above 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.
NASA Astrophysics Data System (ADS)
Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.
2014-12-01
We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections for future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components - model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) interpolating raw and corrected GCM outputs to a common grid; (2) converting these to percentiles; (3) estimating SREV for model, scenario, initial-condition, and total uncertainty at each percentile; and (4) transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is applicable to the assessment of uncertainties in any modelling framework. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, the B1, A1B, and A2 scenarios are considered, whereas for CMIP5, RCP2.6, RCP4.5, and RCP8.5 are used, representing low, medium, and high emissions. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainty increases in the future, especially for temperature, due to the divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, there is almost no reduction observed for temperature projections. Estimation of uncertainty in both space and time sheds light on the spatial and temporal patterns of uncertainties in GCM outputs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.
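A much-simplified stand-in for steps (2)-(3) is sketched below as a three-way variance decomposition of percentile-transformed output; the paper's exact SREV estimator differs, and the array layout and names are our assumptions.

```python
import numpy as np

def srev_components(q):
    """Decompose uncertainty in percentile-transformed GCM output.
    q: array [model, scenario, run, percentile].
    Returns square-root error variances (model, scenario, ensemble)
    as functions of percentile."""
    mean_ms = q.mean(axis=2)                       # average out ensemble runs
    v_model = mean_ms.var(axis=0).mean(axis=0)     # spread across models
    v_scen  = mean_ms.var(axis=1).mean(axis=0)     # spread across scenarios
    v_ens   = q.var(axis=2).mean(axis=(0, 1))      # spread across initial states
    return np.sqrt(v_model), np.sqrt(v_scen), np.sqrt(v_ens)
```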
Influence of wave-front sampling in adaptive optics retinal imaging
Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K.; Pircher, Michael
2017-01-01
A wide range of sampling densities of the wavefront, relative to the number of corrector elements, has been used in retinal adaptive optics (AO) instruments. We developed a model in order to characterize the link between the number of actuators, the number of wavefront sampling points, and AO correction performance. Based on available data from aberration measurements in the human eye, 1000 wavefronts were generated for the simulations. The AO correction performance in the presence of these representative aberrations was simulated for different combinations of deformable mirror and Shack-Hartmann wavefront sensor. Predictions of the model were experimentally tested through in vivo measurements in 10 eyes, including retinal imaging with an AO scanning laser ophthalmoscope. According to our study, a ratio of 2 between wavefront sampling points and actuator elements is sufficient to achieve high-resolution in vivo images of photoreceptors. PMID:28271004
NASA Technical Reports Server (NTRS)
Badavi, F. F.
1989-01-01
Aerodynamic loads on a multi-bladed helicopter rotor in forward flight at transonic tip conditions are calculated. The unsteady, three-dimensional, time-accurate compressible Reynolds-averaged thin layer Navier-Stokes equations are solved in a rotating coordinate system on a body-conformed, curvilinear grid of C-H topology. Detailed boundary layer and global numerical comparisons of NACA-0012 symmetrical and CAST7-158 supercritical airfoils are made under identical forward flight conditions. The rotor wake effects are modeled by applying a correction to the geometric angle of attack of the blade. This correction is obtained by computing the local induced downwash velocity with a free wake analysis program. The calculations are performed on the Numerical Aerodynamic Simulation Cray 2 and the VPS32 (a derivative of a Cyber 205 at the Langley Research Center) for a model helicopter rotor in forward flight.
SiC-VJFETs power switching devices: an improved model and parameter optimization technique
NASA Astrophysics Data System (ADS)
Ben Salah, T.; Lahbib, Y.; Morel, H.
2009-12-01
Silicon carbide junction field-effect transistors (SiC-JFETs) are mature power switches newly applied in several industrial applications. SiC-JFETs are often simulated with a Spice model in order to predict their electrical behaviour. Although such a model provides sufficient accuracy for some applications, this paper shows that it presents serious shortcomings, such as the neglect of the body diode, among many others in the circuit model topology. Correcting the simulation is then mandatory, and a new model should be proposed. Moreover, this paper gives an enhanced model based on experimental dc and ac data. New devices are added to the conventional circuit model, giving accurate static and dynamic behaviour, an effect not accounted for in the Spice model. The improved model is implemented in the VHDL-AMS language, and steady-state, dynamic, and transient responses are simulated for many SiC-VJFET samples. A very simple and reliable optimization algorithm based on the minimization of a cost function is proposed to extract the JFET model parameters. The obtained parameters are verified by comparing errors between simulation results and experimental data.
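The parameter-extraction idea — minimize a cost function between measured and modelled characteristics — can be sketched as follows, here with a generic square-law JFET model rather than the authors' enhanced VHDL-AMS model; names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def extract_jfet_params(vgs, vds, i_meas):
    """Fit a square-law JFET model (beta, Vt, lambda) to measured dc data by
    minimizing a relative least-squares cost."""
    def i_model(p):
        beta, vt, lam = p
        vov = np.maximum(vgs - vt, 0.0)             # overdrive voltage
        sat = vds >= vov                            # saturation region mask
        return np.where(sat,
                        beta * vov**2 * (1.0 + lam * vds),
                        beta * (2.0 * vov - vds) * vds)
    def cost(p):
        return np.sum(((i_model(p) - i_meas) / (np.abs(i_meas) + 1e-9))**2)
    res = minimize(cost, x0=[1e-2, -5.0, 1e-2], method="Nelder-Mead")
    return res.x  # beta, Vt, lambda
```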
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction, these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event-by-event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources.
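The two ideal dead-time models compared in the paper act on a pulse train as follows; this is a generic sketch, not the authors' MSR software.

```python
import numpy as np

def apply_dead_time(t_events, tau, paralysable=True):
    """Filter a time-sorted pulse train with an ideal dead-time model.
    Paralysable (extending): any event re-triggers the dead period, so an
    event is lost if it falls within tau of the *previous* event, recorded
    or not. Non-paralysable: an event is lost only if it falls within tau
    of the last *recorded* event."""
    if paralysable:
        keep = np.diff(t_events, prepend=-np.inf) > tau
        return t_events[keep]
    out, last = [], -np.inf
    for t in t_events:
        if t - last > tau:
            out.append(t)
            last = t
    return np.asarray(out)
```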
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) was proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
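A minimal RANSAC filter for control points might look like the following sketch, here fitting a 2-D affine transform from 3-point minimal samples; the threshold, iteration count, and model choice are assumptions, not the paper's settings.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=1000, tol=1.0, seed=0):
    """RANSAC control-point filter: fit an affine map from minimal 3-point
    samples, keep the largest consensus set, then refit on its inliers.
    src, dst: (n, 2) arrays of matched point coordinates."""
    rng = np.random.default_rng(seed)
    A_full = np.column_stack([src, np.ones(len(src))])  # homogeneous coords
    best = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        M, *_ = np.linalg.lstsq(A_full[idx], dst[idx], rcond=None)
        err = np.linalg.norm(A_full @ M - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    M, *_ = np.linalg.lstsq(A_full[best], dst[best], rcond=None)
    return M, best  # refined transform and inlier mask
```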
Brief communication: Improved simulation of the present-day Greenland firn layer (1960-2016)
NASA Astrophysics Data System (ADS)
Ligtenberg, Stefan R. M.; Kuipers Munneke, Peter; Noël, Brice P. Y.; van den Broeke, Michiel R.
2018-05-01
By providing pore space for storage or refreezing of meltwater, the Greenland ice sheet firn layer strongly modulates runoff. Correctly representing the firn layer is therefore crucial for Greenland (surface) mass balance studies. Here, we present a simulation of the Greenland firn layer with the firn model IMAU-FDM forced by the latest output of the regional climate model RACMO2, version 2.3p2. In the percolation zone, much improved agreement is found with firn density and temperature observations. A full simulation of Greenland firn at high temporal (10 days) and spatial (11 km) resolution is available for the period 1960-2016.
Hachem, Bahe; Aubin, Carl-Eric; Parent, Stefan
2017-06-01
Developing fusionless devices to treat pediatric scoliosis necessitates lengthy and expensive animal trials. The objective was to develop and validate a porcine spine numerical model as an alternative platform to assess fusionless devices. A parametric finite element model (FEM) of an osseoligamentous porcine spine and rib cage, including the epiphyseal growth plates, was developed. A follower-type load replicated physiological and gravitational loads. Vertebral growth and its modulation were programmed based on the Hueter-Volkmann principle, stipulating growth reduction/promotion due to increased compressive/tensile stresses. Scoliosis induction via a posterior tether and 5-level rib tethering was simulated over 10 weeks, along with its subsequent correction via a contralateral anterior custom tether (20 weeks). Scoliosis induction was also simulated using two experimentally tested compression-based fusionless implants (hemi- and rigid staples) over 12 and 8 weeks of growth, respectively. Resulting simulated Cobb and sagittal angles, apical vertebral wedging, and left/right height alterations were compared to reported studies. Simulated induced Cobb and vertebral wedging were 48.4° and 7.6° and were corrected to 21° and 5.4°, respectively, with the contralateral anterior tether. Apical rotation (15.6°) was corrected to 7.4°. With the hemi- and rigid staples, the Cobb angle was 11.2° and 11.8°, respectively, with 3.7° and 2.0° vertebral wedging. Sagittal alignment was within the published range. The convex/concave-side vertebral height difference was 3.1 mm with the induction posterior tether and was reduced to 2.3 mm with the contralateral anterior tether, compared with 1.4 and 0.8 mm for the hemi- and rigid staples. The FEM represented growth-restraining effects and growth modulation with Cobb and vertebral wedging within 0.6° and 1.9° of experimental animal results, while it was within 5° for the two simulated staples. Ultimately, the model would serve as a time- and cost-effective tool to assess the biomechanics and long-term effect of compression-based fusionless devices prior to animal trials, assisting the transfer towards treating scoliosis in the growing spine.
Moderate forest disturbance as a stringent test for gap and big-leaf models
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B.; Fisk, J. P.; Holm, J. A.; Bailey, V.; Bohrer, G.; Gough, C. M.
2015-01-01
Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. It is thus unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models - Biome-BGC (BioGeochemical Cycles), a classic big-leaf model, and the ZELIG and ED (Ecosystem Demography) gap-oriented models - could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ZELIG and ED correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes, in particular gross primary production or net primary production (NPP). Biome-BGC NPP was correctly resilient but for the wrong reasons, and could not match the absolute observational values. ZELIG and ED, in contrast, exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. It is thus an open question whether most ecosystem models will simulate correctly the gradual and less extensive tree mortality characteristic of moderate disturbances.
Impact of lakes and wetlands on present and future boreal climate
NASA Astrophysics Data System (ADS)
Poutou, E.; Krinner, G.; Genthon, C.
2002-12-01
The role of lakes and wetlands in present-day high-latitude climate is quantified using a general circulation model of the atmosphere. The atmospheric model includes a lake module, which is presented and validated. Seasonal and spatial wetland distribution is calculated as a function of the hydrological budget of the wetlands themselves and of the continental soil whose runoff feeds them. Wetland extent is simulated and discussed both in simulations forced by observed climate and in general circulation model simulations. In off-line simulations forced by ECMWF reanalyses, the lake model correctly simulates observed lake ice durations, while the wetland extent is somewhat underestimated in the boreal regions. Coupled to the general circulation model, the lake model yields satisfactory ice durations, although the climate model biases have impacts on the modeled lake ice conditions. Boreal wetland extents are overestimated in the general circulation model as simulated precipitation is too high. The impact of inundated surfaces on the simulated climate is strongest in summer when these surfaces are ice-free. Wetlands seem to play a more important role than lakes in cooling the boreal regions in summer and in humidifying the atmosphere. The role of lakes and wetlands in future climate change is evaluated by analyzing simulations of present and future climate with and without prescribed inland water bodies.
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
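The shape metric used above — the ratio of smallest to largest principal moment of inertia — is straightforward to compute from the simulated tumor's material distribution; a sketch follows (function and variable names are ours).

```python
import numpy as np

def inertia_shape_ratio(coords, mass=None):
    """Ratio of smallest to largest principal moment of inertia of the
    tumor material: 1.0 for a sphere, smaller for elongated shapes.
    coords: (n, 3) voxel/cell positions; mass: optional per-cell masses."""
    m = np.ones(len(coords)) if mass is None else mass
    r = coords - np.average(coords, axis=0, weights=m)  # center of mass frame
    I = np.einsum("i,i,jk->jk", m, (r**2).sum(1), np.eye(3)) \
        - np.einsum("i,ij,ik->jk", m, r, r)             # inertia tensor
    w = np.linalg.eigvalsh(I)                           # ascending eigenvalues
    return w[0] / w[-1]
```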
Performance prediction using geostatistics and window reservoir simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.
1995-11-01
This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m x 260 m x 0.15 m average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m x 260 m x 8 m average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distribution, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lifts, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.
Simulation of Atmospheric-Entry Capsules in the Subsonic Regime
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Childs, Robert E.; Garcia, Joseph A.
2015-01-01
The accuracy of Computational Fluid Dynamics predictions of subsonic capsule aerodynamics is examined by comparison against recent NASA wind-tunnel data at high-Reynolds-number flight conditions. Several aspects of numerical and physical modeling are considered, including inviscid numerical scheme, mesh adaptation, rough-wall modeling, rotation and curvature corrections for eddy-viscosity models, and Detached-Eddy Simulations of the unsteady wake. All of these are considered in isolation against relevant data where possible. The results indicate that an improved predictive capability is developed by considering physics-based approaches and validating the results against flight-relevant experimental data.
Three-dimensional ray tracing for refractive correction of human eye ametropies
NASA Astrophysics Data System (ADS)
Jimenez-Hernandez, J. A.; Diaz-Gonzalez, G.; Trujillo-Romero, F.; Iturbe-Castillo, M. D.; Juarez-Salazar, R.; Santiago-Alvarado, A.
2016-09-01
Ametropies of the human eye are refractive defects that hamper correct imaging on the retina. The most common ways to correct them are spectacles, contact lenses, and modern methods such as laser surgery. However, in any case it is very important to identify the ametropia grade in order to design the optimum correction action. In the case of laser surgery, it is necessary to define a new shape of the cornea in order to obtain the desired refractive correction. Therefore, a computational tool is required to calculate the focal length of the optical system of the eye as its geometrical parameters vary. Additionally, a clear and understandable visualization of the evaluation process is desirable. In this work, a model of the human eye based on geometrical optics principles is presented. Simulations of light rays coming from a point source at six meters from the cornea are shown. We perform ray tracing in three dimensions in order to visualize the focusing regions and estimate the power of the optical system. The common parameters of ametropies can be easily modified and analyzed in the simulation through an intuitive graphical user interface.
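The core operation of such a 3-D ray tracer is vector refraction at each ocular surface; a sketch of Snell's law in vector form follows. This is a generic formulation, not the authors' code; applying it successively at the corneal and lens surfaces yields the focusing regions described above.

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    """Refract a unit direction d at a surface with unit normal n_hat
    (pointing back into medium 1), going from index n1 to n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n_hat
```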
Rayleigh-Taylor Unstable Flames -- Fast or Faster?
NASA Astrophysics Data System (ADS)
Hicks, E. P.
2015-04-01
Rayleigh-Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.
The effect of shot noise on the start up of the fundamental and harmonics in free-electron lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freund, H. P.; Miner, W. H. Jr.; Giannessi, L.
2008-12-15
The problem of radiation start up in free-electron lasers (FELs) is important in the simulation of virtually all FEL configurations, including oscillators and amplifiers in both seeded master oscillator power amplifier (MOPA) and self-amplified spontaneous emission (SASE) modes. Both oscillators and SASE FELs start up from spontaneous emission due to shot noise on the electron beam, which arises from the random fluctuations in the phase distribution of the electrons. The injected power in a MOPA is usually large enough to overwhelm the shot noise. However, this noise must be treated correctly in order to model the initial start up of the harmonics. In this paper, we discuss and compare two different shot noise models that are implemented in both one-dimensional wiggler-averaged (PERSEO) and non-wiggler-averaged (MEDUSA1D) simulation codes, and a three-dimensional non-wiggler-averaged (MEDUSA) formulation. These models are compared for examples describing both SASE and MOPA configurations in one dimension, in steady-state, and in time-dependent simulations. Remarkable agreement is found between PERSEO and MEDUSA1D for the evolution of the fundamental and harmonics. In addition, three-dimensional correction factors have been included in MEDUSA1D and PERSEO, which show reasonable agreement with MEDUSA for a sample MOPA in steady-state and time-dependent simulations.
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with certain sets of learning weights. A necessary condition for choosing learning weights for the convergence property of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. In this paper, an important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that SNRGs have their own threshold values, which means that any fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given and some theoretical results on the conditions of SNRGs and learning weights for good associative recall performance of the WOPL model are accordingly obtained. In principle, when all SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate in a certain known stochastic sense for AMNNs, and thus the WOPL model can achieve correct recall for all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
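The WOPL rule itself is compact; the sketch below builds the weighted outer-product matrix and runs a simple recall. Uniform weights recover the standard Hopfield prescription; the names and the synchronous update are our choices, not necessarily the paper's dynamics.

```python
import numpy as np

def wopl_weights(patterns, weights):
    """Weighted outer-product learning: W = sum_k w_k x_k x_k^T, zero diagonal.
    patterns: (K, N) array of +/-1 fundamental memories; weights: (K,)."""
    W = np.einsum("k,ki,kj->ij", weights, patterns, patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, n_steps=50):
    """Synchronous recall dynamics x <- sign(W x) until a fixed point."""
    for _ in range(n_steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```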
NASA Technical Reports Server (NTRS)
Lee, Henry C.; Klopfer, Goetz H.; Onufer, Jeff T.
2011-01-01
The effects of non-uniform flow angularity on the Ares I DAC-1 model in the Langley Unitary Plan Wind Tunnel are explored through simulations with OVERFLOW. Verification of the wind tunnel results is needed to ensure that the standard wind tunnel calibration procedures for large models are valid. The expectation is that the systematic error can be quantified and thus used to correct the wind tunnel data. The corrected wind tunnel data can then be used to quantify the CFD uncertainties.
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using the MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the image's characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10^-5, up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10^-4, up to 41 times smaller than errors in the visible region.
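Step (1), the DN-to-radiance conversion, is a linear rescaling per band; a sketch follows using the standard min/max-radiance form for 8-bit TM data. The per-band calibration constants must be supplied from the sensor documentation, and the error metric shown is our simple stand-in for the paper's validation computation.

```python
import numpy as np

def dn_to_radiance(dn, lmin, lmax):
    """Step (1): convert 8-bit Landsat TM digital numbers to at-sensor
    spectral radiance via linear rescaling between the band's published
    minimum and maximum radiances."""
    return lmin + (lmax - lmin) * np.asarray(dn, float) / 255.0

def curve_error(l_empirical, l_simulated):
    """Mean squared error between empirical and MODTRAN-simulated radiance
    curves for one band."""
    return float(np.mean((np.asarray(l_empirical) - np.asarray(l_simulated))**2))
```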
Extra-dimensional models on the lattice
Knechtli, Francesco; Rinaldi, Enrico
2016-08-05
In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.
Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2000-01-01
Two numerical models to simulate an enhanced very early time electromagnetic (VETEM) prototype system that is used for buried-object detection and environmental problems are presented. In the first model, the transmitting and receiving loop antennas are accurately analyzed using the method of moments (MoM), and then conjugate gradient (CG) methods with the fast Fourier transform (FFT) are utilized to investigate the scattering from buried conducting plates. In the second model, two magnetic dipoles are used to replace the transmitter and receiver. Both the theory and the formulation are correct, and the simulation results for the primary magnetic field and the reflected magnetic field are accurate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarovar, Mohan; Zhang, Jun; Zeng, Lishan
Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this paper we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. Finally, to demonstrate the approach, we characterize the robust features of a variety of quantum many-body models.
Parallel Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences.
ERIC Educational Resources Information Center
McClelland, James L.; Kawamoto, Alan H.
This paper describes and illustrates a simulation model for the processing of grammatical elements in a sentence, focusing on one aspect of sentence comprehension: the assignment of the constituent elements of a sentence to the correct thematic case roles. The model addresses questions about sentence processing from a perspective very different…
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
Clouds and ocean-atmosphere interactions. Final report, September 15, 1992--September 14, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randall, D.A.; Jensen, T.G.
1995-10-01
Predictions of global change based on climate models are influencing both national and international policies on energy and the environment. Existing climate models show some skill in simulating the present climate, but suffer from many widely acknowledged deficiencies. Among the most serious problems is the need to apply "flux corrections" to prevent the models from drifting away from the observed climate in control runs that do not include external perturbing influences such as increased carbon dioxide (CO2) concentrations. The flux corrections required to prevent climate drift are typically comparable in magnitude to the observed fluxes themselves. Although there can be many contributing reasons for the climate drift problem, clouds and their effects on the surface energy budget are among the prime suspects. The authors have conducted a research program designed to investigate global air-sea interaction as it relates to the global warming problem, with special emphasis on the role of clouds. Their research includes model development efforts; application of models to simulation of present and future climates, with comparison to observations wherever possible; and vigorous participation in ongoing efforts to intercompare the present generation of atmospheric general circulation models.
Isolating Curvature Effects in Computing Wall-Bounded Turbulent Flows
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.
2001-01-01
The flow over the zero-pressure-gradient So-Mellor convex curved wall is simulated using the Navier-Stokes equations. An inviscid effective outer wall shape, undocumented in the experiment, is obtained by using an adjoint optimization method with the desired pressure distribution on the inner wall as the cost function. Using this wall shape with a Navier-Stokes method, the abilities of various turbulence models to simulate the effects of curvature without the complicating factor of streamwise pressure gradient can be evaluated. The one-equation Spalart-Allmaras turbulence model overpredicts eddy viscosity, and its boundary layer profiles are too full. A curvature-corrected version of this model improves results, which are sensitive to the choice of a particular constant. An explicit algebraic stress model does a reasonable job predicting this flow field. However, results can be slightly improved by modifying the assumption on anisotropy equilibrium in the model's derivation. The resulting curvature-corrected explicit algebraic stress model possesses no heuristic functions or additional constants. It lowers slightly the computed skin friction coefficient and the turbulent stress levels for this case (in better agreement with experiment), but the effect on computed velocity profiles is very small.
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied to the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for the 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the estimated effective attenuation coefficient of PMMA was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions can be expected in the raw image recorded by a photon-counting detector, which presents great challenges for applying quantitative material decomposition methods in spectral CT. The proposed semi-empirical correction method can effectively reduce errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than relying on detector-specific simulation packages, the method requires only a relatively simple calibration process and knowledge of the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
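The calibration step pairs counts simulated with TASMIP against counts measured through BR12 phantoms and fits a nonlinear map per energy bin. A rough sketch of that idea; the quadratic form and the calibration numbers are assumptions, since the paper does not specify the fitted function:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data for one energy bin: counts measured by the
# CZT detector vs. counts predicted by the TASMIP spectrum simulation for
# BR12 calibration phantoms of increasing thickness.
measured = np.array([9.8e4, 5.1e4, 2.6e4, 1.3e4, 6.5e3])
simulated = np.array([1.2e5, 5.6e4, 2.7e4, 1.3e4, 6.2e3])

def count_map(m, a, b, c):
    # Assumed nonlinear form; the paper only states that a nonlinear
    # function was fit to the count correlation.
    return a * m + b * m**2 + c

params, _ = curve_fit(count_map, measured, simulated)

def correct_counts(raw):
    """Map raw recorded counts to distortion-corrected counts."""
    return count_map(np.asarray(raw, dtype=float), *params)
```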
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns which consist of periodic memory cell features and edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational Lithography such as source mask optimization (SMO) to find the optimal off axis illumination and optical proximity correction (OPC) combined with model based SRAF placement are applied to print patterns on target. For 20 nm Memory Cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, so it allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to non-periodic periphery, so it combines the most critical pitch and highest susceptibility to defocus. The above challenges make layout correction a complex optimization task, demanding an optimization that finds a solution with optimal process stability taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD at bottom characterization by scanning electron microscope (SEM) measurements. High resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layout yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step by step process from analytical illumination source derivation, an NTD- and application-tailored model calibration to layout optimization such as OPC and SRAF placement. Finally, the work was verified with simulation and experimental results on wafer.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
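A compact sketch of the correction logic described above: smooth every metabolite trend nonparametrically, compute per-timepoint percent deviations, and treat timepoints where the median deviation across metabolites exceeds a threshold as systematic (dilution) error. LOWESS stands in for the paper's unspecified smoother, and the 2.5% threshold echoes the smallest error the authors report detecting; both are assumptions for illustration:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def detect_dilution_errors(conc, times, threshold=0.025):
    """conc: (n_metabolites, n_timepoints) array of concentrations.
    Flags timepoints where the median percent deviation from a
    per-metabolite LOWESS fit exceeds the threshold."""
    deviations = np.empty(conc.shape, dtype=float)
    for i, y in enumerate(conc):
        fit = lowess(y, times, frac=0.5, return_sorted=False)
        deviations[i] = (y - fit) / fit
    median_dev = np.median(deviations, axis=0)
    flagged = np.abs(median_dev) > threshold
    return flagged, median_dev

def correct_dilution(conc, median_dev, flagged):
    """Divide out the common dilution factor at flagged timepoints."""
    corrected = conc.copy()
    corrected[:, flagged] /= (1.0 + median_dev[flagged])
    return corrected
```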
Precipitation frequency analysis based on regional climate simulations in Central Alberta
NASA Astrophysics Data System (ADS)
Kuo, Chun-Chao; Gan, Thian Yew; Hanrahan, Janel L.
2014-03-01
A Regional Climate Model (RCM), MM5 (the Fifth Generation Pennsylvania State University/National Center for Atmospheric Research mesoscale model), is used to simulate summer precipitation in Central Alberta. MM5 was set up with a one-way, three-domain nested framework, with domain resolutions of 27, 9, and 3 km, respectively, and forced with ERA-Interim reanalysis data of ECMWF (European Centre for Medium-Range Weather Forecasts). The objective is to develop high resolution, grid-based Intensity-Duration-Frequency (IDF) curves based on the simulated annual maxima of precipitation (AMP) for durations ranging from 15 min to 24 h. The performance of MM5 was assessed in terms of simulated rainfall intensity, precipitable water, and 2-m air temperature. Next, the grid-based IDF curves derived from MM5 were compared to IDF curves derived from six RCMs of the North American Regional Climate Change Assessment Program (NARCCAP) set up with 50-km grids, driven with NCEP-DOE (National Centers for Environmental Prediction-Department of Energy) Reanalysis II data, and to regional IDF curves derived from observed rain gauge data (RG-IDF). The results indicate that 6-h simulated precipitable water and 2-m temperature agree well with the ERA-Interim reanalysis data. However, compared to RG-IDF curves, IDF curves based on simulated precipitation data of MM5 are overestimated, especially for the 2-year return period. In contrast, IDF curves developed from NARCCAP data suffer from underestimation and differ more from RG-IDF curves than the MM5 IDF curves. The overestimation of the MM5 IDF curves was corrected by a quantile-based bias correction method. By dynamically downscaling ERA-Interim and applying bias correction, it is possible to develop IDF curves useful for regions with limited or no rain gauge data. This estimation process can be further extended to predict future grid-based IDF curves subject to possible climate change impacts based on climate change projections of GCMs (general circulation models) of IPCC (Intergovernmental Panel on Climate Change).
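Developing IDF points from AMP series typically means fitting an extreme-value distribution to the annual maxima for each duration and reading off return levels. A minimal illustration with a Gumbel fit, a common choice for AMP data (the paper does not state which distribution was used), and fabricated example values:

```python
import numpy as np
from scipy.stats import gumbel_r

# Fabricated annual-maximum 1-h precipitation series (mm) for one grid cell.
amp = np.array([18.2, 25.4, 31.0, 22.7, 27.9, 35.3, 20.1, 29.8, 24.5, 33.6])

loc, scale = gumbel_r.fit(amp)

for T in (2, 5, 10, 25):
    # T-year return level: intensity exceeded on average once every T years.
    intensity = gumbel_r.ppf(1 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T}-year 1-h intensity: {intensity:.1f} mm")
```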
The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device
1994-02-01
simulation, the raw simulated sidescan image is formed from pixels G, which are the sum of energies E assigned to the nearest range-bin k as noted in... for stable motion at constant velocity V0, are applied to (divided into) the G, and the simulated sidescan image is ready to display. Maximal energy... limitation is likely to apply to all multiple-beam sonars of similar construction. The yaw correction was incorporated in the MBEAM model by an
Systematic study of Reynolds stress closure models in the computations of plane channel flows
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Sarkar, S.
1992-01-01
The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models correctly predicted the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small and calcium concentrations so low that one extra molecule diffusing in by chance can make a nontrivial difference in concentration (percentage-wise). These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at a molecular level. Our research focuses on the development of a high performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
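For orientation, high-NA anamorphic EUV optics demagnify the mask by different factors in the two directions (4x horizontally and 8x vertically in current designs; the abstract itself only notes the larger mask dimension in Y). A toy mapping, with the demagnification factors exposed as assumed parameters:

```python
def mask_to_wafer(mask_x_nm, mask_y_nm, demag_x=4.0, demag_y=8.0):
    """Map mask-scale coordinates to wafer scale under anamorphic optics.
    A mask error of size e projects to e/4 horizontally but only e/8
    vertically, which is why Y-direction mask errors are more tolerable."""
    return mask_x_nm / demag_x, mask_y_nm / demag_y

# Example: a 16 nm mask defect shrinks to 4 nm in X but 2 nm in Y on wafer.
print(mask_to_wafer(16.0, 16.0))
```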
Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model
NASA Astrophysics Data System (ADS)
Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.
2017-12-01
In the past decades, Regional Climate Models (RCMs) have been developed significantly, allowing climate simulations to be conducted at higher resolution. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce/minimize the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run in the period 1986-1995, and then validated for the period 1996-2005. Based on statistical measures of spatial correlation and intensity distribution, the QM method showed a significant improvement in rainfall compared to the uncorrected simulation. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT, and R99pTOT were much better after the correction. The results suggest that the QM correction method should be adopted for projections of future precipitation over Vietnam.
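Empirical quantile mapping in its simplest form matches the model's historical CDF to the observed CDF and applies the resulting transfer function to the simulation being corrected. A bare-bones sketch under that assumption; real applications add wet-day frequency handling and extrapolation beyond the calibration quantiles:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_out):
    """Empirical quantile mapping: replace each model value by the
    observed value at the same quantile of the historical period."""
    quantiles = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, quantiles)  # model historical CDF
    oq = np.quantile(obs_hist, quantiles)    # observed CDF
    # Values outside the calibration range are clamped by np.interp;
    # tail extrapolation is deliberately omitted in this sketch.
    return np.interp(model_out, mq, oq)
```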
Assessment of bias correction under transient climate change
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2015-04-01
Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption of the bias being stationary when the climate changes. This led to the development of new methods, mostly based on linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies uncovered more fundamental problems using both low-order systems (Vannitsem 2011) and climate models, showing that the biases may display complicated non-linear variations under climate change. This last analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with the climate model of intermediate complexity LOVECLIM (Goosse et al., 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether linear response theory is valid for global and/or local variables. To answer these questions we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error emulated as a parametric error in the wind drag or in the radiative scheme. References [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S. Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.
NASA Astrophysics Data System (ADS)
Marelle, Louis; Raut, Jean-Christophe; Law, Kathy S.; Berg, Larry K.; Fast, Jerome D.; Easter, Richard C.; Shrivastava, Manish; Thomas, Jennie L.
2017-10-01
In this study, the WRF-Chem regional model is updated to improve simulated short-lived pollutants (e.g., aerosols, ozone) in the Arctic. Specifically, we include in WRF-Chem 3.5.1 (with SAPRC-99 gas-phase chemistry and MOSAIC aerosols) (1) a correction to the sedimentation of aerosols, (2) dimethyl sulfide (DMS) oceanic emissions and gas-phase chemistry, (3) an improved representation of the dry deposition of trace gases over seasonal snow, and (4) an UV-albedo dependence on snow and ice cover for photolysis calculations. We also (5) correct the representation of surface temperatures over melting ice in the Noah Land Surface Model and (6) couple and further test the recent KF-CuP (Kain-Fritsch + Cumulus Potential) cumulus parameterization that includes the effect of cumulus clouds on aerosols and trace gases. The updated model is used to perform quasi-hemispheric simulations of aerosols and ozone, which are evaluated against surface measurements of black carbon (BC), sulfate, and ozone as well as airborne measurements of BC in the Arctic. The updated model shows significant improvements in terms of seasonal aerosol cycles at the surface and root mean square errors (RMSEs) for surface ozone, aerosols, and BC aloft, compared to the base version of the model and to previous large-scale evaluations of WRF-Chem in the Arctic. These improvements are mostly due to the inclusion of cumulus effects on aerosols and trace gases in KF-CuP (improved RMSE for surface BC and BC profiles, surface sulfate, and surface ozone), the improved surface temperatures over sea ice (surface ozone, BC, and sulfate), and the updated trace gas deposition and UV albedo over snow and ice (improved RMSE and correlation for surface ozone). DMS emissions and chemistry improve surface sulfate at all Arctic sites except Zeppelin, and correcting aerosol sedimentation has little influence on aerosols except in the upper troposphere.
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-06-01
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (DTSA-II is discussed herein). These two models share many physical models but there are some important differences in the way each implements X-ray emission including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
Aubin, Carl-Éric; Clin, Julien; Rawlinson, Jeremy
2018-01-01
Compression-based fusionless tethers are an alternative to conventional surgical treatments of pediatric scoliosis. Anterior approaches place an anterior (ANT) tether on the anterolateral convexity of the deformed spine to modify growth. Posterior, or costo-vertebral (CV), approaches have not been assessed for biomechanical and corrective effectiveness. The objective was to biomechanically assess CV and ANT tethers using six patient-specific, finite element models of adolescent scoliotic patients (11.9 ± 0.7 years, Cobb 34° ± 10°). A validated algorithm simulated the growth and Hueter-Volkmann growth modulation over a period of 2 years with the CV and ANT tethers at two initial tensions (100, 200 N). The models without tethering also simulated deformity progression with Cobb angle increasing from 34° to 56°, axial rotation 11° to 13°, and kyphosis 28° to 32° (mean values). With the CV tether, the Cobb angle was reduced to 27° and 20° for tensions of 100 and 200 N, respectively, kyphosis to 21° and 19°, and no change in axial rotation. With the ANT tether, Cobb was reduced to 32° and 9° for 100 and 200 N, respectively, kyphosis unchanged, and axial rotation to 3° and 0°. While the CV tether mildly corrected the coronal curve over a 2-year growth period, it had sagittal lordosing effect, particularly with increasing initial axial rotation (>15°). The ANT tether achieved coronal correction, maintained kyphosis, and reduced the axial rotation, but over-correction was simulated at higher initial tensions. This biomechanical study captured the differences between a CV and ANT tether and indicated the variability arising from the patient-specific characteristics. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:254-264, 2018.
Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A
2017-01-21
Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter-spherical-volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomial of different orders up to the 10th-order including even- and odd-order terms, or odd-order only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.
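The calibration in essence solves a least-squares problem: find polynomial coefficients whose predicted displacement field best explains the difference between measured and true fiducial positions, then subtract the modeled displacement. A 1-D toy version using Legendre polynomials in place of the solid (spherical) harmonics, with invented coefficients and a single correction pass rather than the paper's full iterative procedure:

```python
import numpy as np
from numpy.polynomial.legendre import legval, legvander

# Hypothetical 1-D stand-in: true fiducial positions x_true (meters, within
# a 26 cm diameter volume) and distorted image positions x_img.
x_true = np.linspace(-0.13, 0.13, 25)
coeffs_true = [0.0, 0.002, -0.004, 0.0, 0.003]  # includes even-order terms
x_img = x_true + legval(x_true / 0.13, coeffs_true)

# Least-squares estimate of the distortion coefficients up to order 10.
order = 10
basis = legvander(x_true / 0.13, order)
c_est, *_ = np.linalg.lstsq(basis, x_img - x_true, rcond=None)

# One correction pass: subtract the modeled displacement evaluated at the
# measured (distorted) positions.
x_corr = x_img - legval(x_img / 0.13, c_est)
print(np.sqrt(np.mean((x_corr - x_true) ** 2)))  # residual RMSE
```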
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469
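The core of the adaptation is switching the decay (forgetting) coefficient of the disturbance-acceleration model according to the detected acceleration mode. A schematic sketch; the thresholds and coefficient values are invented for illustration, and the real ARS filter jointly estimates attitude error and gyro zero offset as well:

```python
def select_decay(accel_meas_norm, g=9.81):
    """Pick the decay coefficient of a first-order (Gauss-Markov)
    disturbance-acceleration model from the detected motion mode.
    Thresholds and values are illustrative, not from the paper."""
    dyn = abs(accel_meas_norm - g)   # deviation from gravity magnitude
    if dyn < 0.05:
        return 0.99    # non-acceleration mode: disturbance stays near zero
    elif dyn < 0.5:
        return 0.70    # vibration mode: forget the disturbance quickly
    else:
        return 0.95    # sustained acceleration: track it more slowly

def propagate_disturbance(a_dist, accel_meas_norm):
    """Time update of the disturbance-acceleration state; in the full
    Kalman filter this sits alongside the attitude-error and gyro-bias
    states and feeds the accelerometer measurement model."""
    beta = select_decay(accel_meas_norm)
    return beta * a_dist
```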
Correcting Bidirectional Effects in Remote Sensing Reflectance from Coastal Waters
NASA Astrophysics Data System (ADS)
Stamnes, K. H.; Fan, Y.; Li, W.; Voss, K. J.; Gatebe, C. K.
2016-02-01
Understanding bidirectional effects including sunglint is important for GEO-CAPE for several reasons: (i) correct interpretation of ocean color data; (ii) comparing consistency of spectral radiance data derived from space observations with a single instrument for a variety of illumination and viewing conditions; (iii) merging data collected by different instruments operating simultaneously. We present a new neural network (NN) method to correct bidirectional effects in water-leaving radiance for both Case 1 and Case 2 waters. We also discuss a new BRDF and 2D sun-glint model that was validated by comparing simulated surface reflectances with Cloud Absorption Radiometer (CAR) data. Finally, we present an extension of our marine bio-optical model to the UV range that accounts for the seasonal dependence of the inherent optical properties (IOPs).
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
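Since the phase and temperature fields vary significantly only near the interface, a simple refinement criterion is to flag cells where the phase-field gradient is large. A schematic version of such a criterion (the threshold and uniform-grid stencil are illustrative, not from the paper's finite element implementation):

```python
import numpy as np

def needs_refinement(phi, dx, threshold=0.5):
    """Flag cells whose phase-field gradient magnitude exceeds a threshold;
    refinement then concentrates elements near the solid-liquid interface
    while the bulk solid and liquid stay coarse."""
    gy, gx = np.gradient(phi, dx)
    return np.hypot(gx, gy) > threshold
```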
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and line source experiment demonstrated that the TEW method overestimated scatter while their proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with their proposed method and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with their proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than that of the TEW method. Similar to the phantom experiment, the improvement was the most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with their proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies.
Conclusions: The authors demonstrate that the TEW method leads to overestimation in scatter and crosstalk for the CZT-based imaging system while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
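For reference, the conventional TEW baseline that the authors compare against approximates the scatter in the photopeak window as a trapezoid spanned by two narrow flanking windows; on CZT, the low-energy tail inflates the lower window counts and hence the estimate. A direct transcription of that standard formula:

```python
def tew_scatter(counts_lower, counts_upper, w_lower, w_upper, w_peak):
    """Conventional triple-energy-window scatter estimate for the photopeak
    window: the area of a trapezoid whose heights are the count densities
    (counts per keV) in the two narrow adjacent windows."""
    return (counts_lower / w_lower + counts_upper / w_upper) * w_peak / 2.0

# Example: 3 keV side windows around a 20 keV photopeak window.
print(tew_scatter(counts_lower=1500, counts_upper=300,
                  w_lower=3.0, w_upper=3.0, w_peak=20.0))
```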
Moderate forest disturbance as a stringent test for gap and big-leaf models
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B. P.; Fisk, J.; Holm, J. A.; Bailey, V. L.; Gough, C. M.
2014-12-01
Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. In particular, it is unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging U.S. forests. We tested whether three forest ecosystem models—Biome-BGC, a classic big-leaf model, and the ED and ZELIG gap-oriented models—could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols, and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ED and ZELIG correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes. Biome-BGC net primary production (NPP) was correctly resilient, but for the wrong reasons, while ED and ZELIG exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. As a result we expect that most ecosystem models, developed to simulate processes following stand-replacing disturbances, will not simulate well the gradual and less extensive tree mortality characteristic of moderate disturbances.
Asakura, Nobuhiko; Inui, Toshio
2016-01-01
Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities. PMID:28082941
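The scale prediction quoted above is multiplicative, so it can be written down directly; a sketch with made-up proportions:

```python
def predict_false_belief(p_diverse_beliefs, p_knowledge_access):
    """Model prediction: the proportion correct on the false belief task is
    approximately the product of the proportions correct on the earlier
    diverse-beliefs and knowledge-access tasks."""
    return p_diverse_beliefs * p_knowledge_access

# e.g., 0.80 and 0.75 on the earlier tasks predict 0.60 on false belief.
print(predict_false_belief(0.80, 0.75))
```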
i4OilSpill, an operational marine oil spill forecasting model for Bohai Sea
NASA Astrophysics Data System (ADS)
Yu, Fangjie; Yao, Fuxin; Zhao, Yang; Wang, Guansuo; Chen, Ge
2016-10-01
Oil spill models can effectively simulate the trajectories and fate of oil slicks, which is an essential element in contingency planning and effective response strategies prepared for oil spill accidents. However, when applied to offshore areas such as the Bohai Sea, the trajectories and fate of oil slicks are affected by time-varying factors at a regional scale, which are assumed to be constant in most present models. In fact, these factors in offshore regions show much more variation over time than in the deep sea, due to offshore bathymetric and climatic characteristics. In this paper, the challenge of parameterizing these offshore factors is tackled. Remote sensing data of the region are used to analyze the modification of wind-induced drift factors, and a well-suited parameter correction mechanism for oil spill models is established. The novelty of the algorithm is the self-adaptive modification mechanism of the drift factors derived from the remote sensing data for the targeted sea region, in contrast to the empirical constants used in present models. Considering this situation, a new regional oil spill model (i4OilSpill) for the Bohai Sea is developed, which can simulate oil transformation and fate processes by Eulerian-Lagrangian methodology. The forecasting accuracy of the proposed model is demonstrated by comparing model simulations with subsequent satellite observations of the Penglai 19-3 oil spill accident. The performance of the model parameter correction mechanism is evaluated by comparison with the actual spilled oil positions extracted from ASAR images.
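In Eulerian-Lagrangian oil spill models of this kind, each particle is advected by the surface current plus a wind-drift term whose factor i4OilSpill adapts from remote sensing rather than fixing at the customary constant (a few percent of wind speed). A schematic single step, with the adaptive estimation of the factor left outside the sketch:

```python
import numpy as np

def step_particles(pos, u_current, u_wind, wind_factor, dt, diff_coeff):
    """One Euler step of a Lagrangian oil-particle model: advection by the
    surface current plus a wind-induced drift term, with random-walk
    horizontal diffusion. In the i4OilSpill scheme, wind_factor would be
    updated adaptively from remote sensing data; here it is a plain input."""
    drift = u_current + wind_factor * u_wind
    rand = np.sqrt(2.0 * diff_coeff * dt) * np.random.standard_normal(pos.shape)
    return pos + drift * dt + rand
```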
Hamiltonian dynamics of thermostated systems: two-temperature heat-conducting phi4 chains.
Hoover, Wm G; Hoover, Carol G
2007-04-28
We consider and compare four Hamiltonian formulations of thermostated mechanics, three of them kinetic, and the other one configurational. Though all four approaches "work" at equilibrium, their application to many-body nonequilibrium simulations can fail to provide a proper flow of heat. All the Hamiltonian formulations considered here are applied to the same prototypical two-temperature "phi4" model of a heat-conducting chain. This model incorporates nearest-neighbor Hooke's-Law interactions plus a quartic tethering potential. Physically correct results, obtained with the isokinetic Gaussian and Nose-Hoover thermostats, are compared with two other Hamiltonian results. The latter results, based on constrained Hamiltonian thermostats, fail to model correctly the flow of heat.
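As a reminder of how one of the kinetic thermostats mentioned above works, here is a minimal Nose-Hoover integration step for a single harmonic oscillator in reduced units; this illustrates the thermostat equations only, not the paper's two-temperature phi4 chain:

```python
def nose_hoover_step(q, p, zeta, dt, T=1.0, Q=1.0):
    """One explicit Euler step of a Nose-Hoover thermostated harmonic
    oscillator (unit mass, k=1, k_B=1). The friction variable zeta grows
    when the kinetic energy exceeds its target T and shrinks otherwise,
    steering the time-averaged temperature toward T."""
    f = -q                          # Hooke's-law force
    p += dt * (f - zeta * p)        # thermostated momentum update
    q += dt * p
    zeta += dt * (p * p - T) / Q    # thermostat relaxation
    return q, p, zeta
```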
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
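Regression calibration, the simplest method in that list, replaces the error-prone modeled exposure with its conditional expectation given validation measurements. A toy sketch on synthetic data; all names and numbers are illustrative:

```python
import numpy as np

# Hypothetical validation data: modeled exposure x_model alongside "true"
# (measured) exposure x_true for a subset of locations.
rng = np.random.default_rng(0)
x_true = rng.normal(10.0, 3.0, 200)
x_model = x_true + rng.normal(0.0, 2.0, 200)   # classical-type error

# Step 1: fit the calibration model E[X_true | X_model] on the validation set.
slope, intercept = np.polyfit(x_model, x_true, 1)

# Step 2: in the main study, substitute each error-prone exposure with its
# calibrated expectation before fitting the health-effect regression.
def calibrate(x):
    return intercept + slope * np.asarray(x, dtype=float)

print(calibrate([8.0, 12.5]))
```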
Realistic modeling of deep brain stimulation implants for electromagnetic MRI safety studies.
Guerin, Bastien; Serano, Peter; Iacono, Maria Ida; Herrington, Todd M; Widge, Alik S; Dougherty, Darin D; Bonmassar, Giorgio; Angelone, Leonardo M; Wald, Lawrence L
2018-05-04
We propose a framework for electromagnetic (EM) simulation of deep brain stimulation (DBS) patients in radiofrequency (RF) coils. We generated a model of a DBS patient using post-operative head and neck computed tomography (CT) images stitched together into a 'virtual CT' image covering the entire length of the implant. The body was modeled as homogeneous. The implant path extracted from the CT data contained self-intersections, which we corrected automatically using an optimization procedure. Using the CT-derived DBS path, we built a model of the implant including electrodes, helicoidal internal conductor wires, loops, extension cables, and the implanted pulse generator. We also built four simplified models with straight wires, no extension cables and no loops to assess the impact of these simplifications on safety predictions. We simulated EM fields induced by the RF birdcage body coil in the body model, including at the DBS lead tip at both 1.5 Tesla (64 MHz) and 3 Tesla (123 MHz). We also assessed the robustness of our simulation results by systematically varying the EM properties of the body model and the position and length of the DBS implant (sensitivity analysis). The topology correction algorithm corrected all self-intersection and curvature violations of the initial path while introducing minimal deformations (open-source code available at http://ptx.martinos.org/index.php/Main_Page). The unaveraged lead-tip peak SAR predicted by the five DBS models (0.1 mm resolution grid) ranged from 12.8 kW kg -1 (full model, helicoidal conductors) to 43.6 kW kg -1 (no loops, straight conductors) at 1.5 T (3.4-fold variation) and 18.6 kW kg -1 (full model, straight conductors) to 73.8 kW kg -1 (no loops, straight conductors) at 3 T (4.0-fold variation). At 1.5 T and 3 T, the variability of lead-tip peak SAR with respect to the conductivity ranged between 18% and 30%. Variability with respect to the position and length of the DBS implant ranged between 9.5% and 27.6%.
Establishment and correction of an Echelle cross-prism spectrogram reduction model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng
2017-11-01
The accuracy of an echelle cross-prism spectrometer depends on the matching degree between the spectrum reduction model and the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer and result in a reduction model that no longer matches, which produces an inaccurate wavelength calibration. Therefore, the calibration of a spectrogram reduction model is important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The laws governing how image positions vary with the system parameters were simulated to assess the influence of changes in the prism refractive index, focal length and other parameters on the calculation results. The model was divided into different wavebands. The iterative method, the least-squares principle and element lamps with known characteristic wavelengths were used to calibrate the spectral model in the different wavebands and obtain the actual values of the system parameters. After correction, the deviations between the actual x- and y-coordinates and the coordinates calculated by the model are less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and can assist in accurate wavelength extraction. Repeated model correction can also guide instrument installation and adjustment, reducing the difficulty of equipment alignment.
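As a hedged illustration of the calibration step described above (not the authors' code), the sketch below fits the free parameters of a toy imaging model to the measured spot positions of element-lamp lines with known wavelengths by least squares; model_xy and all numbers are invented stand-ins for the real reduction model, which maps (wavelength, diffraction order) to detector coordinates through the grating equation and prism dispersion and is fitted iteratively the same way.

```python
# Sketch: calibrate system parameters against known calibration lines.
import numpy as np
from scipy.optimize import least_squares

def model_xy(params, wl_um, order):
    """Hypothetical stand-in for the spectrogram reduction model."""
    fx, fy, x0, y0 = params
    x = fx * order * wl_um + x0      # echelle dispersion direction
    y = fy / wl_um + y0              # prism cross-dispersion direction
    return np.column_stack([x, y])

def residuals(params, wl_um, order, measured):
    return (model_xy(params, wl_um, order) - measured).ravel()

wl_um = np.array([0.40466, 0.43583, 0.54607])   # Hg calibration lines, um
order = np.array([60.0, 56.0, 45.0])
true_params = [1.2, 2.0, 5.0, 3.0]
measured = model_xy(true_params, wl_um, order)  # synthetic "measured" spots

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0, 0.0],
                    args=(wl_um, order, measured))
print("recovered system parameters:", np.round(fit.x, 3))
```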
Mesh refinement in a two-dimensional large eddy simulation of a forced shear layer
NASA Technical Reports Server (NTRS)
Claus, R. W.; Huang, P. G.; Macinnes, J. M.
1989-01-01
A series of large eddy simulations of a forced shear layer was made and compared with experimental data. Several mesh densities were examined to separate the effects of numerical inaccuracy from modeling deficiencies. The turbulence model used to represent small-scale, three-dimensional motions correctly predicted some gross features of the flow field, but appears to be structurally incorrect. The main effect of mesh refinement was to act as a filter on the scale of vortices that developed from the inflow boundary conditions.
Ramsey, Elijah W.; Nelson, G.
2005-01-01
To maximize the spectral distinctiveness (information) of the canopy reflectance, an atmospheric correction strategy was implemented to provide accurate estimates of the intrinsic reflectance from the Earth Observing 1 (EO1) satellite Hyperion sensor signal. In rendering the canopy reflectance, an estimate of optical depth derived from a measurement of downwelling irradiance was used to drive a radiative transfer simulation of atmospheric scattering and attenuation. During the atmospheric model simulation, the input whole-terrain background reflectance estimate was changed to minimize the differences between the model-predicted and the observed canopy reflectance spectra at 34 sites. Lacking appropriate spectrally invariant scene targets, inclusion of the field and predicted comparison maximized the model accuracy and, thereby, the detail and precision in the canopy reflectance necessary to detect low percentage occurrences of invasive plants. After accounting for artifacts surrounding prominent absorption features from about 400 nm to 1000 nm, the atmospheric adjustment strategy correctly explained 99% of the observed canopy reflectance spectra variance. Separately, model simulation explained an average of 88% ± 9% of the observed variance in the visible and 98% ± 1% in the near-infrared wavelengths. In the 34 model simulations, maximum differences between the observed and predicted reflectances were typically less than ±1% in the visible; however, maximum reflectance differences higher than ±1.6% (−2.3%) at more than a few wavelengths were observed at three sites. In the near-infrared wavelengths, maximum reflectance differences remained less than ±3% for 68% of the comparisons (±1 standard deviation) and less than ±6% for 95% of the comparisons (±2 standard deviations). Higher reflectance differences in the visible and near-infrared wavelengths were most likely associated with problems in the comparison, not in the model generation. © 2005 US Government.
NASA Astrophysics Data System (ADS)
Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.
2015-04-01
The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and secondary organic aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for a deviation from pseudo-first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a simple chemical mechanism, taking into account the inorganic chemistry from IUPAC 2001 and a simple organic chemistry scheme including only a generic RO2 compound for all oxidized organic trace gases, or (2) a more exhaustive chemical mechanism, based on the Master Chemical Mechanism (MCM), including the chemistry of the different trace gases used during laboratory experiments. Both mechanisms take into account self- and cross-reactions of radical species. The simulations using these mechanisms reproduce the magnitude of the corrections needed to account for NO interferences and for a deviation from pseudo-first-order kinetics, as well as their dependence on the pyrrole-to-OH ratio and on the bimolecular reaction rate constants of trace gases. The reasonable agreement found between laboratory experiments and model simulations gives confidence in the parameterizations proposed to correct the total OH reactivity measured by CRM. However, it must be noted that the parameterizations presented in this paper are suitable for the CRM instrument used during the laboratory characterization and may not be appropriate for other CRM instruments, even though similar behaviours should be observed. It is therefore recommended that each group characterize its own instrument following the recommendations given in this study. Finally, the assessment of the limit of detection and total uncertainties is discussed and an example of field deployment of this CRM instrument is presented.
NASA Astrophysics Data System (ADS)
McAfee, S. A.; DeLaFrance, A.
2017-12-01
Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.
Assessing the implementation of bias correction in the climate prediction
NASA Astrophysics Data System (ADS)
Nadrah Aqilah Tukimat, Nurul
2018-04-01
Climate change has become an increasingly pressing and irregular issue. The steady increase of greenhouse gas (GHG) emissions into the atmospheric system strongly affects weather variability and global warming. It is therefore important to analyse long-term changes in climate parameters. However, the accuracy of climate simulations is always questioned, as it controls the reliability of the projection results. Thus, Linear Scaling (LS) was applied as a bias correction (BC) method to treat the gaps between observed and simulated results. Two rainfall stations in Pahang state were selected: Station Lubuk Paku and Station Temerloh. The Statistical Downscaling Model (SDSM) was used to relate local weather to atmospheric parameters in projecting the long-term rainfall trend. The results revealed that LS successfully reduced the error by up to 3% and produced better simulated climate results.
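For reference, Linear Scaling corrects each calendar month of the simulated series with the ratio (for rainfall) of observed to simulated calibration-period means. A minimal sketch with synthetic data, not the study's implementation:

```python
# Sketch of multiplicative Linear Scaling (LS) bias correction for rainfall:
# the simulated series is rescaled so its calibration-period monthly means
# match the observations. Arrays below are synthetic placeholders.
import numpy as np

def linear_scaling(sim, sim_months, obs_cal, sim_cal, cal_months):
    """One scaling factor per calendar month (1..12)."""
    corrected = np.empty_like(sim, dtype=float)
    for m in range(1, 13):
        factor = obs_cal[cal_months == m].mean() / sim_cal[cal_months == m].mean()
        corrected[sim_months == m] = sim[sim_months == m] * factor
    return corrected

rng = np.random.default_rng(0)
cal_months = np.tile(np.arange(1, 13), 30)               # 30 calibration years
obs_cal = rng.gamma(2.0, 80.0, cal_months.size)          # observed monthly rain, mm
sim_cal = 0.7 * obs_cal + rng.normal(0, 10, obs_cal.size)  # biased model output

corrected = linear_scaling(sim_cal, cal_months, obs_cal, sim_cal, cal_months)
print("bias before: %.1f mm, after: %.1f mm"
      % (sim_cal.mean() - obs_cal.mean(), corrected.mean() - obs_cal.mean()))
```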
Wu, Ying Ying; Plakseychuk, Anton; Shimada, Kenji
2014-11-01
Current external fixators for distraction osteogenesis (DO) are unable to correct all types of deformities in the lower limb and are difficult to use because of the lack of a pre-surgical planning system. We propose a DO system that consists of a surgical planner and a new, easy-to-setup unilateral fixator that not only corrects all lower limb deformities, but also generates the contralateral/predefined bone shape. Conventionally, bulky constructs with six or more joints (six degrees of freedom, 6DOF) are needed to correct a 3D deformity. By applying the axis-angle representation, we can achieve that with a compact construct with only two joints (2DOF). The proposed system makes use of computer-aided design software and computational methods to plan and simulate the planned procedure. Results of our stress analysis suggest that the stiffness of our proposed fixator is comparable to that of the Orthofix unilateral external fixator. We tested the surgical system on a model of an adult deformed tibia; the resulting bone trajectory deviates from the target bone trajectory by 1.8 mm, which is below our defined threshold error of 2 mm. We also extracted the transformation matrix that defines the deformity from the bone model and simulated the planned procedure. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
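The compact two-joint construct rests on the axis-angle representation of rotations: any 3D rotation can be written as a single rotation about one axis, so one rotary joint aligned with that axis can replace a chain of hinge joints. A minimal sketch of Rodrigues' formula, with an invented deformity axis and angle for illustration:

```python
# Sketch: a single axis-angle rotation (Rodrigues' formula) encodes a 3D
# angular correction that would otherwise need several hinge joints.
import numpy as np

def rotation_from_axis_angle(axis, angle):
    """Rotation matrix for `angle` radians about unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])      # cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Example: correct a combined angular/torsional deformity with one rotation.
R = rotation_from_axis_angle([0.3, 0.9, 0.3], np.deg2rad(15))
bone_axis = np.array([0.0, 0.0, 1.0])
print("corrected axis:", R @ bone_axis)
```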
Development of a Rolling Process Design Tool for Use in Improving Hot Roll Slab Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R; Wang, P
2003-05-06
In this quarter, our primary effort has been focused on model verification, emphasizing consistency in results for parallel and serial simulation runs. Progress has been made in refining the parallel thermal algorithms and in diminishing discretization effects in the contact region between the rollers and slab. We have received the metrology data of the ingot profile at the end of the fifth pass from Alcoa. Detailed comparisons between the data and the initial simulation result are being performed. Forthcoming from Alcoa are modifications to the fracture model based on additional experiments at lower strain rates. The original fracture model was implemented in the finite element code, but damage in the rolling simulation was not correct due to modeling errors at lower strain rates and high stress triaxiality. Validation simulations for the fracture model will continue when the experimentally-based adjustments to the parameter values become available.
NASA Astrophysics Data System (ADS)
Latif, M.
2017-12-01
We investigate the influence of the Atlantic Meridional Overturning Circulation (AMOC) on the North Atlantic sector surface air temperature (SAT) in two multi-millennial control integrations of the Kiel Climate Model (KCM). One model version employs a freshwater flux correction over the North Atlantic, while the other does not. A clear influence of the AMOC on North Atlantic sector SAT is simulated only in the corrected model, which depicts much reduced upper-ocean salinity and temperature biases in comparison to the uncorrected model. Further, the model with much reduced biases depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector relative to the uncorrected model. The enhanced SAT predictability in the corrected model is due to a stronger and more variable AMOC and its enhanced influence on North Atlantic sea surface temperature (SST). Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SST and exhibit a smaller SAT predictability over the North Atlantic sector.
Universality of (2+1)-dimensional restricted solid-on-solid models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2016-08-01
Extensive dynamical simulations of restricted solid-on-solid models in D = 2+1 dimensions have been done using parallel multisurface algorithms implemented on graphics cards. Numerical evidence is presented that these models exhibit Kardar-Parisi-Zhang surface growth scaling, irrespective of the step height N. We show that increasing N increases the corrections to scaling; thus models with smaller step sizes better describe the asymptotic, long-wavelength scaling behavior.
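A minimal serial sketch of the RSOS deposition rule follows (the study itself uses massively parallel multisurface GPU algorithms; the lattice size and step count here are illustrative only):

```python
# Toy (2+1)-dimensional RSOS growth with maximum step height N: a deposition
# attempt is accepted only if all nearest-neighbor height differences stay
# within +/- N (periodic boundaries).
import numpy as np

def rsos_growth(L=64, N=1, steps=200_000, seed=1):
    rng = np.random.default_rng(seed)
    h = np.zeros((L, L), dtype=int)
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        neighbors = (h[(i + 1) % L, j], h[(i - 1) % L, j],
                     h[i, (j + 1) % L], h[i, (j - 1) % L])
        if all(abs(h[i, j] + 1 - hn) <= N for hn in neighbors):
            h[i, j] += 1
    return h

h = rsos_growth()
print("interface width:", h.std())   # grows as t^beta in the KPZ class
```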
King, Zachary A; O'Brien, Edward J; Feist, Adam M; Palsson, Bernhard O
2017-01-01
The metabolic byproducts secreted by growing cells can be easily measured and provide a window into the state of a cell; they have been essential to the development of microbiology, cancer biology, and biotechnology. Progress in computational modeling of cells has made it possible to predict metabolic byproduct secretion with bottom-up reconstructions of metabolic networks. However, owing to a lack of data, it has not been possible to validate these predictions across a wide range of strains and conditions. Through literature mining, we were able to generate a database of Escherichia coli strains and their experimentally measured byproduct secretions. We simulated these strains in six historical genome-scale models of E. coli, and we report that the predictive power of the models has increased as they have expanded in size and scope. The latest genome-scale model of metabolism correctly predicts byproduct secretion for 35/89 (39%) of designs. The next-generation genome-scale model of metabolism and gene expression (ME-model) correctly predicts byproduct secretion for 40/89 (45%) of designs, and we show that ME-model predictions could be further improved through kinetic parameterization. We analyze the failure modes of these simulations and discuss opportunities to improve prediction of byproduct secretion. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
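The genome-scale predictions discussed above rest on flux balance analysis: a linear program that maximizes a growth objective subject to steady-state mass balance S v = 0 and flux bounds, with secretion read off from the exchange fluxes. A toy sketch with an invented three-reaction network (genome-scale models solve the same LP with thousands of reactions):

```python
# Toy flux balance analysis (FBA): maximize growth subject to S v = 0 and
# bounds, then read off the predicted byproduct secretion flux.
import numpy as np
from scipy.optimize import linprog

# Metabolite rows: [substrate, byproduct]; reaction columns:
# uptake, growth (consumes substrate), secretion (exports byproduct).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  0.5, -1.0]])
bounds = [(0, 10), (0, None), (0, None)]   # substrate uptake capped at 10
c = np.array([0.0, -1.0, 0.0])             # maximize growth => minimize -v_growth

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("growth flux:", res.x[1], "predicted byproduct secretion:", res.x[2])
```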
NASA Technical Reports Server (NTRS)
Hlaing, Soe; Gilerson, Alexander; Harmal, Tristan; Tonizzo, Alberto; Weidemann, Alan; Arnone, Robert; Ahmed, Samir
2012-01-01
Water-leaving radiances, retrieved from in situ or satellite measurements, need to be corrected for the bidirectional properties of the measured light in order to standardize the data and make them comparable with each other. The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers, which have different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with average improvement of 2.4% over the spectral range. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of Moderate Resolution Imaging Spectroradiometer satellite data when the proposed BRDF correction is used in lieu of the current algorithm. It is shown that the discrepancies between coincident in-situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm.
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR 10(-4) person-years at 1 Gy (the linear term) is decreased by about 8%, while the corrected quadratic term (EAR 10(-4) person-years/Gy(2)) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable.
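SIMEX works by deliberately adding extra simulated measurement error at increasing multiples of the known error variance, refitting the model at each level, and extrapolating the coefficient back to the zero-error case (lambda = -1). A sketch on synthetic data, not the LSS analysis itself:

```python
# SIMEX sketch: inflate a known additive measurement error by factors
# (1 + lambda), refit at each lambda, extrapolate to lambda = -1.
import numpy as np

rng = np.random.default_rng(42)
n, beta_true, sigma_u = 2000, 0.5, 1.0
x_true = rng.normal(0, 2, n)
y = beta_true * x_true + rng.normal(0, 0.5, n)
w = x_true + rng.normal(0, sigma_u, n)        # "dose" observed with error

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = []
for lam in lambdas:
    # average the refitted slope over replicates of the added pseudo-error
    b = np.mean([np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
                 for _ in range(50)])
    betas.append(b)

coef = np.polyfit(lambdas, betas, 2)          # quadratic extrapolant
print("naive estimate:", round(betas[0], 3),
      "SIMEX estimate:", round(np.polyval(coef, -1.0), 3))
```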
Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.
2014-12-01
This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) on the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We use hydrologic simulations over different basins in the region to evaluate the propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will be investigated and included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation achieves a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
NASA Astrophysics Data System (ADS)
Glisan, J. M.; Gutowski, W. J.; Higgins, M.; Cassano, J. J.
2011-12-01
Pan-Arctic WRF (PAW) simulations produced using the 50-km wr50a domain developed for the fully-coupled Regional Arctic Climate Model (RACM) were found to produce deep atmospheric circulation biases over the northern Pacific Ocean, manifested in pressure, geopotential height, and temperature fields. Various remedies were tested unsuccessfully to correct these large biases, such as modifying the physical domain or using different initial/boundary conditions. Spectral (interior) nudging was introduced as a way of constraining the model to be more consistent with observed behavior. However, such control over numerical model behavior raises concerns over how much nudging may affect unforced variability and extremes. Strong nudging may reduce or filter out extreme events, since the nudging pushes the model toward a relatively smooth, large-scale state. The question then becomes: what is the minimum spectral nudging needed to correct the biases occurring on the RACM domain while not limiting PAW simulation of extreme events? To determine this, case studies were devised using a six-member PAW ensemble on the RACM grid with varying spectral nudging strength. Two simulations were run, one in the cold season (January 2007) and one in the warm season (July 2007). Precipitation and 2-m temperature fields were extracted from the output and analyzed to determine how changing spectral nudging strength impacts both temporal and spatial temperature and precipitation extremes. The maximum and minimum temperatures at each point from among the ensemble members were examined at the 95% confidence interval, and the maxima and minima over the simulation period were also considered. Results suggest that there is a marked lack of sensitivity to the degree of nudging. Moreover, it appears nudging strength can be considerably smaller than the standard strength and still produce reliably good simulations.
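Spectral nudging relaxes only the largest scales of the model state toward the driving fields, leaving smaller scales free to evolve. A minimal sketch on a 2D field, where the cutoff wavenumber and relaxation coefficient play the role of the "nudging strength" varied across the ensemble:

```python
# Sketch of spectral (interior) nudging: relax wavenumbers |k| <= k_max of
# the model field toward the driving field; alpha sets the nudging strength.
import numpy as np

def spectral_nudge(model, driver, alpha=0.1, k_max=3):
    fm, fd = np.fft.fft2(model), np.fft.fft2(driver)
    kx = np.fft.fftfreq(model.shape[0]) * model.shape[0]
    ky = np.fft.fftfreq(model.shape[1]) * model.shape[1]
    mask = (np.abs(kx)[:, None] <= k_max) & (np.abs(ky)[None, :] <= k_max)
    fm[mask] += alpha * (fd[mask] - fm[mask])     # nudge large scales only
    return np.real(np.fft.ifft2(fm))

rng = np.random.default_rng(3)
driver = rng.normal(size=(64, 64))
model = driver + 0.5 * rng.normal(size=(64, 64))  # model drifted from driver
nudged = spectral_nudge(model, driver, alpha=0.5)
print("large-scale error reduced:",
      np.abs(nudged - driver).mean() < np.abs(model - driver).mean())
```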
Schwalm, C.; Huntzinger, Deborah N.; Cook, Robert B.; ...
2015-03-11
Significant changes in the water cycle are expected under current global environmental change. Robust assessment of present-day water cycle dynamics at continental to global scales is confounded by shortcomings in the observed record. Modeled assessments also yield conflicting results, which are linked to differences in model structure and simulation protocol. Here we compare simulated gridded (1° spatial resolution) runoff from six terrestrial biosphere models (TBMs), seven reanalysis products, and one gridded surface station product in the contiguous United States (CONUS) from 2001 to 2005. We evaluate the consistency of these 14 estimates with stream gauge data, both as depleted flow and corrected for net withdrawals (2005 only), at the CONUS and water resource region scale, as well as examining similarity across TBMs and reanalysis products at the grid cell scale. Mean runoff across all simulated products and regions varies widely (range: 71 to 356 mm yr-1) relative to observed continental-scale runoff (209 or 280 mm yr-1 when corrected for net withdrawals). Across all 14 products, 8 exhibit Nash-Sutcliffe efficiency values in excess of 0.8 and three are within 10% of the observed value. Region-level mismatch exhibits a weak pattern of overestimation in western and underestimation in eastern regions, although two products are systematically biased across all regions and the bias largely scales with water use. Although gridded composite TBM and reanalysis runoff show some regional similarities, individual product values are highly variable. At the coarse scales used here we find that progress in better constraining simulated runoff requires standardized forcing data and the explicit incorporation of human effects (e.g., water withdrawals by source, fire, and land use change). © 2015 Elsevier B.V. All rights reserved.
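The Nash-Sutcliffe efficiency used as the skill score above compares squared errors against the variance of the observations; 1 is a perfect match and 0 is no better than predicting the observed mean. A minimal sketch with placeholder runoff values:

```python
# Nash-Sutcliffe efficiency: NSE = 1 - sum((sim-obs)^2) / sum((obs-mean)^2).
import numpy as np

def nash_sutcliffe(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([210.0, 280.0, 190.0, 240.0])   # regional runoff, mm/yr (toy values)
sim = np.array([200.0, 260.0, 215.0, 230.0])
print("NSE = %.2f" % nash_sutcliffe(sim, obs))
```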
PyNN: A Common Interface for Neuronal Network Simulators.
Davison, Andrew P; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre
2008-01-01
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
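A minimal PyNN script might look as follows (a sketch following the current PyNN API, assuming an installed NEST backend; swapping the import line to pyNN.neuron runs the same model on a different simulator):

```python
# Write once, run on any supported backend: the simulator is chosen by the
# import line, not by the model description.
import pyNN.nest as sim

sim.setup(timestep=0.1)

noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0))
sim.Projection(noise, cells, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.02, delay=1.0))

cells.record("spikes")
sim.run(500.0)                                  # simulate 500 ms

data = cells.get_data().segments[0]             # Neo data structures
print("total spikes:", sum(len(st) for st in data.spiketrains))
sim.end()
```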
2010-01-01
Background: This paper addresses the statistical use of accessibility and availability indices and the effect of study boundaries on these measures. The measures are evaluated via an extensive simulation based on cluster models for local outlet density. We define outlet to mean either a food retail store (convenience store, supermarket, gas station) or a restaurant (limited-service or full-service restaurant). We designed a simulation whereby a cluster outlet model is assumed in a large study window and an internal subset of that window is constructed. We performed simulations on various criteria, including one scenario representing an urban area with 2000 outlets as well as a non-urban area simulated with only 300 outlets. A comparison is made between estimates obtained with the full study area and estimates using only the subset area. This allows the study of the effect of edge censoring on accessibility measures. Results: The results suggest that considerable bias is found at the edges of study regions, in particular for accessibility measures. Edge effects are smaller for availability measures (when not smoothed) and also for short-range accessibility. Conclusions: It is recommended that any study utilizing these measures should correct for edge effects. The use of edge correction via guard areas is recommended, and the avoidance of large-range distance-based accessibility measures is also proposed. PMID:20663199
NASA Astrophysics Data System (ADS)
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2014-03-01
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
Further evidence for the increased power of LOD scores compared with nonparametric methods.
Durner, M; Vieland, V J; Greenberg, D A
1999-01-01
In genetic analysis of diseases in which the underlying model is unknown, "model-free" methods, such as affected sib pair (ASP) tests, are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods will outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models in fact represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI), (b) ASP methods, and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model, and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.
NASA Astrophysics Data System (ADS)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; Chang, Ping
2017-10-01
We alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments is conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8-50% smaller compared to the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Zhang, Jian; Yang, Jianyi; Jang, Richard; Zhang, Yang
2015-01-01
Experimental structure determination remains very difficult for G protein-coupled receptors (GPCRs). We propose a new hybrid protocol to construct GPCR structure models that integrates experimental mutagenesis data with ab initio transmembrane (TM) helix assembly simulations. The method was tested on 24 known GPCRs, where the ab initio TM-helix assembly procedure constructed the correct fold for 20 cases. When combined with weak-homology and sparse mutagenesis restraints, the method generated correct folds for all the tested cases, with an average C-alpha RMSD of 2.4 Å in the TM regions. The new hybrid protocol was applied to model all 1026 GPCRs in the human genome, where 923 have a high confidence score and are expected to have correct folds; these contain many pharmaceutically important families with no previously solved structures, including Trace amine, Prostanoids, Releasing hormones, Melanocortins, Vasopressin and Neuropeptide Y receptors. The results demonstrate new progress on genome-wide structure modeling of transmembrane proteins. PMID:26190572
Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method
NASA Astrophysics Data System (ADS)
Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun
2017-10-01
Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended-load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate our scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that our developed model can reliably simulate local scour.
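The sand slide correction can be pictured as a mass-conserving relaxation: wherever the local bed slope exceeds the angle of repose, sediment is exchanged between neighboring cells until all slopes are admissible. A hedged one-dimensional sketch (the paper's method is 2D and coupled to the dynamic mesh):

```python
# 1D sand-slide sketch: relax over-steep bed slopes below the angle of repose
# by moving sediment from the higher to the lower cell, conserving mass.
import numpy as np

def sand_slide(bed, dx, repose_deg=32.0, relax=0.5, max_iter=1000):
    max_step = dx * np.tan(np.deg2rad(repose_deg))   # max admissible height jump
    bed = bed.copy()
    for _ in range(max_iter):
        dh = np.diff(bed)
        violations = np.abs(dh) > max_step
        if not violations.any():
            break
        for i in np.flatnonzero(violations):
            excess = (abs(dh[i]) - max_step) * relax / 2.0
            hi, lo = (i, i + 1) if bed[i] > bed[i + 1] else (i + 1, i)
            bed[hi] -= excess                        # mass-conserving exchange
            bed[lo] += excess
    return bed

bed = np.array([0.0, 0.0, 1.0, 0.0, 0.0])            # over-steep scour-hole flank
print(sand_slide(bed, dx=0.5))
```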
Pulawski, Wojciech; Jamroz, Michal; Kolinski, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2016-11-28
The CABS coarse-grained model is a well-established tool for modeling globular proteins (predicting their structure, dynamics, and interactions). Here we introduce an extension of the CABS representation and force field (CABS-membrane) to the modeling of the effect of the biological membrane environment on the structure of membrane proteins. We validate the CABS-membrane model in folding simulations of 10 short helical membrane proteins not using any knowledge about their structure. The simulations start from random protein conformations placed outside the membrane environment and allow for full flexibility of the modeled proteins during their spontaneous insertion into the membrane. In the resulting trajectories, we have found models close to the experimental membrane structures. We also attempted to select the correctly folded models using simple filtering followed by structural clustering combined with reconstruction to the all-atom representation and all-atom scoring. The CABS-membrane model is a promising approach for further development toward modeling of large protein-membrane systems.
Accuracy of 1D microvascular flow models in the limit of low Reynolds numbers.
Pindera, Maciej Z; Ding, Hui; Athavale, Mahesh M; Chen, Zhijian
2009-05-01
We describe results of numerical simulations of steady flows in tubes with branch bifurcations using fully 3D and reduced 1D geometries. The intent is to delineate the range of validity of reduced models used for simulations of flows in microcapillary networks, as a function of the flow Reynolds number Re. Results from model problems indicate that for Re less than 1 and possibly as high as 10, vasculatures may be represented by strictly 1D Poiseuille flow geometries with flow variation in the axial dimensions only. In that range flow rate predictions in the different branches generated by 1D and 3D models differ by a constant factor, independent of Re. When the cross-sectional areas of the branches are constant these differences are generally small and appear to stem from an uncertainty of how the individual branch lengths are defined. This uncertainty can be accounted for by a simple geometrical correction. For non-constant cross-sections the differences can be much more significant. If additional corrections for the presence of branch junctions and flow area variations are not taken into account in 1D models of complex vasculatures, the resultant flow predictions should be interpreted with caution.
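In the 1D reduction discussed above, each branch acts as a Poiseuille resistor with hydraulic conductance G = pi R^4 / (8 mu L), so a bifurcation splits flow in proportion to the branch conductances, independent of Re. A sketch with illustrative microvascular dimensions:

```python
# Poiseuille-resistor view of a branch bifurcation: flow split is fixed by
# branch geometry (radius, length) and viscosity, not by Reynolds number.
import numpy as np

def conductance(radius, length, mu=3.5e-3):   # mu: blood viscosity, Pa s
    return np.pi * radius**4 / (8.0 * mu * length)

# Parent vessel feeding two daughter branches that drain to equal pressure:
G1 = conductance(40e-6, 1.0e-3)    # 40 um radius, 1 mm long
G2 = conductance(30e-6, 1.5e-3)    # 30 um radius, 1.5 mm long
dP = 2000.0                        # pressure drop across the junction, Pa

Q1, Q2 = G1 * dP, G2 * dP
print("flow split Q1/Q2 = %.2f" % (Q1 / Q2))
```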
NASA Astrophysics Data System (ADS)
Braun, Marco; Chaumont, Diane
2013-04-01
Using climate model output to explore climate change impacts on hydrology requires several considerations, choices and methods in the post-treatment of the datasets. In the effort of producing a comprehensive database of climate change scenarios for over 300 watersheds in the Canadian province of Québec, a selection of state-of-the-art procedures was applied to an ensemble comprising 87 climate simulations. The climate data ensemble is based on global climate simulations from the Coupled Model Intercomparison Project - Phase 3 (CMIP3), regional climate simulations from the North American Regional Climate Change Assessment Program (NARCCAP), and operational simulations produced at Ouranos. Information on the response of hydrological systems to changing climate conditions can be derived by linking climate simulations with hydrological models. However, the direct use of raw climate model output variables as drivers for hydrological models is limited by issues such as spatial resolution and the calibration of hydro models with observations. Methods for downscaling and bias correcting the data are required to achieve seamless integration of climate simulations with hydro models. The effects of four different approaches to data post-processing on the results were explored and compared. We present the lessons learned from building the largest database yet for multiple stakeholders in the hydro power and water management sector in Québec, putting an emphasis on the benefits and pitfalls in choosing simulations, extracting the data, performing bias corrections and documenting the results. A discussion of the sources and significance of uncertainties in the data is also included. The climatological database was subsequently used by the state-owned hydro power company Hydro-Québec and the Centre d'expertise hydrique du Québec (CEHQ), the provincial water authority, to simulate future stream flows and analyse the impacts on hydrological indicators. While this submission focuses on the production of climatic scenarios for application in hydrology, the submission « The (cQ)2 project: assessing watershed scale hydrological changes for the province of Québec at the 2050 horizon, a collaborative framework » by Catherine Guay describes how Hydro-Québec and CEHQ put the data into use.
Virtual milk for modelling and simulation of dairy processes.
Munir, M T; Zhang, Y; Yu, W; Wilson, D I; Young, B R
2016-05-01
The modeling of dairy processing using a generic process simulator suffers from shortcomings, given that many simulators do not contain milk components in their component libraries. Recently, pseudo-milk components for a commercial process simulator were proposed for simulation and the current work extends this pseudo-milk concept by studying the effect of both total milk solids and temperature on key physical properties such as thermal conductivity, density, viscosity, and heat capacity. This paper also uses expanded fluid and power law models to predict milk viscosity over the temperature range from 4 to 75°C and develops a succinct regressed model for heat capacity as a function of temperature and fat composition. The pseudo-milk was validated by comparing the simulated and actual values of the physical properties of milk. The milk thermal conductivity, density, viscosity, and heat capacity showed differences of less than 2, 4, 3, and 1.5%, respectively, between the simulated results and actual values. This work extends the capabilities of the previously proposed pseudo-milk and of a process simulator to model dairy processes, processing different types of milk (e.g., whole milk, skim milk, and concentrated milk) with different intrinsic compositions, and to predict correct material and energy balances for dairy processes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Hybrid molecular-continuum simulations using smoothed dissipative particle dynamics
Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott
2015-01-01
We present a new multiscale simulation methodology for coupling a region with atomistic detail simulated via molecular dynamics (MD) to a numerical solution of the fluctuating Navier-Stokes equations obtained from smoothed dissipative particle dynamics (SDPD). In this approach, chemical potential gradients emerge due to differences in resolution within the total system and are reduced by introducing a pairwise thermodynamic force inside the buffer region between the two domains where particles change from MD to SDPD types. When combined with a multi-resolution SDPD approach, such as the one proposed by Kulkarni et al. [J. Chem. Phys. 138, 234105 (2013)], this method makes it possible to systematically couple atomistic models to arbitrarily coarse continuum domains modeled as SDPD fluids with varying resolution. We test this technique by showing that it correctly reproduces thermodynamic properties across the entire simulation domain for a simple Lennard-Jones fluid. Furthermore, we demonstrate that this approach is also suitable for non-equilibrium problems by applying it to simulations of the start up of shear flow. The robustness of the method is illustrated with two different flow scenarios in which shear forces act in directions parallel and perpendicular to the interface separating the continuum and atomistic domains. In both cases, we obtain the correct transient velocity profile. We also perform a triple-scale shear flow simulation where we include two SDPD regions with different resolutions in addition to a MD domain, illustrating the feasibility of a three-scale coupling. PMID:25637963
Tikhonov, Denis S; Sharapa, Dmitry I; Schwabedissen, Jan; Rybkin, Vladimir V
2016-10-12
In this study, we investigate the ability of classical molecular dynamics (MD) and Monte Carlo (MC) simulations to model intramolecular vibrational motion. These simulations were used to compute thermally averaged geometrical structures and infrared vibrational intensities for a benchmark set previously studied by gas electron diffraction (GED): CS2, benzene, chloromethylthiocyanate, pyrazinamide and 9,12-I2-1,2-closo-C2B10H10. The MD sampling of NVT ensembles was performed using chains of Nose-Hoover (NH) thermostats as well as the generalized Langevin equation (GLE) thermostat. The performance of the theoretical models based on the classical MD and MC simulations was compared with the experimental data and also with alternative computational techniques: a conventional approach based on the Taylor expansion of the potential energy surface, path-integral MD, and MD with a quantum thermal bath (QTB) based on the generalized Langevin equation. A straightforward application of the classical simulations resulted, as expected, in poor accuracy of the calculated observables due to the complete neglect of quantum effects. However, the introduction of a posteriori quantum corrections significantly improved the situation. The application of these corrections to MD simulations of systems with large-amplitude motions was demonstrated for chloromethylthiocyanate. The comparison of the theoretical vibrational spectra revealed that the GLE thermostat used in this work is not applicable for this purpose. On the other hand, the NH chains yielded reasonably good results.
Evaluations of high-resolution dynamically downscaled ensembles over the contiguous United States
NASA Astrophysics Data System (ADS)
Zobel, Zachary; Wang, Jiali; Wuebbles, Donald J.; Kotamarthi, V. Rao
2018-02-01
This study uses the Weather Research and Forecasting (WRF) model to evaluate the performance of six dynamically downscaled decadal historical simulations with 12-km resolution for a large domain (7200 × 6180 km) that covers most of North America. The initial and boundary conditions are from three global climate models (GCMs) and one reanalysis dataset. The GCMs employed in this study are the Geophysical Fluid Dynamics Laboratory Earth System Model with Generalized Ocean Layer Dynamics component, the Community Climate System Model, version 4, and the Hadley Centre Global Environment Model, version 2-Earth System. The reanalysis data are from the National Centers for Environmental Prediction-U.S. Department of Energy Reanalysis II. We analyze the effects of bias correcting the lateral boundary conditions and the effects of spectral nudging. We evaluate the model performance for seven surface variables and four upper-atmospheric variables based on their climatology and extremes for seven subregions across the United States. The results indicate that a simulation's performance depends on both location and the features/variables being tested. We find that the use of bias correction and/or nudging is beneficial in many situations, but employing these when running the RCM is not always an improvement when compared to the reference data. The use of an ensemble mean and median leads to better performance in measuring the climatology, while it is significantly biased for the extremes, showing much larger differences from the reference data than individual GCM-driven model simulations. This study provides a comprehensive evaluation of these historical model runs in order to support informed decisions when making future projections.
United States Air Force Graduate Student Research Program. 1989 Program Technical Report. Volume 1
1989-12-01
Analysis is required to supplement the experimental observations, which requires the formulation of a realistic model of the physical problem... RECOMMENDATION: a. From our point of view, the research team considered the NASTRAN model correct due to the vibrational frequencies, but we are still... structure of the program was understood, attempts were made to change the model from a thunderstorm simulation
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
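The ambiguity can be reproduced in a few lines: simulate one non-isothermal curve with known first-order kinetics, then fit it with the Coats-Redfern model-fitting equation under several assumed models g(alpha). Each model fits with high linearity yet returns a different apparent activation energy (a sketch of the effect; the paper uses the Arrhenius and Šatava methods, but the conclusion is the same):

```python
# One simulated non-isothermal curve (first-order, Ea = 120 kJ/mol) fitted
# by Coats-Redfern under different assumed kinetic models g(alpha).
import numpy as np

R, Ea, A, beta = 8.314, 120e3, 1e10, 10 / 60.0   # beta: 10 K/min in K/s
T = np.linspace(500, 700, 2000)
alpha = np.zeros_like(T)
for i in range(1, T.size):                        # Euler integration of dalpha/dT
    dadT = (A / beta) * np.exp(-Ea / (R * T[i - 1])) * (1 - alpha[i - 1])
    alpha[i] = min(alpha[i - 1] + dadT * (T[i] - T[i - 1]), 0.999999)

models = {"F1 (first order)": lambda a: -np.log(1 - a),
          "D3 (diffusion)": lambda a: (1 - (1 - a) ** (1 / 3)) ** 2,
          "R2 (contracting area)": lambda a: 1 - (1 - a) ** 0.5}

mask = (alpha > 0.05) & (alpha < 0.95)
for name, g in models.items():
    y, x = np.log(g(alpha[mask]) / T[mask] ** 2), 1.0 / T[mask]
    slope, _ = np.polyfit(x, y, 1)                # slope = -Ea_apparent / R
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: Ea = {-slope * R / 1e3:.0f} kJ/mol, |r| = {abs(r):.4f}")
```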
Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing, and it can seriously degrade the accuracy of the monitoring results. Based on a gross-error correction method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress the unwrapping error when the ratio of unwrapping errors is low, and that the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
Classical and quantum simulations of warm dense carbon
NASA Astrophysics Data System (ADS)
Whitley, Heather; Sanchez, David; Hamel, Sebastien; Correa, Alfredo; Benedict, Lorin
We have applied classical and DFT-based molecular dynamics (MD) simulations to study the equation of state of carbon in the warm dense matter regime (ρ = 3.7 g/cc, 0.86 eV
A two-dimensional, finite-difference model simulating a highway has been developed which is able to handle linear and nonlinear chemical reactions. Transport of the pollutants is accomplished by use of an upstream-flux-corrected algorithm developed at the Naval Research Laborator...
An Improved K-Epsilon Model for Near-Wall Turbulence and Comparison with Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Shih, T. H.
1990-01-01
An improved k-epsilon model for low Reynolds number turbulence near a wall is presented. The near-wall asymptotic behavior of the eddy viscosity and the pressure transport term in the turbulent kinetic energy equation is analyzed. Based on this analysis, a modified eddy viscosity model, having correct near-wall behavior, is suggested, and a model for the pressure transport term in the k-equation is proposed. In addition, a modeled dissipation rate equation is reformulated. Fully developed channel flows were used for model testing. The calculations using various k-epsilon models are compared with direct numerical simulations. The results show that the present k-epsilon model performs well in predicting the behavior of near-wall turbulence. Significant improvement over previous k-epsilon models is obtained.
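The asymptotic argument behind "correct near-wall behavior" can be made explicit (standard k-epsilon notation; a sketch of the reasoning, not the paper's specific damping function):

```latex
\nu_t = C_\mu \, f_\mu \, \frac{k^2}{\varepsilon}, \qquad
u' = \mathcal{O}(y), \quad v' = \mathcal{O}(y^2)
\;\Rightarrow\; k = \mathcal{O}(y^2), \quad
\varepsilon \to \varepsilon_w > 0 \quad (y \to 0).
```

Since the exact shear stress obeys -\overline{u'v'} = \mathcal{O}(y^3) while \partial U/\partial y = \mathcal{O}(1), the true eddy viscosity must vanish as \nu_t = \mathcal{O}(y^3); but k^2/\varepsilon = \mathcal{O}(y^4), so the damping function has to behave as f_\mu = \mathcal{O}(y^{-1}) near the wall for the modeled eddy viscosity to have the correct limit.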
On simulations of rarefied vapor flows with condensation
NASA Astrophysics Data System (ADS)
Bykov, Nikolay; Gorbachev, Yuriy; Fyodorov, Stanislav
2018-05-01
Results of direct simulation Monte Carlo computations of 1D spherical and 2D axisymmetric expansions of condensing water vapor into vacuum are presented. Two models, based on the kinetic approach and on the size-corrected classical nucleation theory, are employed for the simulations. Differences in the results are discussed, and the advantages of the kinetic approach over the modified classical theory are demonstrated. The impact of clusterization on flow parameters is observed when the volume fraction of clusters in the expansion region exceeds 5%. Comparison of the simulation data with experimental results demonstrates good agreement.
Bypass Transitional Flow Calculations Using a Navier-Stokes Solver and Two-Equation Models
NASA Technical Reports Server (NTRS)
Liou, William W.; Shih, Tsan-Hsing; Povinelli, L. A. (Technical Monitor)
2000-01-01
Bypass transitional flows over a flat plate were simulated using a Navier-Stokes solver and two-equation models. A new model for bypass transition, which occurs in cases with high free-stream turbulence intensity (TI), is described. The new transition model is developed by adding an intermittency correction function to an existing two-equation turbulence model. The advantages of using the Navier-Stokes equations, as opposed to boundary-layer equations, in bypass transition simulations are also illustrated. The results for two test flows over a flat plate with different levels of free-stream turbulence intensity are reported. Comparisons with the experimental measurements show that the new model can capture very well both the onset and the length of bypass transition.
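The paper's specific correction function is not given in the abstract. As a hedged stand-in, the sketch below scales a fully turbulent eddy viscosity by a Dhawan-Narasimha-type intermittency distribution, which captures the basic mechanism: the flow is treated as laminar upstream of onset and blends smoothly to fully turbulent downstream.

```python
import numpy as np

def intermittency(x, x_onset, lam):
    """Dhawan-Narasimha-type intermittency (a stand-in for the paper's
    correction function): gamma = 0 upstream of transition onset, -> 1
    downstream, over a length scale lam."""
    xi = np.maximum(x - x_onset, 0.0) / lam
    return 1.0 - np.exp(-0.412 * xi**2)

# Blend laminar and fully turbulent behaviour by scaling nu_t with gamma:
x = np.linspace(0.0, 2.0, 5)       # streamwise position, illustrative units
gamma = intermittency(x, x_onset=0.5, lam=0.3)
nu_t_turb = 1e-3                   # fully turbulent eddy viscosity (toy value)
nu_t_eff = gamma * nu_t_turb       # transitional model: nu_t weighted by gamma
print(np.round(gamma, 3), nu_t_eff)
```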
Mean ionic activity coefficients in aqueous NaCl solutions from molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mester, Zoltan; Panagiotopoulos, Athanassios Z., E-mail: azp@princeton.edu
The mean ionic activity coefficients of aqueous NaCl solutions of varying concentrations at 298.15 K and 1 bar have been obtained from molecular dynamics simulations by gradually turning on the interactions of an ion pair inserted into the solution. Several common non-polarizable water and ion models have been used in the simulations. Gibbs-Duhem equation calculations of the thermodynamic activity of water are used to confirm the thermodynamic consistency of the mean ionic activity coefficients. While the majority of model combinations predict the correct trends in mean ionic activity coefficients, they overestimate their values at high salt concentrations. The solubility predictions also suffer from inaccuracies, with all models underpredicting the experimental values, some by large factors. These results point to the need for further ion and water model development.
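The gradual-coupling route amounts to a thermodynamic integration: the excess chemical potential of the inserted pair is mu_ex = integral over lambda of <dU/dlambda>. The fragment below sketches only the quadrature and conversion step; the <dU/dlambda> window averages and the reference chemical potential are placeholders for quantities that would come from MD runs, and concentration-dependent ideal-solution terms are omitted for brevity.

```python
import numpy as np

kB_T = 2.479                               # RT in kJ/mol at 298.15 K
lam = np.linspace(0.0, 1.0, 11)            # coupling parameter windows
dU_dlam = -720.0*lam**0.5 + 30.0           # hypothetical MD window averages, kJ/mol

mu_ex = np.trapz(dU_dlam, lam)             # excess chem. potential of the *pair*

# Mean ionic activity coefficient relative to a chosen reference state
# (the factor 2 accounts for the two ions in the pair):
mu_ex_ref = -448.0                         # hypothetical reference value, kJ/mol
gamma_pm = np.exp((mu_ex - mu_ex_ref)/(2.0*kB_T))
print(f"mu_ex = {mu_ex:.1f} kJ/mol, gamma_+- = {gamma_pm:.3f}")
```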
In-depth analysis and modelling of self-heating effects in nanometric DGMOSFETs
NASA Astrophysics Data System (ADS)
Roldán, J. B.; González, B.; Iñiguez, B.; Roldán, A. M.; Lázaro, A.; Cerdeira, A.
2013-01-01
Self-heating effects (SHEs) in nanometric symmetrical double-gate MOSFETs (DGMOSFETs) have been analysed. An equivalent thermal circuit for the transistors has been developed to characterise thermal effects, where the temperature and thickness dependency of the thermal conductivity of the silicon and oxide layers within the devices has been included. The equivalent thermal circuit is consistent with simulations using a commercial technology computer-aided design (TCAD) tool (Sentaurus by Synopsys). In addition, a model for DGMOSFETs has been developed where SHEs have been considered in detail, taking into account the temperature dependence of the low-field mobility, saturation velocity, and inversion charge. The model correctly reproduces Sentaurus simulation data for the typical bias range used in integrated circuits. Lattice temperatures predicted by simulation are coherently reproduced by the model for varying silicon layer geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions themselves, as opposed to their parameters as in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard-Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
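The simplest consequence of this first-order picture: the average potential energy under a modified pair potential can be re-predicted from the original run's radial distribution function, without re-running the simulation. The g(r), state point, and potentials below are toy stand-ins, not the paper's systems.

```python
import numpy as np

# First-order re-prediction of <U> for a modified pair potential u'(r), using
# g(r) sampled from the *original* simulation:
#   <U'> ~= <U> + (N*rho/2) * integral of [u'(r) - u(r)] g(r) 4 pi r^2 dr

def u_lj(r, eps=1.0, sigma=1.0):
    return 4*eps*((sigma/r)**12 - (sigma/r)**6)

def u_mod(r, eps=1.05, sigma=1.0):   # "new" potential: slightly deeper well
    return 4*eps*((sigma/r)**12 - (sigma/r)**6)

r = np.linspace(0.85, 5.0, 2000)
g = np.where(r < 1.0, 0.0, 1.0 + 0.4*np.exp(-(r - 1.12)**2/0.02))  # toy liquid g(r)

N, rho, U_orig = 500, 0.8, -2400.0   # particles, density, sampled <U> (all toy)
dU = N*rho/2.0 * np.trapz((u_mod(r) - u_lj(r))*g*4*np.pi*r**2, r)
print(f"predicted <U'> = {U_orig + dU:.1f} (first-order shift {dU:+.1f})")
```

The prediction is reliable only while the perturbed ensemble still samples essentially the same configurations, which is the validity condition stated in the abstract.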
NASA Astrophysics Data System (ADS)
Rytka, C.; Lungershausen, J.; Kristiansen, P. M.; Neyer, A.
2016-06-01
Flow simulations can cut down both costs and time for the development of injection moulded polymer parts with functional surfaces used in life science and optical applications. We simulated the polymer melt flow into 3D micro- and nanostructures with Moldflow and Comsol and compared the results to real iso- and variothermal injection moulding trials below, at, and above the transition temperature of the polymer. By adjusting the heat transfer coefficient and the transition temperature in the simulation, it was possible to achieve good agreement with experimental findings at different processing conditions (mould temperature, injection velocity) for two polymers, namely polymethylmethacrylate and amorphous polyamide. The macroscopic model can be scaled down in volume and number of elements to save computational time in microstructure simulations and, above all, to enable nanostructure simulations, as long as local boundary conditions such as flow front speed are transferred correctly. The heat transfer boundary condition used in Moldflow was further evaluated in Comsol. Results showed that the heat transfer coefficient needs to be increased compared to macroscopic moulding in order to represent interfacial polymer/mould effects correctly. The transition temperature is most important in the packing phase for variothermal injection moulding.
The effect of anthropogenic emissions corrections on the seasonal cycle of atmospheric CO2
NASA Astrophysics Data System (ADS)
Brooks, B. J.; Hoffman, F. M.; Mills, R. T.; Erickson, D. J.; Blasing, T. J.
2009-12-01
A previous study (Erickson et al. 2008) approximated the monthly global emission estimates of anthropogenic CO2 by applying a 2-harmonic Fourier expansion with coefficients as a function of latitude to annual CO2 flux estimates derived from United States data (Blasing et al. 2005) that were extrapolated globally. These monthly anthropogenic CO2 flux estimates were used to model atmospheric concentrations using the NASA GEOS-4 data assimilation system. Local variability in the amplitude of the simulated CO2 seasonal cycle was found to be on the order of 2-6 ppmv. Here we used the same Fourier expansion to seasonally adjust the global annual fossil fuel CO2 emissions from the SRES A2 scenario. For a total of four simulations, both the annual and seasonalized fluxes were advected in two configurations of the NCAR Community Atmosphere Model (CAM) used in the Carbon-Land Model Intercomparison Project (C-LAMP). One configuration used the NCAR Community Land Model (CLM) coupled with the CASA′ (carbon only) biogeochemistry model and the other used CLM coupled with the CN (coupled carbon and nitrogen cycles) biogeochemistry model. All four simulations were forced with observed sea surface temperatures and sea ice concentrations from the Hadley Centre and a prescribed transient atmospheric CO2 concentration for the radiation and land forcing over the 20th century. The model results exhibit differences in the seasonal cycle of CO2 between the seasonally corrected and uncorrected simulations. Moreover, because of differing energy and water feedbacks between the atmosphere model and the two land biogeochemistry models, features of the CO2 seasonal cycle were different between these two model configurations. This study reinforces previous findings that suggest that regional near-surface atmospheric CO2 concentrations depend strongly not only on the natural sources and sinks of CO2 but also on the strength of local anthropogenic CO2 emissions and on geographic position. This work further attests to the need for remotely sensed CO2 observations from space.
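A minimal sketch of the harmonic seasonalization follows. The Fourier coefficients here are invented placeholders; the published ones were fitted to fuel-use data as functions of latitude. Because the harmonics average to zero over a full year, the monthly fluxes sum back to the annual total by construction.

```python
import numpy as np

def seasonalize(annual_flux, lat_deg, month):
    """Distribute an annual fossil-fuel CO2 flux over months with a 2-harmonic
    Fourier expansion whose coefficients depend on latitude. Coefficient
    values are hypothetical stand-ins for the fitted ones."""
    t = 2*np.pi*(month - 0.5)/12.0                     # month as phase angle
    a1, b1 = 0.10*np.sin(np.radians(lat_deg)), 0.02    # invented latitude dependence
    a2, b2 = 0.03*np.sin(np.radians(lat_deg)), 0.01
    shape = 1.0 + a1*np.cos(t) + b1*np.sin(t) + a2*np.cos(2*t) + b2*np.sin(2*t)
    return annual_flux/12.0 * shape

months = np.arange(1, 13)
monthly = seasonalize(1000.0, lat_deg=45.0, month=months)  # e.g. 1000 units/yr
print(np.round(monthly, 1), "annual sum =", round(monthly.sum(), 1))
```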
Finke, John M; Cheung, Margaret S; Onuchic, José N
2004-09-01
Modeling the structure of natively disordered peptides has proved difficult due to the lack of structural information on these peptides. In this work, we use a novel application of the host-guest method, combining folding theory with experiments, to model the structure of natively disordered polyglutamine peptides. Initially, a minimalist molecular model (Cα-Cβ) of CI2 is developed with a structurally based potential and captures many of the folding properties of CI2 determined from experiments. Next, polyglutamine "guest" inserts of increasing length are introduced into the CI2 "host" model and the polyglutamine is modeled to match the resultant change in CI2 thermodynamic stability between simulations and experiments. The polyglutamine model that best mimics the experimental changes in CI2 thermodynamic stability has (1) a beta-strand dihedral preference and (2) an attractive energy between polyglutamine atoms that is 0.75 times the attractive energy of the CI2 host Gō contacts. When free-energy differences in the CI2 host-guest system are correctly modeled at varying lengths of polyglutamine guest inserts, the kinetic folding rates and structural perturbation of these CI2 insert mutants are also correctly captured in simulations without any additional parameter adjustment. In agreement with experiments, the residues showing structural perturbation are located in the immediate vicinity of the loop insert. The simulated polyglutamine loop insert predominantly adopts extended random coil conformations, a structural model consistent with low resolution experimental methods. The agreement between simulation and experimental CI2 folding rates, CI2 structural perturbation, and polyglutamine insert structure shows that this host-guest method can select a physically realistic model for inserted polyglutamine. If other amyloid peptides can be inserted into stable protein hosts and the stabilities of these host-guest mutants determined, this novel host-guest method may prove useful to determine structural preferences of these intractable but biologically relevant protein fragments.
Underwater terrain-aided navigation system based on combination matching algorithm.
Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao
2018-07-01
Considering that the terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error in the indicative track of the strapdown inertial navigation system (SINS) is large, a Kalman filter is incorporated into the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the measurement of the Kalman filter, the cumulative error of the SINS is corrected in time by filter feedback correction, and the indicative track used in ICCP is thereby improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A proper number of matching points is designated by comparing simulation results for matching time and matching precision. Simulation experiments are carried out according to the ICCP algorithm and the mathematical model. It can be concluded from the simulation experiments that navigation accuracy and stability are improved with the proposed combinational algorithm, provided that a proper number of matching points is used. The integrated navigation system is effective in prohibiting the divergence of the indicative track and can meet the underwater, long-term, high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
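The feedback loop can be caricatured in one dimension, as in the sketch below: the SINS position drifts with an accumulating bias, terrain matching supplies noisy absolute fixes, and a Kalman filter estimates the accumulated SINS error from their difference and subtracts it. All dynamics, noise values, and the drift model are invented; the real system is multi-dimensional and uses the ICCP match rather than a direct fix.

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt = 100, 1.0
truth = np.cumsum(np.full(n, 1.0))*dt        # true track
sins = truth + np.cumsum(np.full(n, 0.02))   # SINS with accumulating drift
match = truth + rng.normal(0, 0.5, n)        # matched (ICCP-like) positions

x, P = 0.0, 1.0                              # estimated SINS error and its variance
Q, R = 1e-4, 0.25                            # process / measurement noise (toy)
corrected = np.empty(n)
for k in range(n):
    P += Q                                   # predict: error modeled as random walk
    z = sins[k] - match[k]                   # measurement: apparent SINS error
    K = P/(P + R)                            # Kalman gain
    x += K*(z - x); P *= (1 - K)             # update
    corrected[k] = sins[k] - x               # feedback-corrected position

print("final raw SINS error:", round(sins[-1] - truth[-1], 2),
      "| corrected error:", round(corrected[-1] - truth[-1], 2))
```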
Klein, Daniel J.; Baym, Michael; Eckhoff, Philip
2014-01-01
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by a target year), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g., 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087
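A static sketch of the core regression step follows, assuming a one-dimensional parameter and a known logistic success probability used only to generate Bernoulli outcomes. The real algorithm adds the density correction and the iterative experiment design; this fragment only shows how kernel regression of binary outcomes locates the 95% contour.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each simulation at parameter theta succeeds with probability p(theta);
# we observe Bernoulli outcomes and regress to find where p crosses 95%.
p_true = lambda th: 1/(1 + np.exp(-10*(th - 0.5)))
theta = rng.uniform(0, 1, 400)
y = (rng.uniform(size=400) < p_true(theta)).astype(float)

def kernel_regress(grid, x, y, h=0.05):
    """Nadaraya-Watson regression of binary outcomes with a Gaussian kernel."""
    w = np.exp(-0.5*((grid[:, None] - x[None, :])/h)**2)
    return (w @ y)/np.maximum(w.sum(axis=1), 1e-12)

grid = np.linspace(0, 1, 201)
p_hat = kernel_regress(grid, theta, y)
sep = grid[np.argmin(np.abs(p_hat - 0.95))]
print(f"estimated 95% separatrix at theta = {sep:.3f} "
      f"(true crossing: {0.5 + np.log(19)/10:.3f})")
```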
NASA Astrophysics Data System (ADS)
Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.
2015-08-01
We introduce a sub-grid force correction term to better model the dynamical friction experienced by a supermassive black hole (SMBH) as it orbits within its host galaxy. This new approach accurately follows an SMBH's orbital decay and drastically improves over commonly used 'advection' methods. The force correction introduced here naturally scales with the force resolution of the simulation and converges as resolution is increased. In controlled experiments, we show how the orbital decay of the SMBH closely follows analytical predictions when particle masses are significantly smaller than that of the SMBH. In a cosmological simulation of the assembly of a small galaxy, we show how our method allows for realistic black hole orbits. This approach overcomes the limitations of the advection scheme, where black holes are rapidly and artificially pushed towards the halo centre and then forced to merge, regardless of their orbits. We find that SMBHs from merging dwarf galaxies can spend significant time away from the centre of the remnant galaxy. Improving the modelling of SMBH orbital decay will help in making robust predictions of the growth, detectability and merger rates of SMBHs, especially at low galaxy masses or at high redshift.
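The analytical prediction such a sub-grid term aims to recover is Chandrasekhar's dynamical friction formula, evaluated below for an isotropic Maxwellian background. The background density, velocity dispersion, and Coulomb logarithm are illustrative values, not ones taken from the paper.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def dynamical_friction_accel(v_vec, M_bh, rho, sigma, ln_lambda=3.0, G=4.30e-6):
    """Chandrasekhar dynamical-friction acceleration on a massive body.

    Units: G in kpc (km/s)^2 / Msun, v in km/s, rho in Msun/kpc^3, M in Msun;
    the result is in (km/s)^2 / kpc. Assumes a Maxwellian background with
    1-D dispersion sigma."""
    v = np.linalg.norm(v_vec)
    X = v/(sqrt(2.0)*sigma)
    coeff = erf(X) - 2.0*X*exp(-X*X)/sqrt(pi)
    return -4.0*pi*G**2*M_bh*rho*ln_lambda*coeff*np.asarray(v_vec, float)/v**3

a = dynamical_friction_accel([50.0, 0.0, 0.0], M_bh=1e6, rho=1e7, sigma=30.0)
print(a)   # deceleration directed opposite to the velocity vector
```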
Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty
NASA Astrophysics Data System (ADS)
Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.
2013-12-01
Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraints is the analysis of emission trends and variability, while a crucial limitation is errors both in the satellite NO2 data and in the model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales, including the general economic growth, the global financial crisis, Chinese New Year, and the Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both the satellite NO2 retrieval and the model simulation that ultimately affect the NOx emission constraint. We improve the treatments of aerosol optical effects, clouds, and surface reflectance in the NO2 retrieval process, using ground-based MAX-DOAS measurements as a reference to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes, with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
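At its crudest, a satellite-based emission constraint is a mass-balance scaling of prior emissions by the observed-to-simulated column ratio, cell by cell, as in the schematic below. The grid values are invented and beta is a hypothetical local sensitivity factor; the studies described here layer retrieval corrections and model bias corrections on top of this basic idea.

```python
import numpy as np

E_prior   = np.array([[2.0, 1.5], [0.8, 3.2]])   # prior NOx emissions (arbitrary units)
vcd_obs   = np.array([[5.5, 2.9], [1.9, 9.1]])   # retrieved NO2 columns
vcd_model = np.array([[4.8, 3.3], [2.2, 7.5]])   # simulated columns given E_prior

beta = 1.0   # local sensitivity d(ln VCD)/d(ln E); 1 assumes a linear response
E_post = E_prior * (vcd_obs / vcd_model)**beta
print(np.round(E_post, 2))
```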
Del Bello, Elisabetta; Taddeucci, Jacopo; de’ Michieli Vitturi, Mattia; Scarlato, Piergiorgio; Andronico, Daniele; Scollo, Simona; Kueppers, Ulrich; Ricci, Tullio
2017-01-01
Most of the current ash transport and dispersion models neglect particle-fluid (two-way) and particle-fluid plus particle-particle (four-way) reciprocal interactions during particle fallout from volcanic plumes. These interactions, a function of particle concentration in the plume, could play an important role, explaining, for example, discrepancies between observed and modelled ash deposits. Aiming at a more accurate prediction of volcanic ash dispersal and sedimentation, the settling of ash particles at particle volume fractions (ϕp) ranging from 10−7 to 10−3 was performed in laboratory experiments and reproduced by numerical simulations that take into account first the two-way and then the four-way coupling. Results show that the velocity of particles settling together can exceed the velocity of particles settling individually by up to 4 times for ϕp ~ 10−3. Comparisons between experimental and simulation results reveal that, during the sedimentation process, the settling velocity is largely enhanced by particle-fluid interactions but partly hindered by particle-particle interactions with increasing ϕp. Combining the experimental and numerical results, we provide an empirical model allowing correction of the settling velocity of particles of any size, density, and shape, as a function of ϕp. These corrections will impact volcanic plume modelling results as well as remote sensing retrieval techniques for plume parameters. PMID:28045056
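The abstract gives the endpoints of the correction (a negligible effect at ϕp around 10−7, up to a 4x enhancement at ϕp around 10−3) but not the fitted constants. The sketch below therefore uses a hypothetical power law pinned to those endpoints, not the authors' empirical model, to show how such a correction factor would be applied.

```python
import numpy as np

def settling_velocity_factor(phi_p, k=95.0, n=0.5):
    """Hypothetical enhancement factor w/w0 = 1 + k*phi_p**n, tuned only so
    that the factor is ~1 at phi_p = 1e-7 and ~4 at phi_p = 1e-3, the range
    reported in the abstract. Not the paper's published fit."""
    return 1.0 + k*np.asarray(phi_p, float)**n

phi = np.array([1e-7, 1e-5, 1e-3])
w0 = 1.2  # individual-particle terminal velocity, m/s (illustrative)
print(np.round(w0*settling_velocity_factor(phi), 2))  # corrected velocities
```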
NASA Technical Reports Server (NTRS)
Atencio, A., Jr.; Soderman, P. T.
1973-01-01
A method to determine free-field aircraft noise spectra from wind-tunnel measurements has been developed. The crux of the method is the correction for reverberations. Calibrated loudspeakers are used to simulate model sound sources in the wind tunnel. Corrections based on the difference between the direct and reverberant field levels are applied to wind-tunnel data for a wide range of aircraft noise sources. To establish the validity of the correction method, two research aircraft - one propeller-driven (YOV-10A) and one turbojet-powered (XV-5B) - were flown in free field and then tested in the wind tunnel. Corrected noise spectra from the two environments agree closely.
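Numerically, the correction is simple dB arithmetic per frequency band, as in this hedged sketch; all levels below are invented for illustration.

```python
import numpy as np

# Loudspeaker calibration gives, per band, the level measured in the tunnel
# (direct + reverberant) and the known direct-field level; their difference
# is the reverberation contribution to subtract from aircraft noise data.
f_bands  = np.array([125., 250., 500., 1000.])   # Hz
L_direct = np.array([78.0, 80.5, 79.0, 76.0])    # dB, loudspeaker free field
L_total  = np.array([81.2, 83.0, 80.9, 77.5])    # dB, loudspeaker in the tunnel

correction = L_total - L_direct                  # reverberation contribution, dB
L_measured = np.array([95.1, 97.4, 93.0, 90.2])  # aircraft model in the tunnel
L_free     = L_measured - correction             # estimated free-field spectrum
print(np.round(L_free, 1))
```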
van Heeswijk, Marijke
2006-01-01
Surface water has been diverted from the Salmon Creek Basin for irrigation purposes since the early 1900s, when the Bureau of Reclamation built the Okanogan Project. Spring snowmelt runoff is stored in two reservoirs, Conconully Reservoir and Salmon Lake Reservoir, and gradually released during the growing season. As a result of the out-of-basin streamflow diversions, the lower 4.3 miles of Salmon Creek typically has been a dry creek bed for almost 100 years, except during the spring snowmelt season during years of high runoff. To continue meeting the water needs of irrigators but also leave water in lower Salmon Creek for fish passage and to help restore the natural ecosystem, changes are being considered in how the Okanogan Project is operated. This report documents development of a precipitation-runoff model for the Salmon Creek Basin that can be used to simulate daily unregulated streamflows. The precipitation-runoff model is a component of a Decision Support System (DSS) that includes a water-operations model the Bureau of Reclamation plans to develop to study the water resources of the Salmon Creek Basin. The DSS will be similar to the DSS that the Bureau of Reclamation and the U.S. Geological Survey developed previously for the Yakima River Basin in central southern Washington. The precipitation-runoff model was calibrated for water years 1950-89 and tested for water years 1990-96. The model was used to simulate daily streamflows that were aggregated on a monthly basis and calibrated against historical monthly streamflows for Salmon Creek at Conconully Dam. Additional calibration data were provided by the snowpack water-equivalent record for a SNOTEL station in the basin. Model input time series of daily precipitation and minimum and maximum air temperatures were based on data from climate stations in the study area. Historical records of unregulated streamflow for Salmon Creek at Conconully Dam do not exist for water years 1950-96. Instead, estimates of historical monthly mean unregulated streamflow based on reservoir outflows and storage changes were used as a surrogate for the missing data and to calibrate and test the model. The estimated unregulated streamflows were corrected for evaporative losses from Conconully Reservoir (about 1 ft3/s) and ground-water losses from the basin (about 2 ft3/s). The total of the corrections was about 9 percent of the mean uncorrected streamflow of 32.2 ft3/s (23,300 acre-ft/yr) for water years 1949-96. For the calibration period, the basinwide mean annual evapotranspiration was simulated to be 19.1 inches, or about 83 percent of the mean annual precipitation of 23.1 inches. Model calibration and testing indicated that the daily streamflows simulated using the precipitation-runoff model should be used only to analyze historical and forecasted annual mean and April-July mean streamflows for Salmon Creek at Conconully Dam. Because of the paucity of model input data and uncertainty in the estimated unregulated streamflows, the model is not adequately calibrated and tested to estimate monthly mean streamflows for individual months, such as during low-flow periods, or for shorter periods such as during peak flows. No data were available to test the accuracy of simulated streamflows for lower Salmon Creek. As a result, simulated streamflows for lower Salmon Creek should be used with caution. 
For the calibration period (water years 1950-89), both the simulated mean annual streamflow and the simulated mean April-July streamflow compared well with the estimated uncorrected unregulated streamflow (UUS) and corrected unregulated streamflow (CUS). The simulated mean annual streamflow exceeded UUS by 5.9 percent and was less than CUS by 2.7 percent. Similarly, the simulated mean April-July streamflow exceeded UUS by 1.8 percent and was less than CUS by 3.1 percent. However, streamflow was significantly undersimulated during the low-flow, baseflow-dominated months of November through February.
Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir
2009-11-01
Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimates in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, owing to the variety of resources and capabilities they offer for carrying out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain objects of study. Accurate dose estimates depend on the careful selection of a set of simulation options. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
NASA Astrophysics Data System (ADS)
Kikuchi, N.; Yoshida, Y.; Uchino, O.; Morino, I.; Yokota, T.
2016-11-01
We present an algorithm for retrieving column-averaged dry air mole fractions of carbon dioxide (XCO2) and methane (XCH4) from reflected spectra in the shortwave infrared (SWIR) measured by the TANSO-FTS (Thermal And Near infrared Sensor for carbon Observation Fourier Transform Spectrometer) sensor on board the Greenhouse gases Observing SATellite (GOSAT). The algorithm uses the two linear polarizations observed by TANSO-FTS to improve corrections for the interference effects of atmospheric aerosols, which degrade the accuracy of the retrieved greenhouse gas concentrations. To account for polarization by the land surface reflection in the forward model, we introduced a bidirectional reflection matrix model that has two parameters to be retrieved simultaneously with other state parameters. The accuracy of XCO2 and XCH4 values retrieved with the algorithm was evaluated by using simulated retrievals over both land and ocean, focusing on the capability of the algorithm to correct imperfect prior knowledge of aerosols. To do this, we first generated simulated TANSO-FTS spectra using a global distribution of aerosols computed by the aerosol transport model SPRINTARS. Then the simulated spectra were submitted to the algorithms as measurements both with and without polarization information, adopting a priori profiles of aerosols that differ from the true profiles. We found that the accuracy of XCO2 and XCH4, as well as of the aerosol profiles, retrieved with polarization information was considerably improved over values retrieved without polarization information, for simulated observations over land with aerosol optical thickness greater than 0.1 at 1.6 μm.
A New WiMAX Simulation Model to Investigate QoS with OPNET Modeler in a Scheduling Environment
NASA Astrophysics Data System (ADS)
Saini, Sanju; Saini, K. K.
2012-11-01
WiMAX stands for Worldwide Interoperability for Microwave Access. It is considered a major part of broadband wireless networking, based on the IEEE 802.16 standard. WiMAX provides innovative fixed as well as mobile platforms for broadband internet access anywhere, anytime, with different transmission modes. This paper presents a WiMAX simulation model designed with OPNET Modeler 14 to measure the delay, load, and throughput performance factors. Various scheduling algorithms, such as FIFO, PQ, and WFQ, are introduced to compare four types of scheduling service, each with its own QoS needs, using the OPNET Modeler support for WiMAX networks. The results show approximately equal load and throughput, while the delay values vary among the different base stations. The simulation results indicate the correctness and effectiveness of the approach.
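A toy illustration of how two of the named schedulers order the same packet trace follows; loosely, PQ behaves like WFQ with extreme weights. All packet data are invented, and the WFQ virtual-finish-time computation is simplified relative to a real implementation.

```python
import heapq

# Each packet: (arrival_time, service_time, traffic_class, weight).
packets = [(0.0, 1.0, "video", 3), (0.1, 1.0, "data", 1),
           (0.2, 1.0, "voice", 6), (0.3, 1.0, "data", 1)]

def fifo(pkts):
    """Serve packets strictly in arrival order; return per-class delays."""
    t, out = 0.0, {}
    for arr, svc, cls, _ in pkts:
        t = max(t, arr) + svc
        out.setdefault(cls, []).append(round(t - arr, 2))
    return out

def wfq(pkts):
    """Simplified WFQ: order by virtual finish time F = max(arr, last_F) + svc/w."""
    t, out, last_f, heap = 0.0, {}, {}, []
    for i, (arr, svc, cls, w) in enumerate(pkts):
        f = max(arr, last_f.get(cls, 0.0)) + svc/w
        last_f[cls] = f
        heapq.heappush(heap, (f, i, arr, svc, cls))   # i breaks ties
    while heap:
        _, _, arr, svc, cls = heapq.heappop(heap)
        t = max(t, arr) + svc
        out.setdefault(cls, []).append(round(t - arr, 2))
    return out

print("FIFO delays:", fifo(packets))
print("WFQ  delays:", wfq(packets))   # high-weight classes are served earlier
```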