Sample records for simulation output accuracy

  1. An evaluation of information retrieval accuracy with simulated OCR output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, W.B.; Harding, S.M.; Taghva, K.

    Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out evaluations using simulated OCR output on a variety of databases. The results show that high quality OCR devices have little effect on the accuracy of retrieval, but low quality devices used with databases of short documents can result in significant degradation.
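
The record above evaluates retrieval against simulated OCR output. A minimal sketch of how such output can be simulated, assuming a simple per-character error model with random substitutions, deletions, and insertions (the `corrupt` helper, its error rate, and its alphabet are illustrative assumptions, not the paper's actual error model):

```python
import random

def corrupt(text, error_rate=0.05, seed=0):
    """Simulate OCR output: at each character position, with probability
    error_rate, substitute a random letter, delete the character, or
    insert a spurious letter after it."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    out = []
    for ch in text:
        if rng.random() < error_rate:
            op = rng.choice(["sub", "del", "ins"])
            if op == "sub":
                out.append(rng.choice(alphabet))
            elif op == "ins":
                out.append(ch)
                out.append(rng.choice(alphabet))
            # "del": drop the character entirely
        else:
            out.append(ch)
    return "".join(out)

clean = "information retrieval with simulated ocr output"
print(corrupt(clean, error_rate=0.10))
```

Feeding such corrupted text to an indexer and comparing retrieval results against the clean collection is the essence of the evaluation design described above.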

  2. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions sensitive to the output functions are detected and refined after grid adaptation, and the accuracy of the output functions is markedly improved after error correction. The proposed grid adaptation and error correction method compares very favorably, in terms of output accuracy and computational efficiency, with traditional feature-based grid adaptation.
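
The adjoint relationship described above can be illustrated with a toy computation: the estimated output error is the adjoint-weighted residual sum, and the magnitude of each cell's contribution serves as the refinement indicator. The helper names, cell values, and refinement fraction below are illustrative assumptions, not the paper's implementation:

```python
def adjoint_error_indicator(residual, adjoint):
    """Per-cell refinement indicator eta_i = |psi_i * R_i|; the signed
    adjoint-weighted residual sum approximates the global output error."""
    eta = [abs(p * r) for p, r in zip(adjoint, residual)]
    est_output_error = sum(p * r for p, r in zip(adjoint, residual))
    return eta, est_output_error

def cells_to_refine(eta, fraction=0.2):
    """Indices of the top `fraction` of cells ranked by indicator value."""
    k = max(1, int(round(len(eta) * fraction)))
    order = sorted(range(len(eta)), key=lambda i: eta[i], reverse=True)
    return sorted(order[:k])

# toy residuals and adjoint weights on a 5-cell "grid"
residual = [1e-3, 5e-5, 2e-4, 8e-3, 1e-6]
adjoint = [0.1, 40.0, 3.0, 0.02, 500.0]
eta, err = adjoint_error_indicator(residual, adjoint)
print(cells_to_refine(eta, fraction=0.4))
```

Note how a large residual alone (cell 3) does not force refinement: only cells whose residual matters to the output, as weighted by the adjoint, are selected.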

  3. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection powers (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are two commonly used approaches to solving PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is inconvenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and the analytical method. It not only has high computing efficiency, but also provides solutions with sufficient accuracy, making it very suitable for on-line analysis.
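
The joint empirical distribution idea above can be sketched as follows: rather than fitting a parametric distribution to each wind source, whole historical rows are resampled, so the cross-source correlation is preserved automatically. The helper and the toy data are assumptions for illustration only:

```python
import random

def sample_joint_empirical(history, n, seed=1):
    """Draw correlated wind-farm outputs by resampling entire historical
    rows (one simultaneous output per farm), preserving the empirical
    cross-farm correlation without fitting any parametric distribution."""
    rng = random.Random(seed)
    return [rng.choice(history) for _ in range(n)]

# hypothetical simultaneous outputs (MW) of two correlated wind farms
history = [(10.2, 11.0), (3.1, 2.5), (7.8, 8.4), (0.0, 0.6), (12.5, 13.1)]
samples = sample_joint_empirical(history, 1000)
mean_farm1 = sum(s[0] for s in samples) / len(samples)
print(mean_farm1)
```

Each sampled row would then be injected into a deterministic power-flow solve; the collection of solves yields the PPF output distributions.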

  4. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals), such as lift or drag, from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  5. Using a Gaussian Process Emulator for Data-driven Surrogate Modelling of a Complex Urban Drainage Simulator

    NASA Astrophysics Data System (ADS)

    Bellos, V.; Mahmoodian, M.; Leopold, U.; Torres-Matallana, J. A.; Schutz, G.; Clemens, F.

    2017-12-01

    Surrogate models help to decrease the run-time of computationally expensive, detailed models. Recent studies show that Gaussian Process Emulators (GPE) are promising techniques in the field of urban drainage modelling. This study focuses on developing a GPE-based surrogate model, for later application in Real Time Control (RTC), using input and output time series of a complex simulator. The case study is an urban drainage catchment in Luxembourg. A detailed simulator, implemented in InfoWorks ICM, is used to generate 120 input-output ensembles, of which 100 are used for training the emulator and 20 for validation of the results. An ensemble of historical rainfall events of 2-hour duration with 10-minute time steps is used as the input data. Two example outputs are selected: wastewater volume and total COD concentration in a storage tank in the network. The results of the emulator are tested with unseen random rainfall events from the ensemble dataset. The emulator is approximately 1000 times faster than the original simulator for this small case study. Whereas the overall patterns of the simulator are matched by the emulator, in some cases the emulator deviates from the simulator. To quantify the accuracy of the emulator in comparison with the original simulator, the Nash-Sutcliffe efficiency (NSE) between the emulator and simulator is calculated for unseen rainfall scenarios. The NSE for tank volume ranges from 0.88 to 0.99 with a mean value of 0.95, whereas for COD it ranges from 0.71 to 0.99 with a mean value of 0.92. The emulator predicts the tank volume with higher accuracy because the relationship between rainfall intensity and tank volume is linear. For COD, which has non-linear behaviour, the predictions are less accurate and more uncertain, in particular when rainfall intensity increases. These predictions were improved by including a larger amount of training data for the higher rainfall intensities.
    It was observed that the accuracy of the emulator predictions depends on the design of the ensemble training dataset and the amount of data fed to the emulator. Finally, more investigation is required to test the possibility of applying this type of fast emulator in model-based RTC applications, in which a limited number of inputs and outputs are considered over a short prediction horizon.
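
The Nash-Sutcliffe efficiency used above to score the emulator against the simulator is straightforward to compute; a minimal sketch with toy data (not the study's actual series):

```python
def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect match; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# toy "simulator" and "emulator" tank-volume series
obs = [100.0, 220.0, 310.0, 180.0, 90.0]
sim = [110.0, 200.0, 320.0, 170.0, 95.0]
print(round(nse(sim, obs), 3))
```

In the study's setting, `observed` would be the InfoWorks ICM output and `simulated` the emulator prediction for each unseen rainfall scenario.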

  6. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data not only includes large repositories of data from scientific platforms such as satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, and larger collections of runs can now also be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  7. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy.
    The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.

  8. Two high accuracy digital integrators for Rogowski current transducers.

    PubMed

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal is accumulated, which leads to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, in place of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to their dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
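
The attenuation-coefficient idea above can be illustrated on a plain trapezoidal integrator (a simplification: the paper modifies the Al-Alaoui integrator and adds a PID feedback controller, neither of which is reproduced here). Moving the integrator pole slightly inside the unit circle keeps a DC component in the input from drifting without bound:

```python
def leaky_trapezoid(x, fs, alpha=0.999):
    """Trapezoidal-rule integrator with attenuation coefficient alpha:
        y[n] = alpha * y[n-1] + (x[n] + x[n-1]) / (2 * fs)
    With alpha < 1 the pole at z = 1 moves inside the unit circle, so a
    DC input settles to a finite value instead of accumulating forever."""
    y, y_prev, x_prev = [], 0.0, 0.0
    for xn in x:
        yn = alpha * y_prev + (xn + x_prev) / (2.0 * fs)
        y.append(yn)
        y_prev, x_prev = yn, xn
    return y

fs = 10000.0
dc = [1.0] * 50000  # pure DC input: the worst case for an ideal integrator
drifting = leaky_trapezoid(dc, fs, alpha=1.0)   # ideal: drifts linearly
settled = leaky_trapezoid(dc, fs, alpha=0.99)   # leaky: converges
print(drifting[-1], settled[-1])
```

The trade-off is a perturbed low-frequency response, which is exactly what the paper's PID feedback and careful coefficient design are meant to correct.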

  9. Two high accuracy digital integrators for Rogowski current transducers

    NASA Astrophysics Data System (ADS)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal is accumulated, which leads to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, in place of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to their dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.

  10. Method for Prediction of the Power Output from Photovoltaic Power Plant under Actual Operating Conditions

    NASA Astrophysics Data System (ADS)

    Obukhov, S. G.; Plotnikov, I. A.; Surzhikova, O. A.; Savkin, K. D.

    2017-04-01

    Solar photovoltaic technology is one of the most rapidly growing renewable sources of electricity, with practical application in various fields of human activity due to its high availability, huge potential and environmental compatibility. An original simulation model of a photovoltaic power plant has been developed to simulate and investigate the plant operating modes under actual operating conditions. The proposed model considers the impact of external climatic factors on the solar panel energy characteristics, which improves the accuracy of power output prediction. The data obtained through simulation of photovoltaic power plant operation enable a well-reasoned choice of the required storage device capacity and the determination of rational control algorithms for the energy complex.
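
The paper's own model is not reproduced in the abstract; below is a generic first-order sketch of how climatic factors enter a PV output prediction, using the common NOCT cell-temperature formula and a linear temperature derating. All parameter values are assumptions for illustration:

```python
def pv_power(g, t_amb, p_stc=100e3, g_stc=1000.0, noct=45.0, gamma=-0.004):
    """First-order PV output model:
        T_cell = T_amb + G * (NOCT - 20) / 800      (NOCT formula)
        P = P_stc * (G / G_stc) * (1 + gamma * (T_cell - 25))
    g: irradiance [W/m^2], t_amb: ambient temperature [degC],
    gamma: power temperature coefficient [1/degC]."""
    t_cell = t_amb + g * (noct - 20.0) / 800.0
    return p_stc * (g / g_stc) * (1.0 + gamma * (t_cell - 25.0))

# at the same irradiance, a hot day yields less power than a cold one
print(pv_power(800.0, 35.0), pv_power(800.0, 5.0))
```

Models of this shape are what make storage sizing defensible: sweeping measured irradiance and temperature series through the model yields an output profile rather than a single nameplate figure.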

  11. SimBA: simulation algorithm to fit extant-population distributions.

    PubMed

    Parida, Laxmi; Haiminen, Niina

    2015-03-14

    Simulation of populations with specified characteristics, such as allele frequencies, linkage disequilibrium etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms existing population simulation methods, both in terms of accuracy and time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669.
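
The hill-climbing core mentioned above can be sketched as follows: propose single-allele flips and keep only those that move the population's allele frequencies closer to the target. This is an illustrative reconstruction, not SimBA's actual algorithm (which also fits linkage disequilibrium and subpopulation structure):

```python
import random

def fit_population(target_freqs, n_individuals, iters=3000, seed=0):
    """Hill climbing: start from a random 0/1 allele matrix and propose
    single-allele flips, keeping a flip only if it strictly reduces the
    squared distance between population and target allele frequencies."""
    rng = random.Random(seed)
    n_loci = len(target_freqs)
    pop = [[rng.randint(0, 1) for _ in range(n_loci)]
           for _ in range(n_individuals)]

    def cost():
        return sum(
            (sum(ind[j] for ind in pop) / n_individuals - target_freqs[j]) ** 2
            for j in range(n_loci))

    best = cost()
    for _ in range(iters):
        i, j = rng.randrange(n_individuals), rng.randrange(n_loci)
        pop[i][j] ^= 1          # propose a flip
        c = cost()
        if c < best:
            best = c            # accept the improvement
        else:
            pop[i][j] ^= 1      # revert
    return pop, best

pop, best = fit_population([0.5, 0.2], n_individuals=10)
print(best)
```

Because allele frequencies move in steps of 1/n per flip, targets that are multiples of 1/n are reachable exactly, which is why a best-fit (rather than generative) formulation can match the specified distributions so stringently.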

  12. Effect of seabed roughness on tidal current turbines

    NASA Astrophysics Data System (ADS)

    Gupta, Vikrant; Wan, Minping

    2017-11-01

    Tidal current turbines have the potential to generate clean energy with negligible environmental impact. These devices, however, operate in regions of high to moderate current where the flow is highly turbulent. It has been shown in flume tank experiments at IFREMER in Boulogne-Sur-Mer (France) and NAFL at the University of Minnesota (US) that the level of turbulence and the boundary layer profile affect a turbine's power output and wake characteristics. A major factor that determines these marine flow characteristics is seabed roughness. Experiments, however, cannot reproduce the high Reynolds number conditions of real marine flows; for that, we rely on numerical simulations. High accuracy numerical methods for wall-bounded flows, such as DNS, are very expensive: the number of grid points needed to resolve the flow varies as Re^(9/4) (where Re is the flow Reynolds number), while numerically affordable RANS methods compromise on accuracy. Wall-modelled LES methods, which provide both accuracy and affordability, have improved tremendously in recent years. We discuss the application of such numerical methods for studying the effect of seabed roughness on marine flow features and their impact on turbine power output and wake characteristics. NSFC, Project Number 11672123.
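
The Re^(9/4) scaling quoted above implies rapidly growing DNS cost; a one-line illustration of why field-scale Reynolds numbers are out of reach for fully resolved simulation (the prefactor `c` is an unspecified constant, set to 1 here):

```python
def dns_grid_points(re, c=1.0):
    """Wall-resolved grid-point estimate N ~ c * Re**(9/4)."""
    return c * re ** 2.25

# raising Re by a factor of 100 multiplies the grid count by 100**2.25
ratio = dns_grid_points(1e7) / dns_grid_points(1e5)
print(ratio)
```

A two-decade increase in Reynolds number costs more than four decades in grid points, before even accounting for the accompanying growth in time steps, which motivates the wall-modelled LES compromise discussed above.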

  13. Reexamination of the 9-10 November 1975 “Edmund Fitzgerald” Storm Using Today's Technology.

    NASA Astrophysics Data System (ADS)

    Hultquist, Thomas R.; Dutter, Michael R.; Schwab, David J.

    2006-05-01

    There has been considerable debate over the past three decades concerning the specific cause of the loss of the ship the Edmund Fitzgerald on Lake Superior on 10 November 1975, but there is little question that weather played a role in the disaster. There were only a few surface observations available during the height of the storm, so it is difficult to assess the true severity and meteorological rarity of the event. In order to identify likely weather conditions that occurred during the storm of 9-10 November 1975, high-resolution numerical simulations were conducted in an attempt to assess wind and wave conditions throughout the storm. Comparisons are made between output from the model simulations and available observational data from the event to assess the accuracy of the simulations. Given a favorable comparison, more detailed output from the simulations is presented, with a focus on high-resolution output over Lake Superior between 1800 UTC 9 November 1975 and 0600 UTC 11 November 1975. A detailed analysis of low-level sustained wind and significant wave height output is presented, illustrating the severity of the conditions and speed with which they developed and later subsided during the event. The high temporal and spatial resolution of the model output helps provide a more detailed depiction of conditions on Lake Superior than has previously been available.

  14. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper compares hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. The need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate.
    Future work is warranted to identify the causes of (and remove) systematic biases in DDS simulations and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
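
Bias corrections of the kind applied above to the NCEP, DDS, and All-Sta timeseries are commonly implemented as an additive shift for temperature and a multiplicative rescaling for precipitation; a minimal sketch under that assumption (the paper's exact correction procedure may differ):

```python
def correct_temperature(model, obs_mean):
    """Additive bias correction: subtract the model's mean bias so the
    corrected series has the observed mean."""
    bias = sum(model) / len(model) - obs_mean
    return [x - bias for x in model]

def correct_precipitation(model, obs_mean):
    """Multiplicative correction: rescale so the mean matches observations
    while precipitation stays non-negative."""
    model_mean = sum(model) / len(model)
    factor = obs_mean / model_mean if model_mean > 0 else 1.0
    return [x * factor for x in model]

tmax_model = [1.5, 4.0, -2.0, 0.5]   # degC, hypothetical DDS output
prcp_model = [0.0, 12.0, 3.0, 5.0]   # mm/day, hypothetical DDS output
print(correct_temperature(tmax_model, obs_mean=2.0))
print(correct_precipitation(prcp_model, obs_mean=4.0))
```

Note that such corrections force the climatological mean to match observations but leave day-to-day variability errors intact, which is consistent with the finding above that bias-corrected NCEP and DDS runoff still trailed the SDS-based simulations.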

  15. A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.

    PubMed

    Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine

    2014-04-01

    In this paper an approach to designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor-based two-pole low-pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performances of the proposed voltage source have been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
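
The claimed attenuation in excess of 40 dB at 100 mHz is plausible for a supercapacitor-based two-pole low-pass filter; a quick check with assumed component values (10 Ω and 10 F per buffered RC section; the paper's actual values may differ):

```python
import math

def two_stage_rc_attenuation_db(f, r_ohm, c_farad):
    """Attenuation of two buffered (non-interacting) first-order RC
    sections in cascade. Per stage |H| = 1/sqrt(1 + (f/fc)^2) with
    fc = 1/(2*pi*R*C); the cascade magnitude is the product."""
    fc = 1.0 / (2.0 * math.pi * r_ohm * c_farad)
    h = 1.0 / (1.0 + (f / fc) ** 2)   # product of the two stage magnitudes
    return -20.0 * math.log10(h)

# corner at ~1.6 mHz puts 100 mHz roughly 1.8 decades into the stopband
print(round(two_stage_rc_attenuation_db(0.1, 10.0, 10.0), 1))
```

The same fc of a few millihertz also explains the quoted turn-on and stabilization times of a few hundred seconds: the filter's own time constant R*C is 100 s with these assumed values.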

  16. A very low noise, high accuracy, programmable voltage source for low frequency noise measurements

    NASA Astrophysics Data System (ADS)

    Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine

    2014-04-01

    In this paper an approach to designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor-based two-pole low-pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performances of the proposed voltage source have been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.

  17. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Singh, H; Islam, M

    2014-06-01

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread-out Bragg peak, normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small-field, large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary significantly with field size and must be accounted for to ensure accurate proton beam delivery. This is especially important for small-field beams, such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.

  18. Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.

    2017-04-01

    To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the 3rd generation synchrotron light source of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical model is very useful for shortening the design time and improving the design performance of an FCPS. A rigorous mathematical model, derived by the state-space averaging method, for an FCPS of full-bridge topology in the FOFB of TPS is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed mathematical model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated in simulation. An FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical model. Simulation and experimental results show that the proposed model is helpful for selecting the appropriate components to meet the accuracy requirements of an FCPS.
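
A state-space averaged model of the kind proposed above replaces the switching waveform of the full-bridge with its duty-cycle average; a minimal sketch with an assumed LC output filter and resistive load (component values are illustrative, not those of the TPS FCPS):

```python
def simulate_fcps(d, vdc=48.0, l_h=1e-3, c_f=100e-6, r_ohm=0.1,
                  rload=10.0, dt=1e-6, steps=20000):
    """Forward-Euler integration of the state-space averaged full-bridge:
        L di/dt = d*Vdc - r*i - v      (inductor current)
        C dv/dt = i - v/Rload          (capacitor/output voltage)
    d in [-1, 1] is the averaged modulation ratio (an assumption)."""
    i = v = 0.0
    for _ in range(steps):
        di = (d * vdc - r_ohm * i - v) / l_h
        dv = (i - v / rload) / c_f
        i += di * dt
        v += dv * dt
    return i, v

i_out, v_out = simulate_fcps(0.5)
print(i_out, v_out)
```

With d = 0.5 the model settles to v = d*Vdc*Rload/(r+Rload) ≈ 23.76 V, which is the kind of steady-state and transient prediction used above to size components against the accuracy requirements.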

  19. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
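
The Volterra functional power series underlying the IO model can be sketched in discrete time, truncated here at second order (the paper goes to third order); the kernels below are illustrative assumptions, not those identified from the mechanistic synapse model:

```python
def volterra_output(x, k0, k1, k2):
    """Second-order discrete Volterra series:
    y[n] = k0 + sum_i k1[i]*x[n-i] + sum_{i,j} k2[i][j]*x[n-i]*x[n-j].
    The k2 term captures nonlinear interactions between past inputs
    that a linear (first-order) filter cannot represent."""
    m = len(k1)
    y = []
    for n in range(len(x)):
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(m)]
        linear = sum(k1[i] * past[i] for i in range(m))
        quadratic = sum(k2[i][j] * past[i] * past[j]
                        for i in range(m) for j in range(m))
        y.append(k0 + linear + quadratic)
    return y

k1 = [0.5, 0.3, 0.1]                 # hypothetical first-order kernel
k2 = [[0.0, 0.05, 0.0],              # hypothetical second-order kernel
      [0.05, 0.0, 0.0],
      [0.0, 0.0, 0.02]]
out = volterra_output([1.0, 0.0, 0.0, 0.0], 0.0, k1, k2)
print(out)
```

Once the kernels are fitted to the mechanistic model's input-output data, evaluating this expansion is far cheaper than integrating the underlying kinetic equations, which is the source of the speedup claimed above for network-scale simulations.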

  20. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622

  1. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    PubMed

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends highly on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.

  2. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles

    PubMed Central

    Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends highly on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557
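
    The flux-estimator idea in the two records above can be sketched generically (this is not the authors' exact algorithm): the back-EMF is quasi-integrated through a first-order low-pass filter, which limits the DC drift a pure integrator would accumulate, and torque follows from the cross product of stator flux and current in the stationary frame. All numeric values (stator resistance, pole pairs, filter time constant) are assumptions for illustration.

```python
import numpy as np

def estimate_torque(v_a, v_b, i_a, i_b, Rs, p, dt, tau=0.05):
    """Torque estimate from a low-pass-filter flux estimator.

    The back-EMF e = v - Rs*i is quasi-integrated through a first-order
    low-pass filter (time constant tau) instead of a pure integrator,
    which would drift on any DC offset. Stationary (alpha-beta) frame.
    """
    n = len(v_a)
    psi_a = np.zeros(n); psi_b = np.zeros(n)
    for k in range(1, n):
        e_a = v_a[k] - Rs * i_a[k]
        e_b = v_b[k] - Rs * i_b[k]
        psi_a[k] = psi_a[k-1] + dt * (e_a - psi_a[k-1] / tau)
        psi_b[k] = psi_b[k-1] + dt * (e_b - psi_b[k-1] / tau)
    # electromagnetic torque: T = 1.5 * p * (psi_a*i_b - psi_b*i_a)
    return 1.5 * p * (psi_a * i_b - psi_b * i_a)

# constant back-EMF on the alpha axis, constant beta-axis current:
# flux settles to tau * e = 0.05 Wb, so T -> 1.5 * 4 * 0.05 * 10 = 3 Nm
dt, n = 1e-4, 5000
T = estimate_torque(np.ones(n), np.zeros(n),
                    np.zeros(n), 10.0 * np.ones(n),
                    Rs=0.0, p=4, dt=dt)
```

The filter's steady-state gain (tau instead of 1/s) is the price of drift immunity; the paper's "modified" filter presumably compensates for this gain and phase error, a detail omitted here.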

  3. Development of a 402.5 MHz 140 kW Inductive Output Tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Lawrence Ives; Michael Read; Robert Jackson

    2012-05-09

    This report contains the results of Phase I of an SBIR to develop a Pulsed Inductive Output Tube (IOT) with 140 kW at 400 MHz for powering H-proton beams. A number of sources, including single beam and multiple beam klystrons, can provide this power, but the IOT provides higher efficiency. Efficiencies exceeding 70% are routinely achieved. The gain is typically limited to approximately 24 dB; however, the availability of highly efficient, solid state drivers reduces the significance of this limitation, particularly at lower frequencies. This program initially focused on developing a 402 MHz IOT; however, the DOE requirement for this device was terminated during the program. The SBIR effort was refocused on improving the IOT design codes to more accurately simulate the time dependent behavior of the input cavity, electron gun, output cavity, and collector. Significant improvement was achieved in modeling capability and simulation accuracy.

  4. Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2015-01-01

    In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model was used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, were considered and analyzed. A proper mesh element size was determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ −44% to −26%) than for D (≈ −16% to −2%). Our FE model could be useful for generating standard test images and designing realistic physical phantoms of LAA images to assess the accuracy of descriptors for quantifying emphysema in CT imaging.
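
    The two emphysema descriptors used above are straightforward to compute on a binary LAA mask: RA is the fraction of lung pixels flagged as low attenuation, and D comes from the distribution of LAA cluster sizes. The sketch below uses a toy mask and a simple 4-connected flood fill; the log-log fit that turns the cluster-size cumulative distribution into the exponent D is omitted.

```python
import numpy as np

def relative_area(lung_mask, laa_mask):
    """RA: fraction of lung pixels below the attenuation threshold."""
    return laa_mask.sum() / lung_mask.sum()

def cluster_sizes(laa_mask):
    """Sizes of 4-connected LAA clusters (simple flood fill)."""
    visited = np.zeros_like(laa_mask, dtype=bool)
    sizes = []
    rows, cols = laa_mask.shape
    for r in range(rows):
        for c in range(cols):
            if laa_mask[r, c] and not visited[r, c]:
                stack, size = [(r, c)], 0
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and laa_mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 0, 1]], dtype=bool)
lung = np.ones_like(mask)                # toy lung mask covering everything
ra = relative_area(lung, mask)           # 5 LAA pixels out of 12
sizes = sorted(cluster_sizes(mask))      # three clusters: sizes 1, 2, 2
```
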

  5. Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.

    PubMed

    Zhang, Xuyang; Goh, Kean S

    2015-11-01

    Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. The models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well, with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulations during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM using measured values for model inputs matched the observed values closely. The MAPE ranged from 28 to 384% and from 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' abilities in mimicking reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time-step, and were able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time-step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
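
    The MAPE statistic used to score the three models is simple to reproduce; the numbers below are illustrative, not the study's data:

```python
def mape(observed, simulated):
    """Mean absolute percentage error, skipping zero observations."""
    pairs = [(o, s) for o, s in zip(observed, simulated) if o != 0]
    return 100.0 * sum(abs((o - s) / o) for o, s in pairs) / len(pairs)

obs = [10.0, 20.0, 40.0]   # e.g. measured runoff volumes (made up)
sim = [12.0, 18.0, 40.0]   # corresponding model output (made up)
err = mape(obs, sim)       # (20% + 10% + 0%) / 3 = 10%
```
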

  6. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  7. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
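
    The surrogate workflow described in these two records, training an artificial neural network on finite-element residual-strength results and then querying it in real time, can be sketched with a toy one-hidden-layer network trained by batch gradient descent. The "strength surface" below is synthetic, not fracture-simulation output.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "design of experiments": damage parameters -> residual strength
X = rng.uniform(0.0, 1.0, size=(64, 2))       # e.g. crack length, crack angle
y = 1.0 - 0.6 * X[:, 0] - 0.2 * X[:, 1] ** 2  # hypothetical strength surface

# one-hidden-layer tanh network
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

losses = []
lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g = err[:, None] / len(y)          # dLoss/dpred (factor 2 folded into lr)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)       # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, `forward(new_damage_params)` is a single matrix product, which is what makes the surrogate usable in real time where the finite-element fracture run is not.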

  8. Use of Regional Climate Model Output for Hydrologic Simulations

    NASA Astrophysics Data System (ADS)

    Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.

    2001-12-01

    Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data. These datasets included precipitation and temperature data for all stations located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and the station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station). When the precipitation and temperature biases are corrected in the RegCM2 output and station datasets (Bias-RegCM2 and Bias-station, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-station-based simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but that the RegCM2 output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
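
    The bias-correction step that produces inputs like Bias-RegCM2 can be illustrated with the two simplest forms: a multiplicative correction for precipitation and an additive one for temperature. This is a generic mean-matching sketch with invented numbers, not the exact procedure used in the study:

```python
import numpy as np

def scale_bias_correct(model, obs):
    """Multiplicative correction for precipitation: rescale the model
    series so its mean matches the observed mean."""
    return model * (obs.mean() / model.mean())

def shift_bias_correct(model, obs):
    """Additive correction for temperature: remove the mean bias."""
    return model + (obs.mean() - model.mean())

obs_p = np.array([0.0, 2.0, 5.0, 1.0])   # observed precipitation (made up)
mod_p = np.array([1.0, 4.0, 8.0, 3.0])   # model precipitation (made up)
corr_p = scale_bias_correct(mod_p, obs_p)

obs_t = np.array([1.0, 3.0])             # observed temperature (made up)
mod_t = np.array([2.0, 4.0])             # model temperature (made up)
corr_t = shift_bias_correct(mod_t, obs_t)
```

Mean matching corrects the climatology but, as the record above notes, cannot add missing day-to-day variability, which is why the rainfall-dominated basin still shows no skill after correction.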

  9. Simulation study on the maximum continuous working condition of a power plant boiler

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Han, Jiting; Sun, Haitian; Cheng, Jiwei; Jing, Ying'ai; Li, Wenbo

    2018-05-01

    First, the boiler is briefly introduced and the mathematical model and boundary conditions are determined; a numerical simulation of the boiler under the BMCR (boiler maximum continuous rating) condition is then carried out, followed by an analysis of the temperature field under the BMCR operating condition. The simulation results are verified against the boiler's actual test results and the boiler output test results under the hot BMCR condition. The main conclusions are as follows: the position and size of the inscribed circle in the furnace, and the furnace temperature distribution at different elevations, are compared with the test results, verifying the accuracy of the numerical simulation.

  10. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have frequently been used to estimate probability distributions for water-quality model output because of their simplicity. Each method has drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
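
    The comparison between first-order propagation and Monte Carlo can be sketched on the Streeter-Phelps deficit equation. The parameter means and standard deviations below are invented for illustration, and only the standard first-order method (linearization at the means) is shown, not the advanced relinearizing variant the paper proposes:

```python
import numpy as np

def sp_deficit(t, L0, kd, ka, D0=0.0):
    """Streeter-Phelps dissolved-oxygen deficit at travel time t."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

t = 2.0
means = {"L0": 10.0, "kd": 0.3, "ka": 0.5}   # hypothetical parameter means
stds = {"L0": 1.0, "kd": 0.03, "ka": 0.05}   # hypothetical uncertainties

def D_of(params):
    return sp_deficit(t, params["L0"], params["kd"], params["ka"])

# first-order variance propagation: var(D) ~ sum_i (dD/dx_i)^2 var(x_i),
# with derivatives taken numerically at the mean values
var_fo = 0.0
for name in means:
    h = 1e-6 * means[name]
    up = dict(means); up[name] += h
    dn = dict(means); dn[name] -= h
    grad = (D_of(up) - D_of(dn)) / (2 * h)
    var_fo += (grad * stds[name]) ** 2

# Monte Carlo reference for the output variance
rng = np.random.default_rng(1)
samples = [D_of({n: rng.normal(means[n], stds[n]) for n in means})
           for _ in range(20000)]
var_mc = float(np.var(samples))
```

With parameter coefficients of variation around 10%, the two variance estimates agree closely; the first-order result costs 6 model evaluations versus 20,000, which is the two-orders-of-magnitude saving the abstract refers to.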

  11. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

    Multimodel ensembles are often used to produce ensemble mean estimates that tend to have greater simulation skill than any individual model output. If multimodel outputs are too similar, an individual land surface model (LSM) adds little information to the multimodel ensemble, whereas if the models are too dissimilar, the dissimilarity may indicate systematic errors in their formulations or configurations. This article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs, using a confirmatory factor analysis, to assess each model's utility to the ensemble. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were the most dissimilar, whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble, and the newer versions of the LSMs showed stronger association with the common factor, with model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models span a larger region of the similarity-accuracy space than the new LSMs. The results indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.
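
    The paper's analysis rests on confirmatory factor analysis; as a much simpler illustration of the same idea (models sharing a common signal plus model-specific noise), a pairwise Pearson-correlation matrix can serve as a similarity measure. The synthetic "ensemble" below is invented:

```python
import numpy as np

def similarity_matrix(outputs):
    """Pairwise Pearson correlations between model output series,
    a simple stand-in for the paper's factor-analysis similarity."""
    return np.corrcoef(outputs)

rng = np.random.default_rng(5)
base = rng.normal(size=200)                  # shared "common factor" signal
# four synthetic models: common signal plus model-specific noise
models = np.array([base + 0.3 * rng.normal(size=200) for _ in range(4)])
S = similarity_matrix(models)
```

Off-diagonal entries near 1 would flag models that add little independent information to the ensemble mean.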

  12. Voltage controlled current source

    DOEpatents

    Casne, Gregory M.

    1992-01-01

    A seven-decade, voltage controlled current source is described for use in testing intermediate range nuclear instruments; it covers the entire test current range from 10 picoamperes to 100 microamperes. High accuracy is obtained throughout all seven decades of output current with circuitry that includes a coordinated switching scheme, responsive to the input signal from a hybrid computer, to control the input voltage to an antilog amplifier and to selectively connect a resistance to the antilog amplifier output, providing a continuous output current source as a function of a preset range of input voltage. An operator controlled switch provides current adjustment for operation in either a real-time simulation test mode or a time response test mode.
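
    The antilog mapping at the heart of this design gives one decade of output current per fixed increment of input voltage. Only the 10 pA to 100 µA span is taken from the abstract; the 0-7 V control range below is an assumption for illustration:

```python
def output_current(v_in, v_min=0.0, v_max=7.0,
                   i_min=10e-12, decades=7):
    """Antilog (exponential) mapping: each volt of input spans one
    decade of output current, covering 10 pA to 100 uA over seven
    decades. The 0-7 V control range is a hypothetical choice."""
    if not v_min <= v_in <= v_max:
        raise ValueError("input voltage out of range")
    return i_min * 10 ** (decades * (v_in - v_min) / (v_max - v_min))

i_low = output_current(0.0)    # 10 pA at the bottom of the range
i_high = output_current(7.0)   # 100 uA at the top of the range
```
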

  13. Development of digital phantoms based on a finite element model to simulate low-attenuation areas in CT imaging for pulmonary emphysema quantification.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2017-09-01

    To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema in CT imaging. The model aims to generate a set of digital phantoms of low-attenuation area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the model's output (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared to those calculated on the model's output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for assessing the accuracy of indexes for the radiologic quantitation of emphysema.

  14. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.

  15. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
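
    The heart of the method in both records, fitting linear-dynamic-system parameters to each voxel's enhancement curve and using them as segmentation features, can be sketched for a first-order system. The wash-in curve below is synthetic, not patient data:

```python
import numpy as np

def fit_lds_params(curve):
    """Fit a first-order linear dynamic system s[k+1] = a*s[k] + b to
    one voxel's enhancement curve by least squares; (a, b) then serve
    as features for classifying the voxel as tumor or background."""
    s0, s1 = curve[:-1], curve[1:]
    A = np.column_stack([s0, np.ones_like(s0)])
    (a, b), *_ = np.linalg.lstsq(A, s1, rcond=None)
    return a, b

# synthetic wash-in curve approaching a plateau: s[k+1] = 0.8 s[k] + 0.6
s = [0.0]
for _ in range(30):
    s.append(0.8 * s[-1] + 0.6)
a, b = fit_lds_params(np.array(s))
```

Because the fit depends only on the shape of the curve, the resulting features are less sensitive to the acquisition details (bolus timing, arterial input function) that plague pharmacokinetic fitting.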

  16. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
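
    One of the simple criteria such a stopping rule can combine is the relative half-width of the confidence interval of the output mean; the sketch below keeps adding Monte Carlo runs until that half-width falls below 1% of the mean. The batching and tolerance values are illustrative, not the paper's:

```python
import random
import statistics

def run_until_converged(model, rel_tol=0.01, batch=100, max_runs=100000):
    """Add Monte Carlo runs in batches until the 95% confidence
    half-width of the output mean drops below rel_tol of the mean
    (one convergence criterion among several the paper combines)."""
    outputs = []
    while len(outputs) < max_runs:
        outputs.extend(model() for _ in range(batch))
        m = statistics.fmean(outputs)
        half = 1.96 * statistics.stdev(outputs) / len(outputs) ** 0.5
        if m != 0 and half / abs(m) < rel_tol:
            break
    return m, len(outputs)

# toy "model output": a noisy scalar with true mean 5.0
random.seed(42)
mean, n = run_until_converged(lambda: random.gauss(5.0, 1.0))
```

As the record notes, the required `n` depends on the variance of the selected output and the desired tolerance, so no fixed run count works across analyses.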

  17. Reduced order models for assessing CO2 impacts in shallow unconfined aquifers

    DOE PAGES

    Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...

    2016-01-28

    Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to embed directly in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models (ROMs) which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is the comparison of different measures of ROM accuracy. We show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good; for other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the nonlinearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced-order models were developed principally to be used in probabilistic performance analysis, where a large range of scenarios is considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior.
However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
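
    The response-surface step can be illustrated with ordinary least squares on a quadratic basis standing in for MARS; the input-output relationship below is synthetic, and the train/held-out split mimics the paper's cross-validation:

```python
import numpy as np

rng = np.random.default_rng(3)

# ensemble of "simulations": uncertain inputs -> one output of interest
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

def design(X):
    """Quadratic response-surface basis (a simple stand-in for MARS)."""
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])

# fit on a training split, cross-validate on held-out runs
train, test = slice(0, 150), slice(150, 200)
coef, *_ = np.linalg.lstsq(design(X[train]), y[train], rcond=None)
pred = design(X[test]) @ coef
rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
```

Here the held-out error is essentially zero because the toy output lies in the basis span; the paper's mixed results for trace-metal plumes show what happens when the true response does not.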

  18. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
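
    The benefit of large periodic dithering can be demonstrated with a toy DAC model: each code carries a random mismatch error, and adding a known triangular dither before quantization, subtracting it from the output, and averaging spreads one conversion across many codes so their mismatch errors partially cancel. All magnitudes (mismatch sigma, dither amplitude, resolution) are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
N_LEVELS = 1024
inl = rng.normal(0.0, 0.3, N_LEVELS)   # per-code mismatch error, in LSB

def dac(code):
    """Non-ideal DAC: code k actually produces k + inl[k] (LSB units)."""
    k = int(np.clip(code, 0, N_LEVELS - 1))
    return k + inl[k]

def convert_plain(v):
    return dac(round(v))

def convert_dithered(v, amp=16, n=64):
    """Add a known triangular dither, subtract it from the analog
    output, and average: the conversion exercises ~2*amp codes,
    whose mismatch errors average toward zero."""
    t = np.arange(n) / n
    d = amp * (4.0 * np.abs(t - 0.5) - 1.0)   # triangle wave in [-amp, amp]
    return float(np.mean([dac(round(v + di)) - di for di in d]))

targets = rng.uniform(100.0, 900.0, 200)
rms_plain = float(np.sqrt(np.mean([(convert_plain(v) - v) ** 2
                                   for v in targets])))
rms_dith = float(np.sqrt(np.mean([(convert_dithered(v) - v) ** 2
                                  for v in targets])))
```

Averaging over roughly 33 codes shrinks the mismatch contribution by about the square root of that count, which is the linearity improvement the article measures for the dithering methods.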

  19. Learning About Ares I from Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Hall, Charlie E.

    2008-01-01

    This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.

  20. JPL Energy Consumption Program (ECP) documentation: A computer model simulating heating, cooling and energy loads in buildings. [low cost solar array efficiency

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Chai, V. W.; Lascu, D.; Urbenajo, R.; Wong, P.

    1978-01-01

    The engineering manual provides complete companion documentation of the structure of the main program and subroutines, the preparation of input data, the interpretation of output results, access and use of the program, and a detailed description of all the analytic and logical expressions and flow charts used in the computations and program structure. A numerical example is provided and solved completely to show the sequence of computations followed. The program is carefully structured to reduce both the user's time and costs without sacrificing accuracy. The user can expect a CPU-time cost of approximately $5.00 per building zone, excluding printing costs. The accuracy, measured by the deviation of simulated consumption from watt-hour meter readings, was found in many simulation tests not to exceed a ±10 percent margin.

  1. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of the MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Because of the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The outputs of the spectral-element statically refined simulations are compared with simulations using a finite difference method on the same refined grids, and both methods are compared against pseudo-spectral simulations on uniform grids as baselines. It is shown that, with statically refined grids scaling roughly linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
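
    The accuracy gap the comparison turns on can be seen in one dimension: for a smooth periodic function on a coarse grid, a Fourier spectral derivative is accurate to machine precision where a second-order central difference still carries O(h²) error. This generic sketch is unrelated to the actual MHD code:

```python
import numpy as np

N = 32
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)              # smooth periodic test function
exact = 3 * np.cos(3 * x)      # its exact derivative

# 2nd-order central finite difference on the periodic grid
h = x[1] - x[0]
fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

# Fourier spectral derivative: multiply by i*k in frequency space
k = 1j * np.fft.fftfreq(N, d=1.0 / N)
sp = np.real(np.fft.ifft(k * np.fft.fft(f)))

err_fd = float(np.max(np.abs(fd - exact)))   # ~0.17 on this grid
err_sp = float(np.max(np.abs(sp - exact)))   # ~machine precision
```

This "spectral accuracy" for smooth solutions is what the refined spectral-element runs retain and the finite-difference runs on the same grids do not.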

  2. Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.

    PubMed

    Parr, Andreas; Miesen, Robert; Vossiek, Martin

    2016-06-25

    In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario, and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different combinations of transmit and receive antenna paths for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy, and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy by orders of magnitude, down to the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
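
    A minimal version of phase-based near-field localization with a planar array is a beamformer-style coherence search: hypothesize a tag position, back-propagate the predicted round-trip phase at each antenna, and look for the position where the measured phasors add coherently. The geometry, frequency, and grid below are invented, and the simulation is noise-free (no multipath model):

```python
import numpy as np

C = 299792458.0
LAM = C / 868e6                  # UHF RFID wavelength (868 MHz assumed)

# 4x4 planar array in the z = 0 plane, half-wavelength spacing
xs = np.arange(4) * LAM / 2
ants = np.array([[x, y, 0.0] for x in xs for y in xs])

def roundtrip_phase(p):
    """Monostatic round-trip phase at every antenna for tag position p."""
    d = np.linalg.norm(ants - p, axis=1)
    return (4.0 * np.pi * d / LAM) % (2.0 * np.pi)

tag_true = np.array([0.3, 0.2, 0.8])
measured = roundtrip_phase(tag_true)         # noise-free "measurement"

def coherence(p):
    """Beamformer-style cost: remove the hypothesized phases and sum
    the measured phasors; maximal when p matches the tag position."""
    return float(np.abs(np.sum(np.exp(1j * (measured - roundtrip_phase(p))))))

# coarse 3D grid search over the hypothesis volume
best, best_val = None, -1.0
for gx in np.linspace(0.0, 0.6, 13):
    for gy in np.linspace(0.0, 0.6, 13):
        for gz in np.linspace(0.4, 1.2, 17):
            p = np.array([gx, gy, gz])
            v = coherence(p)
            if v > best_val:
                best, best_val = p, v
```

With 16 antennas the cost peaks at 16 at the true position; the multipath-induced phase errors modeled in the paper are what degrade this ideal picture to the tens-of-centimeters accuracies reported.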

  3. Effects of soil and precipitation dataset resolution on SWAT2005 sediment and total phosphorus simulation accuracy and outputs

    USDA-ARS?s Scientific Manuscript database

    The Fort Cobb Reservoir, which is within the Fort Cobb Reservoir Experimental watershed (FCREW) in Oklahoma, is on the Oklahoma 303(d) list (list of water bodies that do not meet the water quality standards as given in the Clean Water Act) based on sedimentation and trophic level of the lake associa...

  4. Validating the southern variant forest vegetation simulator height predictions on southeastern hardwoods in Kentucky and Tennessee

    Treesearch

    Bernard R. Parresol; Steven C. Stedman

    2004-01-01

    The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...

  5. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in in-house simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them simulates trap-assisted generation-recombination (GR) noise accurately. In order to overcome the aforementioned problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the mathematical SCILAB software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  6. Influence of Forecast Accuracy of Photovoltaic Power Output on Facility Planning and Operation of Microgrid under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). For constructing a MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies on a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as a RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without forecast error of PVS power output. The influence of forecast error on the received grid electricity is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facility and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are nearly equal, each at a few percent, implying that forecast accuracy should be improved in terms of both the number of occurrences of large forecast error and the average error.

  7. Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the scale of Japan after the Fukushima disaster. A previous study with the screening method of Morris had shown that the sensitivities depend strongly on the considered output; that only a few of the inputs are non-influential on all considered outputs; and that the most influential inputs have either non-linear effects or interactions. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators were built for each considered output in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relative to some thresholds. Complementing the initial sample with wind perturbations set to extreme values sensibly improved some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies.
Indeed, our goal is to characterize the model output uncertainty, but too little information is available about the input uncertainties. Hence, calibration of the input distributions with observations and a Bayesian approach seems necessary. This would probably involve methods such as MCMC, which would be intractable without emulators.

  8. Research on the Dynamic Hysteresis Loop Model of the Residence Times Difference (RTD)-Fluxgate

    PubMed Central

    Wang, Yanzhang; Wu, Shujun; Zhou, Zhijian; Cheng, Defu; Pang, Na; Wan, Yunxia

    2013-01-01

    During operation, the RTD-fluxgate core is repeatedly driven into saturation by the excitation field. When simulating the fluxgate, an accurate characteristic model of the core is needed to obtain precise simulation results. As the shape of the ideal hysteresis loop model is fixed, it cannot accurately reflect the actual dynamic behavior of the hysteresis loop. In order to improve the fluxgate simulation accuracy, a dynamic hysteresis loop model containing parameters with actual physical meanings is proposed, based on how the permeability changes while the fluxgate is working. Compared with the ideal hysteresis loop model, this model takes the dynamic features of the hysteresis loop into account, which makes the simulation results closer to the actual output. In addition, hysteresis loops of other magnetic materials can be described with the proposed model, which is illustrated here with an amorphous magnetic material. The model has been validated by comparing the measured output response with the fitted response obtained from the model. PMID:24002230

  9. Nano-JASMINE: cosmic radiation degradation of CCD performance and centroid detection

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yukiyasu; Shimura, Yuki; Niwa, Yoshito; Yano, Taihei; Gouda, Naoteru; Yamada, Yoshiyuki

    2012-09-01

    Nano-JASMINE (NJ) is a very small astrometry satellite project led by the National Astronomical Observatory of Japan. The satellite is ready, with launch currently scheduled for late 2013 or early 2014. The satellite is equipped with a fully depleted CCD and is expected to perform astrometry observations for stars brighter than 9 mag in the zw-band (0.6 µm-1.0 µm). Distances of stars located within 100 pc of the Sun can be determined by using annual parallax measurements. The targeted accuracy for the position determination of stars brighter than 7.5 mag is 3 mas, which is equivalent to measuring the positions of stars with an accuracy of less than one five-hundredth of the CCD pixel size. The position measurements of stars are performed by centroiding the stellar images taken by the CCD, which operates in time and delay integration mode. The degradation of charge transfer performance due to cosmic radiation damage in orbit has been demonstrated experimentally. A method is therefore required to compensate for the effects of performance degradation. One of the most effective ways of achieving this is to simulate observed stellar outputs, including the effect of CCD degradation, and then formulate our centroiding algorithm and evaluate the accuracies of the measurements. We report here the planned procedure to simulate the outputs of the NJ observations. We also developed a CCD performance-measuring system and present preliminary results obtained using the system.

  10. Poster — Thur Eve — 43: Monte Carlo Modeling of Flattening Filter Free Beams and Studies of Relative Output Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.

    2014-08-15

    Flattening filter free (FFF) beams have been adopted by many clinics and used for patient treatment. However, compared to the traditional flattened beams, we have limited knowledge of FFF beams. In this study, we successfully modeled the 6 MV FFF beam for Varian TrueBeam accelerator with the Monte Carlo (MC) method. Both the percentage depth dose and profiles match well to the Golden Beam Data (GBD) from Varian. MC simulations were then performed to predict the relative output factors. The in-water output ratio, Scp, was simulated in water phantom and data obtained agrees well with GBD. The in-air output ratio, Sc, was obtained by analyzing the phase space placed at isocenter, in air, and computing the ratio of water Kerma rates for different field sizes. The phantom scattering factor, Sp, can then be obtained from the traditional way of taking the ratio of Scp and Sc. We also simulated Sp using a recently proposed method based on only the primary beam dose delivery in water phantom. Because there is no concern of lateral electronic disequilibrium, this method is more suitable for small fields. The results from both methods agree well with each other. The flattened 6 MV beam was simulated and compared to 6 MV FFF. The comparison confirms that 6 MV FFF has less scattering from the Linac head and less phantom scattering contribution to the central axis dose, which will be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems.
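
    The traditional relation mentioned above, Sp = Scp / Sc, can be sketched numerically. The field sizes and output ratios below are hypothetical placeholders, not the GBD or simulated values from this study:

    ```python
    # Illustrative only: the Scp/Sc values below are hypothetical, not
    # Varian golden beam data or results from the Monte Carlo study above.
    field_sizes_cm = [4, 10, 20]      # square field side lengths
    scp = [0.94, 1.00, 1.04]          # in-water output ratio (hypothetical)
    sc = [0.97, 1.00, 1.02]           # in-air output ratio (hypothetical)

    # Phantom scatter factor from the traditional ratio Sp = Scp / Sc
    sp = [c / a for c, a in zip(scp, sc)]
    print([round(v, 4) for v in sp])  # → [0.9691, 1.0, 1.0196]
    ```

    By convention both ratios are normalized to 1.0 at the 10 cm reference field, so Sp is 1.0 there as well.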

  11. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
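
    The core of the angular spectrum approach can be sketched as follows: the input-plane pressure is Fourier transformed, each plane-wave component is multiplied by a propagation phase exp(i kz z), and the result is inverse transformed. The grid size, wavelength, and Gaussian source below are illustrative choices, not the array parameters from the study:

    ```python
    import numpy as np

    # Angular spectrum propagation sketch (monochromatic, lossless medium).
    wavelength = 1.0
    k = 2 * np.pi / wavelength
    n, dx = 64, 0.25 * wavelength              # input-plane sampling
    z = 4.0 * wavelength                       # propagation distance

    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz2 = k**2 - KX**2 - KY**2
    propagating = kz2 > 0                      # discard evanescent components
    kz = np.sqrt(np.where(propagating, kz2, 0.0))

    # Gaussian pressure distribution in the input plane (hypothetical source)
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    p0 = np.exp(-(X**2 + Y**2) / (2.0 * wavelength) ** 2)

    def propagate(p, dist):
        """Advance a plane pressure field by `dist` via the angular spectrum."""
        H = np.where(propagating, np.exp(1j * kz * dist), 0.0)
        return np.fft.ifft2(np.fft.fft2(p) * H)

    p_z = propagate(p0, z)                     # field in the output plane
    p_back = propagate(p_z, -z)                # propagate back as a sanity check

    # Back-propagation must recover the propagating part of the source field
    p_ref = np.fft.ifft2(np.fft.fft2(p0) * propagating)
    max_err = np.max(np.abs(p_back - p_ref))
    ```

    Propagating forward and then backward recovers the band-limited source field to machine precision, which is a quick consistency check on the transfer function.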

  12. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    PubMed Central

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, the experiments were conducted. Moreover, performance analysis was introduced to design compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at the lower speed, intake pressure, and valve duration angle. This research can refer to the design of the camless valve of compressed air engine. PMID:25379536

  13. Working characteristics of variable intake valve in compressed air engine.

    PubMed

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, the experiments were conducted. Moreover, performance analysis was introduced to design compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at the lower speed, intake pressure, and valve duration angle. This research can refer to the design of the camless valve of compressed air engine.

  14. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation

    NASA Astrophysics Data System (ADS)

    Huang, Aiping; Tao, Linwei; Niu, Yilong

    2018-04-01

    In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining and selection combining. A novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numeric results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.

  15. Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster

    PubMed Central

    Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi

    2017-01-01

    Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement, through simulating the variation tendency of the ocean wave. Although existing work has devoted effort to improving the performance of MASNUM from various aspects, there is still considerable room for further performance improvement. In this paper, we aim at improving the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O and data access optimization. The experimental results demonstrate that our approach achieves higher performance compared to the state-of-the-art work, about 3.5x speedup, without degrading the prediction accuracy. In addition, the parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972

  16. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooker, A.; Gonder, J.; Wang, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  17. Use of regional climate model output for hydrologic simulations

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.

    2002-01-01

    Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2) configured using the continental United States as a domain and run on a 52-km (approximately) spatial resolution were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and from 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction).
However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
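
    The Nash-Sutcliffe skill score used throughout the record above, and the effect of a simple multiplicative bias correction on it, can be sketched as follows. The synthetic "runoff" series is illustrative, not data from the study basins:

    ```python
    import numpy as np

    # Nash-Sutcliffe efficiency: 1 minus the ratio of simulation error
    # variance to the variance of the observations (1.0 = perfect skill).
    def nash_sutcliffe(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(42)
    t = np.arange(365)
    obs = 10.0 + 5.0 * np.sin(2 * np.pi * t / 365)    # synthetic observed runoff
    sim = 1.3 * obs + rng.normal(0.0, 0.5, t.size)    # biased model output

    ns_raw = nash_sutcliffe(obs, sim)

    # Bias correction (a magnitude correction, as in the record above):
    # rescale so the simulated mean matches the observed mean.
    sim_bc = sim * obs.mean() / sim.mean()
    ns_bc = nash_sutcliffe(obs, sim_bc)
    ```

    Removing the systematic magnitude bias raises the NS score sharply even though the day-to-day noise is untouched, mirroring the improvement reported for the snowmelt-dominated basins.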

  18. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
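
    The idea of a piecewise-cubic PSAC can be sketched with cubic Hermite segments over a coarse knot table. This is a simplified stand-in for the paper's spectrally derived spline coefficients, and the table size N is an arbitrary choice:

    ```python
    import numpy as np

    # Piecewise-cubic phase-to-sinusoid amplitude converter (PSAC) sketch.
    N = 64                                   # segments per full period
    h = 2 * np.pi / N
    xk = np.arange(N + 1) * h
    yk, dk = np.sin(xk), np.cos(xk)          # knot values and derivatives

    def psac(phase):
        """Map phase (radians) to sine amplitude with piecewise cubics."""
        phase = np.mod(phase, 2 * np.pi)
        seg = np.minimum((phase / h).astype(int), N - 1)
        t = (phase - xk[seg]) / h            # local coordinate in [0, 1]
        h00 = 2 * t**3 - 3 * t**2 + 1        # Hermite basis polynomials
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return (h00 * yk[seg] + h * h10 * dk[seg]
                + h01 * yk[seg + 1] + h * h11 * dk[seg + 1])

    phi = np.linspace(0.0, 2 * np.pi, 10001)
    max_abs_err = np.max(np.abs(psac(phi) - np.sin(phi)))
    ```

    With only 64 segments the maximum absolute error is already below 1e-6, illustrating why a small cubic-coefficient table can reach the high SFDR figures quoted above.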

  19. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities.

    PubMed

    Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano

    2008-09-01

    This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as a part of a vehicle system model to accurately predict elastic bushing and tire dynamic behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.

  20. Comprehensive model for predicting elemental composition of coal pyrolysis products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.

    Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.

  1. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.

  2. Real-time quality monitoring in debutanizer column with regression tree and ANFIS

    NASA Astrophysics Data System (ADS)

    Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar

    2018-05-01

    A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models for a debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs, and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro-fuzzy inference system (ANFIS) and regression tree models for the debutanizer column. The accuracy of the developed models was evaluated by simulating the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than the other models developed in this work and than many data-driven models proposed so far in the literature for the debutanizer column.

  3. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
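
    The PCA dimensionality-reduction stage described above can be sketched with NumPy. Here nine hypothetical sensor channels respond linearly to a 1-D translation, and ordinary least squares stands in for the ANN regression stage; all gains and noise levels are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    pos = np.linspace(-1.0, 1.0, 200)              # true positions (arbitrary units)
    gains = np.linspace(0.5, 2.0, 9)               # hypothetical per-sensor sensitivities
    offsets = rng.normal(0.0, 0.1, 9)              # per-sensor bias
    X = (pos[:, None] * gains[None, :] + offsets
         + 0.01 * rng.standard_normal((200, 9)))   # 9-channel sensor outputs

    # PCA via SVD: project the 9-channel output onto its leading component
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    z = Xc @ Vt[0]                                 # reduced 1-D feature
    explained = S[0] ** 2 / np.sum(S**2)           # variance captured by PC 1

    # Least-squares field-to-position map on the reduced feature
    # (a linear stand-in for the ANN mapping stage)
    A = np.c_[z, np.ones_like(z)]
    coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - pos) ** 2))
    ```

    Because the position signal dominates the sensor covariance, a single principal component retains essentially all the information, which is the property the paper exploits to keep the ANN input small.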

  4. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain necessary precision with efficient software, a six state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms are established, and the sensitivity of their performance to various error sources and software parameters are investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
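
    The estimation idea, propagating attitude with gyro outputs and correcting it with star tracker measurements, can be sketched with a scalar Kalman filter. This is a deliberate 1-D simplification of the six-state square-root filter described above, with all noise levels invented for illustration:

    ```python
    import numpy as np

    # Scalar attitude filter sketch: integrate a noisy gyro, then correct
    # with noisy star-tracker measurements via the Kalman gain.
    rng = np.random.default_rng(1)
    dt, steps = 0.1, 500
    q, r = 1e-6, 1e-4              # process / measurement noise variances (illustrative)
    true_att, rate = 0.0, 0.01     # true attitude (rad) and body rate (rad/s)

    est, P = 0.05, 1.0             # initial estimate is off by 0.05 rad
    for _ in range(steps):
        true_att += rate * dt
        gyro = rate + rng.normal(0.0, 1e-3)           # noisy gyro output
        est += gyro * dt                              # propagate estimate
        P += q                                        # propagate covariance
        meas = true_att + rng.normal(0.0, np.sqrt(r)) # star-tracker measurement
        K = P / (P + r)                               # Kalman gain
        est += K * (meas - est)                       # measurement update
        P *= (1.0 - K)

    final_err = abs(est - true_att)
    ```

    The square-root formulation in the paper propagates a factor of P instead of P itself for numerical robustness; the scalar update above only shows how the gyro and star-tracker information are blended.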

  5. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    PubMed

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillation without any moving parts through the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build a transcendental equation in the complex frequency, which serves as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. The numerical simulation code is shown to run robustly and to output the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.

  6. Desired Accuracy Estimation of Noise Function from ECG Signal by Fuzzy Approach

    PubMed Central

    Vahabi, Zahra; Kermani, Saeed

    2012-01-01

Unknown noise and artifacts present in medical signals are estimated with a non-linear fuzzy filter and then removed. An adaptive neuro-fuzzy inference system (ANFIS), which has a non-linear structure, is presented for predicting the noise function from previous samples. This paper describes a neuro-fuzzy method to estimate the unknown noise of the electrocardiogram (ECG) signal, in which an adaptive neural network is combined with a fuzzy system to construct a fuzzy predictor. The system's parameters, such as the number of membership functions (MFs) for each input and output, the number of training epochs, the type of MFs for each input and output, and the learning algorithm, are determined from the training data. Finally, simulated experimental results are presented for validation. PMID:23717810

  7. Assessing the accuracy of MISR and MISR-simulated cloud top heights using CloudSat- and CALIPSO-retrieved hydrometeor profiles

    NASA Astrophysics Data System (ADS)

    Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.; Mace, Gerald G.; Benson, Sally

    2017-03-01

    Satellite retrievals of cloud properties are often used in the evaluation of global climate models, and in recent years satellite instrument simulators have been used to account for known retrieval biases in order to make more consistent comparisons between models and retrievals. Many of these simulators have seen little critical evaluation. Here we evaluate the Multiangle Imaging Spectroradiometer (MISR) simulator by using visible extinction profiles retrieved from a combination of CloudSat, CALIPSO, MODIS, and AMSR-E observations as inputs to the MISR simulator and comparing cloud top height statistics from the MISR simulator with those retrieved by MISR. Overall, we find that the occurrence of middle- and high-altitude topped clouds agrees well between MISR retrievals and the MISR-simulated output, with distributions of middle- and high-topped cloud cover typically agreeing to better than 5% in both zonal and regional averages. However, there are significant differences in the occurrence of low-topped clouds between MISR retrievals and MISR-simulated output that are due to differences in the detection of low-level clouds between MISR and the combined retrievals used to drive the MISR simulator, rather than due to errors in the MISR simulator cloud top height adjustment. This difference highlights the importance of sensor resolution and boundary layer cloud spatial structure in determining low-altitude cloud cover. The MISR-simulated and MISR-retrieved cloud optical depth also show systematic differences, which are also likely due in part to cloud spatial structure.

  8. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  9. On the application of blind source separation for damping estimation of bridges under traffic loading

    NASA Astrophysics Data System (ADS)

    Brewick, P. T.; Smyth, A. W.

    2014-12-01

The accurate and reliable estimation of modal damping from output-only vibration measurements of structural systems is a continuing challenge in the fields of operational modal analysis (OMA) and system identification. In this paper a modified version of the blind source separation (BSS)-based Second-Order Blind Identification (SOBI) method was used to perform modal damping identification on a model bridge structure under varying loading conditions. The bridge model was created with finite elements and consisted of a series of stringer beams supported by a larger girder. The excitation was separated into two categories: ambient noise and traffic loads, with noise modeled as random forcing vectors and traffic simulated with moving loads for cars and partially distributed moving masses for trains. The acceleration responses were treated as the mixed output signals for the BSS algorithm. The modified SOBI method used a windowing technique to maximize the amount of information used for blind identification from the responses. The modified SOBI method successfully found the mode shapes for both types of excitation with high accuracy, but power spectral densities (PSDs) of the recovered modal responses showed signs of distortion for the traffic simulations. The distortion had an adverse effect on the damping ratio estimates for some of the modes, but no correlation could be found between the accuracy of the damping estimates and the accuracy of the recovered mode shapes. The responses and their PSDs were compared to real-world collected data, and similar patterns of distortion were observed, implying that this issue likely affects real-world estimates.
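A minimal sketch of the second-order blind identification idea, using AMUSE (a single-lag relative of SOBI) on an assumed two-mode toy system rather than the paper's windowed, multi-lag variant:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two damped modal responses mixed into two sensor outputs; recover them from
# second-order statistics alone (AMUSE, a single-lag relative of SOBI).
t = np.arange(0.0, 20.0, 0.01)
s1 = np.exp(-0.02 * 2 * np.pi * 1.0 * t) * np.sin(2 * np.pi * 1.0 * t)
s2 = np.exp(-0.05 * 2 * np.pi * 2.5 * t) * np.sin(2 * np.pi * 2.5 * t)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # assumed mixing (mode shapes)
X = A @ S + 0.001 * rng.normal(size=(2, len(t)))

# Step 1: whiten the zero-mean observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = E @ np.diag(d**-0.5) @ E.T @ X

# Step 2: diagonalize a time-lagged covariance of the whitened data.
tau = 25
Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
_, V = np.linalg.eigh(0.5 * (Ct + Ct.T))         # symmetrize before eigh
sources = V.T @ Z                                # recovered modal responses

# Correlation of each recovered source with its best-matching true source.
corr = np.abs(np.corrcoef(np.vstack([sources, S]))[:2, 2:])
print(np.round(corr.max(axis=1), 3))
```

SOBI proper jointly diagonalizes covariances at several lags, which is more robust when two modes have similar lagged statistics.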

  10. Influence of current input-output and age of first exposure on phonological acquisition in early bilingual Spanish-English-speaking kindergarteners.

    PubMed

    Ruiz-Felter, Roxanna; Cooperson, Solaman J; Bedore, Lisa M; Peña, Elizabeth D

    2016-07-01

Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. This study investigates the influence of age of first exposure to English and the amount of current input-output on phonological accuracy in English and Spanish in early bilingual Spanish-English kindergarteners, and whether parent and teacher ratings of the children's intelligibility are correlated with phonological accuracy and the amount of experience with each language. Data for 91 kindergarteners (mean age = 5;6 years) were selected from a larger dataset focusing on Spanish-English bilingual language development. All children were from Central Texas, spoke a Mexican Spanish dialect and were learning American English. Children completed a single-word phonological assessment with separate forms for English and Spanish. The assessment was analyzed for segmental accuracy: the percentage of consonants and vowels correct and the percentage of early-, middle- and late-developing (EML) sounds correct were calculated. Children were more accurate on vowel production than consonant production and showed a decrease in accuracy from early to middle to late sounds. The amount of current input-output explained more of the variance in phonological accuracy than age of first English exposure. Although greater current input-output of a language was associated with greater accuracy in that language, English-dominant children were only significantly more accurate in English than Spanish on late sounds, whereas Spanish-dominant children were only significantly more accurate in Spanish than English on early sounds. Higher parent and teacher ratings of intelligibility in Spanish were correlated with greater consonant accuracy in Spanish, but the same did not hold for English. Higher intelligibility ratings in English were correlated with greater current English input-output, and the same held for Spanish. Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can shed light on the process of referral of bilingual children for speech and language services. © 2016 Royal College of Speech and Language Therapists.

  11. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
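The underlying point, that the covariance of parameter estimates must account for residual autocorrelation when residuals are colored, can be sketched on a toy linear regression (the AR(1) noise model and the truncation lag are assumptions for illustration, not the paper's aircraft application):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear model y = X @ theta + v, where v is colored (AR(1)), not white.
n = 500
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
theta = np.array([1.0, 2.0])

def colored_noise(rng, n, phi=0.9, sigma=0.1):
    v = np.zeros(n)
    for k in range(1, n):
        v[k] = phi * v[k - 1] + rng.normal(0.0, sigma)
    return v

# Monte Carlo reference for the true scatter of the estimates.
ests = np.array([np.linalg.lstsq(X, X @ theta + colored_noise(rng, n),
                                 rcond=None)[0] for _ in range(400)])
true_std = ests.std(axis=0)

# One realization: naive (white-noise) vs autocorrelation-corrected accuracy.
y = X @ theta + colored_noise(rng, n)
th = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ th
G = np.linalg.inv(X.T @ X)
naive = np.sqrt(np.diag(G) * np.var(r))          # assumes white residuals

M = 50                                           # truncation lag (assumption)
acov = np.array([r[:n - k] @ r[k:] / n for k in range(M)])
lag = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
R = np.where(lag < M, acov[np.minimum(lag, M - 1)], 0.0)
corrected = np.sqrt(np.diag(G @ X.T @ R @ X @ G))
print(naive, corrected, true_std)
```

With strongly colored residuals the naive standard errors are several times too small, while the autocorrelation-corrected "sandwich" values track the Monte Carlo scatter, which is the behavior the paper exploits.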

  12. Combination of electromagnetic measurements and FEM simulations for nondestructive determination of mechanical hardness

    NASA Astrophysics Data System (ADS)

    Gabi, Yasmine; Martins, Olivier; Wolter, Bernd; Strass, Benjamin

    2018-04-01

This paper investigates Rockwell hardness by finite element simulation, in an inspection scenario for press-hardened parts using the 3MA non-destructive testing system. The FEM model is based on a robust calculation strategy that manages the issues of geometry and time multiscale, as well as the local nonlinear hysteresis behavior of ferromagnetic materials. 3MA simulations are performed at a high-level operating point in order to saturate the soft microscopic surface layer of press-hardened steel and access mainly the bulk properties. 3MA measurements are validated by comparison with numerical simulations. Based on the simulation outputs, a virtual calibration is run; the simulated calibration agrees with the conventional experimental data, constituting a first validation. As a highlight, a correlation between magnetic quantities and hardness can be described via the FEM-simulated signals and shows high accuracy relative to the measured results.

  13. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When the ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio-scaling transformation was found to vary widely from radiologist to radiologist, and similar results were found for the other transformations. The simulations explored the effect of dataset size on the accuracy of the estimated scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors developed and evaluated was shown to be capable of reliably transforming computer and radiologist outputs to a common scale, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
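Of the transformations mentioned, the cumulative-distribution-function matching one is the simplest to sketch; the toy score distributions below are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

def cdf_match(x, src, dst):
    """Monotone map taking scores from src's scale to dst's scale (CDF matching)."""
    p = np.searchsorted(np.sort(src), x, side="right") / len(src)
    return np.quantile(dst, np.clip(p, 0.0, 1.0))

# Hypothetical scores: computer output on one scale, radiologist on another.
computer = rng.normal(0.0, 1.0, 1000)
radiologist = 100.0 * rng.beta(2.0, 5.0, 1000)

mapped = cdf_match(computer, computer, radiologist)

# The transform is monotone, so rank order (and hence the ROC curve) is
# preserved, while the distribution now matches the radiologist's scale.
order = np.argsort(computer)
print(bool(np.all(np.diff(mapped[order]) >= 0.0)))
print(abs(np.median(mapped) - np.median(radiologist)))
```

Likelihood-ratio scaling works analogously but matches estimated likelihood ratios rather than CDF values, which is the clinically relevant statistic emphasized in the paper.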

  14. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    NASA Astrophysics Data System (ADS)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, the RMSE for AirSWOT measurements of WSE is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, whereas model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.

  15. A new method to extract modal parameters using output-only responses

    NASA Astrophysics Data System (ADS)

    Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo

    2005-04-01

This work proposes a new output-only modal analysis method to extract the mode shapes and natural frequencies of a structure. The proposed method is based on a single-degree-of-freedom approach in the time domain. For a set of given mode-isolated signals, the undamped mode shapes are extracted using the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected onto the estimated modal basis. The proposed method is particularly efficient when a high resolution of the mode shape is essential. The accuracy of the method is numerically verified using a set of time histories simulated with a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
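A minimal sketch of the mode-shape step, assuming a simulated mode-isolated response at eight hypothetical sensor locations (the mode shape, frequency, and damping below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# A mode-isolated free response measured at 8 sensor locations:
# y(t) = phi * q(t), with q a damped sinusoid, plus measurement noise.
t = np.arange(0.0, 10.0, 0.01)
phi = np.sin(np.pi * np.linspace(0.1, 0.9, 8))       # true mode shape (8 sensors)
q = np.exp(-0.03 * 2 * np.pi * 2.0 * t) * np.cos(2 * np.pi * 2.0 * t)
Y = np.outer(phi, q) + 0.01 * rng.normal(size=(8, len(t)))

# SVD of the output energy correlation matrix over sensor locations: the
# leading singular vector estimates the undamped mode shape.
R = Y @ Y.T / len(t)
U, s, _ = np.linalg.svd(R)
shape = U[:, 0] * np.sign(U[0, 0])                   # fix the arbitrary sign

mac = (phi @ shape)**2 / ((phi @ phi) * (shape @ shape))  # modal assurance criterion
print(f"MAC = {mac:.4f}")
```

The MAC near 1 indicates the singular vector recovers the shape; the frequency would then come from the modal coordinate obtained by projecting the responses onto this basis.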

  16. Distortion and regulation characterization of a Mapham inverter

    NASA Technical Reports Server (NTRS)

    Sundberg, Richard C.; Brush, Andrew S.; Button, Robert M.; Patterson, Alexander G.

    1989-01-01

    Output-voltage total harmonic distortion (THD) of a 20-kHz, 6-kVA Mapham resonant inverter is characterized as a function of its switching-to-resonant frequency ratio, f(s)/f(r), using the EASY5 Engineering Analysis System. EASY5 circuit simulation results are compared with hardware test results to verify the accuracy of the simulations. The effects of load on the THD versus f(s)/f(r) is investigated for resistive, leading, and lagging power factor load impedances. The effect of the series output capacitor on the Mapham inverter output-voltage distortion and inherent load regulation is characterized under loads of various power factors and magnitudes. An optimum series capacitor value which improves the inherent load regulation to better than 3 percent is identified. The optimum series capacitor value is different from the value predicted from a modeled frequency domain analysis. An explanation is proposed which takes into account the conduction overlap in the inductor pairs during steady-state inverter operation, which decreases the effective inductance of a Mapham inverter. A fault protection and current limit method is discussed which allows the Mapham inverter to operate into a short circuit, even when the inverter resonant circuit becomes overdamped.

  17. Distortion and regulation characterization of a Mapham inverter

    NASA Technical Reports Server (NTRS)

    Sundberg, Richard C.; Brush, Andrew S.; Button, Robert M.; Patterson, Alexander G.

    1989-01-01

Output voltage Total Harmonic Distortion (THD) of a 20kHz, 6kVA Mapham resonant inverter is characterized as a function of its switching-to-resonant frequency ratio, f sub s/f sub r, using the EASY5 engineering analysis system. EASY5 circuit simulation results are compared with hardware test results to verify the accuracy of the simulations. The effects of load on the THD versus f sub s/f sub r ratio are investigated for resistive, leading, and lagging power factor load impedances. The effect of the series output capacitor on the Mapham inverter output voltage distortion and inherent load regulation is characterized under loads of various power factors and magnitudes. An optimum series capacitor value which improves the inherent load regulation to better than 3 percent is identified. The optimum series capacitor value is different from the value predicted from a modeled frequency domain analysis. An explanation is proposed which takes into account the conduction overlap in the inductor pairs during steady-state inverter operation, which decreases the effective inductance of a Mapham inverter. A fault protection and current limit method is discussed which allows the Mapham inverter to operate into a short circuit, even when the inverter resonant circuit becomes overdamped.

  18. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

Part of the overall design effort for x-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g., electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low-gain regime).

  19. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
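The bagging idea can be sketched with a stand-in base learner (a cubic polynomial fit replaces MARS here, and the "physical model" is a cheap analytic function; both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in for the expensive physical model: a smooth 2-parameter response.
def physical_model(P):
    return np.sin(1.5 * P[:, 0]) + 0.5 * P[:, 1]**2

def features(P):
    # Cubic polynomial basis; this base learner stands in for a MARS model.
    return np.column_stack([np.ones(len(P)), P, P**2, P**3, P.prod(axis=1)])

P = rng.uniform(-1.0, 1.0, size=(200, 2))    # training parameter samples
h = physical_model(P)

# Bagging: fit each base learner on a bootstrap resample, average predictions.
models = []
for _ in range(25):
    idx = rng.integers(0, len(P), len(P))
    models.append(np.linalg.lstsq(features(P[idx]), h[idx], rcond=None)[0])

Ptest = rng.uniform(-1.0, 1.0, size=(500, 2))
pred = np.mean([features(Ptest) @ w for w in models], axis=0)
nrmse = np.sqrt(np.mean((pred - physical_model(Ptest))**2)) / np.ptp(physical_model(Ptest))
print(f"surrogate NRMSE: {nrmse:.4f}")
```

During calibration, the averaged surrogate replaces the MODFLOW run inside the optimization loop, so thousands of candidate parameter sets can be evaluated at negligible cost.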

  20. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by the nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
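A scalar-referenced calibration in the same spirit can be sketched with a hand-rolled Levenberg-Marquardt loop. Note this toy calibrates only six per-axis scale/offset parameters of a single sensor against the TMI reference, not the paper's 48-parameter tensor-system model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Field vectors of constant total magnetic intensity (TMI), many orientations.
tmi = 50000.0
u = rng.normal(size=(300, 3))
B = tmi * u / np.linalg.norm(u, axis=1, keepdims=True)

k_true = np.array([1.1, 0.9, 1.05])          # per-axis scale factors (assumed)
o_true = np.array([120.0, -60.0, 35.0])      # per-axis offsets in nT (assumed)
M = B * k_true + o_true                      # distorted sensor readings

def residual(p):
    k, o = p[:3], p[3:]
    return np.linalg.norm((M - o) / k, axis=1) - tmi

# Minimal Levenberg-Marquardt loop with a forward-difference Jacobian.
p = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
lam = 1e-3
for _ in range(50):
    r = residual(p)
    J = np.empty((len(r), len(p)))
    for j in range(len(p)):
        dp = np.zeros(len(p))
        dp[j] = 1e-4 * max(abs(p[j]), 1.0)
        J[:, j] = (residual(p + dp) - r) / dp[j]
    step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
    if np.sum(residual(p + step)**2) < np.sum(r**2):
        p, lam = p + step, 0.5 * lam         # accept step, relax damping
    else:
        lam *= 10.0                          # reject step, increase damping

print(np.round(p, 4))
```

Only the scalar TMI is needed as a reference, which is what makes the approach low-cost: no orientation ground truth is required.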

  1. Research of converter transformer fault diagnosis based on improved PSO-BP algorithm

    NASA Astrophysics Data System (ADS)

    Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping

    2017-09-01

BP (Back Propagation) neural networks and conventional Particle Swarm Optimization (PSO) tend to converge repeatedly toward the global best particle in the early stage, are easily trapped in local optima, and yield low diagnosis accuracy when applied to converter transformer fault diagnosis. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm improves the inertia weight equation using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and adopts a Time-Varying Acceleration Coefficient (TVAC) strategy to balance local and global search ability. Simulation results show that the proposed approach is better at optimizing the BP neural network in terms of network output error, global searching performance, and diagnosis accuracy.
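A minimal sketch of PSO with a concave inertia-weight attenuation and TVAC, using a sphere function in place of the BP network's output error (the coefficient schedules and all constants are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(8)

def objective(X):               # sphere function stands in for the BP net error
    return np.sum(X**2, axis=1)

n, d, iters = 30, 5, 200
X = rng.uniform(-5.0, 5.0, (n, d))
V = np.zeros((n, d))
pbest, pval = X.copy(), objective(X)
g = pbest[pval.argmin()].copy()

for t in range(iters):
    frac = t / iters
    w = 0.9 - 0.5 * frac**2     # concave-function inertia-weight attenuation
    c1 = 2.5 - 2.0 * frac       # TVAC: cognitive coefficient shrinks ...
    c2 = 0.5 + 2.0 * frac       # ... while the social coefficient grows
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = X + V
    f = objective(X)
    better = f < pval
    pbest[better], pval[better] = X[better], f[better]
    g = pbest[pval.argmin()].copy()

print(f"best objective after {iters} iterations: {pval.min():.3e}")
```

Early on, the large cognitive coefficient keeps particles exploring around their own bests; late in the run, the growing social coefficient and decayed inertia pull the swarm in for refinement, which is the premature-convergence remedy the abstract describes.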

  2. One-way coupling of an atmospheric and a hydrologic model in Colorado

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.

    2006-01-01

This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.

  3. Uncertainty and feasibility of dynamical downscaling for modeling tropical cyclones for storm surge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Taraphdar, Sourav; Wang, Taiping

This paper presents a modeling study conducted to evaluate the uncertainty of a regional model in simulating hurricane wind and pressure fields, and the feasibility of driving coastal storm surge simulation using an ensemble of regional model outputs produced by 18 combinations of three convection schemes and six microphysics parameterizations, using Hurricane Katrina as a test case. Simulated wind and pressure fields were compared to observed H*Wind data for Hurricane Katrina, and simulated storm surge was compared to observed high-water marks on the northern coast of the Gulf of Mexico. The ensemble modeling analysis demonstrated that the regional model was able to reproduce the characteristics of Hurricane Katrina with reasonable accuracy and can be used to drive the coastal ocean model for simulating coastal storm surge. Results indicated that the regional model is sensitive to both convection and microphysics parameterizations, which simulate moist processes closely linked to the tropical cyclone dynamics that influence hurricane development and intensification. Among the three convection schemes and six microphysics parameterizations, the Zhang and McFarlane (ZM) convection scheme and the Lim and Hong (WDM6) microphysics parameterization are the most skillful in simulating Hurricane Katrina's maximum wind speed and central pressure. Error statistics of simulated maximum water levels were calculated for a baseline simulation with H*Wind forcing and for the 18 ensemble simulations driven by the regional model outputs. The storm surge model produced the overall best results in simulating the maximum water levels using wind and pressure fields generated with the ZM convection scheme and the WDM6 microphysics parameterization.

  4. Method and apparatus for simulating atmospheric absorption of solar energy due to water vapor and CO.sub.2

    DOEpatents

    Sopori, Bhushan L.

    1995-01-01

    A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO.sub.2 and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO.sub.2 and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO.sub.2 and moisture.

  5. Method and apparatus for simulating atmospheric absorption of solar energy due to water vapor and CO{sub 2}

    DOEpatents

    Sopori, B.L.

    1995-06-20

A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO{sub 2} and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO{sub 2} and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO{sub 2} and moisture. 8 figs.

  6. Influence of Forecast Accuracy of Photovoltaic Power Output on Capacity Optimization of Microgrid Composition under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Sone, Akihito; Kato, Takeyoshi; Shimakage, Toyonari; Suzuoki, Yasuo

A microgrid (MG) is one measure for enhancing the high penetration of renewable energy (RE)-based distributed generators (DGs). If a number of MGs are controlled to maintain a predetermined electricity demand, including RE-based DGs as negative demand, they would contribute to supply-demand balancing of the whole electric power system. For constructing an MG economically, the capacity optimization of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from a demonstrative study of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on the capacity optimization. Three forecast cases with different accuracy are compared. The main results are as follows. Even with no forecast error over each 30 min period, i.e., the ideal forecast method, the required capacity of the NaS battery reaches about 40% of the PVS capacity to mitigate the instantaneous forecast error within 30 min. The required capacity to compensate for the forecast error doubles with the actual forecast method. The influence of forecast error can be reduced by adjusting the scheduled power output of controllable DGs according to the weather forecast. Moreover, the required capacity can be reduced significantly if the balancing-control error in an MG is acceptable for a few percent of periods, because periods of large forecast error are infrequent.
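The role of the 30-min balancing window can be sketched with a toy profile: even when the forecast is exact on every 30-min mean (the ideal case), a battery must still absorb the intra-window mismatch. All profile shapes and cloud statistics below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy PV output at 1-min resolution over one day (kW per kW installed),
# with random cloud dips, plus a forecast that is exact on 30-min means.
t = np.arange(24 * 60)
clear = np.clip(np.sin(np.pi * (t - 6 * 60) / (12 * 60)), 0.0, None)
dips = 1.0 - 0.4 * rng.random(len(t)) * (rng.random(len(t)) < 0.3)
actual = clear * dips

# Ideal 30-min forecast: the true mean of each 30-min block.
forecast = np.repeat(actual.reshape(-1, 30).mean(axis=1), 30)

# A battery absorbs the instantaneous mismatch; the required energy capacity
# is the swing of the cumulative mismatch (kWh per kW of PV).
mismatch_kwh = np.cumsum(actual - forecast) / 60.0
capacity = np.ptp(mismatch_kwh)
print(f"required battery capacity: {capacity:.3f} kWh per kW of PV")
```

Adding a realistic 30-min forecast error on top of the block means, as in the study's actual-forecast case, roughly doubles this requirement.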

  7. Compact wavelength-insensitive fabrication-tolerant silicon-on-insulator beam splitter.

    PubMed

    Rasigade, Gilles; Le Roux, Xavier; Marris-Morini, Delphine; Cassan, Eric; Vivien, Laurent

    2010-11-01

    A star coupler-based beam splitter for rib waveguides is reported. A design method is presented and applied in the case of silicon-on-insulator rib waveguides. Experimental results are in good agreement with simulations. Excess loss lower than 1 dB is experimentally obtained for star coupler lengths from 0.5 to 1 μm. Output balance is better than 1 dB, which is the measurement accuracy, and broadband transmission is obtained over 90 nm.

  8. Small range logarithm calculation on Intel Quartus II Verilog

    NASA Astrophysics Data System (ADS)

    Mustapha, Muhazam; Mokhtar, Anis Shahida; Ahmad, Azfar Asyrafie

    2018-02-01

    The logarithm function is the inverse of the exponential function. This paper implements a power series for the natural logarithm function in Verilog HDL in Quartus II. The RTL design mode is used in order to decrease the number of megafunctions. Simulations were done to determine the precision and the number of LEs used so that the output is calculated accurately. It is found that the accuracy of the system is only valid in the range of 1 to e.
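The finding that accuracy holds only on roughly the interval from 1 to e is consistent with the convergence behavior of the Maclaurin series ln(x) = Σ_{n≥1} (−1)^{n+1}(x−1)^n/n, which converges only for 0 < x ≤ 2. A short Python sketch (an illustration of the series itself, not the paper's Verilog implementation) shows the truncation error inside and just outside that range:

```python
import math

def ln_series(x, terms=64):
    """Truncated power series ln(x) = sum_{n>=1} (-1)^(n+1) (x-1)^n / n.

    The series converges only for 0 < x <= 2, which is why a hardware
    implementation of it is accurate only over a small input range."""
    t = x - 1.0
    return sum((-1) ** (n + 1) * t ** n / n for n in range(1, terms + 1))

# Inside the radius of convergence the truncation error is tiny ...
err_in = abs(ln_series(1.5) - math.log(1.5))
# ... but outside it the partial sums diverge badly.
err_out = abs(ln_series(2.5) - math.log(2.5))
```

A hardware design would evaluate the same partial sums in fixed point; inputs outside the convergence interval must first be range-reduced, e.g. via ln(2^k · m) = k·ln 2 + ln m.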

  9. Assessing Uncertainty in Deep Learning Techniques that Identify Atmospheric Rivers in Climate Simulations

    NASA Astrophysics Data System (ADS)

    Mahesh, A.; Mudigonda, M.; Kim, S. K.; Kashinath, K.; Kahou, S.; Michalski, V.; Williams, D. N.; Liu, Y.; Prabhat, M.; Loring, B.; O'Brien, T. A.; Collins, W. D.

    2017-12-01

    Atmospheric rivers (ARs) can be the difference between CA facing drought or hurricane-level storms. ARs are a form of extreme weather defined as long, narrow columns of moisture which transport water vapor outside the tropics. When they make landfall, they release the vapor as rain or snow. Convolutional neural networks (CNNs), a machine learning technique that uses filters to recognize features, are the leading computer vision mechanism for classifying multichannel images. CNNs have been proven to be effective in identifying extreme weather events in climate simulation output (Liu et al. 2016, ABDA'16, http://bit.ly/2hlrFNV). Here, we compare several CNN architectures, tuned with different hyperparameters and training schemes. We compare two-layer, three-layer, four-layer, and sixteen-layer CNNs' ability to recognize ARs in Community Atmospheric Model version 5 output, and we explore the ability of data augmentation and pre-trained models to increase the accuracy of the classifier. Because pre-training the model with regular images (i.e. benches, stoves, and dogs) yielded the highest accuracy rate, this strategy, also known as transfer learning, may be vital in future scientific CNNs, which likely will not have access to a large labelled training dataset. By choosing the most effective CNN architecture, climate scientists can build an accurate historical database of ARs, which can be used to develop a predictive understanding of these phenomena.

  10. Experimental study of overland flow resistance coefficient model of grassland based on BP neural network

    NASA Astrophysics Data System (ADS)

    Jiao, Peng; Yang, Er; Ni, Yong Xin

    2018-06-01

    Overland flow resistance on a 20° grassland slope was studied using simulated rainfall experiments. A model of the overland flow resistance coefficient was established based on a BP neural network. The input variables of the model were rainfall intensity, flow velocity, water depth, and slope surface roughness, and the output variable was the overland flow resistance coefficient. The model was optimized by a genetic algorithm. The results show that the model can be used to calculate the overland flow resistance coefficient with high simulation accuracy. The average prediction error of the optimized model on the test set is 8.02%, and the maximum prediction error is 18.34%.

  11. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, ray-tracing precision in the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV-powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can be very high in practice as well. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available. (author)

  12. Spectrum synthesis for a spectrally tunable light source based on a DMD-convex grating Offner configuration

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Pan, Qiao; Shen, Weimin

    2016-09-01

    As one kind of light source simulation device, spectrally tunable light sources can generate outputs with specific spectral shapes and radiant intensities according to different application requirements, and are in urgent demand in many fields of the national economy and the defense industry. Compared with LED-type spectrally tunable light sources, one based on a DMD-convex grating Offner configuration has the advantages of high spectral resolution, strong digital controllability, and high spectrum synthesis accuracy. As the key link by which this type of light source achieves a target spectrum output, a spectrum synthesis algorithm based on spectrum matching is therefore very important. An improved spectrum synthesis algorithm based on linear least squares initialization and Levenberg-Marquardt iterative optimization is proposed in this paper on the basis of an in-depth study of the spectrum matching principle. The effectiveness of the proposed method is verified by a series of simulations and experiments.

  13. Analytic variance estimates of Swank and Fano factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
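For reference, the Swank and Fano factors themselves are simple moment ratios of the detector output distribution; a minimal Python sketch (using the standard definitions, not the authors' variance estimator) is:

```python
import numpy as np

def swank_fano(samples):
    """Swank factor I = E[X]^2 / E[X^2] and Fano factor F = Var(X) / E[X],
    computed from samples of the detector output (e.g. optical quanta per
    absorbed x ray). Standard definitions, assumed here."""
    m1 = samples.mean()
    m2 = (samples ** 2).mean()
    return m1 * m1 / m2, samples.var() / m1

rng = np.random.default_rng(0)
# A Poisson-distributed output has Fano factor ~1 by construction.
out = rng.poisson(lam=100.0, size=200_000).astype(float)
I, F = swank_fano(out)
```

The variance estimators derived in the paper predict the spread of exactly these sample statistics; a bootstrap over the samples is a brute-force cross-check.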

  14. Use of medium-range numerical weather prediction model output to produce forecasts of streamflow

    USGS Publications Warehouse

    Clark, M.P.; Hay, L.E.

    2004-01-01

    This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive. MOS guidance increases the accuracy of precipitation forecasts over the northeastern United States, but overall, the accuracy of MOS-based precipitation forecasts is slightly lower than that of the raw NCEP forecasts. Four basins in the United States were chosen as case studies to evaluate the value of MRF output for predictions of streamflow. Streamflow forecasts using MRF output were generated for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). Hydrologic model output forced with measured station data was used as "truth" to focus attention on the hydrologic effects of errors in the MRF forecasts. Eight-day streamflow forecasts produced using the MOS-corrected MRF output as input (MOS) were compared with those produced using the climatic Ensemble Streamflow Prediction (ESP) technique. MOS-based streamflow forecasts showed increased skill in the snowmelt-dominated river basins, where daily variations in streamflow are strongly forced by temperature. In contrast, the skill of MOS forecasts in the rainfall-dominated basin (the Alapaha River) was equivalent to the skill of the ESP forecasts. Further improvements in streamflow forecasts require more accurate local-scale forecasts of precipitation and temperature, more accurate specification of basin initial conditions, and more accurate model simulations of streamflow. © 2004 American Meteorological Society.
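The forward screening multiple linear regression at the heart of the MOS step can be sketched generically (hypothetical predictor fields and station series; not the authors' code): at each step, the predictor that most reduces the residual sum of squares is added to the regression.

```python
import numpy as np

def forward_select(X, y, max_predictors=3):
    """Greedy forward screening: at each step add the column of X that
    most reduces the residual sum of squares of an OLS fit with intercept."""
    n, p = X.shape
    chosen, best_rss = [], np.sum((y - y.mean()) ** 2)
    for _ in range(max_predictors):
        cand = None
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen + [j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_rss, cand = rss, j
        if cand is None:
            break
        chosen.append(cand)
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                     # e.g. forecast fields at a grid point
y = 2.0 * X[:, 4] + 0.1 * rng.normal(size=500)    # station series driven by one field
picked = forward_select(X, y, max_predictors=2)   # screening finds column 4 first
```

In practice the screening would stop on a cross-validated skill criterion rather than a fixed predictor count.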

  15. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
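Because the emission rate enters a Gaussian plume model linearly, the least-squares estimate of that particular parameter is closed-form; a toy sketch with hypothetical dispersion coefficients and simulated "LIDAR" data (not the paper's estimation procedure, which also fits nonlinear parameters):

```python
import numpy as np

def plume_shape(x, y, z, u=5.0, H=50.0):
    """Concentration per unit emission rate for a Gaussian plume with ground
    reflection; sigma_y, sigma_z grow with downwind distance x via simple
    power laws (assumed here for illustration)."""
    sy, sz = 0.08 * x ** 0.9, 0.06 * x ** 0.9
    return (np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2)))
            / (2 * np.pi * u * sy * sz))

rng = np.random.default_rng(2)
x = rng.uniform(200, 2000, 300)                 # downwind sample locations, m
y = rng.uniform(-100, 100, 300)
z = rng.uniform(0, 120, 300)
f = plume_shape(x, y, z)                        # plume shape per unit Q
Q_true = 40.0                                   # hypothetical emission rate
c_obs = Q_true * f * (1 + 0.02 * rng.normal(size=300))  # noisy "LIDAR" data
Q_hat = np.sum(c_obs * f) / np.sum(f * f)       # linear least-squares estimate
```

Parametric studies like those in the paper would repeat such a fit for different measurement geometries to compare estimation accuracy across data acquisition strategies.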

  16. Design of a Programmable Star Tracker-Based Reference System for a Simulated Spacecraft

    DTIC Science & Technology

    2014-03-27

    This reduces the overall light intensity hitting the sensor, as indicated by the darker color. However, the red and green circles are also forming...may be beneficial on SimSat since we can control the light output depending on the source chosen. It is possible to sacrifice some star light intensity ...could be done to improve accuracy based on what could be controlled and changed easily. 3.2.3.1 Focal Length. The optics portion of the light collection

  17. Design and operational energy studies in a new high-rise office building. Volume 5. DOE-2: comparison with measured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    The purpose of the DOE-2 comparison study is to compare the energy consumption for the Park Plaza building predicted by the DOE-2.1 simulation with actual energy consumption measured by the data collection system at the building. The extensive and detailed output of the data collection system provides a basis for verifying the accuracy of DOE-2.1 at a component level over short intervals of time in each of the four seasons.

  18. Active Vibration damping of Smart composite beams based on system identification technique

    NASA Astrophysics Data System (ADS)

    Bendine, Kouider; Satla, Zouaoui; Boukhoulda, Farouk Benallel; Nouari, Mohammed

    2018-03-01

    In the present paper, the active vibration control of a composite beam using a piezoelectric actuator is investigated. The state space equation is determined using a system identification technique based on the structure's input-output response provided by the ANSYS APDL finite element package. A Linear Quadratic Gaussian (LQG) control law is designed and integrated into ANSYS APDL to perform closed-loop simulations. Numerical examples for different types of excitation loads are presented to test the efficiency and the accuracy of the proposed model.

  19. Evaluation and Application of Gridded Snow Water Equivalent Products for Improving Snowmelt Flood Predictions in the Red River Basin of the North

    NASA Astrophysics Data System (ADS)

    Schroeder, R.; Jacobs, J. M.; Vuyovich, C.; Cho, E.; Tuttle, S. E.

    2017-12-01

    Each spring the Red River basin (RRB) of the North, located between the states of Minnesota and North Dakota and southern Manitoba, is vulnerable to dangerous spring snowmelt floods. Flat terrain, low permeability soils and a lack of satisfactory ground observations of snow pack conditions make accurate predictions of the onset and magnitude of major spring flood events in the RRB very challenging. This study investigated the potential benefit of using gridded snow water equivalent (SWE) products from passive microwave satellite missions and model output simulations to improve snowmelt flood predictions in the RRB using NOAA's operational Community Hydrologic Prediction System (CHPS). Level-3 satellite SWE products from AMSR-E, AMSR2 and SSM/I, SWE computed from Level-2 brightness temperature (Tb) measurements, and model output simulations of SWE from SNODAS and GlobSnow-2 were chosen to support the snowmelt modeling exercises. SWE observations were aggregated spatially (i.e. to the NOAA North Central River Forecast Center forecast basins) and temporally (i.e. by obtaining daily screened and weekly unscreened maximum SWE composites) to assess the value of daily satellite SWE observations relative to weekly maximums. Data screening methods removed the impacts of snow melt and cloud contamination on SWE and consisted of diurnal SWE differences and a temperature-insensitive polarization difference ratio, respectively. We examined the ability of the satellite and model output simulations to capture peak SWE and investigated temporal accuracies of screened and unscreened satellite and model output SWE. The resulting SWE observations were employed to update the SNOW-17 snow accumulation and ablation model of CHPS to assess the benefit of using temporally and spatially consistent SWE observations for snow melt predictions in two test basins in the RRB.

  20. Research of three level match method about semantic web service based on ontology

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Fang

    2011-10-01

    An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies like UDDI and WSDL, with the disadvantages of user intervention, lack of semantic description, and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match Web services by utilizing the service profile and QoS parameters together with the input and output of the service. Simulation results show that it greatly enhances the speed of service matching while high accuracy is also guaranteed.

  1. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on beacon photogrammetry location techniques, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems, and camera coordinate systems were established respectively. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target coordinate systems and the camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were changed in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
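The linear solve for the rotation matrix and translation vector under the ideal pinhole model can be illustrated with a standard Direct Linear Transform (a generic formulation; the paper's exact MATLAB computation may differ):

```python
import numpy as np

def dlt_pose(X, x_norm):
    """Estimate [R|t] from 3-D beacon points X (n x 3) and normalized image
    points x_norm (n x 2), i.e. pixel coordinates premultiplied by K^-1.
    Standard Direct Linear Transform; needs n >= 6 non-coplanar points."""
    rows = []
    for Xi, (u, v) in zip(X, x_norm):
        Xh = np.append(Xi, 1.0)
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    P = Vt[-1].reshape(3, 4)                 # null vector, up to sign and scale
    if np.mean(X @ P[2, :3] + P[2, 3]) < 0:  # sign: beacons in front of camera
        P = -P
    P /= np.mean(np.linalg.norm(P[:, :3], axis=0))  # columns of R have unit norm
    U, _, Vt2 = np.linalg.svd(P[:, :3])             # project onto a rotation
    return U @ Vt2, P[:, 3]

# synthetic check against a known pose (hypothetical beacon layout)
rng = np.random.default_rng(3)
a = 0.2
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 5.0])
X = rng.uniform(-1, 1, (8, 3))
cam = X @ R_true.T + t_true
x_norm = cam[:, :2] / cam[:, 2:3]       # ideal pinhole projection
R_est, t_est = dlt_pose(X, x_norm)
```

With two cameras, as in the DCP setup, the same construction is applied per camera and the two poses are fused for the final position estimate.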

  2. Simulation study and experimental results for detection and classification of the transient capacitor inrush current using discrete wavelet transform and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Patcharoen, Theerasak; Yoomak, Suntiti; Ngaopitakkul, Atthapol; Pothisarn, Chaichan

    2018-04-01

    This paper describes the combination of discrete wavelet transforms (DWT) and artificial intelligence (AI), which are efficient techniques for identifying the type of inrush current and analyzing its origin and possible cause during capacitor bank switching. An experimental setup is used to verify that the proposed techniques can detect the transient inrush current and classify it against the normal capacitor rated current. The discrete wavelet transform is used to detect and classify the inrush current, and its output then acts as the input of a fuzzy inference system that discriminates the type of switching transient inrush current. The proposed technique shows enhanced performance with a discrimination accuracy of 90.57%. Both simulation and experimental results are quite satisfactory, providing the high accuracy and reliability needed to develop and implement a numerical overcurrent (50/51) and unbalanced current (60C) protection relay for shunt capacitor bank protection in the future.
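The role of the DWT stage, turning a switching transient into a localized spike of detail coefficients, can be sketched with a one-level Haar transform on a simulated capacitor current (hypothetical waveform parameters, not the paper's test signals):

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar DWT detail coefficients: pairwise differences scaled
    by 1/sqrt(2). A fast transient shows up as a large coefficient."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    return (s[:, 0] - s[:, 1]) / np.sqrt(2.0)

fs, f0 = 6400.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
i_cap = 10.0 * np.sin(2 * np.pi * f0 * t)   # steady capacitor current, A
# hypothetical inrush: a damped high-frequency oscillation starting at t = 0.1 s
k0 = int(0.1 * fs)
i_cap[k0:] += 80.0 * np.exp(-300 * (t[k0:] - 0.1)) * np.sin(2 * np.pi * 650 * (t[k0:] - 0.1))
d = np.abs(haar_detail(i_cap))
k_detected = 2 * int(np.argmax(d))          # map detail index back to samples
```

In the paper's pipeline, such detail-coefficient features (over several decomposition levels) feed the fuzzy inference system that labels the transient type.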

  3. The use of sequential indicator simulation to characterize geostatistical uncertainty; Yucca Mountain Site Characterization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.

  4. Solving Problems With SINDA/FLUINT

    NASA Technical Reports Server (NTRS)

    2002-01-01

    SINDA/FLUINT, the NASA standard software system for thermohydraulic analysis, provides computational simulation of interacting thermal and fluid effects in designs modeled as heat transfer and fluid flow networks. The product saves time and money by making the user's design process faster and easier, and allowing the user to gain a better understanding of complex systems. The code is completely extensible, allowing the user to choose the features, accuracy and approximation levels, and outputs. Users can also add their own customizations as needed to handle unique design tasks or to automate repetitive tasks. Applications for SINDA/FLUINT include the pharmaceutical, petrochemical, biomedical, electronics, and energy industries. The system has been used to simulate nuclear reactors, windshield wipers, and human windpipes. In the automotive industry, it simulates the transient liquid/vapor flows within air conditioning systems.

  5. Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary

    PubMed Central

    Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen

    2017-01-01

    Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115
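A common way to realize such a calibration is an ellipsoid fit to the raw magnetometer samples; the sketch below assumes a simplified diagonal soft-iron model with hard-iron bias (the paper's model additionally handles misalignment and uses gyro-aided interpolation):

```python
import numpy as np

def calibrate(m):
    """Fit an axis-aligned ellipsoid  sum_i p_i x_i^2 + q_i x_i = 1  to raw
    magnetometer samples m (n x 3). A diagonal soft-iron model is assumed
    for brevity. Returns bias b and per-axis scale s such that (m - b) / s
    lies on the unit sphere."""
    D = np.column_stack([m ** 2, m])                  # n x 6 design matrix
    coef, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
    p, q = coef[:3], coef[3:]
    b = -q / (2 * p)                                  # center = hard-iron bias
    G = 1.0 + np.sum(q ** 2 / (4 * p))                # completed-square constant
    s = np.sqrt(G / p)                                # semi-axes = soft-iron scales
    return b, s

# synthetic truth: unit field distorted by scale and bias (hypothetical values)
rng = np.random.default_rng(4)
h = rng.normal(size=(400, 3))
h /= np.linalg.norm(h, axis=1, keepdims=True)         # true field directions
s_true, b_true = np.array([1.2, 0.8, 1.1]), np.array([0.3, -0.2, 0.1])
m = h * s_true + b_true                               # raw measurements
b_est, s_est = calibrate(m)
norms = np.linalg.norm((m - b_est) / s_est, axis=1)   # ~1 after calibration
```

As the abstract notes, the same quadric-fitting idea carries over to tri-axis accelerometers, whose error model has the same form.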

  6. Cryogenic Pressure Calibrator for Wide Temperature Electronically Scanned (ESP) Pressure Modules

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.

    2001-01-01

    Electronically scanned pressure (ESP) modules have been developed that can operate in ambient and in cryogenic environments, particularly Langley's National Transonic Facility (NTF). Because they can operate directly in a cryogenic environment, their use eliminates many of the operational problems associated with using conventional modules at low temperatures. To ensure the accuracy of these new instruments, calibration was conducted in a laboratory simulating the environmental conditions of NTF. This paper discusses the calibration process by means of the simulation laboratory, the system inputs and outputs and the analysis of the calibration data. Calibration results of module M4, a wide temperature ESP module with 16 ports and a pressure range of +/- 4 psid are given.

  7. Experimental validation of Monte Carlo (MANTIS) simulated x-ray response of columnar CsI scintillator screens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freed, Melanie; Miller, Stuart; Tang, Katherine

    Purpose: MANTIS is a Monte Carlo code developed for the detailed simulation of columnar CsI scintillator screens in x-ray imaging systems. Validation of this code is needed to provide a reliable and valuable tool for system optimization and accurate reconstructions for a variety of x-ray applications. Whereas previous validation efforts have focused on matching of summary statistics, in this work the authors examine the complete point response function (PRF) of the detector system in addition to relative light output values. Methods: Relative light output values and high-resolution PRFs have been experimentally measured with a custom setup. A corresponding set of simulated light output values and PRFs have also been produced, where detailed knowledge of the experimental setup and CsI:Tl screen structures are accounted for in the simulations. Four different screens were investigated with different thicknesses, column tilt angles, and substrate types. A quantitative comparison between the experimental and simulated PRFs was performed for four different incidence angles (0 deg., 15 deg., 30 deg., and 45 deg.) and two different x-ray spectra (40 and 70 kVp). The figure of merit (FOM) used measures the normalized differences between the simulated and experimental data averaged over a region of interest. Results: Experimental relative light output values ranged from 1.456 to 1.650 and were in approximate agreement for aluminum substrates, but poor agreement for graphite substrates. The FOMs for all screen types, incidence angles, and energies ranged from 0.1929 to 0.4775. To put these FOMs in context, the same FOM was computed for 2D symmetric Gaussians fit to the same experimental data. These FOMs ranged from 0.2068 to 0.8029. Our analysis demonstrates that MANTIS reproduces experimental PRFs with higher accuracy than a symmetric 2D Gaussian fit to the experimental data in the majority of cases. Examination of the spatial distribution of differences between the PRFs shows that the main reason for errors between MANTIS and the experimental data is that MANTIS-generated PRFs are sharper than the experimental PRFs. Conclusions: The experimental validation of MANTIS performed in this study demonstrates that MANTIS is able to reliably predict experimental PRFs, especially for thinner screens, and can reproduce the highly asymmetric shape seen in the experimental data. As a result, optimizations and reconstructions carried out using MANTIS should yield results indicative of actual detector performance. Better characterization of screen properties is necessary to reconcile the simulated light output values with experimental data.

  8. Preliminary results on noncollocated torque control of space robot actuators

    NASA Technical Reports Server (NTRS)

    Tilley, Scott W.; Francis, Colin M.; Emerick, Ken; Hollars, Michael G.

    1989-01-01

    In the Space Station era, more operations will be performed robotically in space in the areas of servicing, assembly, and experiment tending among others. These robots may have various sets of requirements for accuracy, speed, and force generation, but there will be design constraints such as size, mass, and power dissipation limits. For actuation, a leading motor candidate is a dc brushless type, and there are numerous potential drive trains each with its own advantages and disadvantages. This experiment uses a harmonic drive and addresses some inherent limitations, namely its backdriveability and low frequency structural resonances. These effects are controlled and diminished by instrumenting the actuator system with a torque transducer on the output shaft. This noncollocated loop is closed to ensure that the commanded torque is accurately delivered to the manipulator link. The actuator system is modelled and its essential parameters identified. The nonlinear model for simulations will include inertias, gearing, stiction, flexibility, and the effects of output load variations. A linear model is extracted and used for designing the noncollocated torque and position feedback loops. These loops are simulated with the structural frequency encountered in the testbed system. Simulation results are given for various commands in position. The use of torque feedback is demonstrated to yield superior performance in settling time and positioning accuracy. An experimental setup being finished consists of a bench mounted motor and harmonic drive actuator system. A torque transducer and two position encoders, each with sufficient resolution and bandwidth, will provide sensory information. Parameters of the physical system are being identified and matched to analytical predictions. Initial feedback control laws will be incorporated in the bench test equipment and various experiments run to validate the designs. The status of these experiments is given.

  9. Adaptive optimal input design and parametric estimation of nonlinear dynamical systems: application to neuronal modeling.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2018-05-11

    Many physical models of biological processes including neural systems are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time consuming or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of collected data over single or few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yields fast convergence and increased accuracy of the model. We herein introduce an adaptive framework in which optimal input design is integrated with Square root Cubature Kalman Filters (OID-SCKF) to develop an online estimation procedure that first, converges significantly quicker, thereby permitting model fitting over shorter time windows, and second, enhances model accuracy when only few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability with gains up to around 1000 msec in speed and 81% increase in variability for the neural mass models. 
In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications. © 2018 IOP Publishing Ltd.
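    At the core of the SCKF-based methods benchmarked above is the cubature rule: the filter propagates 2n equally weighted sigma points placed at ±√n along the columns of the covariance square root. A minimal sketch of cubature-point generation and mean prediction (illustrative only; the actual OID-SCKF maintains square-root covariance factors throughout, which this sketch omits):

```python
import math

def cubature_points(x, S):
    """Generate the 2n cubature points for state mean x (length-n list)
    and covariance square root S (n x n, list of rows).  Points are
    x +/- sqrt(n) * (i-th column of S), each with weight 1/(2n)."""
    n = len(x)
    scale = math.sqrt(n)
    pts = []
    for i in range(n):
        col = [S[r][i] for r in range(n)]
        pts.append([x[r] + scale * col[r] for r in range(n)])
        pts.append([x[r] - scale * col[r] for r in range(n)])
    return pts

def predict_mean(f, x, S):
    """Propagate each cubature point through the (nonlinear) dynamics f
    and average the results: the cubature approximation of E[f(x)]."""
    n = len(x)
    out = [f(p) for p in cubature_points(x, S)]
    m = len(out[0])
    return [sum(o[j] for o in out) / (2 * n) for j in range(m)]
```

    For a linear f the cubature average recovers f(x) exactly; the rule's value is that it remains third-order accurate for nonlinear dynamics without computing Jacobians.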

  10. Simulation of temperature field for temperature-controlled radio frequency ablation using a hyperbolic bioheat equation and temperature-varied voltage calibration: a liver-mimicking phantom study.

    PubMed

    Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng

    2015-12-21

    This study aims to improve the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method for the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in RFA simulations with longer durations and higher power. A total of 40 RFA experiments were conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed with the finite element method in COMSOL software: the HBE with and without voltage calibration, and the Pennes bioheat equation (PBE) with and without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from the experimental power output and the temperature-dependent resistance of liver tissue. We employed the HBE in the simulation with a delay time τ of 16 s. First, for simulations with each kind of bioheat equation (PBE or HBE), we compared the differences between the temperature-varied voltage-calibration values and the fixed-voltage values used in the simulations. Then, comparisons were conducted between the PBE and the HBE in simulations with temperature-varied voltage calibration. We verified the simulation results against experimental temperature measurements at nine specific points in the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration, high-power RFA.
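    For reference, the two heat-transfer formulations compared above take the following standard forms (symbols as commonly defined in the RFA literature; the exact source terms and boundary conditions of the authors' COMSOL models may differ, and perfusion is typically zero in a phantom):

```latex
% Pennes bioheat equation (PBE): parabolic, i.e. instantaneous heat flux
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b \omega_b (T_b - T) + Q_{\mathrm{RF}}

% Hyperbolic bioheat equation (HBE): adds a thermal relaxation (delay)
% time \tau, taken as 16 s in this study, giving finite heat-wave speed
\tau \rho c \frac{\partial^2 T}{\partial t^2}
  + \rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b \omega_b (T_b - T) + Q_{\mathrm{RF}}
```

    The HBE reduces to the PBE as τ → 0; the second-order time derivative matters most for short, intense heating, which is why the authors tested it for long-duration, high-power ablation.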

  11. Influence of the width and cross-sectional shape of major connectors of maxillary dentures on the accuracy of speech production.

    PubMed

    Wada, Junichiro; Hideshima, Masayuki; Inukai, Shusuke; Matsuura, Hiroshi; Wakabayashi, Noriyuki

    2014-01-01

    To investigate the effects of the width and cross-sectional shape of the major connectors of maxillary dentures located in the middle area of the palate on the accuracy of phonetic output of consonants using an originally developed speech recognition system. Nine adults (4 males and 5 females, aged 24-26 years) with sound dentition were recruited. The following six sounds were considered: [∫i], [t∫i], [ɾi], [ni], [çi], and [ki]. The experimental connectors were fabricated to simulate bars (narrow, 8-mm width) and plates (wide, 20-mm width). Two types of cross-sectional shapes in the sagittal plane were specified: flat and plump edge. The appearance ratio of phonetic segment labels was calculated with the speech recognition system to indicate the accuracy of phonetic output. Statistical analysis was conducted using one-way ANOVA and Tukey's test. The mean appearance ratio of correct labels (MARC) significantly decreased for [ni] with the plump edge (narrow connector) and for [ki] with both the flat and plump edge (wide connectors). For [çi], the MARCs tended to be lower with flat plates. There were no significant differences for the other consonants. The width and cross-sectional shape of the connectors had limited effects on the articulation of consonants at the palate. © 2015 S. Karger AG, Basel.

  12. A passive and active microwave-vector radiative transfer (PAM-VRT) model

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Min, Qilong

    2015-11-01

    A passive and active microwave vector radiative transfer (PAM-VRT) package has been developed. This fast and accurate forward microwave model, with flexible and versatile input and output components, self-consistently and realistically simulates the measurements/radiation of passive and active microwave sensors. The core of PAM-VRT, the microwave radiative transfer model, consists of five modules: gas absorption (two line-by-line databases and four fast models); hydrometeor properties of water droplets and ice (spherical and nonspherical) particles; surface emissivity (from the Community Radiative Transfer Model (CRTM)); vector radiative transfer by successive orders of scattering (VSOS); and passive and active microwave simulation. The PAM-VRT package has been validated against other existing models, demonstrating good accuracy. PAM-VRT can be used not only to simulate or assimilate measurements of existing microwave sensors, but also to simulate observations from new microwave sensors.

  13. Evaluation of statistically downscaled GCM output as input for hydrological and stream temperature simulation in the Apalachicola–Chattahoochee–Flint River Basin (1961–99)

    USGS Publications Warehouse

    Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven

    2014-01-01

    The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km² basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks. The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output.
    Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging needed for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output. Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics which perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of analysis can be used to give product users an indication of the applicability of the results to their particular problem. Since results for historical conditions indicate that model outputs can have significant biases associated with them, examining the range in future projections in terms of change relative to historical conditions for each individual GCM may be more appropriate.
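    The two-sample KS statistic used above is simply the maximum gap between the two empirical CDFs. In practice one would call scipy.stats.ks_2samp; a pure-Python sketch of the statistic itself:

```python
import bisect

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)

    def ecdf(s, x):
        # fraction of sorted sample s that is <= x
        return bisect.bisect_right(s, x) / len(s)

    # the ECDF difference can only change at observed values
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)
```

    Identical samples give 0, fully separated samples give 1; the study compares this statistic at increasing averaging windows (daily, subweekly, up to 30 days) until the distributions become indistinguishable.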

  14. Accuracy and calibration of integrated radiation output indicators in diagnostic radiology: A report of the AAPM Imaging Physics Committee Task Group 190

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Pei-Jan P., E-mail: Pei-Jan.Lin@vcuhealth.org; Schueler, Beth A.; Balter, Stephen

    2015-12-15

    Due to the proliferation of disciplines employing fluoroscopy as their primary imaging tool and the prolonged extensive use of fluoroscopy in interventional and cardiovascular angiography procedures, “dose-area-product” (DAP) meters were installed to monitor and record the radiation dose delivered to patients. In some cases, the radiation dose or the output value is calculated, rather than measured, using the pertinent radiological parameters and geometrical information. The AAPM Task Group 190 (TG-190) was established to evaluate the accuracy of the DAP meter in 2008. Since then, the term “DAP meter” has been revised to air kerma-area product (KAP) meter. The charge of TG-190 (Accuracy and Calibration of Integrated Radiation Output Indicators in Diagnostic Radiology) has also been realigned to investigate the “Accuracy and Calibration of Integrated Radiation Output Indicators,” which is reflected in the title of the task group, to include situations where the KAP may be acquired with or without the presence of a physical “meter.” To accomplish this goal, validation test protocols were developed to compare the displayed radiation output value to an external measurement. These test protocols were applied to a number of clinical systems to collect information on the accuracy of dose display values in the field.

  15. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.

    PubMed

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-08-14

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
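    The abstract does not give the configuration's geometry, but the underlying principle, recovering angular acceleration from spatially separated linear accelerometers, can be illustrated with the simplest planar case. The geometry below is hypothetical and not the authors' configuration:

```python
def angular_acceleration(a1, a2, d):
    """Planar rigid body: two accelerometers a baseline d apart, each
    sensing tangential acceleration.  The differential output gives
    alpha = (a2 - a1) / d, so common-mode linear acceleration cancels.
    With independent sensor noise of std sigma, the angular-acceleration
    noise is sqrt(2) * sigma / d: a longer baseline lowers the noise,
    which is why the configuration geometry matters."""
    return (a2 - a1) / d

def rms(xs):
    """Root mean square, the noise measure analyzed in the paper."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5
```

    The paper's configuration extends this idea to three axes with redundant sensors, which is also what makes it less sensitive to installation-position and orientation errors.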

  16. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms

    PubMed Central

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-01-01

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms. PMID:26287203

  17. Coded-Aperture X- or gamma-ray telescope with Least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.

  18. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  19. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  20. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  1. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  2. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  3. Quaternion Based Thermal Condition Monitoring System

    NASA Astrophysics Data System (ADS)

    Wong, Wai Kit; Loo, Chu Kiong; Lim, Way Soong; Tan, Poi Ngee

    In this paper, we propose a new and effective machine condition monitoring system using a log-polar mapper, a quaternion-based thermal image correlator, and a max-product fuzzy neural network classifier. Two classification characteristics are applied in the proposed system: the peak-to-sidelobe ratio (PSR) and the real-to-complex ratio of the discrete quaternion correlation output (p-value). A large PSR and p-value are observed when the input thermal image correlates well with a particular reference image, while small values are observed when the match is poor. In simulation, we also find that log-polar mapping helps solve rotation and scaling invariance problems in quaternion-based thermal image correlation. Besides that, log-polar mapping provides roughly two-fold data compression and smooths the output correlation plane, allowing more reliable measurement of PSR and p-values. Simulation results show that the proposed system is an efficient machine condition monitoring system with accuracy above 98%.
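    PSR is the standard sharpness measure of a correlation plane: how far the peak stands above the statistics of the surrounding sidelobes. A minimal sketch (the exclusion-window size and normalization are illustrative choices, not taken from the paper):

```python
def peak_to_sidelobe_ratio(plane, exclude=1):
    """PSR of a 2-D correlation plane (list of rows): peak value
    relative to the mean and standard deviation of the sidelobe
    region, i.e. the plane with a small window around the peak
    masked out.  Higher PSR means a sharper, more confident match."""
    flat = [(v, r, c) for r, row in enumerate(plane)
                      for c, v in enumerate(row)]
    peak, pr, pc = max(flat)  # largest value and its location
    side = [v for v, r, c in flat
            if abs(r - pr) > exclude or abs(c - pc) > exclude]
    mean = sum(side) / len(side)
    std = (sum((v - mean) ** 2 for v in side) / len(side)) ** 0.5
    return (peak - mean) / std if std > 0 else float("inf")
```

    A match between the input and the correct reference produces a high PSR; a mismatch spreads energy into the sidelobes and drives the PSR down, which is what the fuzzy classifier then thresholds.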

  4. Modeling of frequency agile devices: development of PKI neuromodeling library based on hierarchical network structure

    NASA Astrophysics Data System (ADS)

    Sanchez, P.; Hinojosa, J.; Ruiz, R.

    2005-06-01

    Recently, neuromodeling methods for microwave devices have been developed. These methods are suitable for generating models of novel devices, and they allow fast and accurate simulations and optimizations. However, developing libraries with these methods is a formidable task, since it requires massive input-output data provided by an electromagnetic simulator or by measurements, together with repeated artificial neural network (ANN) training. This paper presents a strategy that reduces the cost of library development while retaining the advantages of neuromodeling methods: high accuracy, a large range of geometrical and material parameters, and reduced CPU time. The library models are developed from a set of base prior knowledge input (PKI) models, which capture the characteristics common to all the models in the library, and high-level ANNs which produce the library model outputs from the base PKI models. This technique is illustrated for a microwave multiconductor tunable phase shifter using anisotropic substrates. Closed-form relationships have been developed and are presented in this paper. The results show good agreement with the expected ones.

  5. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
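    The quadratic (three-point parabolic) interpolator discussed above is a standard sub-pixel estimator: fit a parabola through the peak sample and its two neighbors, then take the vertex. The bias the paper analyzes arises because the true correlation peak is generally not parabolic, which pulls the estimate toward the nearest pixel center. A minimal sketch of the estimator itself:

```python
def quadratic_subpixel(cm1, c0, cp1):
    """Sub-pixel offset of a correlation peak from three samples at
    offsets -1, 0, +1 (c0 is the peak sample).  Fits a parabola and
    returns the offset of its vertex, in (-0.5, +0.5) pixels."""
    denom = cm1 - 2.0 * c0 + cp1
    if denom == 0:
        return 0.0  # degenerate (flat) neighborhood
    return 0.5 * (cm1 - cp1) / denom
```

    For a truly parabolic peak the estimator is exact; for sharper (e.g. triangular or sinc-like) peaks it is biased, and the paper's proposed fix is to modify the interpolation method rather than the correlator.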

  6. Hybrid clustering based fuzzy structure for vibration control - Part 1: A novel algorithm for building neuro-fuzzy system

    NASA Astrophysics Data System (ADS)

    Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-01-01

    This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following issues are addressed. First, a data-merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial reason for this step is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished using a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before the clustering process resumes in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on a numerical data set, 'Daily Data of Stock A', and on measured data sets from a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated through both theoretical and testing approaches.

  7. DigR: a generic model and its open source simulation software to mimic three-dimensional root-system architecture diversity.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Griffon, Sébastien; Jourdan, Christophe

    2018-04-18

    Many studies exist in the literature dealing with mathematical representations of root systems, categorized, for example, as pure structure descriptions, partial differential equations or functional-structural plant models. However, in these studies, root architecture modelling has seldom been carried out at the organ level with the inclusion of environmental influences that can be integrated into a whole plant characterization. We have conducted a multidisciplinary study on root systems including field observations, architectural analysis, and formal and mathematical modelling. This integrative and coherent approach leads to a generic model (DigR) and its software simulator. Architectural analysis applied to root systems supports root type classification and architectural unit design for each species. Roots belonging to a particular type share dynamic and morphological characteristics which consist of topological and geometric features. The DigR simulator is integrated into the Xplo environment, with a user interface to input parameter values and make output ready for dynamic 3-D visualization, statistical analysis and saving to standard formats. DigR runs as a quasi-parallel computing algorithm and may be used either as a standalone tool or integrated into other simulation platforms. The software is open-source and free to download at http://amapstudio.cirad.fr/soft/xplo/download. DigR is based on three key points: (1) a root-system architectural analysis, (2) root type classification and modelling and (3) a restricted set of 23 root type parameters with flexible values indexed in terms of root position. Genericity and botanical accuracy of the model is demonstrated for growth, branching, mortality and reiteration processes, and for different root architectures. Plugin examples demonstrate the model's versatility at simulating plastic responses to environmental constraints.
Outputs of the model include diverse root system structures such as tap-root, fasciculate, tuberous, nodulated and clustered root systems. DigR is based on plant architecture analysis which leads to specific root type classification and organization that are directly linked to field measurements. The open source simulator of the model has been included within a friendly user environment. DigR accuracy and versatility are demonstrated for growth simulations of complex root systems for both annual and perennial plants.

  8. Capabilities and applications of the Program to Optimize Simulated Trajectories (POST). Program summary document

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Stevenson, R.

    1977-01-01

    The capabilities and applications of the three-degree-of-freedom (3DOF) and six-degree-of-freedom (6DOF) versions of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies the basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine whether they are applicable to their problems, and gives managerial personnel enough information to evaluate the programs' capabilities. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements, and provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.

  9. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as from the epistemic and aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate a recently developed multivariate post-processing method for historical simulations, and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is addressed for the historical period 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods, Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16-degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that applying the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.
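    The copula-based transfer function described above conditions the observed distribution on the simulated value. A much simpler stand-in that conveys the same idea of distribution-to-distribution correction is plain empirical quantile mapping; the sketch below is that simpler method, not the authors' copula framework:

```python
import bisect

def quantile_map(sim_hist, obs_hist, value):
    """Map a simulated value onto the observed distribution by
    matching empirical quantiles: find the non-exceedance probability
    of `value` among historical simulations, then return the
    observation at the same rank (nearest-rank rule)."""
    s, o = sorted(sim_hist), sorted(obs_hist)
    p = bisect.bisect_right(s, value) / len(s)
    idx = min(int(p * len(o)), len(o) - 1)
    return o[idx]
```

    Unlike the copula approach, quantile mapping discards the dependence structure between simulation and observation; the Bayesian copula framework keeps it, which is what yields a full predictive distribution rather than a point correction.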

  10. Next Processor Module: A Hardware Accelerator of UT699 LEON3-FT System for On-Board Computer Software Simulation

    NASA Astrophysics Data System (ADS)

    Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David

    2014-08-01

    On-Board Computers (OBC) increasingly use integrated systems-on-chip (SOC) that embed processors running from 50 MHz up to several hundred MHz, around which are plugged dedicated communication controllers together with other input/output channels. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achievable with current purely software simulators. For a few years, hybrid solutions have been put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work undertaken by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated hardware simulator of the UT699 by mid-2012 and is now qualified and fully used in operational contexts.

  11. Inference of Surface Parameters from Near-Infrared Spectra of Crystalline H2O Ice with Neural Learning

    NASA Astrophysics Data System (ADS)

    Zhang, Lili; Merényi, Erzsébet; Grundy, William M.; Young, Eliot F.

    2010-07-01

    The near-infrared spectra of icy volatiles collected from planetary surfaces can be used to infer surface parameters, which in turn may depend on the recent geologic history. The high dimensionality and complexity of the spectral data, the subtle differences between the spectra, and the highly nonlinear interplay between surface parameters make it often difficult to accurately derive these surface parameters. We use a neural machine, with a Self-Organizing Map (SOM) as its hidden layer, to infer the latent physical parameters, temperature and grain size from near-infrared spectra of crystalline H2O ice. The output layer of the SOM-hybrid machine is customarily trained with only the output from the SOM winner. We show that this scheme prevents simultaneous achievement of high prediction accuracies for both parameters. We propose an innovative neural architecture we call Conjoined Twins that allows multiple (k) SOM winners to participate in the training of the output layer and in which the customization of k can be limited automatically to a small range. With this novel machine we achieve scientifically useful accuracies, 83.0 ± 2.7% and 100.0 ± 0.0%, for temperature and grain size, respectively, from simulated noiseless spectra. We also show that the performance of the neural model is robust under various noisy conditions. A primary application of this prediction capability is planned for spectra returned from the Pluto-Charon system by New Horizons.

  12. Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output

    DOE PAGES

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    2017-09-10

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
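The CV-RMSE figures quoted above are a standard normalized error metric; a minimal sketch of the metric itself, with invented observations and predictions:

```python
# CV-RMSE: root-mean-square error of the residuals, normalized by the mean
# observed value. The toy data below are illustrative assumptions.
import math

def cv_rmse(observed, predicted):
    """Coefficient of variation of the RMSE (fraction, e.g. 0.04 = 4%)."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

obs = [100.0, 200.0, 300.0, 400.0]     # measured hourly PV power (W)
pred = [110.0, 190.0, 310.0, 390.0]    # model predictions
print(f"CV-RMSE = {cv_rmse(obs, pred):.1%}")  # RMSE 10 / mean 250 = 4.0%
```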

  13. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

    PubMed Central

    Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B.; Rowley, Andrew; Sugiarto, Indar; Furber, Steve

    2017-01-01

    We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. 
Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W. PMID:28848380
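The abstract ties the 0.1 ms time-step to solution accuracy for Izhikevich's equations. As a hedged illustration, here is a forward-Euler integration of the standard Izhikevich model (regular-spiking parameters from the published model; the input current and duration are arbitrary choices, and this is not the SpiNNaker implementation):

```python
# Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I;  u' = a(bv - u);
# on v >= 30 mV: v <- c, u <- u + d. Regular-spiking parameters assumed.
def izhikevich(I, dt=0.1, t_end=100.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Integrate for t_end ms with step dt ms; return spike times (ms)."""
    v, u, spikes = c, b * c, []
    for step in range(int(t_end / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record and reset membrane state
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0, dt=0.1)))   # tonic spiking under constant drive
```

Halving dt shifts the computed spike times, which is the accuracy/real-time trade-off the abstract describes.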

  15. Use of High-Resolution WRF Simulations to Forecast Lightning Threat

    NASA Technical Reports Server (NTRS)

    McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.

    2008-01-01

    Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15°C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yields satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.
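The graupel-flux proxy described above can be sketched as a per-column product of updraft speed and graupel mixing ratio on the -15°C model level. The calibration constant and the toy column values are placeholder assumptions, not the authors' calibration:

```python
# Threat proxy ~ calib * w * q_graupel at the -15 °C level; downdrafts
# (w < 0) contribute nothing. All numbers here are illustrative.
def lightning_threat(w, q_graupel, calib=0.5):
    """Per-column lightning threat proxy from upward graupel flux."""
    return [calib * max(wi, 0.0) * qi for wi, qi in zip(w, q_graupel)]

w_col = [2.0, 8.0, -1.0]     # vertical velocity at -15 °C (m/s), per column
qg_col = [0.1, 0.5, 0.3]     # graupel mixing ratio (g/kg), per column
print(lightning_threat(w_col, qg_col))  # downdraft column contributes zero
```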

  16. Prediction of Land use changes using CA in GIS Environment

    NASA Astrophysics Data System (ADS)

    Kiavarz Moghaddam, H.; Samadzadegan, F.

    2009-04-01

    Urban growth is a typical self-organized system that results from the interaction between three defined systems: the developed urban system, the natural non-urban system and the planned urban system. Urban growth simulation for an artificial city is carried out first. It evaluates a number of urban sprawl parameters, including the size and shape of the neighborhood, besides testing different types of constraints on urban growth simulation. The results indicate that a circular-type neighborhood shows smoother but faster urban growth as compared to the nine-cell Moore neighborhood. Cellular Automata (CA) proves to be very efficient in simulating urban growth over time. The strength of this technology comes from the ability of the urban modeler to implement the growth simulation model, evaluate the results and present the output simulation results in a visually interpretable environment. The artificial city simulation model provides an excellent environment to test a number of simulation parameters, such as the neighborhood influence on growth results and the role of constraints in driving urban growth. Also, CA rule definition is a critical stage in simulating the urban growth pattern in a manner close to reality. CA urban growth simulation and prediction of Tehran over the last four decades succeeds in simulating the specified tested growth years at a high accuracy level. Some real data layers were used in the CA simulation training phase (such as 1995) while others were used for testing the prediction results (such as 2002). Tuning the CA growth rules is important through comparing the simulated images with the real data to obtain feedback. An important note is that CA rules also need to be modified over time to adapt to the urban growth pattern. The evaluation method used on a region basis has the advantage of covering the spatial distribution component of the urban growth process. 
The next step includes running the developed CA simulation over classified raster data for three years in a developed ArcGIS extension. A set of crisp rules is defined and calibrated based on the real urban growth pattern. Uncertainty analysis is performed to evaluate the accuracy of the simulated results as compared to the historical real data. The evaluation shows promising results, represented by the high average accuracies achieved. The average accuracy for the predicted growth images of 1964 and 2002 is over 80%. Modifying CA growth rules over time to match growth pattern changes is important to obtain an accurate simulation. This modification is based on the urban growth relationship for Tehran over time, as can be seen in the historical raster data. The feedback obtained from comparing the simulated and real data is crucial in identifying the optimal set of CA rules for reliable simulation and calibrating growth steps.
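A crisp CA growth rule of the kind calibrated above can be sketched as: a non-urban cell urbanizes when enough of its nine-cell Moore neighborhood is already urban. The threshold and seed grid are illustrative assumptions, not the paper's calibrated rules:

```python
# One synchronous CA update on a binary grid; 1 = urban, 0 = non-urban.
def ca_step(grid, threshold=3):
    """Urbanize cells with >= threshold urban Moore neighbours."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                urban = sum(grid[rr][cc]
                            for rr in range(max(r - 1, 0), min(r + 2, rows))
                            for cc in range(max(c - 1, 0), min(c + 2, cols))
                            if (rr, cc) != (r, c))
                if urban >= threshold:
                    new[r][c] = 1
    return new

seed = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
print(ca_step(seed))   # the centre cell urbanizes (3 urban neighbours)
```

Calibration then amounts to comparing iterated `ca_step` output against the historical rasters and adjusting the rule, as the abstract describes.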

  17. Modifications to the accuracy assessment analysis routine SPATL to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    SPATL is an analysis program in the Accuracy Assessment Software System which makes comparisons between ground truth information and dot labeling for an individual segment. In order to facilitate the aggregation of this information, SPATL was modified to produce a disk output file containing the necessary information about each segment.

  18. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    PubMed

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct the structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
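The two formulas combined above can be sketched directly. The grain diameter, fluid properties, Archie cementation exponent, and resistivity values below are assumed illustrative numbers, not the study's data:

```python
# Archie's law to get porosity, then Kozeny-Carman to get hydraulic
# conductivity K (m/s). SI units throughout; parameters are assumptions.
def archie_porosity(rho_bulk, rho_water, m=2.0):
    """Archie's law: formation factor F = rho_bulk/rho_water = phi**(-m)."""
    return (rho_bulk / rho_water) ** (-1.0 / m)

def kozeny_carman(d, phi, rho=1000.0, g=9.81, mu=1.0e-3):
    """K = (rho*g/mu) * (d^2/180) * phi^3/(1-phi)^2, d = grain diameter (m)."""
    return (rho * g / mu) * (d ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

phi = archie_porosity(rho_bulk=40.0, rho_water=10.0)   # formation factor 4
print(phi, kozeny_carman(d=2e-4, phi=phi))             # K on the order of 1e-3 m/s
```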

  19. A ferrofluid-based neural network: design of an analogue associative memory

    NASA Astrophysics Data System (ADS)

    Palm, R.; Korenivski, V.

    2009-02-01

    We analyse an associative memory based on a ferrofluid, consisting of a system of magnetic nano-particles suspended in a carrier fluid of variable viscosity subject to patterns of magnetic fields from an array of input and output magnetic pads. The association relies on forming patterns in the ferrofluid during a training phase, in which the magnetic dipoles are free to move and rotate to minimize the total energy of the system. Once equilibrated in energy for a given input-output magnetic field pattern pair, the particles are fully or partially immobilized by cooling the carrier liquid. The particle distributions thus produced control the memory states, which are read out magnetically using spin-valve sensors incorporated into the output pads. The actual memory consists of spin distributions that are dynamic in nature, realized only in response to the input patterns that the system has been trained for. Two training algorithms for storing multiple patterns are investigated. Using Monte Carlo simulations of the physical system, we demonstrate that the device is capable of storing and recalling two sets of images, each with an accuracy approaching 100%.

  20. S-191 sensor performance evaluation

    NASA Technical Reports Server (NTRS)

    Hughes, C. L.

    1975-01-01

    A final analysis was performed on the Skylab S-191 spectrometer data received from missions SL-2, SL-3, and SL-4. The repeatability and accuracy of the S-191 spectroradiometric internal calibration was determined by correlation to the output obtained from well-defined external targets. These included targets on the moon and earth as well as deep space. In addition, the accuracy of the S-191 short wavelength autocalibration was flight checked by correlation of the earth resources experimental package S-191 outputs and the Backup Unit S-191 outputs after viewing selected targets on the moon.

  1. A random generation approach to pattern library creation for full chip lithographic simulation

    NASA Astrophysics Data System (ADS)

    Zou, Elain; Hong, Sid; Liu, Limei; Huang, Lucas; Yang, Legender; Kabeel, Aliaa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Du, Chunshan; Hu, Xinyi; Wan, Qijian; Zhang, Recoo

    2017-04-01

    As technology advances, the need for running lithographic (litho) checking for early detection of hotspots before tapeout has become essential. This process is important at all levels—from designing standard cells and small blocks to large intellectual property (IP) and full chip layouts. Litho simulation provides high accuracy for detecting printability issues due to problematic geometries, but it has the disadvantage of slow performance on large designs and blocks [1]. Foundries have found a good compromise solution for running litho simulation on full chips by filtering out potential candidate hotspot patterns using pattern matching (PM), and then performing simulation on the matched locations. The challenge has always been how to easily create a PM library of candidate patterns that provides both comprehensive coverage for litho problems and fast runtime performance. This paper presents a new strategy for generating candidate real design patterns through a random generation approach using a layout schema generator (LSG) utility. The output patterns from the LSG are simulated, and then classified by a scoring mechanism that categorizes patterns according to the severity of the hotspots, probability of their presence in the design, and the likelihood of the pattern causing a hotspot. The scoring output helps to filter out the yield problematic patterns that should be removed from any standard cell design, and also to define potential problematic patterns that must be simulated within a bigger context to decide whether or not they represent an actual hotspot. This flow is demonstrated on SMIC 14nm technology, creating a candidate hotspot pattern library that can be used in full chip simulation with very high coverage and robust performance.

  2. Finite element simulation of adaptive aerospace structures with SMA actuators

    NASA Astrophysics Data System (ADS)

    Frautschi, Jason; Seelecke, Stefan

    2003-07-01

    The particular demands of aerospace engineering have spawned many of the developments in the field of adaptive structures. Shape memory alloys are particularly attractive as actuators in these types of structures due to their large strains, high specific work output and potential for structural integration. However, the requisite extensive physical testing has slowed development of potential applications and highlighted the need for a simulation tool for feasibility studies. In this paper we present an implementation of an extended version of the Müller-Achenbach SMA model into a commercial finite element code suitable for such studies. Interaction between the SMA model and the solution algorithm for the global FE equations is thoroughly investigated with respect to the effect of tolerances and time step size on convergence, computational cost and accuracy. Finally, a simulation of a SMA-actuated flexible trailing edge of an aircraft wing modeled with beam elements is presented.

  3. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and accounting for the light resolution obtained from the experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and the experiment.
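The zero crossing method named above discriminates particle types by when a bipolar-shaped pulse crosses zero: slower (neutron-like) scintillation decays cross later than gamma-like ones. A toy sketch, with invented pulse shapes standing in for real shaped detector signals:

```python
# Locate the downward zero crossing of a shaped pulse by linear interpolation
# between samples; later crossing times indicate neutron-like events.
def zero_crossing_time(pulse, dt=1.0):
    """Return the time at which the pulse first crosses zero downward."""
    for i in range(1, len(pulse)):
        if pulse[i - 1] > 0.0 >= pulse[i]:
            frac = pulse[i - 1] / (pulse[i - 1] - pulse[i])
            return (i - 1 + frac) * dt
    return None

gamma_like = [1.0, 0.5, -0.5, -1.0]    # fast decay: early crossing
neutron_like = [1.0, 0.8, 0.4, -0.4]   # slow decay: late crossing
print(zero_crossing_time(gamma_like), zero_crossing_time(neutron_like))
```

A threshold on the crossing time then separates the two event populations.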

  4. Simulation of Distributed PV Power Output in Oahu Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lave, Matthew Samuel

    2016-08-01

    Distributed solar photovoltaic (PV) power generation in Oahu has grown rapidly since 2008. For applications such as determining the value of energy storage, it is important to have PV power output timeseries. Since these timeseries are not typically measured, here we produce simulated distributed PV power output for Oahu. Simulated power output is based on (a) satellite-derived solar irradiance, (b) PV permit data by neighborhood, and (c) population data by census block. Permit and population data were used to model locations of distributed PV, and irradiance data was then used to simulate power output. PV power output simulations are presented by sub-neighborhood polygons, neighborhoods, and for the whole island of Oahu. Summary plots of annual PV energy and a sample week timeseries of power output are shown, and the files containing the entire timeseries are described.

  5. The Phoretic Motion Experiment (PME) definition phase

    NASA Technical Reports Server (NTRS)

    Eaton, L. R.; Neste, S. L. (Editor)

    1982-01-01

    The aerosol generator and the charge flow device (CFD) chamber which were designed for zero-gravity operation were analyzed. Characteristics of the CFD chamber and aerosol generator which would be useful for cloud physics experimentation in a one-g as well as a zero-g environment are documented. The collision type of aerosol generator is addressed. Relationships among the various input and output parameters are derived and subsequently used to determine the requirements on the controls of the input parameters to assure a given error budget of an output parameter. The CFD chamber operation in a zero-g environment is assessed utilizing a computer simulation program. Low nuclei critical supersaturation and high experiment accuracies are emphasized, which lead to droplet growth times extending into hundreds of seconds. The analysis was extended to assess the performance constraints of the CFD chamber in a one-g environment operating in the horizontal mode.

  6. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
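The adjoint-weighted residual estimate named above reduces, in discrete form, to an inner product: the error in an output J is approximated by -ψᵀR(u_H), and cells with large local contributions |ψᵢRᵢ| are flagged for refinement. A toy sketch with invented adjoint and residual vectors:

```python
# delta J ~ -psi^T R(u_H): adjoint-weighted residual output-error estimate.
# The vectors below are illustrative assumptions, not solver output.
def output_error_estimate(adjoint, residual):
    """Estimated error in the output functional J."""
    return -sum(p * r for p, r in zip(adjoint, residual))

def cells_to_refine(adjoint, residual, tol):
    """Indices whose local error contribution |psi_i * R_i| exceeds tol."""
    return [i for i, (p, r) in enumerate(zip(adjoint, residual))
            if abs(p * r) > tol]

psi = [0.1, 2.0, 0.05]      # discrete adjoint (output sensitivity per cell)
res = [1e-3, 5e-3, 2e-2]    # coarse-solution residual per cell
print(output_error_estimate(psi, res), cells_to_refine(psi, res, tol=5e-3))
```

Note how the cell with the largest raw residual is not the one flagged: the adjoint weighting targets cells that actually affect the output.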

  7. The Impact of Pushed Output on Accuracy and Fluency of Iranian EFL Learners' Speaking

    ERIC Educational Resources Information Center

    Sadeghi Beniss, Aram Reza; Edalati Bazzaz, Vahid

    2014-01-01

    The current study attempted to establish baseline quantitative data on the impacts of pushed output on two components of speaking (i.e., accuracy and fluency). To achieve this purpose, 30 female EFL learners were selected from a whole population pool of 50 based on the standard test of IELTS interview and were randomly assigned into an…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.

  9. Development of a Bolometer Detector System for the NIST High Accuracy Infrared Spectrophotometer

    PubMed Central

    Zong, Y.; Datla, R. U.

    1998-01-01

    A bolometer detector system was developed for the high accuracy infrared spectrophotometer at the National Institute of Standards and Technology to provide maximum sensitivity, spatial uniformity, and linearity of response covering the entire infrared spectral range. The spatial response variation was measured to be within 0.1 %. The linearity of the detector output was measured over three decades of input power. After applying a simple correction procedure, the detector output was found to deviate less than 0.2 % from linear behavior over this range. The noise equivalent power (NEP) of the bolometer system was 6 × 10−12 W/√Hz at the frequency of 80 Hz. The detector output 3 dB roll-off frequency was 200 Hz. The detector output was stable to within ± 0.05 % over a 15 min period. These results demonstrate that the bolometer detector system will serve as an excellent detector for the high accuracy infrared spectrophotometer. PMID:28009364
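The linearity figure quoted above is the worst fractional deviation of detector output from a linear response over the measured power range; a minimal sketch of that check, with invented readings and responsivity:

```python
# Compare detector output against a linear reference (responsivity * power)
# over three decades of input power. All numbers are illustrative assumptions.
def max_linearity_deviation(power_in, output, responsivity):
    """Largest fractional deviation from output = responsivity * power_in."""
    return max(abs(o / (responsivity * p) - 1.0)
               for p, o in zip(power_in, output))

p_in = [1e-9, 1e-8, 1e-7, 1e-6]            # input optical power, W
v_out = [1.0e-3, 1.001e-2, 9.99e-2, 1.0]   # detector output, V
print(f"{max_linearity_deviation(p_in, v_out, 1e6):.2%}")  # worst case 0.10%
```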

  10. EDIN0613P weight estimating program. [for launch vehicles

    NASA Technical Reports Server (NTRS)

    Hirsch, G. N.

    1976-01-01

    The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program while remaining cognizant of the limits imposed upon output depth and accuracy by utilization of generalized input, this program concept can be a useful tool for estimating purposes at the conceptual design stage of a launch vehicle.

  11. Precipitation-runoff and streamflow-routing models for the Willamette River basin, Oregon

    USGS Publications Warehouse

    Laenen, Antonius; Risley, John C.

    1997-01-01

    With an input of current streamflow, precipitation, and air temperature data, the combined runoff and routing models can provide current estimates of streamflow at almost 500 locations on the main stem and major tributaries of the Willamette River with a high degree of accuracy. Relative contributions of surface runoff, subsurface flow, and ground-water flow can be assessed for 1 to 10 HRU classes in each of 253 subbasins identified for precipitation-runoff modeling. Model outputs were used with a water-quality model to simulate the movement of dye in the Pudding River as an example.

  12. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    PubMed

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  13. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter smoother and a fixed point smoother were used for estimation of the state variables in the six degree of freedom kinematic equations relating measured aircraft responses and for estimation of unknown constant bias and scale factor errors in measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.
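The bias estimation described above augments the state with the unknown constant errors. As a heavily simplified scalar analogue (not the six-degree-of-freedom formulation of the paper), a Kalman filter can estimate a constant measurement bias b from y = x_true + b; the noise levels and toy data are illustrative assumptions:

```python
# Scalar Kalman filter whose only state is the constant measurement bias b.
# Since b is constant there is no process noise; r is the measurement variance.
def estimate_bias(truth, measured, r=1.0):
    """Estimate constant bias b in measurements y = x_true + b."""
    b, p = 0.0, 100.0            # prior bias estimate and its variance
    for x, y in zip(truth, measured):
        innov = y - x - b        # measurement residual
        k = p / (p + r)          # Kalman gain
        b += k * innov
        p *= 1.0 - k             # posterior variance shrinks each update
    return b

truth = [0.0, 1.0, 2.0, 3.0]
measured = [0.5, 1.5, 2.5, 3.5]          # constant +0.5 bias, no noise
print(estimate_bias(truth, measured))    # converges toward 0.5
```

Analyzing the residuals (innovations) over time, as the abstract notes, is what reveals whether the bias and noise model is adequate.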

  14. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model with the aim of reducing the computational burden, is the key to such research. However, previous research is generally based on a stand-alone surrogate model, and rarely attempts to sufficiently improve the approximation accuracy of the surrogate model to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
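An ensemble surrogate of the first kind described above blends individual surrogate predictions with performance-derived weights. The inverse-error weighting below is an illustrative stand-in (the paper derives its weights from set pair analysis, which is not reproduced here), and the predictions and errors are invented:

```python
# Blend surrogate predictions, weighting each model inversely to its
# validation error (a common heuristic; NOT the paper's SPA weights).
def ensemble_predict(predictions, errors):
    """Weighted combination of surrogate outputs for one input point."""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    weights = [wi / total for wi in inv]
    return sum(wi * p for wi, p in zip(weights, predictions))

preds = [10.0, 12.0, 11.0]   # e.g. RBFANN, SVR, Kriging outputs
errs = [0.04, 0.08, 0.02]    # validation errors of each surrogate
print(ensemble_predict(preds, errs))  # pulled toward the most accurate model
```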

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin

Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halos' internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1%, both in real and redshift space, for k = 1 h Mpc⁻¹, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real- and redshift-space power spectra to better than 1% (out to k = 1 h Mpc⁻¹). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
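The snapshot-interpolation step can be illustrated with a one-dimensional toy trajectory. This is an assumed stand-in for the paper's scheme: cubic Hermite interpolation, which uses the positions and velocities stored at two outputs, compared against plain linear interpolation of positions.

```python
import numpy as np

# Toy sketch of interpolating a particle coordinate between two simulation
# outputs using positions *and* velocities (cubic Hermite interpolation, an
# assumed stand-in for the paper's scheme). A sine trajectory plays the role
# of the particle coordinate.
t0, t1 = 0.0, 0.5                      # two output times (illustrative)
x0, v0 = np.sin(t0), np.cos(t0)        # snapshot state at t0
x1, v1 = np.sin(t1), np.cos(t1)        # snapshot state at t1
h = t1 - t0

def hermite(t):
    """Cubic Hermite interpolant matching both positions and velocities."""
    s = (t - t0) / h
    return ((2*s**3 - 3*s**2 + 1) * x0 + (s**3 - 2*s**2 + s) * h * v0
            + (-2*s**3 + 3*s**2) * x1 + (s**3 - s**2) * h * v1)

t = 0.25
err_hermite = abs(hermite(t) - np.sin(t))
err_linear = abs(x0 + (x1 - x0) * (t - t0) / h - np.sin(t))
```

Using the stored velocities raises the interpolation order, which is why position-plus-velocity interpolation between widely spaced outputs can still reproduce the power spectra accurately.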

  16. Numerical investigation of the boundary layer separation in chemical oxygen iodine laser

    NASA Astrophysics Data System (ADS)

    Huai, Ying; Jia, Shuqin; Wu, Kenan; Jin, Yuqi; Sang, Fengting

    2017-11-01

Large eddy simulation is carried out to model the flow process in a supersonic chemical oxygen iodine laser. Unlike common approaches that rely on tensor representation theory alone, the model in the present work is an explicit anisotropy-resolving algebraic subgrid-scale scalar-flux formulation, which captures the unsteady flow behaviours in the laser with good accuracy. Boundary layer separation initiated by the adverse pressure gradient is identified using large eddy simulation. To quantify the influence of the flow boundary layer on laser performance, a fluid computation coupled with a physical-optics loaded-cavity model is developed. It was found that boundary layer separation has a profound effect on the laser outputs because of the shock waves it introduces. For the setup described in the paper, the F factor of the output beam decreases to 10% of its original value when the boundary layer transits into turbulence. Because the pressure is always greater downstream of the boundary layer, there is always a tendency toward boundary layer separation in the laser. These results motivate laser designs that apply active/passive control methods to avoid the boundary layer perturbation.

  17. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    NASA Astrophysics Data System (ADS)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as functions of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling, when the “sign problem” in quantum Monte Carlo simulations is present.
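The observables that the autoencoder output is checked against can be computed directly from Monte Carlo configurations. The sketch below is not the paper's method (no autoencoder, and a small 2D rather than 3D lattice): a plain Metropolis sampler for the Ising model, from whose configurations the magnetization and the fluctuation-based susceptibility are measured. Lattice size, temperatures, and sweep counts are illustrative choices.

```python
import numpy as np

# Metropolis sampling of a small 2D Ising model, measuring <|m|> and the
# susceptibility from magnetization fluctuations (illustrative parameters).
rng = np.random.default_rng(42)
L = 8

def sample_abs_m(T, sweeps=400, measure_from=200):
    """Return <|m|> and the fluctuation susceptibility at temperature T."""
    s = np.ones((L, L), dtype=int)          # start from the ordered state
    ms = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2.0 * s[i, j] * nb          # energy change of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        if sweep >= measure_from:
            m = s.mean()
            ms.append((abs(m), m * m))
    am, m2 = np.mean(ms, axis=0)
    chi = L * L * (m2 - am ** 2) / T         # susceptibility from fluctuations
    return am, chi

m_cold, _ = sample_abs_m(1.5)               # well below T_c ~ 2.27: ordered
m_hot, _ = sample_abs_m(5.0)                # well above T_c: disordered
```

The ordered/disordered contrast in these directly computed observables is what the low-dimensional autoencoder or t-SNE output is expected to track.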

  18. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data; all three reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  19. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data; all three reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
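The response-model half of the method can be sketched with a minimal leaky integrate-and-fire neuron. The parameters below are illustrative, not the paper's estimated acupuncture inputs, and the drive is constant rather than derived from a Gamma process; a constant drive produces a regular spike train with a closed-form interspike interval that serves as a check on the integration.

```python
import numpy as np

# Minimal LIF sketch (illustrative parameters, not the paper's model): Euler
# integration of dV/dt = (-(V - V_rest) + R*I) / tau with threshold and reset.
tau, v_rest, v_th, v_reset = 10.0, 0.0, 15.0, 0.0   # ms, mV (illustrative)
drive = 20.0                                        # R*I in mV
dt, t_end = 0.01, 200.0

v, spikes = v_rest, []
t = 0.0
while t < t_end:
    v += dt * (-(v - v_rest) + drive) / tau          # Euler step
    if v >= v_th:
        spikes.append(t)
        v = v_reset
    t += dt

isi = np.diff(spikes).mean()
# closed-form interspike interval for a constant drive: tau * ln(4) here
isi_theory = tau * np.log(drive / (drive - (v_th - v_rest)))
```

Inverting this map, from observed interspike statistics back to the input parameters that produced them, is the essence of the reconstruction described above.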

  20. Water planning in a mixed land use Mediterranean area: point-source abstraction and pollution scenarios by a numerical model of varying stream-aquifer regime.

    PubMed

    Du, Mingxuan; Fouché, Olivier; Zavattero, Elodie; Ma, Qiang; Delestre, Olivier; Gourbesville, Philippe

    2018-02-22

Integrated hydrodynamic modelling is an efficient approach for making semi-quantitative scenarios reliable enough for groundwater management, provided that the numerical simulations come from a validated model. The model set-up, however, involves many inputs due to the complexity of both the hydrological system and the land use. The case study of a Mediterranean alluvial unconfined aquifer in the lower Var valley (Southern France) is used to test a method for estimating missing data on water abstraction by small farms in an urban context. With this estimation of the undocumented pumping volumes, and after calibration of the exchange parameters of the stream-aquifer system with the help of a river model, the groundwater flow model shows a high goodness of fit with the measured potentiometric levels. The consistency between simulated results and the real behaviour of the system, with regard to the observed effects of lowering weirs and previously published hydrochemistry data, confirms the reliability of the groundwater flow model. On the other hand, the accuracy of the transport model output may be influenced by numerous parameters, many of which are not derived from field measurements. In this case study, for which river-aquifer feeding is the main control, the partition coefficient between direct recharge and runoff does not show a significant effect on the transport model output; therefore, uncertainty in hydrological terms such as evapotranspiration and runoff is not a first-rank issue for the pollution propagation. The simulation of pollution scenarios with the model returns, as expected, pessimistic outputs with regard to hazard management. The model is now ready to be used in a decision support system by the local water supply managers.

  1. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using either a fixed-interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to serve as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the length of the fixed interval. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval gave the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux.
These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.
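The trade-off between sampling interval and estimation uncertainty can be reproduced on a toy series. The numbers below are illustrative, not the paper's simulated fluxes: a pulsed daily flux series is subsampled at fixed intervals, and the spread of the annual estimate across possible start days grows with the interval.

```python
import numpy as np

# Toy fixed-interval sampling experiment (illustrative numbers): the annual
# cumulative flux is estimated from every k-th day, for every possible start
# day, and the spread of those estimates measures the sampling uncertainty.
rng = np.random.default_rng(7)
days = 365
flux = np.full(days, 0.1)                        # baseline daily N2O flux
flux[rng.choice(days, 20, replace=False)] = 5.0  # 20 emission pulses

true_total = flux.sum()

def estimates(interval):
    """Cumulative-flux estimates for every possible sampling start day."""
    return np.array([flux[start::interval].mean() * days
                     for start in range(interval)])

spread_2 = estimates(2).std()    # dense sampling: small spread
spread_16 = estimates(16).std()  # sparse sampling: large spread
```

The widening spread with interval length is the behaviour the fixed-interval results above quantify; the rule-based sampling aims to shrink it with fewer sampling events by targeting high-flux conditions.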

  2. Growth and food consumption by tiger muskellunge: Effects of temperature and ration level on bioenergetic model predictions

    USGS Publications Warehouse

    Chipps, S.R.; Einfalt, L.M.; Wahl, David H.

    2000-01-01

We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.

  3. Variable self-powered light detection CMOS chip with real-time adaptive tracking digital output based on a novel on-chip sensor.

    PubMed

    Wang, HongYi; Fan, Youyou; Lu, Zhijian; Luo, Tao; Fu, Houqiang; Song, Hongjiang; Zhao, Yuji; Christen, Jennifer Blain

    2017-10-02

This paper provides a solution for self-powered light direction detection with digitized output. Light direction sensors, energy-harvesting photodiodes, a real-time adaptive tracking digital output unit, and other necessary circuits are integrated on a single chip based on a standard 0.18 µm CMOS process. The proposed light direction sensors have an accuracy of 1.8 degrees over a 120 degree range. To improve the accuracy, a compensation circuit is presented for the photodiodes' forward currents. The actual measured output precision is approximately 7 ENOB. In addition, an adaptive under-voltage protection circuit is designed for the variable supply power, which may fluctuate with temperature and process.

  4. Detecting exact breakpoints of deletions with diversity in hepatitis B viral genomic DNA from next-generation sequencing data.

    PubMed

    Cheng, Ji-Hong; Liu, Wen-Chun; Chang, Ting-Tsung; Hsieh, Sun-Yuan; Tseng, Vincent S

    2017-10-01

Many studies have suggested that deletions in the Hepatitis B Virus (HBV) genome are associated with the development of progressive liver diseases, ultimately resulting in hepatocellular carcinoma (HCC). Among the methods for detecting deletions from next-generation sequencing (NGS) data, few consider the characteristics of viruses, such as high evolution rates and high divergence among different HBV genomes. Sequencing highly divergent HBV genome sequences with NGS technology outputs millions of reads, so detecting exact deletion breakpoints in these large and complex data incurs a very high computational cost. We propose a novel analytical method named VirDelect (Virus Deletion Detect), which uses split-read alignment to detect exact breakpoints and a diversity variable to account for high divergence in single-end read data, so that the computational cost can be reduced without losing accuracy. We use four simulated read datasets and two real paired-end read datasets of HBV genome sequences to verify VirDelect's accuracy with score functions. The experimental results show that VirDelect outperforms the state-of-the-art method Pindel in accuracy score on all simulated datasets, and VirDelect had only two base errors even on the real datasets. VirDelect delivers high accuracy on single-end as well as paired-end read data, and can serve as an effective and efficient bioinformatics tool for physiologists, applicable to further analysis of viruses with characteristics similar to HBV in genome length and divergence. The software program of VirDelect can be downloaded at https://sourceforge.net/projects/virdelect/. Copyright © 2017. Published by Elsevier Inc.
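The split-read idea named in the abstract can be illustrated on contrived sequences. This toy is not VirDelect's actual algorithm (no alignment scoring and no diversity variable): the read's longest matching prefix and suffix against the reference bracket the deleted segment.

```python
# Toy split-read breakpoint detection (contrived sequences chosen so the
# breakpoints are unambiguous; not VirDelect's algorithm): a read spanning a
# deletion matches the reference before and after the deleted block, and the
# unmatched middle of the reference is the deletion.
ref = "A" * 50 + "C" * 30 + "G" * 30   # reference with a 30-bp C-block
read = "A" * 50 + "G" * 30             # read spanning a 30-bp deletion

# longest reference prefix reproduced at the start of the read
p = 0
while p < min(len(read), len(ref)) and read[p] == ref[p]:
    p += 1
# longest reference suffix reproduced at the end of the read
# (capped so prefix and suffix cannot overlap within the read)
s = 0
while s < min(len(read), len(ref)) - p and read[-1 - s] == ref[-1 - s]:
    s += 1

deletion = ref[p:len(ref) - s]         # bases present in ref, absent in read
breakpoints = (p, len(ref) - s)
```

On real, divergent viral data the exact-match loops would be replaced by scored alignments, which is where the method's diversity handling and computational cost come in.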

  5. A Low-Power Thermal-Based Sensor System for Low Air Flow Detection

    PubMed Central

    Arifuzzman, AKM; Haider, Mohammad Rafiqul; Allison, David B.

    2016-01-01

Being able to rapidly detect a low air flow rate with high accuracy is essential for various applications in the automotive and biomedical industries. We have developed a thermal-based low-air-flow sensor with a low-power sensor readout for biomedical applications. The thermal-based air flow sensor comprises a heater and three pairs of temperature sensors that sense temperature differences due to laminar air flow. The flow sensor was designed and simulated using the laminar flow and heat transfer in solids and fluids physics interfaces in COMSOL Multiphysics. The proposed sensor can detect air flow as low as 0.0064 m/sec. The readout circuit is based on a current-controlled ring oscillator in which the output frequency of the ring oscillator is proportional to the temperature differences of the sensors. The entire readout circuit was designed and simulated in a 130-nm standard CMOS process. The sensor circuit features a small area and low power consumption of about 22.6 µW with an 800 mV power supply. In simulation, the output frequency of the ring oscillator and the change in thermistor resistance showed high linearity, with an R2 value of 0.9987. The low power dissipation, high linearity, and small dimensions of the proposed flow sensor and circuit make the system highly suitable for biomedical applications. PMID:28435186

  6. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data; other data, e.g. from ocean buoys, are sometimes also included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time allowing for high local resolution and geometric accuracy. The results are compared to measured data and to results using earthquake sources based on inversion. By using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  7. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions in OF calculations accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819  ±  0.004 and 0.8941  ±  0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit considerable dosimetric effect. 
In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.

  8. Using Delft3D to Simulate Current Energy Conversion

    NASA Astrophysics Data System (ADS)

    James, S. C.; Chartrand, C.; Roberts, J.

    2015-12-01

As public concern with renewable energy increases, current energy conversion (CEC) technology is being developed to optimize energy output and minimize environmental impact. CEC turbines generate energy from tidal and current systems and create wakes that interact with turbines located downstream of a device. The placement of devices can greatly influence power generation and structural reliability. CECs can also alter the ecosystem processes surrounding the turbines, such as flow regimes, sediment dynamics, and water quality. Software is needed to investigate specific CEC sites to simulate power generation and the hydrodynamic response of a flow through a CEC turbine array. This work validates Delft3D against several flume experiments by simulating the power generation and hydrodynamic response of flow through a turbine or actuator disc(s). Model parameters are then calibrated against these data sets to reproduce momentum removal and wake recovery data with 3-D flow simulations. Simulated wake profiles and turbulence intensities compare favorably to the experimental data and demonstrate the utility and accuracy of a fast-running tool for future siting and analysis of CEC arrays in complex domains.

  9. Initial attitude determination for the Hipparcos satellite

    NASA Astrophysics Data System (ADS)

    Van der Ha, Jozef C.

The present paper describes the strategy and algorithms used during the initial on-ground three-axis attitude determination of ESA's astrometry satellite HIPPARCOS. The estimation is performed using calculated crossing times of identified stars over the Star Mapper's vertical and inclined slit systems, as well as outputs from a set of rate-integrating gyros. Valid star transits in either of the two fields of view are expected to occur on average about every 30 s, whereas the gyros are sampled at about 1 Hz. The state vector to be estimated consists of the three angles, three rates, and three gyro drift-rate components. Simulations have shown that convergence of the estimator is established within about 10 min and that the achieved accuracies are on the order of a few arcsec for the angles and a few milliarcsec per s for the rates. These stringent accuracies are in fact required for initialisation of the subsequent autonomous on-board real-time attitude determination.

  10. Beamforming synthesis of binaural responses from computer simulations of acoustic spaces.

    PubMed

    Poletti, Mark A; Svensson, U Peter

    2008-07-01

Auditorium designs can be evaluated prior to construction by numerical modeling of the design. High-accuracy numerical modeling produces the sound pressure on a rectangular grid, and subjective assessment of the design requires auralization of the sampled sound field at a desired listener position. This paper investigates the production of binaural outputs from the sound pressure at a selected number of grid points by using a least-squares beamforming approach. Low-frequency axisymmetric emulations are derived by assuming a solid-sphere model of the head, and a spherical array of 640 microphones is used to emulate ten measured head-related transfer function (HRTF) data sets from the CIPIC database over half the audio bandwidth. The spherical array can produce a high-accuracy band-limited emulation of any human subject's measured HRTFs for a fixed listener position by using individual sets of beamforming impulse responses.
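The least-squares step can be sketched numerically. The geometry and frequency below are illustrative, not the paper's 640-microphone spherical array: weights for microphones on a circle are solved so that their combined response to plane waves from many directions matches the pressure a virtual listener at an interior point would observe.

```python
import numpy as np

# Toy least-squares beamforming sketch (2D circular array, illustrative
# geometry and frequency): solve A w = h in the least-squares sense, where
# each row of A holds the mic pressures for one plane-wave direction and h
# is the desired pressure at an interior "listener" point.
c, f = 343.0, 500.0
k = 2 * np.pi * f / c
n_mics, n_dirs, r = 16, 36, 0.1

phi = 2 * np.pi * np.arange(n_mics) / n_mics
mics = r * np.column_stack([np.cos(phi), np.sin(phi)])   # mic positions
theta = 2 * np.pi * np.arange(n_dirs) / n_dirs
dirs = np.column_stack([np.cos(theta), np.sin(theta)])   # plane-wave directions

listener = np.array([0.03, 0.0])                          # interior target point
A = np.exp(1j * k * dirs @ mics.T)        # mic pressures, one row per direction
h = np.exp(1j * k * dirs @ listener)      # desired pressures at the listener

w, *_ = np.linalg.lstsq(A, h, rcond=None) # least-squares beamforming weights
residual = np.linalg.norm(A @ w - h) / np.linalg.norm(h)
```

At low frequency the field inside the array is angularly band-limited, so a modest number of microphones already drives the least-squares residual very low; in the paper the target is a measured HRTF rather than a point-pressure response.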

  11. Research on detecting spot selection and signal pretreatment of four-quadrant detector

    NASA Astrophysics Data System (ADS)

    Liu, Wenli; Han, Shaokun

    2018-01-01

The four-quadrant detector is a photoelectric position sensor based on the photovoltaic effect. It is widely used in fields such as target azimuth measurement and terminal guidance. The selection of the spot size and the calculation of the spot center position are among the main factors that affect the position-measurement accuracy of the four-quadrant detector. To improve the positioning accuracy, a method for determining the best spot size is obtained from theoretical analysis. The output signal of the four-quadrant detector is a weak, narrow pulse that must be strongly amplified and broadened. The signal preprocessing method is studied by simulation and experiment. Spot detection and signal processing are realized with the four-quadrant detector, which is important for the use of quadrant detectors in high-precision position measurements.
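The center-position calculation mentioned above is conventionally done with difference-over-sum formulas on the four quadrant signals; the sketch below models it numerically with an assumed Gaussian spot (spot width and grid are illustrative, not from the paper).

```python
import numpy as np

# Toy four-quadrant position estimate (illustrative parameters): a Gaussian
# spot is integrated over the four quadrants and the standard
# difference-over-sum formulas give the spot-position estimate.
def quadrant_estimate(x0, y0, w=0.4, n=400, half=1.0):
    """Estimate spot position from quadrant sums for a spot at (x0, y0)."""
    ax = np.linspace(-half, half, n)
    X, Y = np.meshgrid(ax, ax)
    I = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * w ** 2))
    A = I[(Y > 0) & (X > 0)].sum()       # quadrant sums
    B = I[(Y > 0) & (X < 0)].sum()
    C = I[(Y < 0) & (X < 0)].sum()
    D = I[(Y < 0) & (X > 0)].sum()
    total = A + B + C + D
    ex = ((A + D) - (B + C)) / total     # difference-over-sum estimates
    ey = ((A + B) - (C + D)) / total
    return ex, ey

ex0, ey0 = quadrant_estimate(0.0, 0.0)   # centered spot -> near-zero estimate
ex1, _ = quadrant_estimate(0.1, 0.0)     # displaced spot -> positive estimate
```

The estimate is monotonic in the true displacement only while the spot straddles all four quadrants, which is why the spot-size selection studied in the paper matters for accuracy.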

  12. Computer modeling of thermoelectric generator performance

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

Features of the DEGRA 2 computer code for simulating the operation of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data that agreed with test data to within 0.3%. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
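One piece of the physics such a code models can be checked in a few lines. The numbers below are illustrative and this is not DEGRA 2: a thermoelectric couple treated as a Seebeck voltage source with internal resistance, swept over load resistance to show maximum power transfer at the matched load.

```python
import numpy as np

# Toy thermoelectric generator electrical model (illustrative numbers):
# open-circuit voltage V = S * dT, delivered power V^2 * R_L / (R_int+R_L)^2,
# maximized at the matched load R_L = R_int.
S, dT, R_int = 200e-6, 300.0, 2.0           # Seebeck coeff (V/K), ΔT (K), ohms
R_load = np.linspace(0.5, 4.0, 71)
V = S * dT                                   # open-circuit voltage, 60 mV
P = V ** 2 * R_load / (R_int + R_load) ** 2  # power delivered to each load

best = R_load[np.argmax(P)]
p_max = P.max()
```

A full generator code layers temperature-dependent material properties, Thomson heating, and leg segmentation on top of this basic source-and-load picture.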

  13. Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles

    PubMed Central

    Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.

    2009-01-01

The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382

  14. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data

    PubMed Central

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-01-01

Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct the structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie’s law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886
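The Kozeny-Carman estimate mentioned above is a one-line formula; the sketch below evaluates it for an assumed medium sand (grain diameter and porosity are illustrative values, not the study's data).

```python
import numpy as np

# Kozeny-Carman hydraulic conductivity (SI units, illustrative inputs):
# K = (rho*g/mu) * d^2/180 * phi^3 / (1-phi)^2
rho, g, mu = 1000.0, 9.81, 1.0e-3      # water density, gravity, viscosity
d = 5.0e-4                             # representative grain diameter, 0.5 mm
phi = 0.35                             # porosity (assumed)

K = (rho * g / mu) * (d ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2   # m/s
```

Because K scales with the square of grain size and very strongly with porosity, the spatial variability of both parameters drives the heterogeneity the hydrofacies-plus-geophysics approach is designed to capture.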

  15. Design, experiments and simulation of voltage transformers on the basis of a differential input D-dot sensor.

    PubMed

    Wang, Jingang; Gao, Can; Yang, Jie

    2014-07-17

    Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuits. This work develops a new non-contact, high-bandwidth voltage measurement system for power equipment, aimed at the miniaturized, non-contact measurement needs of the smart grid. After analysis of the traditional D-dot voltage probe, an improved method is proposed: for the sensor to work in a self-integrating pattern, a differential input pattern is adopted for the circuit design and grounding is removed. To verify the structural design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results show that the root mean square values of the measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response. This sensor can meet the intelligence, miniaturization, and convenience requirements of the smart grid.

  16. X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.

    2015-09-01

    The bone calcium (Ca) to phosphorus (P) mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of the effective or mean mass attenuation coefficient in Ca/P mass ratio estimation with a dual-energy method. The investigation was based on optimizing the accuracy of the Ca/P ratio, quantified by the coefficient of variation of the ratio. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios, resulting in nine calibration points, while the total bone thickness was kept constant. The simulated data were obtained considering a photon-counting energy-discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy of the different dual-energy set-ups. The optimum mass attenuation coefficient for the Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.

  17. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    PubMed

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp), and syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross validation is performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN-based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
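    A minimal sketch of the Monte Carlo model-selection loop described above. For brevity it replaces Levenberg-Marquardt training with a random-hidden-layer least-squares fit (an extreme-learning-machine-style stand-in), and the dataset is synthetic; the candidate network sizes and all numbers are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate data: 9 inputs, 1 output (the paper's real datasets
# are experimental; this is synthetic for illustration only).
X = rng.uniform(-1, 1, (200, 9))
y = np.tanh(X[:, 0] + 0.5 * X[:, 1] * X[:, 2]) + 0.05 * rng.normal(size=200)

def fit_eval(n_hidden, X_tr, y_tr, X_te, y_te):
    """One-hidden-layer tanh network: random input weights, output weights
    solved by least squares (a cheap stand-in for LM training)."""
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    H_tr, H_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return np.sqrt(np.mean((H_te @ beta - y_te) ** 2))  # test RMSE

def mc_rmse(n_hidden, runs=20):
    """Monte Carlo cross-validation: average test RMSE over random splits."""
    errs = []
    for _ in range(runs):
        idx = rng.permutation(len(X))
        tr, te = idx[:150], idx[150:]
        errs.append(fit_eval(n_hidden, X[tr], y[tr], X[te], y[te]))
    return float(np.mean(errs))

# Model selection: pick the hidden-layer size with lowest average test error
scores = {h: mc_rmse(h) for h in (2, 8, 32)}
best = min(scores, key=scores.get)
```

Averaging over repeated random splits, rather than a single split, is what makes the architecture ranking robust to a lucky or unlucky partition.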

  18. Sampling ARG of multiple populations under complex configurations of subdivision and admixture.

    PubMed

    Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi

    2016-04-01

    Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial-and-error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising any accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Numerical simulations to assess the tracer dilution method for measurement of landfill methane emissions.

    PubMed

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2016-10-01

    Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecasting (WRF) model are run over the Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
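    The tracer dilution calculation itself reduces to a ratio of plume integrals scaled by the tracer release rate and the molar-mass ratio. The sketch below uses synthetic Gaussian transects; acetylene as the tracer gas is an assumption for illustration (the abstract does not name the tracer), and all numbers are hypothetical.

```python
import numpy as np

M_CH4, M_C2H2 = 16.04, 26.04  # g/mol; acetylene is a commonly used tracer

def tracer_dilution_emission(q_tracer, ch4_plume, tracer_plume):
    """Estimate methane emission rate from a downwind transect:
    Q_CH4 = Q_tracer * (integral of CH4 above background / integral of
    tracer) * (M_CH4 / M_tracer). The transect spacing cancels in the
    ratio when both gases are sampled on the same transect."""
    ratio = ch4_plume.sum() / tracer_plume.sum()
    return q_tracer * ratio * (M_CH4 / M_C2H2)

# Synthetic, perfectly co-located (well-mixed) plumes: ppb above background
x = np.linspace(-500.0, 500.0, 201)
ch4 = 80.0 * np.exp(-((x / 120.0) ** 2))
tracer = 20.0 * np.exp(-((x / 120.0) ** 2))
q_est = tracer_dilution_emission(0.5, ch4, tracer)  # 0.5 kg/h tracer release
```

Because the two synthetic plumes here have identical shape, the method is exact; the percent errors reported in the study arise precisely when the plumes are not well mixed.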

  20. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  1. Recall Latencies, Confidence, and Output Positions of True and False Memories: Implications for Recall and Metamemory Theories

    ERIC Educational Resources Information Center

    Jou, Jerwen

    2008-01-01

    Recall latency, recall accuracy rate, and recall confidence were examined in free recall as a function of recall output serial position using a modified Deese-Roediger-McDermott paradigm to test a strength-based theory against the dual-retrieval process theory of recall output sequence. The strength theory predicts the item output sequence to be…

  2. Animating climate model data

    NASA Astrophysics Data System (ADS)

    DaPonte, John S.; Sadowski, Thomas; Thomas, Paul

    2006-05-01

    This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Studies (GISS). Animations of output from a climate simulation model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.

  3. Uncorrelated measurements of the cosmic expansion history and dark energy from supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yun; Tegmark, Max; Department of Physics, University of Pennsylvania, Philadelphia, Pennsylvania 19104

    We present a method for measuring the cosmic expansion history H(z) in uncorrelated redshift bins, and apply it to current and simulated type Ia supernova data assuming spatial flatness. If the matter density parameter {omega}{sub m} can be accurately measured from other data, then the dark-energy density history X(z)={rho}{sub X}(z)/{rho}{sub X}(0) can trivially be derived from this expansion history H(z). In contrast to customary 'black box' parameter fitting, our method is transparent and easy to interpret: the measurement of H(z){sup -1} in a redshift bin is simply a linear combination of the measured comoving distances for supernovae in that bin, making it obvious how systematic errors propagate from input to output. We find the Riess et al. (2004) gold sample to be consistent with the vanilla concordance model where the dark energy is a cosmological constant. We compare two mission concepts for the NASA/DOE Joint Dark-Energy Mission (JDEM), the Joint Efficient Dark-energy Investigation (JEDI) and the Supernova Acceleration Probe (SNAP), using simulated data including the effect of weak lensing (based on numerical simulations) and a systematic bias from K corrections. Estimating H(z) in seven uncorrelated redshift bins, we find that both provide dramatic improvements over current data: JEDI can measure H(z) to about 10% accuracy and SNAP to 30%-40% accuracy.
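    The linear-combination idea can be illustrated numerically: in a flat toy cosmology, H⁻¹ in a redshift bin is recovered from the difference of comoving distances at the bin edges. The cosmological parameters below are hypothetical round numbers, not the paper's fits.

```python
import numpy as np

c = 299792.458            # km/s
Om, H0 = 0.3, 70.0        # hypothetical flat-LCDM parameters (km/s/Mpc)
H = lambda z: H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

# Comoving distance r(z) = c * int_0^z dz'/H(z'), trapezoid rule on a fine grid
zg = np.linspace(0.0, 1.7, 2000)
steps = c * 0.5 * (1.0 / H(zg[1:]) + 1.0 / H(zg[:-1])) * np.diff(zg)
r = np.concatenate([[0.0], np.cumsum(steps)])

# Per-bin estimate: H^{-1} ~ Delta r / (c * Delta z), i.e. a simple linear
# combination of the distances measured at the bin edges
bins = np.linspace(0.0, 1.7, 8)    # seven redshift bins, as in the abstract
H_est = []
for z1, z2 in zip(bins[:-1], bins[1:]):
    i1, i2 = np.searchsorted(zg, [z1, z2])
    H_est.append(c * (zg[i2] - zg[i1]) / (r[i2] - r[i1]))
```

The per-bin estimates closely match H evaluated at the bin midpoints, since the bin-averaging error is second order in the bin width.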

  4. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.

  5. Reservoir property grids improve with geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, J.

    1993-09-01

    Visualization software, reservoir simulators and many other E&P software applications need reservoir property grids as input. Using geostatistics, as compared to other gridding methods, to produce these grids leads to the best output from the software programs. For the purpose stated herein, geostatistics is simply two types of gridding methods. Mathematically, these methods are based on minimizing or duplicating certain statistical properties of the input data. One geostatistical method, called kriging, is used when the highest possible point-by-point accuracy is desired. The other method, called conditional simulation, is used when one wants the statistics and texture of the resulting grid to be the same as for the input data. In the following discussion, each method is explained, compared to other gridding methods, and illustrated through example applications. Proper use of geostatistical data in flow simulations, use of geostatistical data for history matching, and situations where geostatistics has no significant advantage over other methods will also be covered.
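    A minimal sketch of the kriging side of this discussion: ordinary kriging of a single point in 1-D with an exponential covariance model. The covariance model, its parameters, and the data points are illustrative assumptions only.

```python
import numpy as np

def ordinary_krige(x_obs, z_obs, x0, range_=50.0, sill=1.0):
    """Ordinary kriging of one location x0 with an exponential covariance.
    Solves the augmented system [C 1; 1' 0][w; mu] = [c0; 1] so the
    weights w sum to 1 (unbiasedness constraint)."""
    cov = lambda h: sill * np.exp(-np.abs(h) / range_)
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    A[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])  # data-data covariances
    b = np.ones(n + 1)
    b[:n] = cov(x0 - x_obs)                           # data-target covariances
    w = np.linalg.solve(A, b)[:n]
    return w @ z_obs

x_obs = np.array([0.0, 30.0, 100.0])   # hypothetical well locations
z_obs = np.array([1.0, 2.0, 1.5])      # hypothetical property values
z_hat = ordinary_krige(x_obs, z_obs, 25.0)
```

With no nugget term, the estimator reproduces the data exactly at observed locations, which is the "highest possible point-by-point accuracy" behavior the record describes; conditional simulation instead adds back the spatial texture that this smooth interpolator removes.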

  6. Effects of experimental protocol on global vegetation model accuracy: a comparison of simulated and observed vegetation patterns for Asia

    USGS Publications Warehouse

    Tang, Guoping; Shafer, Sarah L.; Bartlein, Patrick J.; Holman, Justin O.

    2009-01-01

    Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
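    The Kappa statistic used above for map comparison can be computed directly from two categorical grids. A minimal sketch of Cohen's Kappa on tiny hypothetical maps follows; the Fuzzy Kappa and Nomad index involve additional neighborhood terms not shown here.

```python
import numpy as np

def kappa(map_a, map_b):
    """Cohen's Kappa for two categorical maps (any grid shape):
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed cell-by-cell
    agreement and p_e the agreement expected by chance from the two maps'
    class frequencies."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    classes = np.union1d(a, b)
    p_o = np.mean(a == b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical 2x3 simulated vs. observed biome grids (class labels 1-3)
sim = np.array([[1, 1, 2], [2, 3, 3]])
obs = np.array([[1, 1, 2], [2, 3, 1]])
k = kappa(sim, obs)
```

Here five of six cells agree (p_o = 5/6) and chance agreement is 1/3, giving kappa = 0.75; identical maps give kappa = 1.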

  7. Simulation of particle motion in a closed conduit validated against experimental data

    NASA Astrophysics Data System (ADS)

    Dolanský, Jindřich

    2015-05-01

    Motion of a number of spherical particles in a closed conduit is examined by means of both simulation and experiment. The bed of the conduit is covered by stationary spherical particles of the size of the moving particles. The flow is driven by experimentally measured velocity profiles which are inputs to the simulation. Altering the input velocity profiles generates various trajectory patterns. A lattice Boltzmann method (LBM) based simulation is developed to study mutual interactions of the flow and the particles. The simulation models both the particle motion and the fluid flow. The entropic LBM is employed to deal with flow characterized by a high Reynolds number. The entropic modification of the LBM, along with the enhanced refinement of the lattice grid, yields an increase in demands on computational resources. Due to the inherently parallel nature of the LBM, this can be handled by employing the Parallel Computing Toolbox (MATLAB) and other transformations enabling use of the CUDA GPU computing technology. The trajectories of the particles determined within the LBM simulation are validated against data gained from the experiments. The compatibility of the simulation results with the outputs of experimental measurements is evaluated. The accuracy of the applied approach is assessed, and the stability and efficiency of the simulation are also considered.

  8. Parameter regionalisation methods for a semi-distributed rainfall-runoff model: application to a Northern Apennine region

    NASA Astrophysics Data System (ADS)

    Neri, Mattia; Toth, Elena

    2017-04-01

    The study presents the implementation of different regionalisation approaches for the transfer of model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model to simulate daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into elevation zones that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena do play an important role in the study basins. Two methods, both widely applied in the recent literature, are used for regionalising the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters obtained, through calibration, on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently from each other; in the second, the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also proposes to test a modified version of the second approach ("output averaging"), where each zone is considered as an autonomous entity whose parameters are transposed to the corresponding elevation zone of the ungauged basin. 
The study also explores the choice of the weights to be used for averaging the parameters (in the "parameters averaging" approach) or for averaging the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
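    The difference between the two regionalisation approaches can be made concrete with a toy model. Note that with the linear stand-in model below the two approaches coincide exactly; with a nonlinear rainfall-runoff model such as HBV they generally differ, which is the point of comparing them. All donors, weights, and numbers are hypothetical.

```python
import numpy as np

# Toy "model": the real study uses a semi-distributed HBV/TUWien model;
# a linear stand-in keeps the comparison transparent.
def model(params, forcing):
    a, b = params
    return a * forcing + b

donors = {"A": np.array([0.8, 1.0]), "B": np.array([0.6, 2.0])}  # calibrated sets
w = {"A": 0.7, "B": 0.3}           # similarity-based weights (sum to 1)
forcing = np.array([0.0, 5.0, 10.0])

# (i) parameter averaging: average the parameters, run the model once
p_avg = sum(w[d] * donors[d] for d in donors)
q_param_avg = model(p_avg, forcing)

# (ii) output averaging: run the model with each donor's complete parameter
# set (preserving parameter correlation), then average the simulations
q_output_avg = sum(w[d] * model(donors[d], forcing) for d in donors)
```

For any nonlinear model the two streamflow estimates diverge, because averaging parameters does not commute with running the model.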

  9. Refined beam measurements on the SNS H- injector

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Murray, S. N.; Pennisi, T. R.; Santana, M.; Stinson, C. M.; Stockli, M. P.

    2017-08-01

    The H- injector for the SNS RFQ accelerator consists of an RF-driven, Cs-enhanced H- ion source and a compact, two-lens electrostatic LEBT. The LEBT output and the RFQ input beam current are measured by deflecting the beam onto an annular plate at the RFQ entrance. Our method and procedure have recently been refined to improve the measurement reliability and accuracy. The new measurements suggest that earlier measurements tended to underestimate the currents by 0-2 mA, but essentially confirm H- beam currents of 50-60 mA being injected into the RFQ. Emittance measurements conducted on a test stand featuring essentially the same H- injector setup show that the normalized rms emittance with 0.5% threshold (99% inclusion of the total beam) is in the range of 0.25-0.4 mm·mrad for a 50-60 mA beam. The RFQ output current is monitored with a BCM toroid. Measurements as well as simulations with the PARMTEQ code indicate an underperforming transmission of the RFQ since around 2012.

  10. Nanotribological behavior analysis of graphene/metal nanocomposites via MD simulations: New concepts and underlying mechanisms

    NASA Astrophysics Data System (ADS)

    Montazeri, A.; Mobarghei, A.

    2018-04-01

    In this article, we report a series of MD-based nanoindentation tests aimed at examining the nanotribological characteristics of metal-based nanocomposites in the presence of graphene sheets. To evaluate the effects of graphene/matrix interactions on the results, nickel and copper are selected as metals having strong and weak interactions with graphene, respectively. The influence of graphene layer sliding, and of the layers' distance from the sample surface, on the nanoindentation outputs is thoroughly examined. Additionally, the temperature dependence of the results is investigated in depth, with emphasis on the underlying mechanisms. To verify the accuracy of the nanoindentation outputs, the results of this method are compared with data obtained via tensile tests. It is concluded that the nanoindentation results are closer to the values obtained by means of experimental setups. Employing these numerical experiments enables us to perform parametric studies to identify the dominant factors affecting the nanotribological behavior of these nanocomposites at the atomic scale.

  11. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

    This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal map and barycentric rational interpolation techniques to overcome the side effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.

  12. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching-law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  13. A finite-element model for simulation of two-dimensional steady-state ground-water flow in confined aquifers

    USGS Publications Warehouse

    Kuniansky, E.L.

    1990-01-01

    A computer program based on the Galerkin finite-element method was developed to simulate two-dimensional steady-state ground-water flow in either isotropic or anisotropic confined aquifers. The program may also be used for unconfined aquifers of constant saturated thickness. Constant-head, constant-flux, and head-dependent flux boundary conditions can be specified in order to approximate a variety of natural conditions, such as a river or lake boundary, or a pumping well. The computer program was developed for the preliminary simulation of ground-water flow in the Edwards-Trinity Regional aquifer system as part of the Regional Aquifer-Systems Analysis Program. Results of the program compare well to analytical solutions and to simulations from published finite-difference models. A concise discussion of the Galerkin method is presented along with a description of the program. Provided in the Supplemental Data section are a listing of the computer program, definitions of selected program variables, and several examples of data input and output used in verifying the accuracy of the program.

  14. On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic

    Time-domain simulations are heavily used in today's planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies, in order to comply with industry standards. Because of increased modeling complexity, it is several times slower than real time for state-of-the-art commercial packages to complete a dynamic simulation of a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need for performing security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for the network solution, and parallelizing data outputs when interacting with the APIs of the commercial package TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.

  15. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  16. A High-Spin Rate Measurement Method for Projectiles Using a Magnetoresistive Sensor Based on Time-Frequency Domain Analysis.

    PubMed

    Shang, Jianyu; Deng, Zhihong; Fu, Mengyin; Wang, Shunting

    2016-06-16

    Traditional artillery guidance can significantly improve the attack accuracy and overall combat efficiency of projectiles, which makes it more adaptable to the information warfare of the future. Obviously, the accurate measurement of artillery spin rate, which has long been regarded as a daunting task, is the basis of precise guidance and control. Magnetoresistive (MR) sensors can be applied to spin rate measurement, especially in the high-spin and high-g projectile launch environment. In this paper, based on the theory of an MR sensor measuring spin rate, the mathematical relationship between the frequency of the MR sensor output and the projectile spin rate was established through a fundamental derivation. By analyzing the characteristics of the MR sensor output, whose frequency varies with time, this paper proposes a Chirp z-Transform (CZT) time-frequency (TF) domain analysis method based on a rolling Blackman window (BCZT), which can accurately extract the projectile spin rate. To put it into practice, BCZT was applied to measure the spin rate of a 155 mm artillery projectile. After extracting the spin rate, the impact that launch rotational angular velocity and aspect angle have on the extraction accuracy of the spin rate was analyzed. Simulation results show that the BCZT TF domain analysis method can effectively and accurately measure the projectile spin rate, especially in a high-spin and high-g projectile launch environment.
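    The extraction step above slides a Blackman window along the sensor output and evaluates a zoomed spectrum, which the chirp z-transform computes efficiently. As a hedged sketch (not the authors' implementation; the sample rate, chirp parameters, window and hop sizes are all assumed for illustration), the snippet below evaluates the Blackman-windowed DTFT directly on a fine frequency grid and picks the peak in each window:

```python
import numpy as np

fs = 10_000.0                        # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f_spin = 240.0 + 60.0 * t            # spin rate sweeping 240 -> 300 Hz
x = np.sin(2 * np.pi * np.cumsum(f_spin) / fs)   # MR-sensor-like chirp signal

def zoom_spectrum(seg, fs, f1, f2, m=400):
    """Blackman-windowed DTFT of `seg` on a fine grid over [f1, f2] Hz.
    This is the zoomed spectrum the chirp z-transform evaluates efficiently."""
    w = np.blackman(len(seg))
    n = np.arange(len(seg))
    freqs = np.linspace(f1, f2, m)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, np.abs(kernel @ (seg * w))

# rolling-window spin-rate extraction: peak frequency per window
est = []
win, hop = 1000, 500
for start in range(0, len(x) - win + 1, hop):
    freqs, mag = zoom_spectrum(x[start:start + win], fs, 200.0, 350.0)
    est.append(float(freqs[np.argmax(mag)]))
```

Each estimate approximates the spin rate at the window centre, so `est` tracks the 240 to 300 Hz sweep.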

  17. Subject order-independent group ICA (SOI-GICA) for functional MRI data analysis.

    PubMed

    Zhang, Han; Zuo, Xi-Nian; Ma, Shuang-Ye; Zang, Yu-Feng; Milham, Michael P; Zhu, Chao-Zhe

    2010-07-15

    Independent component analysis (ICA) is a data-driven approach to study functional magnetic resonance imaging (fMRI) data. In particular, for group analysis across multiple subjects, temporal concatenation group ICA (TC-GICA) is intensively used. However, due to usually limited computational capability, data reduction with principal component analysis (PCA, a standard preprocessing step of ICA decomposition) is difficult to achieve for a large dataset. To overcome this, TC-GICA employs multiple-stage PCA data reduction. Such multiple-stage PCA data reduction, however, leads to variable outputs due to different subject concatenation orders. Consequently, the ICA algorithm uses the variable multiple-stage PCA outputs and generates variable decompositions. In this study, a rigorous theoretical analysis was conducted to prove the existence of such variability. Simulated and real fMRI experiments were used to demonstrate the subject-order-induced variability of TC-GICA results using multiple PCA data reductions. To solve this problem, we propose a new subject order-independent group ICA (SOI-GICA). Both simulated and real fMRI data experiments demonstrated the high robustness and accuracy of the SOI-GICA results compared to those of traditional TC-GICA. Accordingly, we recommend SOI-GICA for group ICA-based fMRI studies, especially those with large data sets. Copyright 2010 Elsevier Inc. All rights reserved.
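    The order dependence of multi-stage PCA reduction can be reproduced in a few lines. In this illustrative sketch (random data standing in for fMRI, with arbitrary dimensions), subjects are reduced and concatenated sequentially in two different orders, and the principal angles between the resulting subspaces show that the outputs differ:

```python
import numpy as np

rng = np.random.default_rng(42)
subjects = [rng.standard_normal((30, 50)) for _ in range(3)]  # (time x voxel) per subject
k = 10  # components retained at each reduction stage

def reduce_rows(X, k):
    """Keep the top-k principal component rows of X (PCA over the time dimension)."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return s[:k, None] * Vt[:k]

def stepwise_reduction(order):
    """Reduce subjects one at a time in the given concatenation order."""
    R = reduce_rows(subjects[order[0]], k)
    for i in order[1:]:
        R = reduce_rows(np.vstack([R, subjects[i]]), k)
    return R

A = stepwise_reduction([0, 1, 2])
B = stepwise_reduction([2, 1, 0])

# cosines of the principal angles between the two final subspaces
Qa, _ = np.linalg.qr(A.T)
Qb, _ = np.linalg.qr(B.T)
cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
```

If the two orders produced the same reduction, every cosine would equal one; the deviation from one quantifies the order-induced variability the abstract describes.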

  18. Method and apparatus for current-output peak detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Geronimo, Gianluigi

    2017-01-24

    A method and apparatus for a current-output peak detector. A current-output peak detector circuit is disclosed and works in two phases. The peak detector circuit includes switches to switch the peak detector circuit from the first phase to the second phase upon detection of the peak voltage of an input voltage signal. The peak detector generates a current output with a high degree of accuracy in the second phase.

  19. Design of current source for multi-frequency simultaneous electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Han, Bing; Xu, Yanbin; Dong, Feng

    2017-09-01

    Multi-frequency electrical impedance tomography has been evolving from the frequency-sweep approach to the multi-frequency simultaneous measurement technique, which reduces measuring time and is increasingly attractive for time-varying biological applications. The accuracy and stability of the current source are the key factors determining the quality of the image reconstruction. This article presents a field programmable gate array-based current source for a multi-frequency simultaneous electrical impedance tomography system. A novel current source circuit was realized by combining the classic current mirror based on the feedback amplifier AD844 with a differential topology. The optimal phase offsets of the harmonic sinusoids were obtained through crest factor analysis. The output characteristics of this current source were evaluated by simulation and actual measurement. The results include the following: (1) the output impedance was compared with that of the Howland pump circuit in simulation, showing comparable performance at low frequencies; moreover, the proposed current source places lower demands on resistor tolerance and performs even better at high frequencies. (2) The output impedance in actual measurement is above 1.3 MΩ below 200 kHz and remains at 250 kΩ up to 1 MHz. (3) An experiment based on a biological RC model was implemented; the mean errors for the demodulated impedance amplitude and phase are 0.192% and 0.139°, respectively. Therefore, the proposed current source is wideband, biocompatible, and high-precision, which demonstrates great potential to work as a sub-system in the multi-frequency electrical impedance tomography system.
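    The role of phase offsets in crest factor can be illustrated numerically. The sketch below uses an assumed tone count and Newman phases, a classic low-crest-factor choice that is not necessarily the set the authors derived:

```python
import numpy as np

fs = 100_000.0
n_tones = 8
freqs = 1_000.0 * np.arange(1, n_tones + 1)   # harmonics 1..8 kHz (assumed)
t = np.arange(0, 0.01, 1 / fs)                # ten periods of the fundamental

def crest_factor(phases):
    """Peak-to-RMS ratio of a sum of equal-amplitude harmonic cosines."""
    sig = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return float(np.max(np.abs(sig)) / np.sqrt(np.mean(sig ** 2)))

cf_zero = crest_factor(np.zeros(n_tones))     # all tones peak together
newman = np.pi * (np.arange(1, n_tones + 1) - 1) ** 2 / n_tones  # Newman phases
cf_newman = crest_factor(newman)
```

With zero phases all eight tones align at t = 0 and the crest factor is 4; the quadratic Newman offsets spread the peaks and lower it substantially, which is exactly why optimized phases matter for a multi-tone current source.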

  20. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. Moreover, because the navigation information obtained from the two different kinds of sensors is complementary, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. Thus, the uncertainty caused by the lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results show that the attitude errors can be effectively reduced by utilizing the proposed robust filter when measurements are missing or discontinuous. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  1. Study of the Algorithm of Backtracking Decoupling and Adaptive Extended Kalman Filter Based on the Quaternion Expanded to the State Variable for Underwater Glider Navigation

    PubMed Central

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-01-01

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading, pitch and roll) becomes more serious when pitch or roll motion occurs. This cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitude angles when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smooths the filtering output to improve the accuracy and stability of attitude and position determination. To evaluate the performance of the proposed BD-AEKF method, pitch and roll motion were simulated and the proposed method's performance was analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system was designed, and three-axis non-magnetic turntable experiments and vehicle experiments were performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method. PMID:25479331

  2. Study of the algorithm of backtracking decoupling and adaptive extended Kalman filter based on the quaternion expanded to the state variable for underwater glider navigation.

    PubMed

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-12-03

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading, pitch and roll) becomes more serious when pitch or roll motion occurs. This cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitude angles when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smooths the filtering output to improve the accuracy and stability of attitude and position determination. To evaluate the performance of the proposed BD-AEKF method, pitch and roll motion were simulated and the proposed method's performance was analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system was designed, and three-axis non-magnetic turntable experiments and vehicle experiments were performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method.

  3. Surrogate model approach for improving the performance of reactive transport simulations

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular published benchmark problem involving 1D calcite transport (Kolditz, 2012). As surrogate models, we tried a number of statistical models available through the caret and DiceEval packages for R. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we used the surrogate model to predict the simulator output on the part of the sampled input data that was not used for training.
For this scenario we find that the multivariate adaptive regression splines (MARS) method provides the best trade-off between speed and accuracy. This proof of concept forms an essential step towards building an interactive visual analytics system to enable user-driven, systematic creation of geochemical surrogate models. Such a system would make reactive transport simulations with unprecedented spatial and temporal detail possible. References: Kolditz, O., Görke, U.J., Shao, H. and Wang, W., 2012. Thermo-hydro-mechanical-chemical processes in porous media: benchmarks and examples (Vol. 86). Springer Science & Business Media.
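    The surrogate workflow described above (train a cheap statistical model on sampled simulator input-output pairs, then validate on held-out samples) can be sketched as follows. The "simulator" here is a hypothetical stand-in function, and a polynomial fit stands in for the MARS model used in the study:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulator(x):
    # hypothetical stand-in for an expensive geochemical simulation
    return np.tanh(3.0 * x) + 0.1 * x ** 2

# randomly sampled inputs, split into training and validation sets
x = rng.uniform(-2.0, 2.0, 200)
x_train, x_val = x[:150], x[150:]

# surrogate: low-order polynomial fitted to the training sample
coef = np.polyfit(x_train, simulator(x_train), deg=5)
pred = np.polyval(coef, x_val)

# validate against simulator output the surrogate never saw
rmse = float(np.sqrt(np.mean((pred - simulator(x_val)) ** 2)))
```

Once trained, evaluating the surrogate is a polynomial evaluation rather than a full simulator call, which is the source of the speed-up in the reactive transport coupling.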

  4. Technical Report: TG-142 compliant and comprehensive quality assurance tests for respiratory gating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Kyle; Rong, Yi, E-mail: yrong@ucdavis.edu

    2015-11-15

    Purpose: To develop and establish a comprehensive gating commissioning and quality assurance procedure in compliance with TG-142. Methods: Eight Varian TrueBeam Linacs were used for this study. Gating commissioning included an end-to-end test and baseline establishment. The end-to-end test was performed using a CIRS dynamic thoracic phantom with a moving cylinder inside the lung, which carried both optically stimulated luminescence detectors (OSLDs) and Gafchromic EBT2 films while the target was moving, for a point dose check and a 2D profile check. In addition, baselines were established for beam-on temporal delay and calibration of the surrogate, for both megavoltage (MV) and kilovoltage (kV) beams. A motion simulation device (MotionSim) was used to provide periodic motion on a platform, synchronized with a surrogate motion. The overall accuracy and uncertainties were analyzed and compared. Results: The OSLD readings were within 5% of the planned dose (within measurement uncertainty) for both phase- and amplitude-gated deliveries. Film results agreed with the predicted dose to within 3% for a standard sinusoidal motion. The gate-on temporal accuracy averaged 139 ± 10 ms for MV beams and 92 ± 11 ms for kV beams. The temporal delay of the surrogate motion depends on the motion speed and averaged 54.6 ± 3.1 ms for slow, 24.9 ± 2.9 ms for intermediate, and 23.0 ± 20.1 ms for fast speeds. Conclusions: A comprehensive gating commissioning procedure was introduced for verifying the output accuracy and establishing the temporal accuracy baselines with respiratory gating. The baselines are needed for routine quality assurance tests, as suggested by TG-142.

  5. Technical Report: TG-142 compliant and comprehensive quality assurance tests for respiratory gating.

    PubMed

    Woods, Kyle; Rong, Yi

    2015-11-01

    To develop and establish a comprehensive gating commissioning and quality assurance procedure in compliance with TG-142. Eight Varian TrueBeam Linacs were used for this study. Gating commissioning included an end-to-end test and baseline establishment. The end-to-end test was performed using a CIRS dynamic thoracic phantom with a moving cylinder inside the lung, which carried both optically stimulated luminescence detectors (OSLDs) and Gafchromic EBT2 films while the target was moving, for a point dose check and a 2D profile check. In addition, baselines were established for beam-on temporal delay and calibration of the surrogate, for both megavoltage (MV) and kilovoltage (kV) beams. A motion simulation device (MotionSim) was used to provide periodic motion on a platform, synchronized with a surrogate motion. The overall accuracy and uncertainties were analyzed and compared. The OSLD readings were within 5% of the planned dose (within measurement uncertainty) for both phase- and amplitude-gated deliveries. Film results agreed with the predicted dose to within 3% for a standard sinusoidal motion. The gate-on temporal accuracy averaged 139±10 ms for MV beams and 92±11 ms for kV beams. The temporal delay of the surrogate motion depends on the motion speed and averaged 54.6±3.1 ms for slow, 24.9±2.9 ms for intermediate, and 23.0±20.1 ms for fast speeds. A comprehensive gating commissioning procedure was introduced for verifying the output accuracy and establishing the temporal accuracy baselines with respiratory gating. The baselines are needed for routine quality assurance tests, as suggested by TG-142.

  6. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    PubMed Central

    Stöckel, Andreas; Jenzen, Christoph; Thies, Michael; Rückert, Ulrich

    2017-01-01

    Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows testing the quality of the neuron model implementation and explaining significant deviations from the expected reference output. PMID:28878642
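    A minimal software version of the binary (Willshaw-style) associative memory underlying the benchmark can be written down directly; the dimensions and sparsity below are arbitrary assumptions, and a plain matrix recall stands in for the spiking implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, n_pat = 64, 4, 10   # neurons, active bits per pattern, stored pairs

def sparse_pattern():
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, k, replace=False)] = 1
    return v

inputs = [sparse_pattern() for _ in range(n_pat)]
outputs = [sparse_pattern() for _ in range(n_pat)]

# Willshaw learning rule: binary OR of the outer products of each stored pair
W = np.zeros((n, n), dtype=np.uint8)
for x, y in zip(inputs, outputs):
    W |= np.outer(y, x)

def recall(x):
    """Threshold the dendritic sums at k, the number of active input bits."""
    s = W.astype(np.int32) @ x.astype(np.int32)
    return (s >= k).astype(np.uint8)

recalled = [recall(x) for x in inputs]
```

The Willshaw rule guarantees that every stored output bit fires on recall of its input; false-positive bits appear only as the memory load grows, which is what makes the structure a convenient correctness probe for hardware neuron models.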

  7. Evaluation of nine popular de novo assemblers in microbial genome assembly.

    PubMed

    Forouzan, Esmaeil; Maleki, Masoumeh Sadat Mousavi; Karkhane, Ali Asghar; Yakhchali, Bagher

    2017-12-01

    Next generation sequencing (NGS) technologies are revolutionizing biology, with Illumina being the most popular NGS platform. Short read assembly is a critical part of most genome studies using NGS. Hence, in this study, the performance of nine well-known assemblers was evaluated in the assembly of seven different microbial genomes. The effect of different read coverages and k-mer parameters on the quality of the assembly was also evaluated, on both simulated and actual read datasets. Our results show that the performance of assemblers on real and simulated datasets can differ significantly, mainly because of coverage bias. According to the outputs on actual read datasets, for all studied read coverages (7×, 25× and 100×), SPAdes and IDBA-UD clearly outperformed the other assemblers based on NGA50 and accuracy metrics. Velvet is the most conservative assembler, with the lowest NGA50 and error rate. Copyright © 2017. Published by Elsevier B.V.
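    NGA50 is an alignment-aware variant of the N50 statistic computed on reference-aligned contig blocks. The underlying N50 computation is a short function; this sketch is not tied to any particular assembler toolkit:

```python
def n50(contig_lengths):
    """Smallest contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

# four contigs totalling 1000 bp: the 400 bp and 300 bp contigs
# together reach the halfway mark, so N50 = 300
example = n50([100, 200, 300, 400])
```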

  8. The Spacelab IPS Star Simulator

    NASA Astrophysics Data System (ADS)

    Wessling, Francis C., III

    The cost of doing business in space is very high. If errors occur while in orbit the costs grow and desired scientific data may be corrupted or even lost. The Spacelab Instrument Pointing System (IPS) Star Simulator is a unique test bed that allows star trackers to interface with simulated stars in a laboratory before going into orbit. This hardware-in-the-loop testing of equipment on earth increases the probability of success while in space. The IPS Star Simulator provides three fields of view 2.55 x 2.55 deg each for input into star trackers. The fields of view are produced on three separate monitors. Each monitor has 4096 x 4096 addressable points and can display 50 stars (pixels) maximum at a given time. The pixel refresh rate is 1000 Hz. The spectral output is approximately 550 nm. The available relative visual magnitude range is two to eight visual magnitudes. The star size is less than 100 arcsec. The minimum star movement is less than 5 arcsec and the relative position accuracy is approximately 40 arcsec. The purpose of this paper is to describe the IPS Star Simulator design and to provide an operational scenario so others may gain from the approach and possible use of the system.

  9. The microcomputer scientific software series 4: testing prediction accuracy.

    Treesearch

    H. Michael Rauscher

    1986-01-01

    A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
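    The two accuracy components that ATEST reports can be computed directly from observed-predicted pairs. Below is a minimal sketch with hypothetical data, taking bias as the mean error and precision as the standard deviation of the errors:

```python
import math

# hypothetical (observed, predicted) pairs
pairs = [(10.2, 9.8), (11.0, 11.5), (9.5, 9.9), (10.8, 10.4)]

errors = [predicted - observed for observed, predicted in pairs]
bias = sum(errors) / len(errors)            # systematic over/under-prediction
precision = math.sqrt(                      # spread of errors around the bias
    sum((e - bias) ** 2 for e in errors) / (len(errors) - 1)
)
```

A predictor can be precise but biased (tight errors, shifted), or unbiased but imprecise (centred errors, widely scattered), which is why the two components are reported separately.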

  10. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. This is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  11. Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio

    2016-02-01

    Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respiration. Here, the NEP of different forest ecosystems in Italy was simulated by the use of a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes of actual conditions using information on the existing woody biomass. The estimates derived from the methodology were tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology, driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). Good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets worsens the accuracy to a varying degree, which is properly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).

  12. Output power fluctuations due to different weights of macro particles used in particle-in-cell simulations of Cerenkov devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Rong; Li, Yongdong; Liu, Chunliang

    2016-07-15

    The output power fluctuations caused by the weights of macro particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons that have passed through a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in a PIC simulation with a relatively small weight of macro particles is taken as the initial distribution. By analyzing this initial distribution with a statistical method, estimates of the output power fluctuations caused by different weights of macro particles are obtained. The statistical method is verified by comparing the estimates with the simulation results. The fluctuations become stronger with increasing weight of macro particles, and the weight can in turn be determined from the estimated output power fluctuations. With the weights of macro particles optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.
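    The scaling behind these fluctuations (heavier macro particles mean fewer samples of the underlying velocity distribution, hence noisier outputs) can be checked with a toy Monte Carlo. The drift and thermal parameters below are arbitrary, and a mean-velocity statistic stands in for the simulated output power:

```python
import numpy as np

rng = np.random.default_rng(0)
v_drift, v_thermal = 1.0, 0.05   # assumed beam parameters (arbitrary units)

def output_spread(n_macro, n_runs=200):
    """Std of a mean-velocity 'output' over repeated samplings of n_macro particles."""
    outs = [rng.normal(v_drift, v_thermal, n_macro).mean() for _ in range(n_runs)]
    return float(np.std(outs))

spread_heavy = output_spread(200)    # heavier macro particles: fewer samples
spread_light = output_spread(5000)   # lighter macro particles: more samples
```

The spread shrinks roughly as one over the square root of the macro-particle count, so 25 times more (and hence lighter) particles should reduce the fluctuation by about a factor of five.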

  13. Heat-load simulator for heat sink design

    NASA Technical Reports Server (NTRS)

    Dunleavy, A. M.; Vaughn, T. J.

    1968-01-01

    Heat-load simulator is fabricated from 1/4-inch aluminum plate with a contact surface equal in dimensions and configuration to those of the electronic installation. The method controls thermal output to simulate actual electronic component thermal output.

  14. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  15. Web-based emergency response exercise management systems and methods thereof

    DOEpatents

    Goforth, John W.; Mercer, Michael B.; Heath, Zach; Yang, Lynn I.

    2014-09-09

    According to one embodiment, a method for simulating portions of an emergency response exercise includes generating situational awareness outputs associated with a simulated emergency and sending the situational awareness outputs to a plurality of output devices. Also, the method includes outputting to a user device a plurality of decisions associated with the situational awareness outputs at a decision point, receiving a selection of one of the decisions from the user device, generating new situational awareness outputs based on the selected decision, and repeating the sending, outputting and receiving steps based on the new situational awareness outputs. Other methods, systems, and computer program products are included according to other embodiments of the invention.
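    The claimed method is essentially a loop over a branching scenario tree: emit situational awareness, present decisions, read a selection, generate new awareness, and repeat. A toy sketch of that loop, with entirely hypothetical scenario content:

```python
# hypothetical scenario tree for one exercise branch
scenario = {
    "awareness": "Smoke reported in sector 3",
    "decisions": {
        "evacuate": {"awareness": "Sector 3 evacuated; fire spreading",
                     "decisions": {}},
        "investigate": {"awareness": "Crew delayed; alarm escalates",
                        "decisions": {}},
    },
}

def run_exercise(node, choose):
    """Walk the tree: send awareness, offer decisions, apply the selection."""
    log = []
    while node["decisions"]:
        log.append(node["awareness"])               # send situational awareness
        choice = choose(sorted(node["decisions"]))  # present decisions, get selection
        node = node["decisions"][choice]            # generate new awareness
    log.append(node["awareness"])                   # terminal state
    return log

# a scripted "user" that always picks the first option alphabetically
log = run_exercise(scenario, lambda options: options[0])
```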

  16. Heat simulation via Scilab programming

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul

    2014-07-01

    This paper discusses the use of an open-source software package called Scilab to develop a heat simulator. The heat equation was used to simulate heat behavior in an object, and the simulator was developed using the finite difference method. Numerical experiment outputs show that Scilab can produce a good simulation of heat behavior, with excellent visual output, from only simple computer code.
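    A finite-difference heat simulator of the kind described is only a few lines in any environment. The Python sketch below (explicit FTCS scheme, with assumed rod length, diffusivity and grid) mirrors what such a Scilab script would compute:

```python
import numpy as np

# 1D heat equation u_t = alpha * u_xx, explicit FTCS scheme
nx = 51
alpha = 1.0e-4                      # assumed thermal diffusivity (m^2/s)
dx = 1.0 / (nx - 1)                 # 1 m rod
dt = 0.4 * dx ** 2 / alpha          # respects the stability limit dt <= dx^2 / (2*alpha)

u = np.zeros(nx)
u[nx // 2] = 100.0                  # initial hot spot at the centre

for _ in range(500):
    u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0              # fixed-temperature (Dirichlet) boundaries
```

Because the time step satisfies the FTCS stability limit, the temperature stays non-negative and the hot spot diffuses outward while its peak decays, which is the behavior such a simulator visualizes.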

  17. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. 
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
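    The bias-correction step of BCSD is commonly implemented as empirical quantile mapping, replacing each model value with the observed climatological value at the same quantile. A self-contained sketch with synthetic "observed" and "biased model" samples:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 1000)                  # synthetic observed climatology
model = 1.5 * rng.gamma(2.0, 2.0, 1000) + 1.0    # synthetic biased model output

def quantile_map(values, model_clim, obs_clim):
    """Map each value to the observed quantile matching its model quantile."""
    probs = np.interp(values, np.sort(model_clim),
                      np.linspace(0.0, 1.0, len(model_clim)))
    return np.quantile(obs_clim, probs)

corrected = quantile_map(model, model, obs)
```

After mapping, the corrected series reproduces the observed distribution (mean, variance and quantiles), which is why BCSD output remained hydrologically plausible where raw or interpolated model output did not.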

  18. Hardware based redundant multi-threading inside a GPU for improved reliability

    DOEpatents

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
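    The idea of running duplicate instances of the same computation on the same load and cross-checking their outputs can be sketched in software, with Python threads standing in for redundant hardware threads; the function names here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def computation(load):
    # the duplicated work; every redundant instance receives the same load
    return sum(i * i for i in range(load))

def redundant_run(load, copies=2):
    """Run `copies` instances of the computation and verify they agree."""
    with ThreadPoolExecutor(max_workers=copies) as pool:
        results = list(pool.map(computation, [load] * copies))
    if len(set(results)) != 1:
        raise RuntimeError("redundant outputs disagree")
    return results[0]
```

In hardware the comparison would catch transient faults that corrupt one instance; in this deterministic software sketch the instances always agree and the verified result is returned.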

  19. OASIS - ORBIT ANALYSIS AND SIMULATION SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1994-01-01

    The Orbit Analysis and Simulation Software, OASIS, is a software system developed for covariance and simulation analyses of problems involving earth satellites, especially the Global Positioning System (GPS). It provides a flexible, versatile and efficient accuracy analysis tool for earth satellite navigation and GPS-based geodetic studies. To make future modifications and enhancements easy, the system is modular, with five major modules: PATH/VARY, REGRES, PMOD, FILTER/SMOOTHER, and OUTPUT PROCESSOR. PATH/VARY generates satellite trajectories. Among the factors taken into consideration are: 1) the gravitational effects of the planets, moon and sun; 2) space vehicle orientation and shapes; 3) solar pressure; 4) solar radiation reflected from the surface of the earth; 5) atmospheric drag; and 6) space vehicle gas leaks. The REGRES module reads the user's input, then determines if a measurement should be made based on geometry and time. PMOD modifies a previously generated REGRES file to facilitate various analysis needs. FILTER/SMOOTHER is especially suited to a multi-satellite precise orbit determination and geodetic-type problems. It can be used for any situation where parameters are simultaneously estimated from measurements and a priori information. Examples of nonspacecraft areas of potential application might be Very Long Baseline Interferometry (VLBI) geodesy and radio source catalogue studies. OUTPUT PROCESSOR translates covariance analysis results generated by FILTER/SMOOTHER into user-desired easy-to-read quantities, performs mapping of orbit covariances and simulated solutions, transforms results into different coordinate systems, and computes post-fit residuals. The OASIS program was developed in 1986. It is designed to be implemented on a DEC VAX 11/780 computer using VAX VMS 3.7 or higher. It can also be implemented on a Micro VAX II provided sufficient disk space is available.

  20. A fuzzy logic approach to modeling a vehicle crash test

    NASA Astrophysics Data System (ADS)

    Pawlus, Witold; Karimi, Hamid Reza; Robbersmyr, Kjell G.

    2013-03-01

    This paper presents an application of a fuzzy logic approach to vehicle crash modeling. A typical vehicle-to-pole collision is described, and the kinematics of a car involved in this type of crash event are thoroughly characterized. The basics of fuzzy set theory and modeling principles based on the fuzzy logic approach are presented. Particular attention is paid to explaining the methodology for creating a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared to the original vehicle's kinematics. The paper concludes by identifying which factors influence the accuracy of the fuzzy model's output and how they can be adjusted to improve the model's fidelity.
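    To make the fuzzy-modeling idea concrete, here is a minimal zero-order Sugeno-style sketch: triangular membership functions partition an input (impact speed), and rule firing strengths weight crisp outputs. The membership breakpoints and severity scores are invented for illustration and are not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def crash_severity(speed_kmh):
    """Zero-order Sugeno inference: weighted average of rule outputs."""
    low    = tri(speed_kmh, -1, 0, 30)
    medium = tri(speed_kmh, 20, 40, 60)
    high   = tri(speed_kmh, 50, 80, 200)
    rules = [(low, 10.0), (medium, 40.0), (high, 90.0)]  # hypothetical scores
    den = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / den if den else 0.0
```

Adjusting the membership breakpoints and rule outputs against measured crash kinematics is exactly the kind of tuning the abstract refers to when discussing the model's fidelity.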

  1. Design, Experiments and Simulation of Voltage Transformers on the Basis of a Differential Input D-dot Sensor

    PubMed Central

    Wang, Jingang; Gao, Can; Yang, Jie

    2014-01-01

    Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid, because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuit. This work develops a new non-contact high-bandwidth voltage measurement system for power equipment. This system aims at the miniaturization and non-contact measurement of the smart grid. After traditional D-dot voltage probe analysis, an improved method is proposed. For the sensor to work in a self-integrating pattern, the differential input pattern is adopted for circuit design, and grounding is removed. To prove the structure design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for the simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results ascertain that the root mean square values of measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response. This sensor can meet the intelligence, miniaturization, and convenience requirements of the smart grid. PMID:25036333

  2. Neural network based adaptive output feedback control: Applications and improvements

    NASA Astrophysics Data System (ADS)

    Kutay, Ali Turker

    Application of recently developed neural network based adaptive output feedback controllers to a diverse range of problems both in simulations and experiments is investigated in this thesis. The purpose is to evaluate the theory behind the development of these controllers numerically and experimentally, identify the needs for further development in practical applications, and to conduct further research in directions that are identified to ultimately enhance applicability of adaptive controllers to real world problems. We mainly focus our attention on adaptive controllers that augment existing fixed gain controllers. A recently developed approach holds great potential for successful implementations on real world applications due to its applicability to systems with minimal information concerning the plant model and the existing controller. In this thesis the formulation is extended to the multi-input multi-output case for distributed control of interconnected systems and successfully tested on a formation flight wind tunnel experiment. The command hedging method is formulated for the approach to further broaden the class of systems it can address by including systems with input nonlinearities. Also a formulation is adopted that allows the approach to be applied to non-minimum phase systems for which non-minimum phase characteristics are modeled with sufficient accuracy and treated properly in the design of the existing controller. It is shown that the approach can also be applied to augment nonlinear controllers under certain conditions and an example is presented where the nonlinear guidance law of a spinning projectile is augmented. Simulation results on a high fidelity 6 degrees-of-freedom nonlinear simulation code are presented. The thesis also presents a preliminary adaptive controller design for closed loop flight control with active flow actuators. Behavior of such actuators in dynamic flight conditions is not known. 
To test the adaptive controller design in simulation, a fictitious actuator model is developed that fits experimentally observed characteristics of flow control actuators in static flight conditions as well as possible coupling effects between actuation, the dynamics of flow field, and the rigid body dynamics of the vehicle.

  3. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.

  4. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
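    The parallel post-processing pattern the paper describes can be sketched with a thread pool screening contingency results concurrently. The record structure, field names, and the 0.95 p.u. voltage limit below are hypothetical, chosen only to illustrate the idea:

```python
from concurrent.futures import ThreadPoolExecutor

def screen_contingency(result):
    """Flag a contingency whose worst post-contingency voltage violates limits."""
    worst = min(result["bus_voltages_pu"])
    return {"id": result["id"], "violation": worst < 0.95, "worst_pu": worst}

contingencies = [
    {"id": "line_1_2", "bus_voltages_pu": [1.01, 0.99, 0.97]},
    {"id": "gen_5",    "bus_voltages_pu": [0.98, 0.93, 0.96]},
    {"id": "xfmr_7",   "bus_voltages_pu": [1.00, 0.98, 0.99]},
]

with ThreadPoolExecutor(max_workers=4) as pool:
    summaries = list(pool.map(screen_contingency, contingencies))
flagged = [s["id"] for s in summaries if s["violation"]]
```

With thousands of contingencies, only the flagged subset needs to reach the visualization layer, which is what makes fast post-processing pay off operationally.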

  5. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
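    One well-known deterministic formula of the kind ShinyGPAS visualizes is the Daetwyler et al. expression, in which expected accuracy depends on training-set size, heritability, and the number of independent chromosome segments. The sketch below implements that formula; the parameter values are arbitrary examples:

```python
import math

def daetwyler_accuracy(n, h2, me):
    """Expected genomic prediction accuracy: r = sqrt(n*h2 / (n*h2 + Me)),
    with training size n, heritability h2, independent segments Me."""
    return math.sqrt(n * h2 / (n * h2 + me))

# Accuracy grows with training-set size for fixed h2 and Me
acc_small = daetwyler_accuracy(1000, 0.5, 5000)
acc_large = daetwyler_accuracy(10000, 0.5, 5000)
```

Plotting such a function over a grid of n or h2 values reproduces, in miniature, the interactive scatter plots ShinyGPAS delivers in the browser.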

  6. Wang-Landau sampling: Saving CPU time

    NASA Astrophysics Data System (ADS)

    Ferreira, L. S.; Jorge, L. N.; Leão, S. A.; Caparica, A. A.

    2018-04-01

    In this work we propose an improvement to the Wang-Landau (WL) method that saves about 60% of the CPU time while producing the same results with the same accuracy. We used the 2D Ising model to show that one can initiate all WL simulations using the outputs of an advanced WL level from a previous simulation. We showed that up to the seventh WL level (f6) the simulations are not yet biased and can proceed to any value that a simulation from the very beginning would reach. As a result, the initial WL levels can be simulated just once. It was also observed that the saving in CPU time is larger for larger lattice sizes, exactly where the computational cost is considerable. We carried out high-resolution simulations beginning from the first WL level (f0) and others beginning from the eighth WL level (f7) using all the data from the end of the previous level, and showed that the results for the critical temperature Tc and the critical static exponents β and γ coincide within the error bars. Finally, we applied the same procedure to the 1/2-spin Baxter-Wu model, where the saving in CPU time was about 64%.
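    For readers unfamiliar with the method, the basic WL loop estimates the density of states g(E) by penalizing visited energies until the histogram is flat, then halving the modification factor ln f (each halving is one "WL level" in the abstract's terminology; the paper's saving comes from reusing the ln g and ln f from an earlier run's level as the starting point). A minimal, self-contained sketch on a toy system with a known answer, the sum of two dice, rather than the Ising model:

```python
import math
import random

def wang_landau(ln_f0=1.0, ln_f_final=1e-4, flat=0.8, seed=1):
    """Wang-Landau estimate of the density of states g(E) for the sum of two
    dice (E = 2..12; exact degeneracies are 1,2,3,4,5,6,5,4,3,2,1)."""
    rng = random.Random(seed)
    ln_g = {e: 0.0 for e in range(2, 13)}
    hist = {e: 0 for e in range(2, 13)}
    d1, d2 = 1, 1
    ln_f = ln_f0
    while ln_f > ln_f_final:               # each halving of ln_f = one WL level
        # propose re-rolling one die; accept with min(1, g(E_old)/g(E_new))
        new = rng.randrange(1, 7)
        nd1, nd2 = (new, d2) if rng.randrange(2) == 0 else (d1, new)
        if math.log(rng.random() + 1e-300) < ln_g[d1 + d2] - ln_g[nd1 + nd2]:
            d1, d2 = nd1, nd2
        e = d1 + d2
        ln_g[e] += ln_f                    # penalize the visited energy
        hist[e] += 1
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            ln_f /= 2.0                    # histogram flat: advance a level
            hist = {k: 0 for k in hist}
    return ln_g

ln_g = wang_landau()
ratio = math.exp(ln_g[7] - ln_g[2])        # exact degeneracy ratio is 6
```

Saving `ln_g` and `ln_f` at, say, the seventh level and restarting subsequent runs from there is the paper's proposal; the early, expensive levels are then simulated only once.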

  7. The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences

    ERIC Educational Resources Information Center

    Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui

    2006-01-01

    Simulation output analysis has received little attention comparing to modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequate details of knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…

  8. Parameterization of Forest Canopies with the PROSAIL Model

    NASA Astrophysics Data System (ADS)

    Austerberry, M. J.; Grigsby, S.; Ustin, S.

    2013-12-01

    Particularly in forested environments, arboreal characteristics such as Leaf Area Index (LAI) and Leaf Inclination Angle have a large impact on the spectral characteristics of reflected radiation. The reflected spectrum can be measured directly with satellites or airborne instruments, including the MASTER and AVIRIS instruments. This particular project dealt with spectral analysis of reflected light as measured by AVIRIS compared to tree measurements taken from the ground. Chemical properties of leaves, including pigment concentrations and moisture levels, were also measured. The leaf data were combined with the chemical properties of three separate trees and served as input data for a sequence of simulations with the PROSAIL Model, a combination of PROSPECT and Scattering by Arbitrarily Inclined Leaves (SAIL) simulations. The output was a computed reflectivity spectrum, which corresponded to the spectra that were directly measured by AVIRIS for the three trees' exact locations within a 34-meter pixel resolution. The input data that produced the best-correlating spectral output was then cross-referenced with LAI values that had been obtained through two entirely separate methods, NDVI extraction and use of the Beer-Lambert law with airborne LiDAR. Examination with regressive techniques between the measured and modeled spectra then enabled a determination of the trees' probable structure and leaf parameters. Highly correlated spectral output corresponded well to specific values of LAI and Leaf Inclination Angle. Interestingly, varying the Leaf Angle Distribution appears to have little or no noticeable effect on the PROSAIL model. This project not only evaluates the effectiveness and accuracy of the PROSAIL model, but also serves as a precursor to direct measurement of vegetative indices exclusively from airborne or satellite observation.
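    The Beer-Lambert LAI retrieval mentioned above inverts the light-extinction relation P = exp(-k · LAI), where P is the canopy gap fraction (e.g. from LiDAR pulse penetration) and k is an extinction coefficient. A minimal sketch, with a hypothetical gap fraction and a commonly assumed k of 0.5:

```python
import math

def lai_beer_lambert(gap_fraction, k=0.5):
    """Invert Beer-Lambert light extinction: P = exp(-k * LAI)."""
    return -math.log(gap_fraction) / k

# e.g. a LiDAR-derived gap fraction of 0.20
lai = lai_beer_lambert(0.20, k=0.5)
```

The extinction coefficient k itself depends on leaf angle distribution and view geometry, which is one reason the project cross-checked LiDAR-derived LAI against the NDVI-based and PROSAIL-based estimates.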

  9. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows is presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver is demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. 
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
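    The goal-oriented (adjoint-weighted residual) error estimation underlying the hp-adaptation can be illustrated on a much simpler linear problem. In the sketch below, hypothetical and not the thesis's DG setup, a 1D Poisson problem is solved on a coarse grid, injected onto a fine grid, and the error in an output functional J(u) ≈ ∫u dx is estimated as the fine-grid adjoint solution dotted with the fine-grid residual; for a linear problem this estimate is exact:

```python
import numpy as np

def poisson_matrix(n):
    """Second-difference matrix for -u'' on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A, h

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # so that u = sin(pi x)

# Coarse solve, then linear injection onto a uniformly refined grid
nc, nf = 7, 15
Ac, hc = poisson_matrix(nc)
Af, hf = poisson_matrix(nf)
xc = np.linspace(0, 1, nc + 2)[1:-1]
xf = np.linspace(0, 1, nf + 2)[1:-1]
u_c = np.linalg.solve(Ac, f(xc))
u_inj = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                  np.concatenate(([0.0], u_c, [0.0])))

# Output J(u) = j^T u ~ integral of u; adjoint-weighted residual estimate
j = hf * np.ones(nf)
psi = np.linalg.solve(Af.T, j)          # discrete adjoint for the output
residual = f(xf) - Af @ u_inj           # fine-grid residual of coarse solution
dJ_est = psi @ residual

# Reference: actual output error of the injected solution on the fine grid
u_f = np.linalg.solve(Af, f(xf))
dJ_true = j @ (u_f - u_inj)
```

The local terms `psi[i] * residual[i]` are exactly the per-cell indicators that drive goal-oriented h- or p-refinement: cells where the adjoint-weighted residual is large contribute most to the output error.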

  10. Accuracy of electronic implant torque controllers following time in clinical service.

    PubMed

    Mitrani, R; Nicholls, J I; Phillips, K M; Ma, T

    2001-01-01

    Tightening of the screws in implant-supported restorations has been reported to be problematic, in that if the applied torque is too low, screw loosening occurs. If the torque is too high, then screw fracture can take place. Thus, accuracy of the torque driver is of the utmost importance. This study evaluated 4 new electronic torque drivers (controls) and 10 test electronic torque drivers, which had been in clinical service for a minimum of 5 years. Torque values of the test drivers were measured and were compared with the control values using a 1-way analysis of variance. Torque delivery accuracy was measured using a technique that simulated the clinical situation. In vivo, the torque driver turns the screw until the selected tightening torque is reached. In this laboratory experiment, an implant, along with an attached abutment and abutment gold screw, was held firmly in a Tohnichi torque gauge. Calibration accuracy for the Tohnichi is +/- 3% of the scale value. During torque measurement, the gold screw turned a minimum of 180 degrees before contact was made between the screw and abutment. Three torque values (10, 20, and 32 N-cm) were evaluated, at both high- and low-speed settings. The recorded torque measurements indicated that the 10 test electronic torque drivers maintained a torque delivery accuracy equivalent to the 4 new (unused) units. Judging from the torque output values obtained from the 10 test units, the accuracy of the electronic torque drivers did not change significantly over the 5-year period of clinical service.
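    The 1-way analysis of variance used to compare the driver groups reduces to a single F statistic: the between-group mean square divided by the within-group mean square. A minimal sketch with invented torque readings (not the study's data) at a 32 N-cm setting:

```python
import numpy as np

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    grand = np.mean(np.concatenate(groups))
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical torque readings (N-cm) for a new and a clinically used driver
control = np.array([31.8, 32.1, 32.0, 31.9])
used    = np.array([31.9, 32.2, 31.8, 32.0])
f_stat = one_way_anova_f([control, used])
```

An F value well below the critical value (about 5.99 for 1 and 6 degrees of freedom at α = 0.05) is the pattern behind the study's conclusion that accuracy did not change with clinical service.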

  11. A High-Spin Rate Measurement Method for Projectiles Using a Magnetoresistive Sensor Based on Time-Frequency Domain Analysis

    PubMed Central

    Shang, Jianyu; Deng, Zhihong; Fu, Mengyin; Wang, Shunting

    2016-01-01

    Guidance can significantly improve the attack accuracy and overall combat efficiency of traditional artillery projectiles, making them more adaptable to the information warfare of the future. Obviously, the accurate measurement of artillery spin rate, which has long been regarded as a daunting task, is the basis of precise guidance and control. Magnetoresistive (MR) sensors can be applied to spin rate measurement, especially in the high-spin and high-g projectile launch environment. In this paper, based on the theory of an MR sensor measuring spin rate, the mathematical relationship between the frequency of the MR sensor output and the projectile spin rate was established through a fundamental derivation. By analyzing the characteristics of the MR sensor output, whose frequency varies with time, this paper proposes a Chirp z-Transform (CZT) time-frequency (TF) domain analysis method based on a rolling Blackman window (BCZT) which can accurately extract the projectile spin rate. To put it into practice, BCZT was applied to measure the spin rate of a 155 mm artillery projectile. After extracting the spin rate, the impact that launch rotational angular velocity and aspect angle have on the extraction accuracy of the spin rate was analyzed. Simulation results show that the BCZT TF domain analysis method can effectively and accurately measure the projectile spin rate, especially in a high-spin and high-g projectile launch environment. PMID:27322266
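    The essential idea, tracking a time-varying dominant frequency through a rolling Blackman window, can be sketched with a plain windowed FFT (a simplified stand-in for the paper's CZT, which refines the frequency resolution around the band of interest). The sampling rate, window length, and spin-rate profile below are invented for the example:

```python
import numpy as np

def spin_rate_track(signal, fs, win=256, hop=128):
    """Track the dominant frequency over time with a rolling Blackman window."""
    w = np.blackman(win)
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    track = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * w
        track.append(freqs[np.abs(np.fft.rfft(seg)).argmax()])
    return np.array(track)

fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Simulated MR-sensor output: spin rate decaying from 120 Hz to 100 Hz
inst_freq = 120.0 - 20.0 * t
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
rates = spin_rate_track(np.sin(phase), fs)
```

The FFT's resolution is fixed at fs/win; the CZT in the paper zooms the spectral analysis onto the expected spin-rate band, which is what makes the extraction accurate for fast-decaying, high-spin signals.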

  12. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761

  13. How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!

    NASA Astrophysics Data System (ADS)

    Hassan Saddagh, Mohammad; Javad Abedini, Mohammad

    2010-05-01

    Flood forecasting and early warning, as a non-structural measure for flood control, is often considered the most effective and suitable alternative for mitigating the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic and/or black box models, should secure the accuracy of flood values and timing, especially for long lead times. The application of the artificial neural network (ANN) in flood forecasting has received extensive attention in recent years due to its capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from executing a plain ANN as a simulation model demonstrate a dramatic reduction in performance indices as lead time increases. This paper monitors the performance indices as they relate to flood forecasting and early warning using two different methodologies. While the first method employs a multilayer neural network trained using a back-propagation scheme to forecast the output hydrograph of a hypothetical river for forecast lead times up to 6.0 hr, the second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, and assesses the performance indices compared to ANN alone as lead time increases. Results presented in both graphical and tabular format indicate the superiority of MIKE11 coupled with an ANN updating kernel over the ANN simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors grow rapidly for longer lead times. The second methodology provides more accurate and reliable results for longer forecast lead times.

  14. Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning

    NASA Astrophysics Data System (ADS)

    Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.

    2017-12-01

    Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions, etc.). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input–response relationships can be quantified by generating an ensemble of multiple (100s to 1000s) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce a large number of realizations of weather states from a small number of deterministic simulations, with speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations. These equations are location-invariant and involve first and second derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net to provide the network the information necessary to compute the first and second derivatives. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that the error increases as a function of the prediction time lag.
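    The principle, learning a local stencil update from solver outputs, can be shown on a toy problem. Below, a 1D heat-equation time step (the analogue of the PDE solver) is emulated from 3-point neighborhoods (the analogue of the 3x3 lon-lat grid) using plain linear least squares in place of a neural network; everything here is a hypothetical illustration, not the WRF experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.25  # diffusion number: u_new[i] = u[i] + r*(u[i-1] - 2*u[i] + u[i+1])

def heat_step(u):
    """One explicit finite-difference step of the 1D heat equation."""
    out = u.copy()
    out[1:-1] = u[1:-1] + r * (u[:-2] - 2 * u[1:-1] + u[2:])
    return out

# Training pairs: 3-point neighborhoods -> next value at the center
snapshots = [rng.standard_normal(64) for _ in range(20)]
X = np.vstack([np.column_stack((u[:-2], u[1:-1], u[2:])) for u in snapshots])
y = np.concatenate([heat_step(u)[1:-1] for u in snapshots])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Emulate one step on unseen data and compare against the true solver
u = rng.standard_normal(64)
pred = np.column_stack((u[:-2], u[1:-1], u[2:])) @ coef
err = np.max(np.abs(pred - heat_step(u)[1:-1]))
```

Because this stencil is linear, the emulator recovers it exactly (coefficients [r, 1-2r, r]); the WRF physics are nonlinear, which is why a neural network is needed there, but the neighborhood-as-predictor construction is the same.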

  15. Uncertainty based modeling of rainfall-runoff: Combined differential evolution adaptive Metropolis (DREAM) and K-means clustering

    NASA Astrophysics Data System (ADS)

    Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara

    2015-09-01

    Simulation of rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large scale climate signals have been proved to be effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps of rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data driven models are developed and used to forecast rainfall using large scale climate signals as rainfall predictors. Due to high effect of different sources of uncertainty on the output of hydrologic models, in the second step uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined, through developing a weighting method based on K-means clustering. Model parameters and input uncertainty are investigated using an adaptive Markov Chain Monte Carlo method. Finally, calibrated rainfall-runoff models are driven using the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. 
It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is improved by up to 50% in comparison with simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazard. Results of this study can be used in identification of the main factors affecting flood hazard analysis.
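    The K-means-based combination step can be sketched as follows: cluster the time steps (here by observed flow magnitude, using a minimal 1-D k-means), then weight each model within a cluster inversely to its error there. The two-model setup and error structure below are synthetic, built only to show why regime-dependent weights beat any single model:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50):
    """Minimal 1-D k-means: cluster time steps by observed flow magnitude."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels

rng = np.random.default_rng(42)
obs = np.concatenate([rng.uniform(1, 3, 100), rng.uniform(20, 30, 100)])
# Model A is accurate at low flows, model B at high flows
mA = obs + np.where(obs < 10, rng.normal(0, 0.1, 200), rng.normal(0, 5.0, 200))
mB = obs + np.where(obs < 10, rng.normal(0, 1.0, 200), rng.normal(0, 0.5, 200))

labels = kmeans_1d(obs)
combined = np.empty_like(obs)
for j in (0, 1):
    m = labels == j
    # Per-cluster weights inversely proportional to each model's MSE
    w = 1.0 / np.array([np.mean((mA[m] - obs[m])**2),
                        np.mean((mB[m] - obs[m])**2)])
    w /= w.sum()
    combined[m] = w[0] * mA[m] + w[1] * mB[m]

rmse = lambda x: np.sqrt(np.mean((x - obs)**2))
```

Each model dominates the cluster where it is reliable, so the combined series outperforms both, the same mechanism behind the reported accuracy gain.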

  16. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy, and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, owing to the complementary navigation information obtained from the two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be appreciably enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which can make its output discontinuous; the uncertainty caused by the lost measurements reduces system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated through numerical simulations and real experiments. The simulation and experimental results and analysis show that attitude errors can be effectively reduced by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  17. An Implanted, Stimulated Muscle Powered Piezoelectric Generator

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Gustafson, Kenneth; Kilgore, Kevin

    2007-01-01

A totally implantable piezoelectric generator system able to harvest power from electrically activated muscle could be used to augment the power systems of implanted medical devices, such as neural prostheses, by reducing the number of battery replacement surgeries or by allowing periods of untethered functionality. The features of our generator design are no moving parts and the use of a portion of the generated power for system operation and regulation. A software model of the system has been developed, and simulations have been performed to predict the output power as the system parameters were varied within their constraints. Mechanical forces that mimic muscle forces have been experimentally applied to a piezoelectric generator to verify the accuracy of the simulations and to explore losses due to mechanical coupling. Depending on the selection of system parameters, software simulations predict that this generator concept can produce up to approximately 700 µW of power, which is greater than the power necessary to drive the generator, conservatively estimated to be 50 µW. These results suggest that this concept has the potential to be an implantable, self-replenishing power source, and further investigation is underway.

  18. Addressing uncertainty in atomistic machine learning.

    PubMed

    Peterson, Andrew A; Christensen, Rune; Khorshidi, Alireza

    2017-05-10

Machine-learning regression has been demonstrated to precisely emulate the potential energy and forces that are output from more expensive electronic-structure calculations. However, to predict new regions of the potential energy surface, an assessment must be made of the credibility of the predictions. In this perspective, we address the types of errors that might arise in atomistic machine learning and the unique aspects of atomistic simulations that make machine learning challenging, and we highlight how uncertainty analysis can be used to assess the validity of machine-learning predictions. We suggest this will allow researchers to more fully use machine learning for the routine acceleration of large, high-accuracy, or extended-time simulations. In our demonstrations, we use a bootstrap ensemble of neural-network-based calculators, and show that the width of the ensemble can provide an estimate of the uncertainty when the width is comparable to that in the training data. Intriguingly, we also show that the uncertainty can be localized to specific atoms in the simulation, which may offer hints for the generation of training data to strategically improve the machine-learned representation.
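
The bootstrap-ensemble idea generalizes beyond neural networks. In the sketch below, cheap polynomial regressors stand in for the neural-network calculators (an assumption made for brevity), and the spread of member predictions serves as the uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ensemble_predict(x_train, y_train, x_query, n_members=20, degree=3):
    """Fit each ensemble member on a bootstrap resample of the training data
    and use the spread of member predictions as an uncertainty estimate.
    Polynomial regressors stand in for the record's neural networks."""
    preds = []
    n = len(x_train)
    for _ in range(n_members):
        idx = rng.integers(0, n, n)                     # bootstrap resample
        coef = np.polyfit(x_train[idx], y_train[idx], degree)
        preds.append(np.polyval(coef, x_query))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)        # prediction, uncertainty

x = np.linspace(-1.0, 1.0, 40)
y = np.sin(3 * x) + rng.normal(0, 0.05, x.size)         # noisy training data
x_q = np.array([0.0, 2.0])                              # in-domain vs. extrapolated
mean, spread = bootstrap_ensemble_predict(x, y, x_q)
```

As the record suggests, the ensemble spread grows sharply outside the training domain, flagging predictions that should not be trusted.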

  19. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
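
Of the four methods, the simple output ratio calibration is easy to sketch: scale the model's output by a single ratio of measured to simulated totals. The helper below is hypothetical and only illustrates that idea, not the study's implementation:

```python
def output_ratio_calibrate(simulated, measured):
    """Hypothetical sketch of the simplest calibration method: scale every
    model output by the single ratio of total measured to total simulated
    energy use, then reuse that factor for predictions."""
    ratio = sum(measured) / sum(simulated)
    return [s * ratio for s in simulated], ratio

# toy monthly kWh: the model over-predicts each month by about 11%
calibrated, r = output_ratio_calibrate([900, 1100, 1000], [810, 990, 900])
```

A single multiplicative factor is cheap and fully automated, but it cannot correct seasonal or hourly shape errors, which is why the study compares it against optimization-based methods.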

  1. Application of State Quantization-Based Methods in HEP Particle Transport Simulation

    NASA Astrophysics Data System (ADS)

    Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo

    2017-10-01

Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, although QSS was about 10 times slower in the case with zero volume boundaries.

  2. Identification of the Initial Transient in Discrete-Event Simulation Output Using the Kalman Filter

    DTIC Science & Technology

    1992-12-01

output vector is obtained from each simulation observation. For example, consider a simulation of a medical clinic that has three types of patients and...determined. The variance of the output y_n is derived using Equations (34), (46) and (47): y_n = µ + Hx(t_n) + v(t_n) (53) ...residuals fail the hypothesis test approximately 100α percent of the trials. However, during the transient phase, the data's relationship in time should be

  3. The Type-2 Fuzzy Logic Controller-Based Maximum Power Point Tracking Algorithm and the Quadratic Boost Converter for PV System

    NASA Astrophysics Data System (ADS)

    Altin, Necmi

    2018-05-01

An interval type-2 fuzzy logic controller-based maximum power point tracking algorithm and direct current-direct current (DC-DC) converter topology are proposed for photovoltaic (PV) systems. The proposed maximum power point tracking algorithm is designed based on an interval type-2 fuzzy logic controller that has an ability to handle uncertainties. The change in PV power and the change in PV voltage are used as inputs of the proposed controller, while the change in duty cycle is the output of the controller. Seven interval type-2 fuzzy sets are determined and used as membership functions for the input and output variables. The quadratic boost converter provides high voltage step-up ability without any reduction in the performance and stability of the system. The performance of the proposed system is validated through MATLAB/Simulink simulations. It is seen that the proposed system provides high maximum power point tracking speed and accuracy even for fast-changing atmospheric conditions and high voltage step-up requirements.
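
A crisp perturb-and-observe rule driven by the same two controller inputs (change in power, change in voltage) gives a rough feel for the tracking loop. This sketch replaces the interval type-2 fuzzy rule base with a simple sign rule and, purely for illustration, perturbs a toy PV voltage directly rather than the converter duty cycle:

```python
def pv_power(v):
    """Toy PV power curve with its maximum power point at 18 V (assumed)."""
    return max(0.0, 100.0 - (v - 18.0) ** 2)

def track_mpp(v0=12.0, step=0.1, iters=400):
    """Perturb-and-observe hill climbing: keep the perturbation direction if
    power rose, otherwise reverse it. A real implementation per the record
    would map dP and dV through type-2 fuzzy membership functions to a
    graded change in converter duty cycle."""
    v, p_old, dv = v0, pv_power(v0), step
    for _ in range(iters):
        v += dv
        p_new = pv_power(v)
        if p_new < p_old:          # power fell: reverse perturbation direction
            dv = -dv
        p_old = p_new
    return v

v_final = track_mpp()
```

Fixed-step P&O oscillates around the maximum power point; the fuzzy controller's advantage is precisely that it grades the step size, reducing that oscillation under fast-changing irradiance.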

  4. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

The simulation model which examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then treated as random variables in the finite element model to explore the effects of uncertainty on the quality of the model outputs, i.e. the natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that the geometrical uncertainties have more influence on the natural frequencies than the material parameters, even though the material uncertainties are about two times higher than the geometrical ones. This gives valuable insights for improving the finite element model, given the various parameter ranges required in a modeling process involving uncertainty.

  5. A T-Type Capacitive Sensor Capable of Measuring 5-DOF Error Motions of Precision Spindles

    PubMed Central

    Xiang, Kui; Qiu, Rongbo; Mei, Deqing; Chen, Zichen

    2017-01-01

The precision spindle is a core component of high-precision machine tools, and the accurate measurement of its error motions is important for improving its rotation accuracy as well as the work performance of the machine. This paper presents a T-type capacitive sensor (T-type CS) with an integrated structure. The proposed sensor can measure the 5-degree-of-freedom (5-DOF) error motions of a spindle in situ and simultaneously by integrating electrode groups in the cylindrical bore of the stator and on the outer end face of its flange, respectively. Simulation analysis and experimental results show that the sensing electrode groups with a differential measurement configuration have near-linear output for the different types of rotor displacements. Moreover, the additional capacitance generated by fringe effects has been reduced by about 90% with the sensing electrode groups fabricated using flexible printed circuit board (FPCB) and related processing technologies. The improved signal processing circuit has also doubled the measurement performance and brings the measured differential output capacitance up to 93% of the theoretical values. PMID:28846631

  6. Structure-based capacitance modeling and power loss analysis for the latest high-performance slant field-plate trench MOSFET

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro

    2018-04-01

Field-plate trench MOSFETs (FP-MOSFETs), with the features of ultralow on-resistance and very low gate–drain charge, are currently the mainstream choice for high-performance applications, and they continue to advance as low-voltage silicon power devices. However, owing to their structure, their output capacitance (C oss), which causes the main power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for easily calculating power loss under various conditions. Appropriate equations were modeled for the C oss curves as three divided components. The output charge (Q oss) and stored energy (E oss) calculated using the model corresponded well to technology computer-aided design (TCAD) simulation, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss was sufficiently suppressed, but the Q oss loss increased with switching frequency and became the main contributor. This analysis reveals that Q oss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
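
The charge and energy figures follow from the capacitance curve by integration over the drain bias: Q_oss = ∫ C_oss(v) dv and E_oss = ∫ C_oss(v)·v dv. A sketch with a user-supplied capacitance function (the paper's three-component model is not reproduced here):

```python
import numpy as np

def qoss_eoss(c_of_v, v_ds, n=2001):
    """Integrate an output-capacitance curve to the figures discussed above:
    Q_oss = integral of C_oss(v) dv and E_oss = integral of C_oss(v)*v dv,
    from 0 to the bias v_ds. c_of_v is any callable returning capacitance
    (farads) at voltage v (volts)."""
    v = np.linspace(0.0, v_ds, n)
    c = c_of_v(v)
    dv = np.diff(v)
    q = (0.5 * (c[1:] + c[:-1]) * dv).sum()                     # trapezoid rule
    e = (0.5 * (c[1:] * v[1:] + c[:-1] * v[:-1]) * dv).sum()    # energy integral
    return q, e

# sanity check with a constant 100 pF: Q = C*V and E = C*V**2/2 exactly
q, e = qoss_eoss(lambda v: np.full_like(v, 100e-12), 30.0)
```

Because real C_oss curves fall steeply with voltage, Q_oss and E_oss are not related by E = Q·V/2, which is why the record treats them as separate loss terms.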

  7. MODFLOW–LGR—Documentation of ghost node local grid refinement (LGR2) for multiple areas and the boundary flow and head (BFH2) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2013-01-01

This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (child models) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, though it requires an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
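
The iterative parent/child coupling can be illustrated in miniature with an alternating Schwarz iteration on a 1D Laplace problem, exchanging interface heads until a user-defined convergence criterion is met. This is only an analogy for the coupling-until-convergence idea, not the LGR2 algorithm itself:

```python
def schwarz_1d(a=0.4, b=0.6, tol=1e-10, max_iter=200):
    """Alternating Schwarz iteration for h'' = 0 on [0, 1] with h(0) = 1 and
    h(1) = 0, split into overlapping subdomains [0, b] and [a, 1]. Each local
    solve is exact (linear), and interface values are exchanged until they
    stop changing -- a toy analogue of iteratively coupled grids with a
    user-defined convergence criterion."""
    hb = 0.0                                   # initial guess for h at x = b
    for it in range(max_iter):
        # subdomain 1: h1(0) = 1, h1(b) = hb  ->  h1(x) = 1 + (hb - 1) * x / b
        ha = 1 + (hb - 1) * a / b              # evaluate h1 at the interface x = a
        # subdomain 2: h2(a) = ha, h2(1) = 0  ->  h2(x) = ha * (1 - x) / (1 - a)
        hb_new = ha * (1 - b) / (1 - a)        # evaluate h2 at the interface x = b
        if abs(hb_new - hb) < tol:             # convergence criterion
            return hb_new, it + 1
        hb = hb_new
    return hb, max_iter

hb, iters = schwarz_1d()    # exact solution is h(x) = 1 - x, so h(0.6) = 0.4
```

As with LGR, tightening the tolerance buys interface consistency at the cost of more coupling iterations.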

  8. MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node local grid refinement (LGR) and the boundary flow and head (BFH) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2006-01-01

This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, though it requires an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.

  9. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  10. Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)

    DOE Data Explorer

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.

    2015-01-01

These datasets comprise the model output from phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are the modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.

  11. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    NASA Astrophysics Data System (ADS)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

We propose a statistical modeling method for wind power output for very short-term prediction. The method uses a nonlinear model with a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
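
The nonlinear static part is essentially a monotone quantile mapping from the Gaussian output of the AR part onto the empirical wind-power distribution. A simplified sketch of that distribution-matching step (the record does not give the actual fitting procedure):

```python
import numpy as np

def quantile_map(model_series, target_series):
    """Static output-distribution matching: monotonically remap the values of
    model_series so the result has (empirically) the same distribution as
    target_series, while preserving the time ordering of ranks. Simplified
    sketch of the cascade's nonlinear static part."""
    ranks = np.argsort(np.argsort(model_series))   # rank of each sample, 0..n-1
    return np.sort(target_series)[ranks]           # same ranks, target's values

rng = np.random.default_rng(1)
gaussian_out = rng.normal(0, 1, 1000)       # stands in for the AR part's output
wind_power = rng.weibull(2.0, 1000) * 5.0   # skewed target distribution (toy)
matched = quantile_map(gaussian_out, wind_power)
```

Because the map is monotone, the temporal dynamics of the linear AR part carry through unchanged while the marginal distribution becomes that of the wind-power data.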

  12. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS to facilitate the setup of computational experiments via hybridMANTIS. Methods: Themore » visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon path as they get transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on the fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.« less

  13. The Spacelab IPS Star Simulator

    NASA Astrophysics Data System (ADS)

    Wessling, Francis C., III

The cost of doing business in space is very high. If errors occur while in orbit, the costs grow and desired scientific data may be corrupted or even lost. The Spacelab Instrument Pointing System (IPS) Star Simulator is a unique test bed that allows star trackers to interface with simulated stars in a laboratory before going into orbit. This hardware-in-the-loop testing of equipment on earth increases the probability of success while in space. The IPS Star Simulator provides three fields of view, 2.55 x 2.55 degrees each, for input into star trackers. The fields of view are produced on three separate monitors. Each monitor has 4096 x 4096 addressable points and can display 50 stars (pixels) maximum at a given time. The pixel refresh rate is 1000 Hz. The spectral output is approximately 550 nm. The available relative visual magnitude range is 2 to 8 visual magnitudes. The star size is less than 100 arc seconds. The minimum star movement is less than 5 arc seconds and the relative position accuracy is approximately 40 arc seconds. The purpose of this paper is to describe the IPS Star Simulator design and to provide an operational scenario so others may gain from the approach and possible use of the system.
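
The quoted 2-to-8 visual magnitude range corresponds to a brightness span given by the standard magnitude-flux relation m1 − m2 = −2.5 log10(F1/F2):

```python
def magnitude_ratio(m1, m2):
    """Flux ratio between stars of visual magnitudes m1 and m2, from the
    standard definition m1 - m2 = -2.5 * log10(F1 / F2)."""
    return 10 ** (-0.4 * (m1 - m2))

# the simulator's magnitude-2 to magnitude-8 range spans a ~251x flux ratio
span = magnitude_ratio(2, 8)
```

So the display must reproduce a dynamic range of roughly two and a half orders of magnitude in simulated star brightness.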

  14. Modeling the response of a monopulse radar to impulsive jamming signals using the Block Oriented System Simulator (BOSS)

    NASA Astrophysics Data System (ADS)

    Long, Jeffrey K.

    1989-09-01

This thesis developed computer models of two types of amplitude comparison monopulse processors using the Block Oriented System Simulator (BOSS) software package and determined the response of these models to impulsive input signals. This study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared to the outputs of similar, baseline simulations which had no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, depending upon the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency-imbalanced processor which used AGC circuitry. Any errors introduced will be a function of the degree of mismatch between the channels and would therefore be difficult to exploit reliably.

  15. Simulation Test System of Non-Contact D-dot Voltage Transformer

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Wang, Jingang; Luo, Ruixi; Gao, Can; Songnong, Li; Kongjun, Zhou

    2016-04-01

The development trend of future voltage transformers in the smart grid is toward non-contact measurement, miniaturization, and intellectualization. This paper proposes a simulation test system of a non-contact D-dot transformer for voltage measurement. The simulation test system consists of the D-dot transformer, a signal processing circuit, and a ground PC port. The D-dot transformer realizes indirect voltage measurement by measuring the change rate of the electric displacement vector, a non-contact means (He et al. 2004, Principles and experiments of voltage transformer based on self-integrating D-dot probe. Proc CSEE 2014;15:2445-51). Specific to the characteristics of D-dot transformer signals, signal processing circuits with strong resistance to interference that amplify the sensor output signal without distortion are designed. A WiFi wireless network is used to transmit the voltage measurements to a LabVIEW-based ground collection port, and LabVIEW is adopted for signal reception, data processing and analysis, and other functions. Finally, a test platform is established to simulate the performance of the whole test system of a single-phase voltage transformer. Test results indicate that this voltage transformer has good real-time performance, high accuracy, and fast response speed, and that the simulation test system is stable and reliable and can serve as a new prototype of voltage transformers.

  16. Accuracy of Consonant-Vowel Syllables in Young Cochlear Implant Recipients and Hearing Children in the Single-Word Period

    ERIC Educational Resources Information Center

    Warner-Czyz, Andrea D.; Davis, Barbara L.; MacNeilage, Peter F.

    2010-01-01

    Purpose: Attaining speech accuracy requires that children perceive and attach meanings to vocal output on the basis of production system capacities. Because auditory perception underlies speech accuracy, profiles for children with hearing loss (HL) differ from those of children with normal hearing (NH). Method: To understand the impact of auditory…

  17. Airborne antenna radiation pattern code user's manual

    NASA Technical Reports Server (NTRS)

    Burnside, Walter D.; Kim, Jacob J.; Grandchamp, Brett; Rojas, Roberto G.; Law, Philip

    1985-01-01

The use of a newly developed computer code to analyze the radiation patterns of antennas mounted on an ellipsoid and in the presence of a set of finite flat plates is described. It is shown how the code allows the user to simulate a wide variety of complex electromagnetic radiation problems using the ellipsoid/plates model. The code can calculate radiation patterns around an arbitrary conical cut specified by the user. The organization of the code, definitions of input and output data, and numerous practical examples are also presented. The analysis is based on the Uniform Geometrical Theory of Diffraction (UTD), and most of the computed patterns are compared with experimental results to show the accuracy of this solution.

  18. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2008-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
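
Equation-error estimation in real time is commonly implemented with recursive least squares. The sketch below is a simplified time-domain version (the record's method works in the frequency domain with non-dimensional stability and control derivatives, which is not reproduced here):

```python
import numpy as np

def recursive_least_squares(phi_rows, z, lam=1.0):
    """Equation-error parameter estimation updated one sample at a time:
    regress the target z (e.g. a measured state derivative) onto the
    regressors phi using recursive least squares. lam < 1 would add the
    'data forgetting' mentioned in the record."""
    n = phi_rows.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e3                      # large initial covariance
    for phi, zk in zip(phi_rows, z):
        k = P @ phi / (lam + phi @ P @ phi)  # gain for this sample
        theta = theta + k * (zk - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# toy: identify a = -2, b = 1 in xdot = a*x + b*u from noisy samples
rng = np.random.default_rng(3)
x = rng.normal(0, 1, 500)
u = rng.normal(0, 1, 500)
xdot = -2.0 * x + 1.0 * u + rng.normal(0, 0.01, 500)
theta = recursive_least_squares(np.column_stack([x, u]), xdot)
```

Because each update costs a few vector operations, the estimate is available at every sample, which is what makes the approach usable during flight.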

  19. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2010-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.

  20. Radio/FADS/IMU integrated navigation for Mars entry

    NASA Astrophysics Data System (ADS)

    Jiang, Xiuqiang; Li, Shuang; Huang, Xiangyu

    2018-03-01

Taking a future orbiting and landing collaborative exploration mission as the potential project background, this paper addresses the issue of Mars entry integrated navigation using a radio beacon, a flush air data sensing system (FADS), and an inertial measurement unit (IMU). The range and Doppler information sensed from an orbiting radio beacon, the dynamic pressure and heating data sensed from the flush air data sensing system, and the acceleration and attitude angular rate outputs from an inertial measurement unit are integrated in an unscented Kalman filter to perform state estimation and suppress the system and measurement noise. Computer simulations show that the proposed integrated navigation scheme can enhance the navigation accuracy, which enables precise entry guidance for the given Mars orbiting and landing collaborative exploration mission.

  1. Operation quality assessment model for video conference system

    NASA Astrophysics Data System (ADS)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and its generalization ability is superior to the LM-BP and Bayesian BP neural networks.

  2. Implementation and simulation of a cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Wang, Huaming; Zhu, Jianying

    2008-11-01

    The purpose is to investigate the performance of a cone dielectric elastomer actuator (DEA) by experiment and FEM simulation. Two working equilibrium positions of the cone DEA, which correspond to its initial displacement and its displacement output with voltage off and on, respectively, are determined through analysis of its working principle. Experiments show that the analytical results accord with the experimental ones, and the work output in a work cycle is calculated accordingly. The actuator responds quickly when voltage is applied and returns to its original position rapidly when voltage is released. FEM simulation is also used to predict the movement of the cone DEA. Simulation results agree well with experimental ones and prove the feasibility of the simulation; causes of the small difference between them in displacement output are analyzed.

  3. Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  4. LANL - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  5. LANL - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  6. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. 
The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as significantly as expected.
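    The Bartlett Estimate described above scans candidate bearings with steering vectors and selects the bearing that maximizes a^H R a over the spatial correlation matrix R. A minimal, self-contained sketch for a simulated line array (illustrative parameters, not those of the study) is:

```python
import numpy as np

def bartlett_spectrum(R, positions, angles, k):
    """Bartlett estimate P(theta) = a(theta)^H R a(theta) over candidate bearings."""
    P = []
    for th in angles:
        a = np.exp(1j * k * positions * np.sin(th)) / np.sqrt(len(positions))
        P.append(np.real(a.conj() @ R @ a))
    return np.array(P)

# Simulate an 8-sensor line array receiving a 500 Hz plane wave from 20 degrees.
rng = np.random.default_rng(0)
c, f = 340.0, 500.0                          # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c                        # acoustic wavenumber
positions = 0.2 * np.arange(8)               # sensor coordinates (m)
theta0 = np.deg2rad(20.0)
a0 = np.exp(1j * k * positions * np.sin(theta0))
snapshots = 200
S = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)   # source signal
X = np.outer(a0, S) + 0.1 * (rng.normal(size=(8, snapshots))
                             + 1j * rng.normal(size=(8, snapshots)))
R = X @ X.conj().T / snapshots               # spatial correlation matrix
angles = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
bearing = np.rad2deg(angles[np.argmax(bartlett_spectrum(R, positions, angles, k))])
```

    A wind-induced phase shift, as in the first noise model above, would enter as an extra phase factor on each element of R; the closed-form correction amounts to conjugating that factor out before scanning.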

  7. Spatiotemporal Beamforming: A Transparent and Unified Decoding Approach to Synchronous Visual Brain-Computer Interfacing.

    PubMed

    Wittevrongel, Benjamin; Van Hulle, Marc M

    2017-01-01

    Brain-Computer Interfaces (BCIs) decode brain activity with the aim of establishing a direct communication channel with an external device. Although they have been hailed as a way to (re-)establish communication in persons suffering from severe motor and/or communication disabilities, only recently have BCI applications begun challenging other assistive technologies. Owing to their considerably increased performance and the advent of affordable technological solutions, BCI technology is expected to trigger a paradigm shift not only in assistive technology but also in the way we will interface with technology. However, the flip side of the quest for accuracy and speed is most evident in EEG-based visual BCI, where it has led to a gamut of increasingly complex classifiers tailored to the needs of specific stimulation paradigms and use contexts. In this contribution, we argue that spatiotemporal beamforming can serve several synchronous visual BCI paradigms. We demonstrate this for three popular visual paradigms, without even attempting to optimize their electrode sets. For each selectable target, a spatiotemporal beamformer is applied to assess whether the corresponding signal-of-interest is present in the preprocessed multichannel EEG signals. The target with the highest beamformer output is then selected by the decoder (maximum selection). In addition to this simple selection rule, we also investigated whether interactions between beamformer outputs could be exploited to increase accuracy, by combining the outputs for all targets into a feature vector and applying three common classification algorithms. The results show that the accuracy of spatiotemporal beamforming with maximum selection is on par with that of the classification algorithms, and that interactions between beamformer outputs do not further improve accuracy.

  8. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  9. Optimization of output power and transmission efficiency of magnetically coupled resonance wireless power transfer system

    NASA Astrophysics Data System (ADS)

    Yan, Rongge; Guo, Xiaoting; Cao, Shaoqing; Zhang, Changgeng

    2018-05-01

    Magnetically coupled resonance (MCR) wireless power transfer (WPT) is a promising technology for electric energy transmission, but if the system parameters are designed unreasonably, output power and transmission efficiency will be low. Optimized parameter design of MCR WPT therefore has important research value. In an MCR WPT system with a designated coil structure, the main parameters affecting output power and transmission efficiency are the distance between the coils, the resonance frequency, and the load resistance. Based on the established mathematical model and the differential evolution algorithm, the variation of output power and transmission efficiency with these parameters can be simulated. The simulation results show that the output power and transmission efficiency of both the two-coil and four-coil MCR WPT systems with designated coil structures are improved, confirming the validity of the optimization method.
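    As a hedged illustration of how differential evolution can tune such parameters, the sketch below optimizes the load resistance of a simplified two-coil resonant-link efficiency expression (the formula and constants are textbook-style assumptions, not the authors' model):

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, F=0.7, CR=0.9, iters=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)    # differential mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            if not cross.any():
                cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:                              # greedy selection
                X[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return X[best], fit[best]

# Two-coil link efficiency at resonance: eta = W*RL / ((R2+RL)*(R1*(R2+RL)+W)),
# with W = (omega*M)^2; maximize over the load resistance RL (minimize -eta).
R1, R2, W = 1.0, 1.0, 100.0
eta = lambda p: -W * p[0] / ((R2 + p[0]) * (R1 * (R2 + p[0]) + W))
best, _ = differential_evolution(eta, [(0.1, 100.0)])
```

    For this one-dimensional toy model the optimum is available in closed form, RL* = sqrt(R2^2 + W*R2/R1) (about 10.05 for these constants), which makes it a convenient sanity check for the optimizer.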

  10. Theoretical modeling, simulation and experimental study of hybrid piezoelectric and electromagnetic energy harvester

    NASA Astrophysics Data System (ADS)

    Li, Ping; Gao, Shiqiao; Cong, Binglong

    2018-03-01

    In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation, and experimental test. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current, and power are derived. The performance of the harvester is then simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy storage circuit. The results show that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester enhances the output power and harvesting efficiency. Furthermore, under harmonic excitation the output power of the harvester increases linearly with the acceleration amplitude, while under random excitation it grows with the acceleration spectral density. In addition, the larger the coupling strength, the larger the output power, and there is an optimal load resistance at which the harvester outputs maximal power.

  11. Design calculations for NIF convergent ablator experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, Debra; Leeper, Ramon Joe; Spears, B. K.

    2010-11-01

    Design calculations for NIF convergent ablator experiments will be described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray driven capsule and are an important component of the U.S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics: (1) Dante measurements of hohlraum x-ray flux and spectrum, (2) streaked radiographs of the imploding ablator shell, (3) wedge range filter measurements of D-He3 proton output spectra, and (4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to provide an assessment of the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance the understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path towards optimal ignition capsule tuning.

  12. Stabilization Approaches for Linear and Nonlinear Reduced Order Models

    NASA Astrophysics Data System (ADS)

    Rezaian, Elnaz; Wei, Mingjun

    2017-11-01

    It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high-fidelity simulations while achieving fast computation. In practice, this comes down to the stability and accuracy of ROMs. Given the inviscid nature of the Euler equations, achieving stability becomes more challenging, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms; at the same time, the symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high-fidelity simulation and the ROM outputs. Promising results are obtained in its application to the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.

  13. Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    2016-06-01

    Transposition models are widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic (PV) panels. These transposition models have been developed using various assumptions about the distribution of the diffuse radiation, and most of the parameterizations in these models have been developed using hourly ground data sets. Numerous studies have compared the performance of transposition models, but this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty using high-resolution ground measurements in the plane of array. Our results suggest that the amount of aerosol optical depth can affect the accuracy of isotropic models. The choice of empirical coefficients and the use of decomposition models can both result in uncertainty in the output from the transposition models. It is expected that the results of this study will ultimately lead to improvements of the parameterizations as well as the development of improved physical models.

  14. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is a scientific simulation software package for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition combines the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Yanmei; Li, Xinli; Bai, Yan

    The measurement of multiphase flow parameters is of great importance in a wide range of industries. In multiphase measurement, the signals from the sensors are extremely weak and often buried in strong background noise. It is thus desirable to develop effective signal processing techniques that can detect the weak signal in the sensor outputs. In this paper, two methods, i.e., the lock-in amplifier (LIA) and an improved Duffing chaotic oscillator, are compared for detecting and processing the weak signal. For a sinusoidal signal buried in noise, correlation detection with a sinusoidal reference signal is simulated using the LIA. The improved Duffing chaotic oscillator method, which is based on the Wigner transformation, can restore the signal waveform and detect the frequency. The two methods are combined to detect and extract the weak signal. Simulation results show the effectiveness and accuracy of the proposed improved method. The comparative analysis shows that the improved Duffing chaotic oscillator method strongly suppresses noise since it is sensitive to initial conditions.
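    Correlation detection with a sinusoidal reference, as performed by the lock-in amplifier above, can be sketched in a few lines (illustrative parameters; the sensor model itself is not reproduced here):

```python
import numpy as np

def lock_in_amplitude(x, fs, f_ref):
    """Recover the amplitude of a sinusoid at f_ref buried in broadband noise."""
    t = np.arange(len(x)) / fs
    i_comp = 2.0 * np.mean(x * np.cos(2 * np.pi * f_ref * t))  # in-phase channel
    q_comp = 2.0 * np.mean(x * np.sin(2 * np.pi * f_ref * t))  # quadrature channel
    return np.hypot(i_comp, q_comp)                            # phase-independent amplitude

rng = np.random.default_rng(0)
fs, n = 1000.0, 200_000
t = np.arange(n) / fs
weak = 0.05 * np.sin(2 * np.pi * 50.0 * t + 0.3)   # weak 50 Hz component
noisy = weak + rng.normal(scale=1.0, size=n)       # SNR roughly -29 dB
amp = lock_in_amplitude(noisy, fs, 50.0)
```

    The averaging acts as a narrow low-pass filter around the reference frequency, which is why the amplitude survives even when the noise power dwarfs the signal power.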

  16. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of sandstone reservoirs in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that the SBM-Undesirable model avoids the defects of the radial and angular assumptions in traditional DEA models and improves the accuracy of the efficiency evaluation. By analyzing the projection of the oil blocks, we find that each block suffers from input redundancy, desirable-output deficiency, and the negative external effects of undesirable output, and that there are large differences in production efficiency among the blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the desirable output.

  17. Accuracy of intensity and inclinometer output of three activity monitors for identification of sedentary behavior and light-intensity activity.

    PubMed

    Carr, Lucas J; Mahar, Matthew T

    2012-01-01

    Purpose. To examine the accuracy of the intensity and inclinometer output of three physical activity monitors during various sedentary and light-intensity activities. Methods. Thirty-six participants wore three physical activity monitors (ActiGraph GT1M, ActiGraph GT3X+, and StepWatch) while completing sedentary (lying, sitting watching television, sitting using a computer, and standing still) and light-intensity (walking 1.0 mph, pedaling 7.0 mph, pedaling 15.0 mph) activities under controlled settings. Accuracy for correctly categorizing intensity was assessed for each monitor and threshold. Accuracy of the GT3X+ inclinometer function (GT3X+Incl) for correctly identifying anatomical position was also assessed. Percentage agreement between direct observation and monitor-recorded time spent in sedentary behavior and light-intensity activity was examined. Results. All monitors, using all thresholds, accurately identified over 80% of sedentary behaviors and 60% of light-intensity walking time based on intensity output. The StepWatch was the most accurate in detecting pedaling time but unable to detect pedal workload. The GT3X+Incl accurately identified anatomical position during 70% of all activities but demonstrated limitations in discriminating between activities of differing intensity. Conclusions. Our findings suggest that all three monitors accurately measure most sedentary and light-intensity activities, although the choice of monitor should be based on study-specific needs.
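    The percentage-agreement metric used above is simply the share of observation epochs in which the monitor's classification matches direct observation; a small hypothetical sketch (the labels are made up, not the study's coding scheme):

```python
def percent_agreement(observed, classified):
    """Percent of epochs where the monitor's class matches direct observation."""
    if len(observed) != len(classified):
        raise ValueError("sequences must be the same length")
    hits = sum(o == c for o, c in zip(observed, classified))
    return 100.0 * hits / len(observed)

obs = ["sedentary", "sedentary", "light", "light", "sedentary"]
mon = ["sedentary", "light", "light", "light", "sedentary"]
agreement = percent_agreement(obs, mon)   # 4 of 5 epochs agree
```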

  18. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    PubMed

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one or more self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed on both simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as hypo/hyperglycemic alert generators or the artificial pancreas.
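    The blood-to-interstitium distortion is often modeled, as a simplification, by first-order kinetics with a time constant tau; regularized (Tikhonov) deconvolution then inverts the discretized convolution. A sketch under these assumptions (illustrative values, not the authors' exact model or its SMBG-driven update):

```python
import numpy as np

def deconvolve_tikhonov(y, tau, dt, lam):
    """Invert y = A u for first-order blood-to-interstitium kinetics."""
    n = len(y)
    kernel = (dt / tau) * np.exp(-np.arange(n) * dt / tau)    # impulse response g(t)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = kernel[: i + 1][::-1]                 # causal convolution matrix
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i : i + 3] = [1.0, -2.0, 1.0]                    # roughness (2nd-diff) penalty
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

# Demo: samples every 5 min, assumed kinetic time constant of 15 min.
dt, tau, n = 5.0, 15.0, 60
t = np.arange(n) * dt
u_true = 100.0 + 30.0 * np.sin(2 * np.pi * t / 180.0)         # "blood glucose", mg/dL
kernel = (dt / tau) * np.exp(-t / tau)
y = np.array([kernel[: i + 1][::-1] @ u_true[: i + 1] for i in range(n)])
u_est = deconvolve_tikhonov(y, tau, dt, lam=1e-6)
```

    With noisy data, lam trades noise amplification against smoothing bias; in an online setting the reconstruction would be refreshed over a sliding window as new CGM samples arrive.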

  19. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable Inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems, the inverse is non-causal - hence the inverse has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview-time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.

  20. Light extraction in planar light-emitting diode with nonuniform current injection: model and simulation.

    PubMed

    Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei

    2014-07-20

    We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are a bi-plane computation domain with related numerical grid (NG) cell areas in the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by an acceptance circle, and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodic or symmetric electrode configurations.

  1. Modelling and simulation of fuel cell dynamics for electrical energy usage of Hercules airplanes.

    PubMed

    Radmanesh, Hamid; Heidari Yazdi, Seyed Saeid; Gharehpetian, G B; Fathi, S H

    2014-01-01

    The dynamics of proton exchange membrane fuel cells (PEMFC) with a hydrogen storage system for generating part of a Hercules airplane's electrical energy is presented. The feasibility of using a fuel cell (FC) for this airplane is evaluated by means of simulations. Temperature change and double-layer capacity effects are considered in all simulations. Using a three-level 3-phase inverter, the FC's output voltage is connected to the essential bus of the airplane; alternatively, it is possible to connect the FC's output voltage to the airplane's DC bus. A PID controller is presented to control the flow of hydrogen and oxygen to the FC and improve the transient and steady-state responses of the output voltage to load disturbances. The FC's output voltage is regulated via an ultracapacitor. Simulations are carried out in MATLAB/SIMULINK, and results show that the load tracking and output voltage regulation are acceptable. The proposed system utilizes an electrolyser to generate hydrogen and a tank for storage, so there is no need for batteries. Moreover, the generated oxygen could be used in other applications in the airplane.
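    The PID regulation loop described above can be illustrated on a generic first-order plant (a deliberately simple stand-in; the real PEMFC and inverter dynamics are far richer):

```python
import numpy as np

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000, tau=0.5):
    """Discrete PID driving a first-order plant  tau*dy/dt = -y + u."""
    y, integ, prev_err = 0.0, 0.0, setpoint
    ys = []
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                      # integral term accumulates error
        deriv = (err - prev_err) / dt          # derivative term (zero on first step)
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u) / tau               # explicit Euler step of the plant
        ys.append(y)
    return np.array(ys)
```

    With, say, kp = 2 and ki = 5 the integral action drives the steady-state error to zero, which is the role the PID plays for the FC output voltage under load disturbances.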

  2. Modelling and Simulation of Fuel Cell Dynamics for Electrical Energy Usage of Hercules Airplanes

    PubMed Central

    Radmanesh, Hamid; Heidari Yazdi, Seyed Saeid; Gharehpetian, G. B.; Fathi, S. H.

    2014-01-01

    The dynamics of proton exchange membrane fuel cells (PEMFC) with a hydrogen storage system for generating part of a Hercules airplane's electrical energy is presented. The feasibility of using a fuel cell (FC) for this airplane is evaluated by means of simulations. Temperature change and double-layer capacity effects are considered in all simulations. Using a three-level 3-phase inverter, the FC's output voltage is connected to the essential bus of the airplane; alternatively, it is possible to connect the FC's output voltage to the airplane's DC bus. A PID controller is presented to control the flow of hydrogen and oxygen to the FC and improve the transient and steady-state responses of the output voltage to load disturbances. The FC's output voltage is regulated via an ultracapacitor. Simulations are carried out in MATLAB/SIMULINK, and results show that the load tracking and output voltage regulation are acceptable. The proposed system utilizes an electrolyser to generate hydrogen and a tank for storage, so there is no need for batteries. Moreover, the generated oxygen could be used in other applications in the airplane. PMID:24782664

  3. Development and calibration of an accurate 6-degree-of-freedom measurement system with total station

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui

    2016-12-01

    To meet the demands of high accuracy, long range, and portability in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on a total station, utilizing its advantages of long range and relatively high accuracy. The cooperative target sensor, mainly composed of a pinhole prism, an industrial lens, a camera, and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by the total station, imaging system, and inclinometer to the six output pose variables. The model must be calibrated at two levels: the intrinsic parameters of the imaging system, and the rotation matrix between the coordinate systems of the camera and the inclinometer. Corresponding approaches are presented for both. For the first level, we introduce a precise two-axis rotary table as a calibration reference; for the second, we propose a calibration method that varies the pose of a rigid body carrying the target sensor and a reference prism. Finally, through simulations and various experiments, the feasibility of the measurement model and calibration methods is validated, and the measurement accuracy of the system is evaluated.

  4. Modeling and simulation research on electromagnetic and energy-recycled damper based on Adams

    NASA Astrophysics Data System (ADS)

    Zhou, C. F.; Zhang, K.; Zhang, Pengfei

    2018-05-01

    In order to study the voltage and power output characteristics of the electromagnetic and energy-recycled damper, which consists of a gear, a rack, and a generator, an Adams model of the damper and a Simulink model of the generator are established, and a co-simulation is accomplished with these two models. Output indexes such as the gear speed and generator power are obtained from the simulation, and the results demonstrate that the voltage peak of the damper is 25 V and its maximum output power is 8 W. This research provides a basis for the prototype development of an electromagnetic and energy-recycled damper with gear and rack.

  5. NREL - SOWFA - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  6. PNNL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  7. ANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  8. LLNL - WRF-LES - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  9. ANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  10. LANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  11. LANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  12. Pulse flux measuring device

    DOEpatents

    Riggan, William C.

    1985-01-01

    A device for measuring particle flux comprises first and second photodiode detectors for receiving flux from a source and first and second outputs for producing first and second signals representing the flux incident to the detectors. The device is capable of reducing the first output signal by a portion of the second output signal, thereby enhancing the accuracy of the device. Devices in accordance with the invention may measure distinct components of flux from a single source or fluxes from several sources.
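
    The differential scheme described in this record can be sketched in a few lines; the function name and the correction coefficient below are illustrative, not taken from the patent.

```python
def corrected_flux(primary: float, secondary: float, k: float = 0.1) -> float:
    """Reduce the primary detector signal by a fraction k of the secondary
    signal, as in the differential scheme described above. The name and the
    default coefficient are illustrative, not from the patent."""
    return primary - k * secondary

# Removing 10% of the secondary signal from a 5.0-unit primary reading:
corrected = corrected_flux(5.0, 2.0, k=0.1)
```

    Subtracting a scaled copy of the second channel cancels a background component common to both detectors, which is how the device enhances accuracy.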

  13. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the parameters of conventional X-ray machines tested in South Sulawesi province from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of the conventional X-ray machines. The test parameters were analyzed using quantitative methods with a participatory observational approach. Data were collected by testing the output of the conventional X-ray machines with a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The analysis shows that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.
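
    As a minimal illustration of how such deviation figures are computed (assuming the usual percent-deviation definition, since the record does not spell out its formula; the numbers below are invented, not from the study):

```python
def percent_deviation(measured: float, nominal: float) -> float:
    """Percent deviation of a measured machine parameter from its
    nominal setting (assumed standard definition)."""
    return abs(measured - nominal) / nominal * 100.0

# A hypothetical 77 kV reading on an 80 kV setting deviates by 3.75%:
dev = percent_deviation(77.0, 80.0)
```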

  14. An accurate system for onsite calibration of electronic transformers with digital output.

    PubMed

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are replacing conventional calibration systems because of the diversity of operating principles and the digital output characteristics of electronic transformers. However, limited precision and unpredictable stability have hindered their onsite application and further development. Accordingly, fully considering the factors that influence the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach precision class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.
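
    As a rough illustration of why windowing helps under frequency fluctuation, the sketch below estimates the amplitude of a tone slightly off its nominal frequency with a plain Hann-windowed DFT. This is a simplified stand-in, not the paper's differential two-order Hanning convolution window, and all signal parameters are invented.

```python
import numpy as np

fs = 10000.0                                 # sampling rate (Hz), illustrative
n = np.arange(2048)
f0 = 50.3                                    # tone slightly off the nominal 50 Hz
x = 2.0 * np.sin(2 * np.pi * f0 * n / fs)    # true amplitude 2.0

w = np.hanning(x.size)                       # Hann window suppresses spectral leakage
X = np.fft.rfft(x * w)
k = int(np.argmax(np.abs(X)))
amp = 2.0 * np.abs(X[k]) / (x.size * np.mean(w))  # undo the window's coherent gain
```

    Even with the tone falling between DFT bins, the windowed estimate stays within a few percent of the true amplitude, whereas a rectangular window's higher sidelobes would make the estimate far more sensitive to nearby inter-harmonics.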

  15. An accurate system for onsite calibration of electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi Zhang; Li Hongbin; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    Calibration systems with digital output are replacing conventional calibration systems because of the diversity of operating principles and the digital output characteristics of electronic transformers. However, limited precision and unpredictable stability have hindered their onsite application and further development. Accordingly, fully considering the factors that influence the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach precision class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  16. An accurate system for onsite calibration of electronic transformers with digital output

    NASA Astrophysics Data System (ADS)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are replacing conventional calibration systems because of the diversity of operating principles and the digital output characteristics of electronic transformers. However, limited precision and unpredictable stability have hindered their onsite application and further development. Accordingly, fully considering the factors that influence the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach precision class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  17. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN.

    PubMed

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute.
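
    Since the record notes that this kind of filtering reduces to forming linear combinations of the DG output with short rational weights, a generic post-filtering step can be sketched as below. The weight stencil shown is an illustrative B-spline-like choice, not the actual NP0 or PSIAC coefficients.

```python
import numpy as np

# Illustrative short rational weight stencil; real PSIAC/NP0 weights differ.
w = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # weights sum to 1

u = np.sin(np.linspace(0.0, np.pi, 200))          # stand-in for DG output samples
u_filtered = np.convolve(u, w, mode="same")       # linear combination of neighbors
```

    Because the weights sum to one, constants are reproduced exactly; actual SIAC/PSIAC stencils are constructed so that higher-degree polynomials are reproduced as well, which is what yields the superconvergence discussed above.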

  18. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN*

    PubMed Central

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute. PMID:29081643

  19. New method of contour-based mask-shape compiler

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka

    2007-10-01

    We have developed a new method of accurately profiling a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability, and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, it is possible to create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As the shrinkage of semiconductor design rules advances further, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask-making cost have become a big concern to device manufacturers. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose a DFM solution in which two-dimensional data are extracted for error-free practical simulation by precise reproduction of the real mask shape, in addition to the mask data simulation. The flow, centered around the design data, is fully automated and provides an environment for optimization and verification in which fully automated model calibration with much less error is available. It also allows complete consolidation of input and output functions with an EDA system by constructing a design-data-oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.

  20. CdSe/ZnS quantum dot fluorescence spectra shape-based thermometry via neural network reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, Troy; Laboratory of Soft Matter and Biophysics, Department of Physics and Astronomy, KU Leuven, Celestijnenlaan 200D, B-3001 Heverlee; Liu, Liwang

    As a system of interest gets smaller, thermal characterization by means of contact temperature measurements becomes cumbersome due to the influence of the sensor mass and heat leaks through the sensor contacts. Non-contact temperature measurement offers a suitable alternative, provided a reliable relationship between the temperature and the detected signal is available. In this work, exploiting the temperature dependence of their fluorescence spectrum, the use of quantum dots as thermomarkers on the surface of a fiber of interest is demonstrated. The performance is assessed for a series of neural networks that use different spectral shape characteristics as inputs (peak-based: peak intensity, peak wavelength; shape-based: integrated intensity, their ratio, full-width half maximum, peak-normalized intensity at certain wavelengths, and summation of intensity over several spectral bands) and that yield at their output the fiber temperature in the optically probed area on a spider silk fiber. Starting from neural networks trained on fluorescence spectra acquired under steady-state temperature conditions, numerical simulations are performed to assess the quality of the reconstruction of dynamic temperature changes that are photothermally induced by illuminating the fiber with periodically intensity-modulated light. Comparison of the five neural networks investigated with multiple types of curve fits showed that using neural networks trained on a combination of the spectral characteristics improves the accuracy over the use of a single independent input, with the greatest accuracy observed for inputs that included both intensity-based measurements (peak intensity) and shape-based measurements (normalized intensity at multiple wavelengths), with an ultimate accuracy of 0.29 K via numerical simulation based on experimental observations. The implications are that quantum dots can be used as a more stable and accurate fluorescence thermometer for solid materials and that the use of neural networks for temperature reconstruction improves the accuracy of the measurement.
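
    A drastically simplified version of the feature-combination idea can be sketched with an ordinary least-squares fit in place of a neural network. The synthetic "spectral features" and their temperature dependence below are invented purely for illustration; only the qualitative conclusion (combining features cannot fit worse than a single feature on the same data) carries over.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(290.0, 330.0, 200)                    # synthetic temperatures (K)
peak = 1000.0 - 2.0 * T + rng.normal(0, 1.0, T.size)  # invented peak-intensity feature
shape = 0.05 * T + rng.normal(0, 0.05, T.size)        # invented shape-based feature

# Fit temperature from both features combined...
X = np.column_stack([peak, shape, np.ones_like(T)])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - T) ** 2))

# ...and from the peak-intensity feature alone, for comparison.
Xp = np.column_stack([peak, np.ones_like(T)])
cp, *_ = np.linalg.lstsq(Xp, T, rcond=None)
rmse_peak = np.sqrt(np.mean((Xp @ cp - T) ** 2))
```

    On the training data the combined fit is never worse than the single-feature fit, mirroring (in a much weaker, linear setting) the paper's finding that mixing intensity-based and shape-based inputs improves reconstruction accuracy.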

  1. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  2. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans that are consistent across spatial scales by extensively describing the spatial consequences of decisions and hydrological events. FL modeling is expected to become common in the near future, as global-scale remotely sensed data are emerging and computing resources have advanced rapidly. Several spatially distributed models are available for hydrological analyses. Some of them describe two-dimensional overland processes with numerical methods such as finite difference/element methods (FDM/FEM), which require either excessive computing resources to manipulate large matrices (implicit schemes) or small simulation time intervals to maintain the stability of the solution (explicit schemes). Others make unrealistic assumptions, such as constant overland flow velocity, to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds of different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events.
    Findings and challenges are discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved, respectively, by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological observations such as soil moisture and radar rainfall depth, and by sharing the model and its code in the public domain.

  3. Energy spectra unfolding of fast neutron sources using the group method of data handling and decision tree algorithms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen

    2017-04-01

    Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. This information is useful in many areas, such as nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator was simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed codes based on the GMDH and DT algorithms use data for the training, testing, and validation steps. To prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the corresponding neutron pulse height distributions simulated by MCNPX-ESUT are used as the output and input data, respectively. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for these fast neutron sources are in excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than that of the spectra obtained from the DT. The results of the present study also compare well in accuracy with those of a previously published approach based on the logsig and tansig transfer functions.
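
    The ill-conditioning that trained unfolding models sidestep can be demonstrated with a toy response matrix. The sketch below uses Tikhonov-regularized least squares as a generic stand-in (not the paper's GMDH or DT method), and the response matrix, spectrum, and noise level are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
nbins = 52
E = np.arange(nbins, dtype=float)
# Smooth (hence ill-conditioned) synthetic detector response matrix:
R = np.exp(-((E[:, None] - E[None, :]) ** 2) / 50.0)

phi_true = np.exp(-0.5 * ((E - 20.0) / 4.0) ** 2)   # synthetic "true" spectrum
d = R @ phi_true + rng.normal(0.0, 1e-3, nbins)     # noisy pulse-height data

phi_naive = np.linalg.solve(R, d)                   # direct inverse: noise blows up
lam = 1e-2                                          # Tikhonov regularization strength
phi_reg = np.linalg.solve(R.T @ R + lam * np.eye(nbins), R.T @ d)

err_naive = np.linalg.norm(phi_naive - phi_true) / np.linalg.norm(phi_true)
err_reg = np.linalg.norm(phi_reg - phi_true) / np.linalg.norm(phi_true)
```

    The direct solve amplifies the small measurement noise enormously, while the regularized solution stays close to the true spectrum; learned unfolding models avoid forming the explicit inverse of the response matrix altogether, which is the point made above.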

  4. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    PubMed Central

    Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS's capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field.
    Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBPs' range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry was, on average, within ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505

  5. Pressure-Transducer Simulator

    NASA Technical Reports Server (NTRS)

    Simon, Richard A.

    1987-01-01

    Simulation circuit operates under remote, automatic, or manual control to produce electrical outputs similar to pressure transducer. Specific circuit designed for simulations of Space Shuttle main engine. General circuit concept adaptable to other simulation and control systems involving several operating modes. Switches and amplifiers respond to external control signals and panel control settings to vary differential excitation of resistive bridge. Output voltage or passive terminal resistance made to equal pressure transducer in any of four operating modes.

  6. Monolithic Microwave Integrated Circuits (MMIC) Broadband Power Amplifiers (Part 2)

    DTIC Science & Technology

    2013-07-01

    (Excerpt from the report's list of figures: a 2-GHz load-pull simulation of output power for a 6 x 65 µm PHEMT; a 2-GHz load-pull simulation of PAE for the same device; an MMIC 1-5 GHz output power and PAE performance simulation at 1, 2, 3, and 4 GHz; a load-pull simulation of PAE for a 6 x 50 µm PHEMT; and an MMIC 10-19 GHz broadband power amplifier linear performance figure.)

  7. Stochastic Partial Differential Equation Solver for Hydroacoustic Modeling: Improvements to Paracousti Sound Propagation Solver

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2017-12-01

    Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. 
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
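
    The Monte Carlo aggregation idea described above, where statistics of the output signal are built up from many deterministic runs under uncertain source characteristics, can be illustrated with a trivially simple propagation model (spherical spreading only). All numbers are invented and stand in for full acoustic simulations.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 20000                               # number of Monte Carlo realizations
r = 500.0                                    # receiver range (m), illustrative

# Uncertain source level, drawn per realization (dB re 1 uPa at 1 m, invented):
src = rng.normal(180.0, 3.0, n_runs)
received = src - 20.0 * np.log10(r)          # toy spherical-spreading loss

# Aggregate the ensemble into statistical properties of the output signal:
mean_rx = received.mean()
std_rx = received.std()
```

    With a nonlinear or spatially complex propagation model, each realization would be a full simulation, which is exactly why an SPDE formulation that delivers these statistics directly is so much cheaper than the ensemble.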

  8. Modeling of a multileaf collimator

    NASA Astrophysics Data System (ADS)

    Kim, Siyong

    A comprehensive physics model of a multileaf collimator (MLC) field for treatment planning was developed. Specifically, an MLC user interface module that includes a geometric optimization tool and a general method of in-air output factor calculation were developed. An automatic tool for optimization of MLC conformation is needed to realize the potential benefits of MLC. It is also necessary that a radiation therapy treatment planning (RTTP) system is capable of modeling MLC completely. An MLC geometric optimization and user interface module was developed. The planning time has been reduced significantly by incorporating the MLC module into the main RTTP system, Radiation Oncology Computer System (ROCS). The dosimetric parameter that has the most profound effect on the accuracy of the dose delivered with an MLC is the change in the in-air output factor that occurs with field shaping. It has been reported that the conventional method of calculating an in-air output factor cannot accurately be used for MLC-shaped fields. Therefore, it is necessary to develop algorithms that allow accurate calculation of the in-air output factor. A generalized solution for an in-air output factor calculation was developed. Three major contributors of scatter to the in-air output (flattening filter, wedge, and tertiary collimator) were considered separately. By virtue of a field mapping method, in which a source plane field determined by the detector's eye view is mapped into a detector plane field, no additional dosimetric data acquisition other than the standard data set for a range of square fields is required for the calculation of head scatter. Comparisons of in-air output factors between calculated and measured values show good agreement for both open and wedge fields. For rectangular fields, a simple equivalent square formula was derived based on the configuration of a linear accelerator treatment head. This method predicts in-air output to within 1% accuracy.
    A two-effective-source algorithm was developed to account for the effect of source-to-detector distance on in-air output for wedge fields. Two effective sources, one for head scatter and the other for wedge scatter, were dealt with independently. Calculated in-air output factors differed from measurements by less than 1%. This approach offers the best comprehensive accuracy in radiation delivery with field shapes defined using MLC. This generalized model works equally well with fields shaped by any type of tertiary collimator and has the necessary framework to extend its application to intensity-modulated radiation therapy.
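
    For reference, a commonly used simple equivalent-square rule for rectangular fields is the 4·Area/Perimeter formula sketched below. The work above derives its own formula from the treatment-head configuration, so this is only the generic textbook version, shown for orientation.

```python
def equivalent_square(a: float, b: float) -> float:
    """Side length of the equivalent square for an a x b rectangular field,
    using the common 4*Area/Perimeter rule (not the head-configuration-specific
    formula derived in the work above)."""
    return 4.0 * (a * b) / (2.0 * (a + b))   # simplifies to 2ab/(a+b)

# A 10 cm x 20 cm field maps to roughly a 13.3 cm equivalent square:
s = equivalent_square(10.0, 20.0)
```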

  9. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operations and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and by relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics that are used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions are discussed. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case, where the targeted output power is 140 W, while the input pumping power is 400 W. The 95% output coupler reflectivity showed good agreement with the slope efficiency, which is approximately 35%; this assures the robustness of the proposed model to accurately predict the design parameters of practical, high-power DPSSLs.
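
    Slope efficiency is simply the slope of output power versus pump power above threshold. A minimal sketch consistent with the reported 140 W output at 400 W pump and roughly 35% slope follows; the intermediate data points are invented for illustration.

```python
import numpy as np

# Illustrative pump/output data (W); a real curve would come from measurement.
p_pump = np.array([100.0, 200.0, 300.0, 400.0])
p_out = np.array([35.0, 70.0, 105.0, 140.0])   # consistent with 140 W at 400 W pump

# Linear fit: the slope is the slope efficiency (~0.35, i.e. ~35%).
slope, intercept = np.polyfit(p_pump, p_out, 1)
```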

  10. Inertia coupling analysis of a self-decoupled wheel force transducer under multi-axis acceleration fields.

    PubMed

    Feng, Lihang; Lin, Guoyu; Zhang, Weigong; Dai, Dong

    2015-01-01

    The wheel force transducer (WFT), which measures the three-axis forces and three-axis torques applied to the wheel, is an important instrument in the vehicle testing field and has attracted great interest from researchers. The transducer, however, is typically mounted on the wheel of a moving vehicle, especially a high-speed car; when the vehicle abruptly accelerates or brakes, the mass/inertia of the transducer and wheel has an extra effect on the sensor response, so that the inertia/mass loads are also detected and coupled into the signal outputs. This effect, known as the inertia coupling problem, decreases the sensor accuracy. In this paper, the inertia coupling of a universal WFT under multi-axis accelerations is investigated. Following the self-decoupling approach of the WFT, the inertia load distribution is solved based on the principle of equivalent mass and rotary inertia, so that the inertia impact can be identified through theoretical derivation. Verification is achieved by FEM simulation and experimental tests. Results show that the strains in simulation agree well with the theoretical derivation. The relationship between the applied acceleration and the inertia load is approximately linear for both wheel force and moment. All relative errors are less than 5%, which is acceptable, and the inertia loads have a maximum impact on the signal output of about 1.5% of the measurement range.
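
    Because the inertia load scales approximately linearly with the applied acceleration, the correction amounts to subtracting an acceleration-proportional term from the raw output. A minimal sketch, with an assumed equivalent mass that is purely illustrative (not from the paper):

```python
def compensate(measured_force: float, accel: float, m_eff: float) -> float:
    """Subtract the inertia load m_eff * accel from the raw WFT output.
    m_eff is an equivalent mass assumed known from calibration; all names
    and values here are illustrative."""
    return measured_force - m_eff * accel

# A 25 kg equivalent wheel/transducer mass braking at 6 m/s^2 adds 150 N
# of inertia load, which is removed from the raw 1150 N reading:
true_force = compensate(1150.0, 6.0, 25.0)
```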

  11. Inertia Coupling Analysis of a Self-Decoupled Wheel Force Transducer under Multi-Axis Acceleration Fields

    PubMed Central

    Feng, Lihang; Lin, Guoyu; Zhang, Weigong; Dai, Dong

    2015-01-01

    The wheel force transducer (WFT), which measures the three-axis forces and three-axis torques applied to the wheel, is an important instrument in vehicle testing and has attracted great interest from researchers. The transducer, however, is typically mounted on the wheel of a moving vehicle; when the vehicle, especially a high-speed car, accelerates or brakes abruptly, the mass/inertia of the transducer/wheel itself has an extra effect on the sensor response, so inertia/mass loads are also detected and coupled into the signal outputs. This effect, known as the inertia coupling problem, degrades the sensor's accuracy. In this paper, the inertia coupling of a universal WFT under multi-axis accelerations is investigated. Following the self-decoupling approach of the WFT, the inertia load distribution is solved based on the principle of equivalent mass and rotary inertia, and the inertial impact is then identified by theoretical derivation. Verification is achieved by FEM simulation and experimental tests. Results show that the strains in simulation agree well with the theoretical derivation, and that the relationship between applied acceleration and inertia load is approximately linear for both wheel force and moment. All relative errors are less than 5%, which is acceptable, and the inertia loads have a maximum impact on the signal output of about 1.5% of the measurement range. PMID:25723492

  12. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the Program to Optimize Simulated Trajectories (POST) is presented. The input required and the output available are described for each of the trajectory and targeting/optimization options. A sample input listing and the resulting output are given.

  13. Skimming Digits: Neuromorphic Classification of Spike-Encoded Images

    PubMed Central

    Cohen, Gregory K.; Orchard, Garrick; Leng, Sio-Hoi; Tapson, Jonathan; Benosman, Ryad B.; van Schaik, André

    2016-01-01

    The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinae provide an alternative means of imaging the visual environment and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classification using N-MNIST, a neuromorphic dataset created with a silicon retina, and the Synaptic Kernel Inverse Method (SKIM), a learning method based on principles of dendritic computation. As this work represents the first large-scale and multi-class classification task performed using the SKIM network, it explores the different training patterns and output determination methods necessary to extend the original SKIM method to support multi-class problems. Applying SKIM networks to this real-world dataset, with the largest hidden-layer sizes and the largest number of simultaneously trained output neurons used to date, the classification system achieved a best-case accuracy of 92.87% for a network containing 10,000 hidden-layer neurons. These results represent the highest accuracies achieved against the dataset to date and serve to validate the application of the SKIM method to event-based visual classification tasks. Additionally, the study found that using a square pulse as the supervisory training signal produced the highest accuracy for most output determination methods, but the results also demonstrate that an exponential pattern is better suited to hardware implementations, as it makes use of the simplest output determination method, based on the maximum value. PMID:27199646

  14. Design of DSP-based high-power digital solar array simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Zhilong; Tong, Weichao; Feng, Jian; Ji, Yibo

    2013-12-01

    With the increase in global energy consumption, research on photovoltaic (PV) systems is receiving more and more attention, and research on digital high-power solar array simulators provides technical support for work on high-power grid-connected PV systems. This paper introduces a design scheme for a high-power digital solar array simulator based on the TMS320F28335 DSP. A DC-DC full-bridge topology is used in the system's main circuit; the IGBT switching frequency is 25 kHz, the maximum output voltage is 900 V, and the maximum output current is 20 A. The simulator can store preset solar-panel I-V curves, each composed of 128 discrete points. While the system is running, the main-circuit voltage and current values are fed back to the DSP in real time by voltage and current sensors, and the DSP controls the simulator through an incremental PI algorithm in a closed loop. Experimental data show that the simulator's output voltage and current follow the preset solar-panel I-V curve. Connected to a high-power inverter, the system becomes a grid-connected PV system; the inverter can find the simulator's maximum power point, and the output power can be stabilized at the maximum power point (MPP).
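    The incremental (velocity-form) PI law mentioned in the abstract computes a change in the control command rather than an absolute value: Δu_k = Kp·(e_k − e_{k−1}) + Ki·e_k. A minimal sketch of such a loop tracking one voltage point of a stored I-V curve; the gains and the first-order toy plant are assumptions for illustration, not the paper's design:

```python
def run_incremental_pi(setpoint, kp=0.4, ki=0.2, steps=200):
    u, y, e_prev = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        u += kp * (e - e_prev) + ki * e   # incremental PI update (delta form)
        e_prev = e
        y += 0.5 * (u - y)                # toy first-order output stage
    return y

v_out = run_incremental_pi(450.0)  # track a hypothetical 450 V point on the I-V curve
```

Because the increment is accumulated into the previous command, the integrator action is implicit and the output settles on the setpoint with zero steady-state error.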

  15. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal-neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the 252Cf thermal-neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin combined with a thick layer of heavy water provides the best moderation for the 252Cf neutron spectrum. Our new design shows significantly improved performance: the thermal-neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  16. On-chip magnetically actuated robot with ultrasonic vibration for single cell manipulations.

    PubMed

    Hagiwara, Masaya; Kawahara, Tomohiro; Yamanishi, Yoko; Masuda, Taisuke; Feng, Lin; Arai, Fumihito

    2011-06-21

    This paper presents an innovative driving method for an on-chip robot actuated by permanent magnets in a microfluidic chip. A piezoelectric ceramic is applied to induce ultrasonic vibration in the microfluidic chip, and the high-frequency vibration significantly reduces the effective friction on the magnetically driven microtool (MMT). As a result, we achieved 1.1 μm positioning accuracy for the microrobot, 100 times better than without vibration. The response speed is also improved, and the microrobot can be actuated at a speed of 5.5 mm s(-1) in 3 degrees of freedom. The ultrasonic vibration shows its novelty in the output force as well: alongside the reduction of friction on the microrobot, the output force is doubled by the ultrasonic vibration. Using this high-accuracy, high-speed, and high-power microrobot, swine oocyte manipulations in a microfluidic chip are presented.

  17. Force and torque modelling of drilling simulation for orthopaedic surgery.

    PubMed

    MacAvelia, Troy; Ghasempoor, Ahmad; Janabi-Sharifi, Farrokh

    2014-01-01

    The advent of haptic simulation systems for orthopaedic surgery procedures has provided surgeons with an excellent tool for training and preoperative planning purposes. This is especially true for procedures involving the drilling of bone, which require a great amount of adroitness and experience due to difficulties arising from vibration and drill bit breakage. One of the potential difficulties with the drilling of bone is the lack of consistent material evacuation from the drill's flutes as the material tends to clog. This clogging leads to significant increases in force and torque experienced by the surgeon. Clogging was observed for feed rates greater than 0.5 mm/s and spindle speeds less than 2500 rpm. The drilling simulation systems that have been created to date do not address the issue of drill flute clogging. This paper presents force and torque prediction models that account for this phenomenon. The two coefficients of friction required by these models were determined via a set of calibration experiments. The accuracy of both models was evaluated by an additional set of validation experiments resulting in average R² regression correlation values of 0.9546 and 0.9209 for the force and torque prediction models, respectively. The resulting models can be adopted by haptic simulation systems to provide a more realistic tactile output.
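    The R² values quoted above measure how much of the observed force/torque variance the prediction models explain: R² = 1 − SS_res/SS_tot. A minimal sketch of the computation; the measured and predicted values below are made up for illustration:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between measured and model-predicted values."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. model-predicted thrust forces (N)
obs = [10.0, 14.0, 18.0, 22.0, 26.0]
pred = [10.5, 13.5, 18.2, 21.6, 26.4]
r2 = r_squared(obs, pred)
```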

  18. Agreement Between Institutional Measurements and Treatment Planning System Calculations for Basic Dosimetric Parameters as Measured by the Imaging and Radiation Oncology Core-Houston

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas

    Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values and to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities, using criteria involving both medians and standard deviations, for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy–style and stereotactic body radiation therapy–style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy–style output factors, stereotactic body radiation therapy–style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.
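    The comparison described above reduces to building, per parameter, a distribution of percent differences between institutional TPS values and IROC-H measurements, then summarizing it by its median and standard deviation. A minimal sketch; the output-factor values below are invented for illustration:

```python
def percent_diffs(tps, measured):
    """Percent difference of TPS value relative to the IROC-H measurement."""
    return [100.0 * (t - m) / m for t, m in zip(tps, measured)]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def std_dev(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Hypothetical small-field output factors: institutional TPS vs. IROC-H measurement
tps      = [0.950, 0.812, 0.698, 0.641]
measured = [0.940, 0.820, 0.710, 0.638]
diffs = percent_diffs(tps, measured)
summary = (median(diffs), std_dev(diffs))
```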

  19. Validation of CT dose-reduction simulation

    PubMed Central

    Massoumzadeh, Parinaz; Don, Steven; Hildebolt, Charles F.; Bae, Kyongtae T.; Whiting, Bruce R.

    2009-01-01

    The objective of this research was to develop and validate a custom computed tomography dose-reduction simulation technique for producing images that have an appearance consistent with the same scan performed at a lower mAs (with fixed kVp, rotation time, and collimation). Synthetic noise is added to projection (sinogram) data, incorporating a stochastic noise model that includes energy-integrating detectors, tube-current modulation, bowtie beam filtering, and electronic system noise. Experimental methods were developed to determine the parameters required for each component of the noise model. As a validation, the outputs of the simulations were compared to measurements with cadavers in the image domain and with phantoms in both the sinogram and image domain, using an unbiased root-mean-square relative error metric to quantify agreement in noise processes. Four-alternative forced-choice (4AFC) observer studies were conducted to confirm the realistic appearance of simulated noise, and the effects of various system model components on visual noise were studied. The “just noticeable difference (JND)” in noise levels was analyzed to determine the sensitivity of observers to changes in noise level. Individual detector measurements were shown to be normally distributed (p>0.54), justifying the use of a Gaussian random noise generator for simulations. Phantom tests showed the ability to match original and simulated noise variance in the sinogram domain to within 5.6%±1.6% (standard deviation), which was then propagated into the image domain with errors less than 4.1%±1.6%. Cadaver measurements indicated that image noise was matched to within 2.6%±2.0%. More importantly, the 4AFC observer studies indicated that the simulated images were realistic, i.e., no detectable difference between simulated and original images (p=0.86) was observed. 
JND studies indicated that observers’ sensitivity to change in noise levels corresponded to a 25% difference in dose, which is far larger than the noise accuracy achieved by simulation. In summary, the dose-reduction simulation tool demonstrated excellent accuracy in providing realistic images. The methodology promises to be a useful tool for researchers and radiologists to explore dose reduction protocols in an effort to produce diagnostic images with radiation dose “as low as reasonably achievable.” PMID:19235386
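    In this style of dose-reduction simulation, detector noise variance scales roughly inversely with mAs, so emulating a scan at lower mAs means adding zero-mean synthetic noise whose variance makes up the difference. A minimal sketch using only a Gaussian detector-noise term (the paper's full model also includes tube-current modulation, bowtie filtering, and electronic system noise); the sigma and mAs values are hypothetical:

```python
import random

def added_sigma(sigma_ref, mas_ref, mas_target):
    """Std. dev. of synthetic noise so total variance matches the lower-mAs scan."""
    var = sigma_ref ** 2 * (mas_ref / mas_target - 1.0)
    return var ** 0.5

def simulate_low_dose(sinogram, sigma_ref, mas_ref, mas_target, rng):
    s = added_sigma(sigma_ref, mas_ref, mas_target)
    return [x + rng.gauss(0.0, s) for x in sinogram]

rng = random.Random(0)
sino = [100.0] * 50000                                   # idealized flat sinogram
half_dose = simulate_low_dose(sino, 2.0, 200, 100, rng)  # simulate 200 -> 100 mAs
```

Halving the mAs doubles the noise variance, so the added noise here has the same standard deviation as the reference noise itself.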

  20. The Effects of Practice Modality on Pragmatic Development in L2 Chinese

    ERIC Educational Resources Information Center

    Li, Shuai; Taguchi, Naoko

    2014-01-01

    This study investigated the effects of input-based and output-based practice on the development of accuracy and speed in recognizing and producing request-making forms in L2 Chinese. Fifty American learners of Chinese with intermediate level proficiency were randomly assigned to an input-based training group, an output-based training group, or a…

  1. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the physics being simulated. To simulate real-world processes effectively, the model's output must be close to the observed measurements; to achieve this, input parameters are tuned until the objective function, the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses through a Python library also lets users combine techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat-flow model commonly used in permafrost science, performing optimization on the first three layers of the permafrost model, each with two thermal-conductivity input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover six initially unknown thermal-conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface to the Dakota toolbox can be used to perform analysis and optimization on a "black box" scientific model more efficiently than using Dakota alone.
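    The calibration loop described above amounts to minimizing an objective such as the sum of squared errors between model output and observations over the input parameters. A minimal single-parameter sketch with a toy linear stand-in for the heat-flow model and a brute-force grid search standing in for Dakota's optimizers; the model form, parameter, and values are all assumptions for illustration:

```python
def model(conductivity, depths):
    # Toy stand-in for the permafrost heat-flow model: temperature rise
    # proportional to depth and inversely proportional to conductivity
    return [20.0 / conductivity * z for z in depths]

def objective(k, depths, observed):
    # Sum of squared errors between simulated and observed values
    sim = model(k, depths)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

depths = [0.5, 1.0, 1.5, 2.0]
true_k = 1.8                      # "unknown" parameter to recover
observed = model(true_k, depths)  # synthetic observations

# Grid search over candidate conductivities, 0.5 to 3.0 in steps of 0.001
best = min((objective(k / 1000.0, depths, observed), k / 1000.0)
           for k in range(500, 3000))[1]
```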

  2. High-Voltage, Low-Power BNC Feedthrough Terminator

    NASA Technical Reports Server (NTRS)

    Bearden, Douglas

    2012-01-01

    This innovation is a high-voltage, low-power BNC (Bayonet Neill-Concelman) feedthrough that enables the user to properly terminate an instrumentation cable while connected to a high voltage, without the use of a voltage divider. The feedthrough is low power, so it does not load the source, and it properly terminates the instrumentation cable at the instrumentation even if the cable impedance is not constant. The Space Shuttle Program had a requirement to measure voltage transients on the orbiter bus through the Ground Lightning Measurement System (GLMS), with a bandwidth requirement of 1 MHz. The GLMS voltage measurement is connected to the orbiter through a DC panel, which is connected to the bus through a nonuniform cable approximately 75 ft (approximately equal to 23 m) long. A 15-ft (approximately equal to 5-m), 50-ohm triaxial cable is connected between the DC panel and the digitizer. Based on calculations and simulations, cable resonances and reflections due to the mismatched impedances of the cable connecting the orbiter bus and the digitizer cause the output not to reflect accurately what is on the bus. A voltage divider at the DC panel, with proper termination of the 50-ohm cable, would eliminate this issue, but implementation issues required an alternative design that terminates the cable properly without a voltage divider. After a damping circuit located at the digitizer was simulated, the simulations showed that the cable resonances were damped and the accuracy improved significantly. Test cables were built and confirmed that the simulations were accurate. Since the damping circuit is low power, it can be packaged in a BNC feedthrough.
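    The mismatch driving those reflections is quantified by the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0): a properly terminated 50-ohm cable reflects nothing, while a high-impedance digitizer input reflects nearly the full incident wave. A minimal sketch (the 1 MΩ input impedance is a typical assumption, not a value from the article):

```python
def reflection_coefficient(z_load, z0=50.0):
    # Fraction of the incident wave amplitude reflected at the cable/load boundary
    return (z_load - z0) / (z_load + z0)

matched = reflection_coefficient(50.0)    # properly terminated 50-ohm cable
open_ish = reflection_coefficient(1.0e6)  # hypothetical ~1 Mohm digitizer input
```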

  3. Evaluation of different gridded rainfall datasets for rainfed wheat yield prediction in an arid environment

    NASA Astrophysics Data System (ADS)

    Lashkari, A.; Salehnia, N.; Asadi, S.; Paymard, P.; Zare, H.; Bannayan, M.

    2018-05-01

    The accuracy of daily output from satellite and reanalysis data is crucial for crop yield prediction. This study evaluated the performance of the APHRODITE (Asian Precipitation-Highly-Resolved Observational Data Integration Towards Evaluation), PERSIANN (Rainfall Estimation from Remotely Sensed Information using Artificial Neural Networks), TRMM (Tropical Rainfall Measuring Mission), and AgMERRA (The Modern-Era Retrospective Analysis for Research and Applications) precipitation products as input data for the CSM-CERES-Wheat crop growth simulation model to predict rainfed wheat yield. Daily precipitation output from these sources for 7 years (2000-2007) was obtained and compared with the corresponding ground-observed precipitation data for 16 ground stations across the northeast of Iran. Comparisons of ground-observed daily precipitation with the corresponding data from the different datasets showed a root mean square error (RMSE) of less than 3.5 for all data. AgMERRA and APHRODITE showed the highest correlation (0.68 and 0.87) and index of agreement (d) values (0.79 and 0.89) with the ground-observed data. When daily precipitation data were aggregated over 10-day periods, the RMSE, r, and d values increased (to 30, 0.8, and 0.7) for the AgMERRA, APHRODITE, PERSIANN, and TRMM precipitation sources. Simulations of rainfed wheat leaf area index (LAI) and dry matter using the various precipitation data, coupled with observed solar radiation and temperature data, produced typical LAI and dry-matter curves across all stations. The average LAImax values were 0.78, 0.77, 0.74, 0.70, and 0.69 using PERSIANN, AgMERRA, ground-observed precipitation data, APHRODITE, and TRMM, respectively. Rainfed wheat grain yield simulated using AgMERRA and APHRODITE daily precipitation data was highly correlated (r² ≥ 0.70) with yield simulated using observed precipitation data. Therefore, gridded data have high potential to fill gaps and missing records in ground-observed precipitation data.
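    The agreement statistics used above are RMSE = sqrt(mean((P − O)²)) and Willmott's index of agreement, d = 1 − Σ(P − O)² / Σ(|P − Ō| + |O − Ō|)². A minimal sketch of both; the gauge and gridded-product rainfall values are made up for illustration:

```python
def rmse(obs, pred):
    """Root mean square error between observed and predicted series."""
    n = len(obs)
    return (sum((p - o) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def willmott_d(obs, pred):
    """Willmott's index of agreement (1 = perfect agreement)."""
    mean_o = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

# Hypothetical gauge (obs) vs. gridded-product (pred) daily rainfall, mm
obs = [0.0, 5.0, 12.0, 0.0, 3.0]
pred = [0.5, 4.0, 10.0, 0.0, 4.5]
```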

  4. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

    DTIC Science & Technology

    2018-01-01

    ARL-TR-8284 ● JAN 2018. US Army Research Laboratory. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation.

  5. Planned Destruction of Metal-Core Reactor: Simulation of Catastrophic Accidents and New Experimental Possibilities

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Kuvshinov, M. I.; Narozhnyi, A. T.; Popov, V. A.; Solov'ev, V. P.; Yuferev, V. I.

    2017-12-01

    A reactor with a destructible core (RIR reactor), generating a pulse with an output of 1.5 × 10¹⁹ fissions and a full width at half maximum of 2.5 μs, was developed and tested at VNIIEF. In the course of this work, a computational-experimental method for laboratory calibration of the reactor was developed and refined. This method ensures high accuracy in predicting the energy release in a real experiment with an excess reactivity of 3βeff above prompt criticality. A transportable explosion-proof chamber was also developed, which ensures the safe localization of the explosion products of the cores of small nuclear devices and of high-explosive charges with an equivalent mass of up to 100 kg of TNT.

  6. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to have a precise and correct edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied for the ground processing of MI images for obtaining sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
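    The conventional Sobel baseline mentioned above convolves the image with two 3×3 kernels and takes the gradient magnitude at each pixel. A minimal pure-Python sketch on a tiny synthetic image with a vertical edge (not COMS MI data):

```python
SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical-gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude for interior pixels; borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# 5x5 image: dark left columns, bright right columns -> vertical edge
img = [[0, 0, 1, 1, 1] for _ in range(5)]
mag = sobel_magnitude(img)
```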

  7. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
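    The graph Laplacian underlying the GLF criterion is L = D − W, where W holds pairwise similarity weights between waveforms and D is the diagonal degree matrix. A minimal sketch for a three-node similarity graph (the weights are made up; real GLF would build W from spike-waveform similarities):

```python
def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a weighted adjacency matrix."""
    n = len(W)
    degrees = [sum(row) for row in W]
    return [[(degrees[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

# Hypothetical symmetric similarity weights between three spike waveforms
W = [[0.0, 0.8, 0.1],
     [0.8, 0.0, 0.3],
     [0.1, 0.3, 0.0]]
L = graph_laplacian(W)
```

A key property, useful as a sanity check, is that every row of L sums to zero.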

  8. Variations in laser energy outputs over a series of simulated treatments.

    PubMed

    Lister, T S; Brewin, M P

    2014-10-01

    Test patches are routinely employed to determine the likely efficacy and the risk of adverse effects from cutaneous laser treatments. However, the degree to which these represent a full treatment has not been investigated in detail. This study aimed to determine the variability in pulse-to-pulse output energy from a representative selection of cutaneous laser systems in order to assess the value of laser test patches. The output energies of each pulse from seven cutaneous laser systems were measured using a pyroelectric measurement head over a 2-h period, employing a regime of 10-min simulated treatments followed by a 5-min rest period (between patients). Each laser system appeared to demonstrate a different pattern of variation in output energy per pulse over the period measured. The output energies from a range of cutaneous laser systems have been shown to vary considerably between a representative test patch and a full treatment, and over the course of an entire simulated clinic list. © 2014 British Association of Dermatologists.

  9. Time-Based Readout of a Silicon Photomultiplier (SiPM) for Time of Flight Positron Emission Tomography (TOF-PET)

    NASA Astrophysics Data System (ADS)

    Powolny, F.; Auffray, E.; Brunner, S. E.; Garutti, E.; Goettlich, M.; Hillemanns, H.; Jarron, P.; Lecoq, P.; Meyer, T.; Schultz-Coulon, H. C.; Shen, W.; Williams, M. C. S.

    2011-06-01

    Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. Therefore the amplifier inputs are connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm2 SiPM-model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm3 LSO crystal coupled to a SiPM. The measured time coincidence precision and the limitations in the overall timing accuracy are interpreted using Monte Carlo/SPICE simulation, Poisson statistics, and geometric effects of the crystal.
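    In the NINO scheme described above, the leading-edge crossing of the discriminator output gives the timestamp while the trailing edge encodes the energy, so the pulse width (time over threshold) is the energy measure. A minimal sketch extracting both crossings from a sampled pulse with linear interpolation; the samples, sample spacing, and threshold are made up:

```python
def edge_times(samples, dt, threshold):
    """Return (leading, trailing) threshold-crossing times via linear interpolation."""
    lead = trail = None
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if lead is None and a < threshold <= b:        # rising crossing
            lead = dt * (i - 1 + (threshold - a) / (b - a))
        elif lead is not None and a >= threshold > b:  # falling crossing
            trail = dt * (i - 1 + (a - threshold) / (a - b))
            break
    return lead, trail

pulse = [0.0, 0.0, 2.0, 6.0, 8.0, 8.0, 5.0, 1.0, 0.0]   # hypothetical samples
lead, trail = edge_times(pulse, dt=0.1, threshold=4.0)
tot = trail - lead  # time over threshold, a proxy for pulse energy
```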

  10. Investigation of the Impact of the Upstream Induction Zone on LIDAR Measurement Accuracy for Wind Turbine Control Applications using Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Simley, Eric; Y Pao, Lucy; Gebraad, Pieter; Churchfield, Matthew

    2014-06-01

    Several sources of error exist in lidar measurements for feedforward control of wind turbines including the ability to detect only radial velocities, spatial averaging, and wind evolution. This paper investigates another potential source of error: the upstream induction zone. The induction zone can directly affect lidar measurements and presents an opportunity for further decorrelation between upstream wind and the wind that interacts with the rotor. The impact of the induction zone is investigated using the combined CFD and aeroelastic code SOWFA. Lidar measurements are simulated upstream of a 5 MW turbine rotor and the true wind disturbances are found using a wind speed estimator and turbine outputs. Lidar performance in the absence of an induction zone is determined by simulating lidar measurements and the turbine response using the aeroelastic code FAST with wind inputs taken far upstream of the original turbine location in the SOWFA wind field. Results indicate that while measurement quality strongly depends on the amount of wind evolution, the induction zone has little effect. However, the optimal lidar preview distance and circular scan radius change slightly due to the presence of the induction zone.

  11. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.
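    Input selection of this kind is often approximated by scoring each candidate channel against the output to be predicted. A minimal sketch using squared correlation as the score, which is a simplified stand-in for the paper's algorithm, not a reproduction of it; the channel names and data are invented:

```python
def corr(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

def rank_inputs(inputs, output):
    """Rank candidate input channels by squared correlation with the output."""
    scores = {name: corr(x, output) ** 2 for name, x in inputs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical EMG channels vs. a kinematic output
out = [1.0, 2.0, 3.0, 4.0, 5.0]
chans = {"biceps": [1.1, 2.0, 2.9, 4.2, 5.0],
         "noise":  [0.3, -0.2, 0.1, 0.0, -0.1]}
ranking = rank_inputs(chans, out)
```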

  12. Real-time interactive simulation: using touch panels, graphics tablets, and video-terminal keyboards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venhuizen, J.R.

    1983-01-01

    A simulation laboratory using only digital computers for interactive computing must rely on CRT-based graphics devices for output and on keyboards, graphics tablets, touch panels, etc., for input. All of these devices work well, with a CRT with a touch panel mounted on it being the most flexible combination of input/output devices for interactive simulation.

  13. Analysis and Simulation of Disadvantaged Receivers for Multiple-Input Multiple-Output Communications Systems

    DTIC Science & Technology

    2010-09-01

    Analysis and Simulation of Disadvantaged Receivers for Multiple-Input Multiple-Output Communications Systems, by Tracy A. Martin, September 2010 (Master's thesis). ...Channel State Information at the Transmitter (CSIT). A disadvantaged receiver is subsequently introduced to the system lacking the optimization enjoyed

  14. The simulation of thermal characteristics of 980 nm vertical cavity surface emitting lasers

    NASA Astrophysics Data System (ADS)

    Fang, Tianxiao; Cui, Bifeng; Hao, Shuai; Wang, Yang

    2018-02-01

    In order to design a single-mode 980 nm vertical cavity surface emitting laser (VCSEL), a 2 μm output aperture is designed to guarantee single-mode output. The effects of different mesa sizes on the lattice temperature, the output power, and the voltage are simulated under continuous operation at room temperature to obtain the optimum mesa process parameters. Results from the Crosslight simulation software show that mesa radii between 9.5 and 12.5 μm not only maximize the output power but also improve the heat dissipation of the device. Project supported by the Beijing Municipal Education Commission (No. PXM2016_014204_500018) and the Construction of Scientific and Technological Innovation Service Ability in 2017 (No. PXM2017_014204_500034).

  15. A design of calibration single star simulator with adjustable magnitude and optical spectrum output system

    NASA Astrophysics Data System (ADS)

    Hu, Guansheng; Zhang, Tao; Zhang, Xuan; Shi, Gentai; Bai, Haojie

    2018-03-01

    To achieve multi-color-temperature and multi-magnitude output with real-time adjustment of both magnitude and temperature, a new type of calibration single star simulator with adjustable magnitude and optical spectrum output was designed in this article. A xenon lamp and a halogen tungsten lamp were used as light sources. Control of the spectral band and color temperature of the simulated star was realized by combining multiple narrow-band spectral beams of varying intensity. When light with different spectral characteristics and color temperatures enters the magnitude regulator, the light energy attenuation is controlled by adjusting the luminosity. This method fully satisfies the requirements of a calibration single star simulator with adjustable magnitude and optical spectrum output, achieving the goal of adjustable magnitude and spectrum.

  16. An End-to-End simulator for the development of atmospheric corrections and temperature - emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground-truth data forces algorithm development onto simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces were simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, increasing the accuracy of temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
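    The validation module's role — comparing an input reference dataset against the retrieved output and benchmarking the result — can be sketched with simple metrics. The function name and the toy emissivity values below are illustrative assumptions, not the simulator's actual interface:

```python
import numpy as np

def validate(reference, retrieved):
    """Benchmark a retrieved (output) dataset against its reference (input):
    mean signed error (bias) and root-mean-square error."""
    diff = np.asarray(retrieved, float) - np.asarray(reference, float)
    return {
        "bias": float(diff.mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
    }

# Toy comparison: retrieved emissivities vs. the laboratory input spectrum
reference = np.array([0.95, 0.96, 0.97, 0.96])
retrieved = np.array([0.94, 0.97, 0.96, 0.96])
metrics = validate(reference, retrieved)
```

    Running several AC/TES chains through the same comparison gives a like-for-like benchmark of the processing algorithms.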

  17. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process.

    PubMed

    Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.
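    The core optimization step — minimizing a time-weighted squared-error cost over controller gains with a swarm-based search — can be illustrated with a minimal sketch. The first-order plant, PI controller, and plain particle swarm below are stand-ins: the paper's BSO hybridizes PSO with bacterial foraging and tunes BELBIC/decoupler parameters over all loops, which this toy does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def itse(gains, t_end=10.0, dt=0.01):
    """Integral of time-weighted squared error (ITSE) for a PI loop
    around a first-order plant dy/dt = -y + u, tracking a unit step."""
    kp, ki = gains
    y, integ, cost, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += (-y + u) * dt          # forward-Euler plant update
        cost += t * e * e * dt
        t += dt
    return cost

def pso(f, n=15, iters=40, lo=0.0, hi=10.0):
    """Minimal particle swarm minimiser over a 2-D box of gains."""
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        v = 0.6 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

gains, best = pso(itse)   # tuned (Kp, Ki) and its ITSE
```

    In the paper, the scalar being minimized is the *sum* of ITSEs over all decoupled loops, so `itse` would be replaced by a full multi-loop closed-loop simulation.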

  18. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process

    PubMed Central

    Mohamed, Amr E.; Dorrah, Hassen T.

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller. PMID:27807444

  19. Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang

    2016-12-01

    In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains, driven by action potential data. The electrical signals are recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations with different frequencies applied at the “Zusanli” point of experimental rats. The maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. By validating the accuracy of firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, comparing the performance of several GLMs based on distinct inputs suggests that an input in the form of a half-sine with noise describes well the generator potential induced by the mechanical action of acupuncture. In particular, the comparison of reproducing the experimental spikes for five selected inputs is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulation, which indicates that the mapping from the half-sine-with-noise input to the experimental spikes meets the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
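    A minimal version of the estimation step — fitting a Poisson GLM by maximum likelihood so that a half-sine-with-noise input predicts spike counts — might look as follows. The stimulus, true coefficients, and data here are synthetic assumptions for illustration, not the recorded dorsal-horn data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic "generator potential": half-sine bursts plus noise, standing in
# for the assumed model input driven by the needle manipulation.
t = np.linspace(0.0, 1.0, 500)
stim = np.clip(np.sin(2 * np.pi * 2 * t), 0.0, None) \
       + 0.05 * rng.standard_normal(t.size)

# Ground-truth GLM: per-bin spike counts are Poisson with a log-linear rate.
true_b0, true_b1 = -2.0, 3.0
counts = rng.poisson(np.exp(true_b0 + true_b1 * stim))

def neg_log_lik(beta):
    """Poisson negative log-likelihood (log-factorial constant dropped)."""
    log_rate = beta[0] + beta[1] * stim
    return np.sum(np.exp(log_rate) - counts * log_rate)

fit = minimize(neg_log_lik, x0=np.zeros(2))   # maximum-likelihood estimate
b0_hat, b1_hat = fit.x
```

    Comparing the log-likelihoods of fits driven by different candidate inputs is the same kind of model comparison the study uses to argue for the half-sine-with-noise form.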

  20. Astrometry and exoplanets in the Gaia era: a Bayesian approach to detection and parameter recovery

    NASA Astrophysics Data System (ADS)

    Ranalli, P.; Hobbs, D.; Lindegren, L.

    2018-06-01

    The Gaia mission is expected to make a significant contribution to the knowledge of exoplanet systems, both in terms of their number and of their physical properties. We develop Bayesian methods and detection criteria for orbital fitting, and revise the detectability of exoplanets in light of the in-flight properties of Gaia. Limiting ourselves to one-planet systems as a first step of the development, we simulate Gaia data for exoplanet systems over a grid of S/N, orbital period, and eccentricity. The simulations are then fit using Markov chain Monte Carlo methods. We investigate the detection rate according to three information criteria and the Δχ2. For the Δχ2, the effective number of degrees of freedom depends on the mission length. We find that the choice of the Markov chain starting point can affect the quality of the results; we therefore consider two limiting possibilities: an ideal case, and a very simple method that finds the starting point assuming circular orbits. We use 6644 and 4402 simulations to assess the fraction of false positive detections in a 5 yr and in a 10 yr mission, respectively; and 4968 and 4706 simulations to assess the detection rate and how the parameters are recovered. Using Jeffreys' scale of evidence, the fraction of false positives passing a strong evidence criterion is ≲0.2% (0.6%) when considering a 5 yr (10 yr) mission and using the Akaike information criterion or the Watanabe-Akaike information criterion, and <0.02% (<0.06%) when using the Bayesian information criterion. We find that there is a 50% chance of detecting a planet with a minimum S/N = 2.3 (1.7). This sets the maximum distance to which a planet is detectable to 70 pc and 3.5 pc for Jupiter-mass and Neptune-mass planets, respectively, assuming a 10 yr mission, a 4 au semi-major axis, and a 1 M⊙ star. We show the distribution of the accuracy and precision with which orbital parameters are recovered.
The period is the orbital parameter that can be determined with the best accuracy, with a median relative difference between input and output periods of 4.2% (2.9%) for a 5 yr (10 yr) mission. The semi-major axis of the orbit can be recovered with a median relative error of 7% (6%), and the eccentricity with a median absolute error of 0.07 (0.06).
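    The detection criteria can be illustrated by comparing a single-star fit with a star-plus-planet fit through their information criteria. The χ² values, parameter counts, observation count, and the ΔBIC > 10 "very strong evidence" cut below are toy assumptions for illustration, not the paper's numbers:

```python
import numpy as np

def information_criteria(chi2, k, n):
    """AIC and BIC from a chi-square fit statistic with k free parameters
    and n data points (Gaussian errors, additive constants dropped)."""
    return {"aic": chi2 + 2 * k, "bic": chi2 + k * np.log(n)}

n = 1000  # number of along-scan observations (toy value)
single_star = information_criteria(chi2=1100.0, k=5, n=n)   # no planet
with_planet = information_criteria(chi2=1020.0, k=12, n=n)  # star + one orbit

delta_chi2 = 1100.0 - 1020.0                       # raw fit improvement
delta_bic = single_star["bic"] - with_planet["bic"]
detected = delta_bic > 10.0   # a commonly used "very strong evidence" cut
```

    Note how the BIC's k·ln(n) term penalizes the seven extra orbital parameters much more heavily than the Δχ², which is why the BIC yields the lowest false-positive fraction in the abstract.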

  1. GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA

    NASA Technical Reports Server (NTRS)

    Stark, M.

    1994-01-01

    Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. 
The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.

  2. Time Triggered Ethernet System Testing Means and Method

    NASA Technical Reports Server (NTRS)

    Smithgall, William Todd (Inventor); Hall, Brendan (Inventor); Varadarajan, Srivatsan (Inventor)

    2014-01-01

    Methods and apparatus are provided for evaluating the performance of a Time Triggered Ethernet (TTE) system employing Time Triggered (TT) communication. A real TTE system under test (SUT) is provided, having real input elements that communicate using TT messages with output elements via one or more first TTE switches during a first time-interval schedule established for the SUT. A simulation system is also provided, having input simulators that communicate using TT messages via one or more second TTE switches with the same output elements during a second time-interval schedule established for the simulation system. The first and second time-interval schedules are offset slightly so that messages from the input simulators, when present, arrive at the output elements prior to messages from the analogous real inputs, thereby having priority over messages from the real inputs and causing the system to operate on the simulated inputs when present.

  3. Monte Carlo simulation for Neptun 10 PC medical linear accelerator and calculations of output factor for electron beam

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad

    2012-01-01

    Aim Exact knowledge of dosimetric parameters is an essential prerequisite of effective treatment in radiotherapy. To fulfill this requirement, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results The measurements were carried out with a Wellhöfer-Scanditronix dose scanning system. Our findings revealed that output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion In general, the very good consistency of simulated and measured results is good proof that the goal of this work has been accomplished. PMID:24377010
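    The quantity being compared is the electron output factor, a dose ratio between a given field and the reference field. A toy comparison of a simulated and a measured value (all numbers invented for illustration) might look like:

```python
def output_factor(dose_field, dose_reference):
    """Electron output factor: dose per monitor unit for a given field
    relative to the reference field at the same depth."""
    return dose_field / dose_reference

# Invented numbers: a 6x6 cm field against the 10x10 cm reference
of_simulated = output_factor(0.97, 1.00)   # from the Monte Carlo model
of_measured = output_factor(0.96, 1.00)    # from the scanning dosimetry
difference_pct = 100.0 * abs(of_simulated - of_measured) / of_measured
```

    A percent-level difference of this kind is the "very good agreement" the abstract reports across the eleven fields.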

  4. 40 CFR 63.4967 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...

  5. 40 CFR 63.4967 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...

  6. 40 CFR 63.4967 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...

  7. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging.

    PubMed

    Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo

    2015-02-01

    The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV−1 while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV−1. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. 
Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR resulting in a bulk absorption coefficient of 5 × 10−5 μm−1. The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5 × 10−5 μm−1. To the author's knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure. © 2015 American Association of Physicists in Medicine.

  8. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging.

    PubMed

    Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo

    2015-02-01

    The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV−1 while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV−1. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. 
Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR resulting in a bulk absorption coefficient of 5 × 10−5μm−1. The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5 × 10−5μm−1. To the author’s knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.

  9. A Depolarisation Lidar Based Method for the Determination of Liquid-Cloud Microphysical Properties.

    NASA Astrophysics Data System (ADS)

    Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; De Roode, S. R.; Siebesma, P.

    2014-12-01

    The fact that polarisation lidars measure a multiple-scattering induced depolarisation signal in liquid clouds is well known. The depolarisation signal depends on the lidar characteristics (e.g. wavelength and field-of-view) as well as the cloud properties (e.g. liquid water content (LWC) and cloud droplet number concentration (CDNC)). Previous efforts seeking to use depolarisation information in a quantitative manner to retrieve cloud properties have been undertaken with, arguably, limited practical success. In this work we present a retrieval procedure applicable to clouds with (quasi-)linear LWC profiles and (quasi-)constant CDNC in the cloud-base region. Limiting the applicability of the procedure in this manner allows us to reduce the cloud variables to two parameters (namely the liquid water content lapse rate and the CDNC). This simplification, in turn, allows us to employ a robust optimal-estimation inversion using pre-computed look-up tables produced with lidar Monte-Carlo multiple-scattering simulations. Here, we describe the theory behind the inversion procedure and apply it to simulated observations based on large-eddy simulation model output. The inversion procedure is then applied to actual depolarisation lidar data covering a range of cases taken from the Cabauw measurement site in the central Netherlands. The lidar results were then used to predict the corresponding cloud-base region radar reflectivities. In non-drizzling conditions, it was found that the lidar inversion results can be used to predict the observed radar reflectivities with an accuracy within the radar calibration uncertainty (2-3 dBZ). This result strongly supports the accuracy of the lidar inversion results. Results of a comparison between ground-based aerosol number concentration and lidar-derived CDNC are also presented. The results are seen to be consistent with previous studies based on aircraft-based in situ measurements.
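    The look-up-table inversion can be sketched as a least-squares search over a precomputed two-parameter grid. The forward model below is a made-up smooth surrogate standing in for the Monte-Carlo multiple-scattering simulations; G (LWC lapse rate) and N (droplet number concentration) are the two retrieved parameters:

```python
import numpy as np

# Hypothetical forward model for the depolarisation ratio at height z above
# cloud base: an invented smooth surrogate, NOT the paper's Monte-Carlo model.
def forward(G, N, z):
    return 0.3 * (1 - np.exp(-0.01 * G * z)) + 0.0005 * N * (z / 100.0) ** 2

z = np.linspace(10, 200, 20)                     # heights above cloud base (m)
obs = forward(0.8, 100.0, z) \
      + 0.002 * np.random.default_rng(2).standard_normal(z.size)

# Precomputed look-up table over the two-parameter grid
Gs = np.linspace(0.2, 2.0, 40)
Ns = np.linspace(20.0, 300.0, 60)
table = np.array([[forward(g, n, z) for n in Ns] for g in Gs])

# Least-squares pick from the table (optimal estimation with uniform errors)
cost = ((table - obs) ** 2).sum(axis=-1)
i, j = np.unravel_index(cost.argmin(), cost.shape)
G_hat, N_hat = Gs[i], Ns[j]
```

    In the real procedure the cost would be weighted by measurement and prior covariances, but the precomputed-table structure is the same.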

  10. Battery Energy Storage Systems to Mitigate the Variability of Photovoltaic Power Generation

    NASA Astrophysics Data System (ADS)

    Gurganus, Heath Alan

    Methods of generating renewable energy such as solar photovoltaic (PV) cells and wind turbines offer great promise in terms of a reduced carbon footprint and overall impact on the environment. However, these methods also share the attribute of being highly stochastic, meaning they are variable in a way that is difficult to forecast with sufficient accuracy. While solar power currently constitutes a small amount of generating potential in most regions, the cost of photovoltaics continues to decline and a trend has emerged to build larger PV plants than was once feasible. This has brought the matter of increased variability to the forefront of research in the industry. Energy storage has been proposed as a means of mitigating this increased variability --- and thus reducing the need to utilize traditional spinning reserves --- as well as offering auxiliary grid services such as peak-shifting and frequency control. This thesis addresses the feasibility of using electrochemical storage methods (i.e. batteries) to decrease the ramp rates of PV power plants. By building a simulation of a grid-connected PV array and a typical Battery Energy Storage System (BESS) in the NetLogo simulation environment, I have created a parameterized tool that can be tailored to describe almost any potential PV setup. This thesis describes the design and function of this model, and makes a case for the accuracy of its measurements by comparing its simulated output to that of well-documented real-world sites. Finally, a set of recommendations for the design and operational parameters of such a system are then put forth based on the results of several experiments performed using this model.
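    The basic mechanism — using a battery to limit the ramp rate of the power injected into the grid — can be sketched as follows. The function, parameter values, and the cloud-passage profile are illustrative assumptions, not the thesis's NetLogo model:

```python
import numpy as np

def smooth_with_bess(pv_power, max_ramp, capacity, dt=1.0):
    """Ramp-rate limiter: the battery absorbs or supplies the difference
    between the raw PV output and a ramp-limited grid injection, within
    its state-of-charge limits."""
    grid = np.empty_like(pv_power)
    soc = capacity / 2.0                 # start half-charged (energy units)
    grid[0] = pv_power[0]
    for k in range(1, len(pv_power)):
        step = np.clip(pv_power[k] - grid[k - 1], -max_ramp * dt, max_ramp * dt)
        target = grid[k - 1] + step
        soc_next = soc + (pv_power[k] - target) * dt
        if 0.0 <= soc_next <= capacity:
            soc, grid[k] = soc_next, target    # battery covers the gap
        else:
            grid[k] = pv_power[k]              # battery at a limit: pass PV through
            soc = min(max(soc_next, 0.0), capacity)
    return grid

# Cloud passage: PV output drops abruptly from 1.0 to 0.2 (per-unit)
pv = np.array([1.0, 1.0, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2])
grid = smooth_with_bess(pv, max_ramp=0.1, capacity=5.0)
```

    The injection ramps down gradually while the battery discharges; once the battery reaches a state-of-charge limit the injection snaps back to the raw PV output, so sizing the capacity against the worst expected ramp is exactly the design question the thesis's experiments address.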

  11. Extended interaction oversized coaxial relativistic klystron amplifier with gigawatt-level output at Ka band

    NASA Astrophysics Data System (ADS)

    Li, Shifeng; Duan, Zhaoyun; Huang, Hua; Liu, Zhenbang; He, Hu; Wang, Fei; Wang, Zhanliang; Gong, Yubin

    2018-04-01

    In this paper, an extended interaction oversized coaxial relativistic klystron amplifier (EIOC-RKA) with gigawatt-level output at Ka band is proposed. We introduce oversized coaxial and multi-gap resonant cavities to increase the power capacity and investigate a non-uniform extended interaction output cavity to improve the electronic efficiency of the EIOC-RKA. We develop a high-order-mode gap in the input and output cavities to simplify the design and fabrication of the input and output couplers. We design the EIOC-RKA using particle-in-cell simulation. In the simulations, we use an electron beam with a current of 6 kA and a voltage of 525 kV, focused by a low magnetic flux density of 0.5 T. The simulation results demonstrate that the saturated output power is 1.17 GW, the electronic efficiency is 37.1%, and the saturated gain is 57 dB at 30 GHz. Self-oscillation is suppressed by adopting absorbing materials. The proposed EIOC-RKA has many advantages, including large power capacity, high electronic efficiency, a low focusing magnetic field, high gain, and a simple structure.

  12. Use of output from high-resolution atmospheric models in landscape-scale hydrologic models: An assessment

    USGS Publications Warehouse

    Hostetler, S.W.; Giorgi, F.

    1993-01-01

    In this paper we investigate the feasibility of coupling regional climate models (RCMs) with landscape-scale hydrologic models (LSHMs) for studies of the effects of climate on hydrologic systems. The RCM used is the National Center for Atmospheric Research/Pennsylvania State University mesoscale model (MM4). Output from two year-round simulations (1983 and 1988) over the western United States is used to drive a lake model for Pyramid Lake in Nevada and a streamflow model for Steamboat Creek in Oregon. Comparisons with observed data indicate that MM4 is able to produce meteorologic data sets that can be used to drive hydrologic models. Results from the lake model simulations indicate that the use of MM4 output produces reasonably good predictions of surface temperature and evaporation. Results from the streamflow simulations indicate that the use of MM4 output yields good simulations of the seasonal cycle of streamflow, but deficiencies in simulated wintertime precipitation resulted in underestimates of streamflow and soil moisture. Further work with climate (multiyear) simulations is necessary to achieve a complete analysis, but the results from this study indicate that coupling of LSHMs and RCMs may be a useful approach for evaluating the effects of climate change on hydrologic systems.

  13. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper resolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  14. Characterization of nonlinear ultrasound fields of 2D therapeutic arrays

    PubMed Central

    Yuldashev, Petr V.; Kreider, Wayne; Sapozhnikov, Oleg A.; Farr, Navid; Partanen, Ari; Bailey, Michael R.; Khokhlova, Vera

    2015-01-01

    A current trend in high intensity focused ultrasound (HIFU) technologies is to use 2D focused phased arrays that enable electronic steering of the focus, beamforming to avoid overheating of obstacles (such as ribs), and better focusing through inhomogeneities of soft tissue using time reversal methods. In many HIFU applications, the acoustic intensity in situ can reach thousands of W/cm2 leading to nonlinear propagation effects. At high power outputs, shock fronts develop in the focal region and significantly alter the bioeffects induced. Clinical applications of HIFU are relatively new and challenges remain for ensuring their safety and efficacy. A key component of these challenges is the lack of standard procedures for characterizing nonlinear HIFU fields under operating conditions. Methods that combine low-amplitude pressure measurements and nonlinear modeling of the pressure field have been proposed for axially symmetric single element transducers but have not yet been validated for the much more complex 3D fields generated by therapeutic arrays. Here, the method was tested for a clinical HIFU source comprising a 256-element transducer array. A numerical algorithm based on the Westervelt equation was used to enable 3D full-diffraction nonlinear modeling. With the acoustic holography method, the magnitude and phase of the acoustic field were measured at a low power output and used to determine the pattern of vibrations at the surface of the array. This pattern was then scaled to simulate a range of intensity levels near the elements up to 10 W/cm2. The accuracy of modeling was validated by comparison with direct measurements of the focal waveforms using a fiber-optic hydrophone. Simulation results and measurements show that shock fronts with amplitudes up to 100 MPa were present in focal waveforms at clinically relevant outputs, indicating the importance of strong nonlinear effects in ultrasound fields generated by HIFU arrays. PMID:26203345

  15. Cardiac surgery productivity and throughput improvements.

    PubMed

    Lehtonen, Juha-Matti; Kujala, Jaakko; Kouri, Juhani; Hippeläinen, Mikko

    2007-01-01

The high variability in cardiac surgery length is one of the main challenges for staff in managing productivity. This study aims to evaluate the impact of six interventions on open-heart surgery operating theatre productivity. A discrete operating theatre event simulation model with empirical operation time input data from 2603 patients is used to evaluate the effect that these process interventions have on surgery output and overtime work. A linear regression model was used to produce operation time forecasts for surgery scheduling; it could also be used to explain operation time. A forecasting model based on linear regression of variables available before the surgery explains 46 per cent of operating time variance. The main factors influencing operation length were the type of operation, redoing the operation and the head surgeon. Reduction of changeover time between surgeries, by inducing anaesthesia outside an operating theatre and by reducing slack time at the end of the day after a second surgery, has the strongest effect on surgery output and productivity. A more accurate operation time forecast did not have any effect on output, although it did decrease overtime work. A reduction in the operation time itself is not studied in this article. However, the forecasting model can also be applied to discover which factors are most significant in explaining variation in the length of open-heart surgery. The challenge of scheduling two open-heart surgeries in one day can be partly resolved by increasing the length of the day, decreasing the time between two surgeries, or improving patient scheduling procedures so that two short surgeries can be paired. A linear regression model is created in the paper to increase the accuracy of operation time forecasting and to identify the factors that have the most influence on operation time. A simulation model is used to analyse the impact of improved surgical length forecasting and five selected process interventions on productivity in cardiac surgery.

  16. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  17. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
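The output variance decomposition described above can be illustrated with a minimal sketch. The model below is a toy stand-in for an ABM (not the farmland-conservation model from the study), and first-order Sobol indices are estimated by binning each input and computing the variance of the within-bin output means:

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, n_bins=40, seed=0):
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y)
    by binning each input and taking the variance of within-bin means."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, n_inputs))        # inputs uniform on [0, 1)
    y = model(x)
    total_var = y.var()
    indices = []
    for i in range(n_inputs):
        bins = np.floor(x[:, i] * n_bins).astype(int)
        bin_means = np.array([y[bins == b].mean() for b in range(n_bins)])
        counts = np.array([(bins == b).sum() for b in range(n_bins)])
        cond_var = np.average((bin_means - y.mean()) ** 2, weights=counts)
        indices.append(cond_var / total_var)
    return np.array(indices)

# Toy stand-in for an ABM output: X0 dominates, X2 contributes nothing.
def toy_model(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.0 * x[:, 2]

s = first_order_sobol(toy_model, n_inputs=3)
print(s.round(3))   # roughly [0.94, 0.06, 0.00]
```

A reduced model in the spirit of the paper's second experiment would then keep only the inputs with non-negligible indices.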

  18. Procedures for Instructional Systems Development

    DTIC Science & Technology

    1981-09-18

    single faults to the circuit and components level. (JTI Task No. TCB-01). Figure III-ll.--Example of a Module Page of a Curriculum Outline. 3 - 80...semiconductor trapezoidal wave generator circuit , multimeter, and oscilloscope measure the output amplitude, rise time, and jump voltage within +/- 10...accuracy. Given a trainer having a semiconductor trapezoidal wave generator circuit , multimeter, and oscilloscope - CONDITION (C) . measure the output

  19. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models because of their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is affected by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  20. A voice-input voice-output communication aid for people with severe speech impairment.

    PubMed

    Hawley, Mark S; Cunningham, Stuart P; Green, Phil D; Enderby, Pam; Palmer, Rebecca; Sehgal, Siddharth; O'Neill, Peter

    2013-01-01

A new form of augmentative and alternative communication (AAC) device for people with severe speech impairment, the voice-input voice-output communication aid (VIVOCA), is described. The VIVOCA recognizes the disordered speech of the user and builds messages, which are converted into synthetic speech. System development was carried out employing user-centered design and development methods, which identified and refined key requirements for the device. A novel methodology for building small-vocabulary, speaker-dependent automatic speech recognizers with reduced amounts of training data was applied. Experiments showed that this method is successful in generating good recognition performance (mean accuracy 96%) on highly disordered speech, even when recognition perplexity is increased. The selected message-building technique traded off various factors, including speed of message construction and range of available message outputs. The VIVOCA was evaluated in a field trial by individuals with moderate to severe dysarthria, which confirmed that they can use the device to produce intelligible speech output from disordered speech input. The trial highlighted some issues that limit the performance and usability of the device in real usage situations, with mean recognition accuracy falling to 67% in these circumstances. These limitations will be addressed in future work.

  1. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

Quantitative volume flow rate measurements using magnetic resonance imaging are studied in this dissertation because volume flow rates are of special interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experimental results. The results show that the correction significantly reduces the errors due to the partial volume effect. We apply the correction method to data from in vivo studies. Because the true blood flow is not known, the corrected results are checked against physiological knowledge (such as cardiac output) and conservation of flow: for example, the volume of blood flowing to the brain should equal the volume of blood flowing from the brain. Our measurement results are consistent with these checks.
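The core computation, velocity from phase followed by integration over the lumen, can be sketched as follows. This is an illustrative reconstruction, not the dissertation's code; the venc scaling v = venc * phi / pi and all numerical values here are assumptions:

```python
import numpy as np

def volume_flow_rate(phase_map, lumen_mask, venc_cm_s, voxel_area_cm2):
    """Volume flow rate (mL/s): velocity is assumed linear in phase,
    v = venc * phi / pi, and is integrated over the vessel lumen."""
    velocity = venc_cm_s * phase_map / np.pi                   # cm/s per voxel
    return float(velocity[lumen_mask].sum() * voxel_area_cm2)  # cm^3/s = mL/s

# Synthetic laminar (parabolic) velocity profile in a circular lumen.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
r2 = (xx - n / 2.0) ** 2 + (yy - n / 2.0) ** 2
radius = 16.0
lumen = r2 <= radius ** 2
v_peak = 50.0                                   # cm/s, assumed peak velocity
v_true = np.where(lumen, v_peak * (1.0 - r2 / radius ** 2), 0.0)
venc = 100.0                                    # cm/s, assumed venc
phase = np.pi * v_true / venc                   # forward model: phase map

q = volume_flow_rate(phase, lumen, venc, voxel_area_cm2=0.01)
# Analytic laminar flow: Q = (v_peak / 2) * lumen area, about 201 mL/s here
print(round(q, 1))
```

With only a handful of voxels across the lumen, the discrete sum departs from the analytic value, which is the partial volume effect the dissertation quantifies.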

  2. A 3D radiative transfer model based on lidar data and its application on hydrological and ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Li, W.; Su, Y.; Harmon, T. C.; Guo, Q.

    2013-12-01

Light Detection and Ranging (lidar) is an optical remote sensing technology that measures properties of scattered light to determine the range and/or other information about a distant object. Due to its ability to generate 3-dimensional data with high spatial resolution and accuracy, lidar technology is being increasingly used in ecology, geography, geology, geomorphology, seismology, remote sensing, and atmospheric physics. In this study we construct a 3-dimensional (3D) radiative transfer model (RTM) using lidar data to simulate the spatial distribution of solar radiation (direct and diffuse) on the surfaces of water and mountain forests. The model includes three sub-models: a light model simulating the light source, a sensor model simulating the camera, and a scene model simulating the landscape. We use ground-based and airborne lidar data to characterize the 3D structure of the study area and generate a detailed 3D scene model. The interactions between light and objects are simulated using the Monte Carlo Ray Tracing (MCRT) method. A large number of rays are generated from the light source, and for each individual ray the full traveling path is traced until it is absorbed or escapes from the scene boundary. By locating the sensor at different positions and directions, we can simulate the spatial distribution of solar energy at the ground, vegetation and water surfaces. These outputs can then be incorporated into meteorological drivers for hydrologic and energy balance models to improve our understanding of hydrologic processes and ecosystem functions.
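The MCRT photon-tracing loop can be illustrated with a deliberately simplified 2D sketch; the actual model is 3D and driven by the lidar-derived scene, so the box geometry, absorption probability and isotropic scattering below are assumptions:

```python
import math
import random

def trace_photons(n_photons, absorb_prob=0.3, scene_size=10.0, seed=1):
    """Trace each photon through random free-path steps; at each
    interaction it is absorbed with probability absorb_prob or
    scattered isotropically, until absorbed or out of the scene box."""
    random.seed(seed)
    absorbed = escaped = 0
    for _ in range(n_photons):
        x = y = scene_size / 2.0                 # photons start at the centre
        angle = random.uniform(0.0, 2.0 * math.pi)
        while True:
            step = random.expovariate(1.0)       # free path, mean = 1 unit
            x += math.cos(angle) * step
            y += math.sin(angle) * step
            if not (0.0 <= x <= scene_size and 0.0 <= y <= scene_size):
                escaped += 1                     # left the scene boundary
                break
            if random.random() < absorb_prob:
                absorbed += 1                    # absorbed by the medium
                break
            angle = random.uniform(0.0, 2.0 * math.pi)  # isotropic scatter
    return absorbed, escaped

absorbed, escaped = trace_photons(20000)
print(absorbed, escaped)   # absorption dominates for this deep geometry
```

In the full model, tallying where each photon terminates (ground, canopy, water) gives the spatial radiation distribution described above.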

  3. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of its key parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed, and a novel parameter estimation method that incorporates the experimental data for the simulator ABM is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and serves as an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimates of the model parameters are obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data over the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the method not only has good fitting and prediction accuracy but also favorable computational efficiency. PMID:29194393

  4. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of its key parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed, and a novel parameter estimation method that incorporates the experimental data for the simulator ABM is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and serves as an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimates of the model parameters are obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data over the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the method not only has good fitting and prediction accuracy but also favorable computational efficiency.
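The implausibility screening step at the heart of history matching can be sketched as follows. The one-parameter "simulator", its emulator variance, and the observation are hypothetical; the cutoff I(x) < 3 is the conventional three-sigma rule:

```python
import numpy as np

def implausibility(emulator_mean, emulator_var, z_obs, obs_var):
    """History-matching implausibility:
    I(x) = |z - E[f(x)]| / sqrt(Var_obs + Var_emulator)."""
    return np.abs(z_obs - emulator_mean) / np.sqrt(obs_var + emulator_var)

# Hypothetical 1-parameter simulator f(x) = 3x, emulated with a small
# variance; the observation z = 1.5 is matched near x = 0.5.
x = np.linspace(0.0, 1.0, 101)
em_mean = 3.0 * x
em_var = np.full_like(x, 0.01)
z_obs, obs_var = 1.5, 0.01

keep = implausibility(em_mean, em_var, z_obs, obs_var) < 3.0  # 3-sigma cutoff
print(x[keep].min(), x[keep].max())  # non-implausible region around 0.5
```

A downstream optimizer such as PSO would then search only over the retained, non-implausible values.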

  5. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, M. H.; Giebel, G.; Nielsen, T. S.; Hahmann, A.; Sørensen, P.; Madsen, H.

    2012-04-01

This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the working title "Integrated Wind Power Planning Tool". The project commenced October 1, 2011, and its goal is to integrate a numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with a focus on long-term power system planning for future wind farms as well as short-term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. With regard to the latter, one such simulation tool has been developed at the Wind Energy Division, Risø DTU, intended for long-term power system planning. As part of the PSO project, the inferior NWP model used at present will be replaced by the state-of-the-art Weather Research & Forecasting (WRF) model. Furthermore, the integrated simulation tool will be improved so that it can simultaneously handle 10-50 times more turbines than the present ~300, and additional atmospheric parameters will be included in the model. The WRF data will also be input for a statistical short-term prediction model to be developed in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. This integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies and the prevailing weather conditions defined by the WRF output. The output from the integrated prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision-making problem to be solved. The high resolution of the WRF results loaded into the integrated prediction model will ensure that a high-accuracy data basis is available for use in the decision-making process of the Danish transmission system operator. The need for high-accuracy predictions will only increase over the next decade as Denmark approaches its goal of 50% wind-power-based electricity by 2020, up from the current 20%.

  6. Using the Real-Ear-to-Coupler Difference within the American Academy of Audiology Pediatric Amplification Guideline: Protocols for Applying and Predicting Earmold RECDs.

    PubMed

    Moodie, Sheila; Pietrobon, Jonathan; Rall, Eileen; Lindley, George; Eiten, Leisha; Gordey, Dave; Davidson, Lisa; Moodie, K Shane; Bagatto, Marlene; Haluschak, Meredith Magathan; Folkeard, Paula; Scollie, Susan

    2016-03-01

Real-ear-to-coupler difference (RECD) measurements are used to estimate the degree and configuration of hearing loss (in dB SPL ear canal) and to predict hearing aid output from coupler-based measures. Accurate measurement of hearing thresholds, derivation of hearing aid fitting targets, and prediction of hearing aid output in the ear canal assume consistent matching of the RECD coupling procedure (i.e., foam tip or earmold) with that used during assessment and in verification of the hearing aid fitting. When there is a mismatch between these coupling procedures, errors are introduced. The goal of this study was to quantify the systematic difference in measured RECD values obtained when using a foam tip versus an earmold with various tube lengths. Assuming that systematic errors exist, the second goal was to investigate the use of a foam tip to earmold correction for the purposes of improving fitting accuracy when mismatched RECD coupling conditions occur (e.g., foam tip at assessment, earmold at verification). Eighteen adults and 17 children (age range: 3-127 mo) participated in this study. Data were obtained using simulated ears of various volumes and earmold tubing lengths and from patients using their own earmolds. Derived RECD values based on simulated ear measurements were compared with RECD values obtained for adult and pediatric ears with foam tip and earmold coupling. Results indicate that the differences between foam tip and earmold RECDs are consistent across test ears for adults and children, which supports the development of a correction between foam tip and earmold couplings for RECDs that can be applied across individuals. The foam tip to earmold correction values developed in this study can be used to provide improved estimations of earmold RECDs. This may support better accuracy in the acoustic transforms used to convert thresholds and/or hearing aid coupler responses to ear canal sound pressure level for the purposes of fitting behind-the-ear hearing aids. American Academy of Audiology.
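Applying such a correction in a fitting workflow amounts to a per-frequency dB offset added to the foam-tip RECD. The sketch below is illustrative only: the correction values are made up and are not the ones derived in the study:

```python
# Hypothetical foam-tip-to-earmold correction (dB) per audiometric
# frequency; illustrative values only, not the study's derived corrections.
FOAMTIP_TO_EARMOLD_DB = {250: -1.0, 500: -2.0, 1000: -1.5, 2000: 0.5, 4000: 2.0}

def predict_earmold_recd(foamtip_recd_db):
    """Estimate earmold RECDs by adding a per-frequency correction
    to RECDs measured with a foam tip."""
    return {f: round(recd + FOAMTIP_TO_EARMOLD_DB[f], 1)
            for f, recd in foamtip_recd_db.items()}

foamtip = {250: 4.0, 500: 5.5, 1000: 7.0, 2000: 9.0, 4000: 12.5}
print(predict_earmold_recd(foamtip))
```

The corrected values would then feed the usual coupler-to-ear-canal transforms during verification.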

  7. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  8. Programmable pulse generator based on programmable logic and direct digital synthesis.

    PubMed

    Suchenek, M; Starecki, T

    2012-12-01

The paper presents a new approach to pulse generation that provides both wide-range tunability and high accuracy of the output pulses. The concept is based on the use of programmable logic and direct digital synthesis (DDS). The programmable logic works as a set of programmable counters, while the DDS serves as the clock source. Using DDS as the clock source gives the output pulses a stability comparable to that of crystal oscillators and allows quasi-continuous tuning of the output frequency.
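The quasi-continuous tuning follows from the DDS phase-accumulator relation f_out = f_clk * FTW / 2^N. A small sketch; the 100 MHz clock and 32-bit accumulator are assumed values, not taken from the paper:

```python
def dds_frequency(f_clk_hz, tuning_word, acc_bits=32):
    """DDS output frequency: f_out = f_clk * FTW / 2**N for an
    N-bit phase accumulator clocked at f_clk."""
    return f_clk_hz * tuning_word / (1 << acc_bits)

def tuning_word_for(f_out_hz, f_clk_hz, acc_bits=32):
    """Nearest frequency tuning word (FTW) for a desired output frequency."""
    return round(f_out_hz * (1 << acc_bits) / f_clk_hz)

f_clk = 100e6                         # assumed 100 MHz reference clock
ftw = tuning_word_for(1e6, f_clk)     # target a 1 MHz clock for the counters
f_out = dds_frequency(f_clk, ftw)
resolution = f_clk / (1 << 32)        # smallest frequency step, ~0.023 Hz
print(ftw, f_out, resolution)
```

The sub-hertz frequency step is what makes the pulse timing quasi-continuously tunable while inheriting the reference oscillator's stability.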

  9. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  10. 40 CFR 63.4568 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...

  11. 40 CFR 63.4568 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...

  12. 40 CFR 63.4568 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...

  13. 40 CFR 63.3968 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...

  14. 40 CFR 63.3968 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...

  15. 40 CFR 63.3968 - What are the requirements for continuous parameter monitoring system installation, operation, and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...

  16. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (D/MUwnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center position, off-axis position, field size, and off-isocenter position. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were measured at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) together with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M, covering all 24 options, were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M × 100%). Results: GACF was required because output varied by up to 3.5% with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD), and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required because an irregular block shape can change the output substantially.
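    The multiplicative structure of such a correction-based model, with some factors looked up from commissioning data, can be sketched as follows. The factor names follow the abstract, but every numeric value, the interpolation table, and the function names below are invented for illustration; this is not the commissioned model.

    ```python
    # Hypothetical sketch of a correction-based output model: the predicted
    # output (cGy/MU) is a product of independent correction factors, some
    # looked up by 1-D interpolation over commissioning data. All numeric
    # values are invented for illustration.

    def interp1d(xs, ys, x):
        """Piecewise-linear interpolation over sorted commissioning points."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])

    # Example commissioning table: SOBP factor vs. modulation width M (cm).
    SOBPF_M = [2.0, 4.0, 6.0, 8.0, 10.0]
    SOBPF_V = [1.00, 0.97, 0.94, 0.91, 0.88]

    def predicted_output(rof, m, rsf, ocr, fsf, isf_ocf, gacf):
        """D/MU = ROF x SOBPF x RSF x OCR x FSF x (ISF-OCF) x GACF."""
        return rof * interp1d(SOBPF_M, SOBPF_V, m) * rsf * ocr * fsf * isf_ocf * gacf

    def percent_diff(predicted, measured):
        """(P - M)/M x 100%, the accuracy metric used in the abstract."""
        return (predicted - measured) / measured * 100.0

    p = predicted_output(1.02, 5.0, 0.99, 1.00, 0.98, 1.01, 0.997)
    print(round(percent_diff(p, p * 1.01), 2))  # a 1% higher "measurement" gives about -0.99
    ```
    
    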

  17. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observing enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementing an alternative approach that allows rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations, which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
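    The sample-then-interpolate idea behind such a metamodel can be illustrated with a toy surrogate. A bilinear interpolant on a coarse tensor grid stands in for the sparse-grid construction, and the "engine response" function and all values are invented, not taken from the paper.

    ```python
    # Toy illustration of the metamodel idea: run an expensive "simulation" at
    # a few sampled points in parameter space, then interpolate within the
    # space instead of re-running it. Bilinear interpolation on a coarse
    # tensor grid stands in for the sparse-grid surrogate.

    def engine_response(egr, spark):
        """Stand-in for one CFD cycle: heat release vs. EGR fraction and spark timing."""
        return 100.0 * (1.0 - egr) + 2.0 * spark

    # Sample the "simulation" on a coarse tensor grid (the expensive step).
    EGR = [0.0, 0.1, 0.2, 0.3]
    SPK = [0.0, 5.0, 10.0]
    TABLE = {(e, s): engine_response(e, s) for e in EGR for s in SPK}

    def metamodel(egr, spark):
        """Cheap surrogate: bilinear interpolation between stored samples."""
        i = max(j for j in range(len(EGR) - 1) if EGR[j] <= egr)
        k = max(j for j in range(len(SPK) - 1) if SPK[j] <= spark)
        te = (egr - EGR[i]) / (EGR[i + 1] - EGR[i])
        ts = (spark - SPK[k]) / (SPK[k + 1] - SPK[k])
        f00 = TABLE[(EGR[i], SPK[k])]
        f10 = TABLE[(EGR[i + 1], SPK[k])]
        f01 = TABLE[(EGR[i], SPK[k + 1])]
        f11 = TABLE[(EGR[i + 1], SPK[k + 1])]
        return (f00 * (1 - te) * (1 - ts) + f10 * te * (1 - ts)
                + f01 * (1 - te) * ts + f11 * te * ts)

    # The surrogate reproduces this (linear) response between samples.
    print(metamodel(0.15, 7.5), engine_response(0.15, 7.5))  # both about 100
    ```
    
    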

  18. Soil Moisture Data Assimilation in the NASA Land Information System for Local Modeling Applications and Improved Situational Awareness

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Blakenship, Clay B.; Zavodsky, Bradley T.

    2014-01-01

    As part of the NASA Soil Moisture Active Passive (SMAP) Early Adopter (EA) program, the NASA Short-term Prediction Research and Transition (SPoRT) Center has implemented a data assimilation (DA) routine into the NASA Land Information System (LIS) for soil moisture retrievals from the European Space Agency's Soil Moisture Ocean Salinity (SMOS) satellite. The SMAP EA program promotes application-driven research to provide a fundamental understanding of how SMAP data products will be used to improve decision-making at operational agencies. SPoRT has partnered with select NOAA/NWS Weather Forecast Offices (WFOs) that use output from a real-time regional configuration of LIS, without soil moisture DA, to initialize local numerical weather prediction (NWP) models and enhance situational awareness. Improvements to local NWP with the current LIS have been demonstrated; however, a better representation of the land surface through assimilation of SMOS (and eventually SMAP) retrievals is expected to lead to further model improvement, particularly during warm-season months. SPoRT will collaborate with select WFOs to assess the impact of soil moisture DA on operational forecast situations. Assimilation of the legacy SMOS instrument data provides an opportunity to develop expertise in preparation for using SMAP data products shortly after the scheduled launch on 5 November 2014. SMOS contains a passive L-band radiometer that is used to retrieve surface soil moisture at 35-km resolution with an accuracy of 0.04 cm³ cm⁻³. SMAP will feature a comparable passive L-band instrument in conjunction with a 3-km resolution active radar component of slightly degraded accuracy. A combined radar-radiometer product will offer unprecedented global coverage of soil moisture at high spatial resolution (9 km) for hydrometeorological applications, balancing the resolution and accuracy of the active and passive instruments, respectively. 
The LIS software framework manages land surface model (LSM) simulations and includes an Ensemble Kalman Filter for conducting land surface DA. SPoRT has added a module to read, quality-control and bias-correct swaths of Level II SMOS soil moisture retrievals prior to assimilation within LIS. The impact of SMOS DA is being tested using the Noah LSM. Experiments are being conducted to examine the impacts of SMOS soil moisture DA on the resulting LISNoah fields and subsequent NWP simulations using the Weather Research and Forecasting (WRF) model initialized with LIS-Noah output. LIS-Noah soil moisture will be validated against in situ observations from Texas A&M's North American Soil Moisture Database to reveal the impact and possible improvement in soil moisture trends through DA. WRF model NWP case studies will test the impacts of DA on the simulated near-surface and boundary-layer environments, and precipitation during both quiescent and disturbed weather scenarios. Emphasis will be placed on cases with large analysis increments, especially due to contributions from regional irrigation patterns that are not represented by precipitation input in the baseline LIS-Noah run. This poster presentation will describe the soil moisture DA methodology and highlight LIS-Noah and WRF simulation results with and without assimilation.

  19. Evaluation of the Effect of Source Geometry on the Output of Miniature X-ray Tube for Electronic Brachytherapy through Simulation

    PubMed Central

    Barati, B.; Zabihzadeh, M.; Tahmasebi Birgani, M.J.; Chegini, N.; Fatahiasl, J.; Mirr, I.

    2018-01-01

    Objective: The use of miniature X-ray sources in electronic brachytherapy is on the rise, so there is an urgent need for more knowledge of X-ray spectrum production and dose distribution. The aim of this research was to investigate the influence of target thickness and geometry at the source of a miniature X-ray tube on the tube output. Method: Five sources, each with a specific geometric structure and set of conditions, were simulated using the MCNPX code. Tallies proportional to the output were used to calculate the influence of source geometry on output. Results: The optimal target thickness of each of the 5 miniature sources was determined, and the energy spectra of the sources at 50 keV as well as the axial and transverse dose distributions of the simulated sources were calculated for these thicknesses. The miniature source geometry affected the X-ray tube output. Conclusion: The results of this study demonstrate that the hemispherical-conical, hemispherical, and truncated-conical miniature sources are the most suitable designs. PMID:29732338

  20. Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2011-06-01

    It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients can be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of the USA. Meteorological parameters from North American Regional Reanalysis (NARR) model outputs were evaluated through comparisons with observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results also indicated that the net flux from NARR can be further improved with additional information from real-time observations.

  1. Grid Integrated Distributed PV (GridPV) Version 2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reno, Matthew J.; Coogan, Kyle

    2014-12-01

    This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.

  2. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes the results inconsistent. Defects located in don't-care areas, such as dark regions, do not hurt the yield and need no review, whereas defects in critical areas, such as clear regions, can impact yield dramatically and require closer attention. As integrated circuit dimensions decrease, inspections routinely detect thousands of mask defects or more, and traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and Anchor pattern-centric image processing technology. The system can handle this large number of defects with quick and accurate classification. Our experiments include Die-to-Die and Single Die modes, in which the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects are missed in our test cases; the misclassification rates are 0.25% in Die-to-Die mode and 0.24% in Single Die mode. This missing rate is encouraging and acceptable for application on a production line. The results can be output and reloaded back into the inspection machine for further review. This step helps users validate uncertain defects with clear, magnified images when the captured images cannot provide enough information to make a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including a scoring function, to guide wafer-level defect inspection.

  3. Determination of output factors for small proton therapy fields.

    PubMed

    Fontenot, Jonas D; Newhauser, Wayne D; Bloch, Charles; White, R Allen; Titt, Uwe; Starkschall, George

    2007-02-01

    Current protocols for the measurement of proton dose focus on measurements under reference conditions; methods for measuring dose under patient-specific conditions have not been standardized. In particular, it is unclear whether dose in patient-specific fields can be determined more reliably with or without the presence of the patient-specific range compensator. The aim of this study was to quantitatively assess the reliability of two methods for measuring dose per monitor unit (D/MU) values for small-field treatment portals: one with the range compensator and one without the range compensator. A Monte Carlo model of the Proton Therapy Center-Houston double-scattering nozzle was created, and estimates of D/MU values were obtained from 14 simulated treatments of a simple geometric patient model. Field-specific D/MU calibration measurements were simulated with a dosimeter in a water phantom with and without the range compensator. D/MU values from the simulated calibration measurements were compared with D/MU values from the corresponding treatment simulation in the patient model. To evaluate the reliability of the calibration measurements, six metrics and four figures of merit were defined to characterize accuracy, uncertainty, the standard deviations of accuracy and uncertainty, worst agreement, and maximum uncertainty. Measuring D/MU without the range compensator provided superior results for five of the six metrics and for all four figures of merit. The two techniques yielded different results primarily because of high-dose gradient regions introduced into the water phantom when the range compensator was present. Estimated uncertainties (approximately 1 mm) in the position of the dosimeter in these regions resulted in large uncertainties and high variability in D/MU values. When the range compensator was absent, these gradients were minimized and D/MU values were less sensitive to dosimeter positioning errors. 
We conclude that measuring D/MU without the range compensator present provides more reliable results than measuring it with the range compensator in place.

  4. Location Accuracy of INS/Gravity-Integrated Navigation System on the Basis of Ocean Experiment and Simulation

    PubMed Central

    Wang, Hubiao; Chai, Hua; Bao, Lifeng; Wang, Yong

    2017-01-01

    An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important for evaluating the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1′ × 1′ marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of the IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u,σ²) with varying mean u and noise variance σ². Thereafter, the matching location of the IGNS was simulated. The results show that changes in u had little effect on the location accuracy, whereas an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and the quantitative analysis results. In addition, given the gravimeter (1–2 mGal accuracy) and the reference map (resolution 1′ × 1′; accuracy 3–8 mGal), the location accuracy of the IGNS reached ~1.0–3.0 n miles in the South China Sea. PMID:29261136
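    The reported asymmetry, where the bias mean u matters little while the noise variance σ² degrades matching, can be illustrated with a toy one-dimensional matching sketch: a constant bias can be estimated and removed (as an adaptive filter effectively does), while zero-mean noise cannot. The linear "gravity profile," the bias-removal step, and all numbers below are invented for illustration.

    ```python
    # Toy sketch: position is recovered by inverting a (linear) gravity
    # profile from N(u, sigma^2)-perturbed measurements. The sample-mean bias
    # estimate stands in for the adaptive filter; all values are invented.

    import random

    random.seed(42)
    K = 5.0  # mGal per n mile along a straight track (invented slope)

    def match_position(measured_g):
        """Invert the linear profile g = K * x to a position estimate."""
        return measured_g / K

    def rms_position_error(u, sigma, n=2000):
        """RMS matching error after removing the estimated (sample-mean) bias."""
        x_true = 10.0
        meas = [K * x_true + random.gauss(u, sigma) for _ in range(n)]
        bias_hat = sum(meas) / n - K * x_true  # bias estimate (truth known here)
        errs = [(match_position(m - bias_hat) - x_true) ** 2 for m in meas]
        return (sum(errs) / n) ** 0.5

    # A large bias with small noise matches better than no bias with large
    # noise, mirroring the u-vs-sigma^2 finding.
    print(rms_position_error(u=3.0, sigma=1.0))
    print(rms_position_error(u=0.0, sigma=8.0))
    ```
    
    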

  5. Location Accuracy of INS/Gravity-Integrated Navigation System on the Basis of Ocean Experiment and Simulation.

    PubMed

    Wang, Hubiao; Wu, Lin; Chai, Hua; Bao, Lifeng; Wang, Yong

    2017-12-20

    An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important for evaluating the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1' × 1' marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of the IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of the IGNS was simulated. The results show that changes in u had little effect on the location accuracy, whereas an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and the quantitative analysis results. In addition, given the gravimeter (1-2 mGal accuracy) and the reference map (resolution 1' × 1'; accuracy 3-8 mGal), the location accuracy of the IGNS reached ~1.0-3.0 n miles in the South China Sea.

  6. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown, partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. Other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to the second-order accuracy by using trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest further exploration and development of model worthwhile. (Woodard-USGS)

  7. Bipartite units of nonlocality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, Manuel; Wolf, Stefan

    Imagine a task in which a group of separated players aims to simulate a statistic that violates a Bell inequality. Given measurement choices, the players shall announce an output based solely on the results of local operations (which they can discuss before the separation) on shared random data and shared copies of a so-called unit correlation. In the first part of this paper we show that in such a setting the simulation of any bipartite correlation not containing the possibility of signaling can be made arbitrarily accurate by increasing the number of shared Popescu-Rohrlich (PR) boxes. This establishes the PR box as a simple asymptotic unit of bipartite nonlocality. In the second part we study whether this property extends to the multipartite case. More generally, we ask whether it is possible for separated players to asymptotically reproduce any nonsignaling statistic by local operations on bipartite unit correlations. We find that nonadaptive strategies are limited by a constant accuracy and that arbitrary strategies on n resource correlations make a mistake with probability greater than or equal to c/n, for some constant c.

  8. Using quantum theory to simplify input-output processes

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile

    2017-02-01

    All natural things process and transform information. They receive environmental information as input and transform it into appropriate output responses. Much of science is dedicated to building models of such systems: algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency: they store past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.

  9. Logarithmic current measurement circuit with improved accuracy and temperature stability and associated method

    DOEpatents

    Ericson, M. Nance; Rochelle, James M.

    1994-01-01

    A logarithmic current measurement circuit for operating upon an input electric signal utilizes a quad, dielectrically isolated, well-matched, monolithic bipolar transistor array. One group of circuit components cooperates with two transistors of the array to convert the input signal logarithmically into a first output signal, which is temperature-dependent, and another group of circuit components cooperates with the other two transistors of the array to provide a second output signal, which is also temperature-dependent. A divider ratios the first and second output signals to provide a resultant output signal that is independent of temperature. The method of the invention includes the operating steps performed by the measurement circuit.
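    The temperature-cancellation principle, taking the ratio of two temperature-dependent logarithmic conversions, can be sketched numerically. The V = (kT/q)·ln(I/Is) junction law is standard device physics; the current values and saturation current below are illustrative and not taken from the patent.

    ```python
    # Sketch of the temperature-cancellation idea: a BJT log converter
    # produces V = (kT/q) * ln(I / Is), which is temperature-dependent, but
    # the ratio of two such outputs (input current vs. a known reference
    # current) cancels the kT/q factor. Values are illustrative only.

    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    Q_E = 1.602176634e-19   # elementary charge, C
    I_S = 1e-14             # saturation current, A (illustrative)

    def log_output(i_in, temp_k):
        """Temperature-dependent logarithmic conversion, V = (kT/q) ln(I/Is)."""
        return (K_B * temp_k / Q_E) * math.log(i_in / I_S)

    def ratioed_output(i_in, i_ref, temp_k):
        """Ratio of two log outputs: the kT/q factor divides out."""
        return log_output(i_in, temp_k) / log_output(i_ref, temp_k)

    # The ratio is the same at 250 K and 350 K (about 1.6 here), unlike the
    # raw log outputs.
    print(ratioed_output(1e-6, 1e-9, 250.0))
    print(ratioed_output(1e-6, 1e-9, 350.0))
    ```
    
    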

  10. Forecasting of Storm Surge Floods Using ADCIRC and Optimized DEMs

    NASA Technical Reports Server (NTRS)

    Valenti, Elizabeth; Fitzpatrick, Patrick

    2005-01-01

    Increasing the accuracy of storm surge flood forecasts is essential for improving preparedness for hurricanes and other severe storms and, in particular, for optimizing evacuation scenarios. An interactive database, developed by WorldWinds, Inc., contains atlases of storm surge flood levels for the Louisiana/Mississippi gulf coast region. These atlases were developed to improve forecasting of flooding along the coastline and estuaries and in adjacent inland areas. Storm surge heights depend on a complex interaction of several factors, including: storm size, central minimum pressure, forward speed of motion, bottom topography near the point of landfall, astronomical tides, and most importantly, maximum wind speed. The information in the atlases was generated in over 100 computational simulations, partly by use of a parallel-processing version of the ADvanced CIRCulation (ADCIRC) model. ADCIRC is a nonlinear computational model of hydrodynamics, developed by the U.S. Army Corps of Engineers and the US Navy, as a family of two- and three-dimensional finite-element-based codes. It affords a capability for simulating tidal circulation and storm surge propagation over very large computational domains, while simultaneously providing high-resolution output in areas of complex shoreline and bathymetry. The ADCIRC finite-element grid for this project covered the Gulf of Mexico and contiguous basins, extending into the deep Atlantic Ocean with progressively higher resolution approaching the study area. The advantage of using ADCIRC over other storm surge models, such as SLOSH, is that input conditions can include any or all of wind stress, tides, wave stress, and river discharge, which serves to make the model output more accurate.

  11. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  12. Retrodiction for Bayesian multiple-hypothesis/multiple-target tracking in densely cluttered environment

    NASA Astrophysics Data System (ADS)

    Koch, Wolfgang

    1996-05-01

    Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts which correspond with the multiple hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. By a simulated example with two closely-spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.

  13. Pulse-modulated dual-gas control subsystem for space cabin atmosphere

    NASA Technical Reports Server (NTRS)

    Jackson, J. K.

    1974-01-01

    An atmosphere control subsystem (ACS) was developed for use in a closed manned cabin, such as the Space Shuttle Orbiter. This subsystem uses the Perkin Elmer mass spectrometer for continuous measurement of major atmospheric constituents (H2, H2O, N2, O2, and CO2). The O2 and N2 analog signals are used as inputs to the controller, which produces a pulse-frequency-modulated output to operate the N2 gas admission solenoid valve and an on-off signal to operate the O2 valve. The proportional controller characteristic results in improved control accuracy as compared with previously used on-off controllers having significant dead-band. A 60-day evaluation test was performed on the ACS during which operation was measured at various values of control setpoint and simulated cabin leakage.

  14. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.

  15. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be on-line estimated and compensated via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  16. A PC-based generator of surface ECG potentials for computer electrocardiograph testing.

    PubMed

    Franchi, D; Palagi, G; Bedini, R

    1994-02-01

The system is composed of an electronic circuit, connected to a PC, whose outputs, starting from ECGs digitally collected by commercial interpretative electrocardiographs, simulate virtual patients' limb and chest electrode potentials. Appropriate software manages the D/A conversion and lines up the original short-term signal in a ring buffer to generate continuous ECG traces. The device also permits the addition of artifacts and/or baseline wanders/shifts on each lead separately. The system has been thoroughly tested, and statistical indexes have been computed to quantify the reproduction accuracy by analyzing, in the generated signal, both the errors induced in fiducial-point measurements and the capability to retain diagnostic significance. The device, integrated with an annotated ECG database, constitutes a reliable and powerful system for use in the quality assurance testing of computer electrocardiographs.

  17. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, these motion capture tools suffer from a lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real-time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation when compared to the inertial measurement unit and Kinect outputs: we noted a smaller error (3.96°) than the one obtained using inertial sensors alone (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
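The benefit of combining two imperfect angle estimates can be sketched without the full quaternion EKF. The snippet below is a minimal illustration (not the authors' filter): it fuses two noisy readings of the same knee-flexion angle by inverse-variance weighting, the scalar special case of a Kalman update. The sensor variances and readings are hypothetical.

```python
# Minimal sketch: inverse-variance fusion of two angle estimates, the scalar
# analogue of a Kalman measurement update. All numbers are illustrative.

def fuse(angle_imu, var_imu, angle_kinect, var_kinect):
    """Fuse two angle estimates (degrees) weighted by inverse variance."""
    w_imu = 1.0 / var_imu
    w_kin = 1.0 / var_kinect
    fused = (w_imu * angle_imu + w_kin * angle_kinect) / (w_imu + w_kin)
    fused_var = 1.0 / (w_imu + w_kin)   # always below either input variance
    return fused, fused_var

if __name__ == "__main__":
    # hypothetical readings: IMU says 42.0 deg, Kinect says 40.0 deg
    angle, var = fuse(42.0, 25.0, 40.0, 16.0)
    print(round(angle, 2), round(var, 2))
```

The fused variance is smaller than either sensor's variance, which is the mechanism behind the reduced error the study reports.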

  18. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework

    PubMed Central

    Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.

    2015-01-01

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765

  19. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework.

    PubMed

    Dunkerley, David A P; Tomkowiak, Michael T; Slagowski, Jordan M; McCabe, Bradley P; Funk, Tobias; Speidel, Michael A

    2015-02-21

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.1-4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.

  20. A direct method for calculating instrument noise levels in side-by-side seismometer evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1989-01-01

The subject of determining the inherent system noise levels present in modern broadband closed-loop seismic sensors has been an evolving topic ever since closed-loop systems became available. Closed-loop systems are unique in that the system noise cannot be determined via a blocked-mass test as in older conventional open-loop seismic sensors. Instead, most investigators have resorted to performing measurements on two or more systems operating in close proximity to one another and to analyzing the outputs of these systems with respect to one another to ascertain their relative noise levels. The analysis of side-by-side relative performance is inherently dependent on the accuracy of the mathematical modeling of the test configuration. This report presents a direct approach to extracting the system noise levels of two linear systems with a common coherent input signal. The mathematical solution to the problem is remarkably simple; however, the practical application of the method encounters some difficulties. Examples of expected accuracies are presented as derived by simulating real systems performance using computer-generated random noise. In addition, examples of the performance of the method when applied to real experimental test data are shown.
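The core idea can be sketched in the time domain under simplifying assumptions (unit-gain sensors, white self-noise) that are mine, not the report's: two co-located sensors see the same coherent signal plus independent self-noise, so the cross-covariance of their outputs estimates the coherent power, and each sensor's noise variance is its total variance minus that cross-covariance.

```python
# Sketch of the side-by-side idea (the report works with spectra; this
# unit-gain, white-noise, time-domain version is an illustrative assumption).
import random
import statistics

def covariance(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

random.seed(1)
n = 50_000
signal = [random.gauss(0.0, 1.0) for _ in range(n)]   # common ground motion
x1 = [s + random.gauss(0.0, 0.3) for s in signal]     # sensor 1, noise var 0.09
x2 = [s + random.gauss(0.0, 0.5) for s in signal]     # sensor 2, noise var 0.25

coherent = covariance(x1, x2)                         # estimates var(signal)
noise1 = statistics.variance(x1) - coherent           # should be near 0.09
noise2 = statistics.variance(x2) - coherent           # should be near 0.25
print(round(noise1, 2), round(noise2, 2))
```

As the report notes, the estimate degrades in practice when the two responses are not identical or the noise is not independent, which is where the difficulties arise.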

  1. A first-principle model of 300 mm Czochralski single-crystal Si production process for predicting crystal radius and crystal growth rate

    NASA Astrophysics Data System (ADS)

    Zheng, Zhongchao; Seto, Tatsuru; Kim, Sanghong; Kano, Manabu; Fujiwara, Toshiyuki; Mizuta, Masahiko; Hasebe, Shinji

    2018-06-01

The Czochralski (CZ) process is the dominant method for manufacturing large cylindrical single-crystal ingots for the electronics industry. Although many models and control methods for the CZ process have been proposed, they were only tested with small equipment and only a few industrial applications were reported. In this research, we constructed a first-principle model for controlling industrial CZ processes that produce 300 mm single-crystal silicon ingots. The developed model, which consists of energy balance, mass balance, hydrodynamic, and geometrical equations, calculates the crystal radius and the crystal growth rate as output variables by using the heater input, the crystal pulling rate, and the crucible rise rate as input variables. To improve accuracy, we modeled the CZ process by considering factors such as changes in the positions of the crucible and the melt level. The model was validated with operation data from an industrial 300 mm CZ process. We compared the calculated and actual values of the crystal radius and the crystal growth rate, and the results demonstrated that the developed model simulated the industrial process with high accuracy.

  2. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  3. GROSS- GAMMA RAY OBSERVATORY ATTITUDE DYNAMICS SIMULATOR

    NASA Technical Reports Server (NTRS)

    Garrick, J.

    1994-01-01

    The Gamma Ray Observatory (GRO) spacecraft will constitute a major advance in gamma ray astronomy by offering the first opportunity for comprehensive observations in the range of 0.1 to 30,000 megaelectronvolts (MeV). The Gamma Ray Observatory Attitude Dynamics Simulator, GROSS, is designed to simulate this mission. The GRO Dynamics Simulator consists of three separate programs: the Standalone Profile Program; the Simulator Program, which contains the Simulation Control Input/Output (SCIO) Subsystem, the Truth Model (TM) Subsystem, and the Onboard Computer (OBC) Subsystem; and the Postprocessor Program. The Standalone Profile Program models the environment of the spacecraft and generates a profile data set for use by the simulator. This data set contains items such as individual external torques; GRO spacecraft, Tracking and Data Relay Satellite (TDRS), and solar and lunar ephemerides; and star data. The Standalone Profile Program is run before a simulation. The SCIO subsystem is the executive driver for the simulator. It accepts user input, initializes parameters, controls simulation, and generates output data files and simulation status display. The TM subsystem models the spacecraft dynamics, sensors, and actuators. It accepts ephemerides, star data, and environmental torques from the Standalone Profile Program. With these and actuator commands from the OBC subsystem, the TM subsystem propagates the current state of the spacecraft and generates sensor data for use by the OBC and SCIO subsystems. The OBC subsystem uses sensor data from the TM subsystem, a Kalman filter (for attitude determination), and control laws to compute actuator commands to the TM subsystem. The OBC subsystem also provides output data to the SCIO subsystem for output to the analysts. The Postprocessor Program is run after simulation is completed. It generates printer and CRT plots and tabular reports of the simulated data at the direction of the user. 
GROSS is written in FORTRAN 77 and ASSEMBLER and has been implemented on a VAX 11/780 under VMS 4.5. It has a virtual memory requirement of 255k. GROSS was developed in 1986.

  4. Java-based Graphical User Interface for MAVERIC-II

    NASA Technical Reports Server (NTRS)

    Seo, Suk Jai

    2005-01-01

A computer program entitled "Marshall Aerospace Vehicle Representation in C II (MAVERIC-II)" is a vehicle flight simulation program written primarily in the C programming language. It was written by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided the high-fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool that evaluates guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: 1) the vehicle models, such as propulsion, aerodynamics, and guidance, navigation, and control; 2) the environment models, such as atmosphere and gravity; and 3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program. 
They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of the use of MAVERIC consists of three steps: editing existing input data files, running MAVERIC, and plotting output results.

  5. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. 
In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
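The exploration/exploitation loop above can be illustrated with a much smaller stand-in than SimLearn's SVM ensemble. The sketch below (entirely hypothetical: toy simulator, 1-D input space) characterizes where an expensive "simulator" output crosses a target by always querying the point where the set of models consistent with the labels so far disagrees most, which in 1-D reduces to bisection.

```python
# Active-learning sketch (an illustration of the idea, not SimLearn itself):
# locate the input-space boundary where the simulator output exceeds a target,
# spending as few simulator runs as possible.

def simulator(x):
    # stand-in for an expensive run: output exceeds the target iff x > 0.7317
    return x > 0.7317

def locate_boundary(lo=0.0, hi=1.0, budget=20):
    """Query the midpoint of the current disagreement interval each round."""
    for _ in range(budget):
        query = 0.5 * (lo + hi)      # point of maximum model disagreement
        if simulator(query):
            hi = query               # the crossing lies below the query
        else:
            lo = query
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    print(round(locate_boundary(), 4))   # converges to 0.7317 in 20 runs
```

Twenty queries pin the boundary to about one part in a million, whereas a static grid of twenty points would resolve it only to 1/20; that gap is the payoff of choosing trials adaptively.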

  6. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which will decrease the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector would be obtained by the 3-D measuring instrument, and then the RCSs of the obtained 3-D structure and the corresponding ideal structure would be calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method was that it could be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  7. Speed and Accuracy of Rapid Speech Output by Adolescents with Residual Speech Sound Errors Including Rhotics

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Edwards, Mary Louise

    2009-01-01

    Children with residual speech sound errors are often underserved clinically, yet there has been a lack of recent research elucidating the specific deficits in this population. Adolescents aged 10-14 with residual speech sound errors (RE) that included rhotics were compared to normally speaking peers on tasks assessing speed and accuracy of speech…

  8. Influence of Current Input-Output and Age of First Exposure on Phonological Acquisition in Early Bilingual Spanish-English-Speaking Kindergarteners

    ERIC Educational Resources Information Center

    Ruiz-Felter, Roxanna; Cooperson, Solaman J.; Bedore, Lisa M.; Peña, Elizabeth D.

    2016-01-01

    Background: Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. Aims: To investigate the influence of age of first exposure to English and the amount of current…

  9. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) is becoming higher and higher, so the maintenance demands on LRFs are also rising. According to the dominant idea of "time simulates spatial distance" in simulated tests for pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests for high-precision pulsed range finders.
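The "time simulates spatial distance" principle reduces to the round-trip time-of-flight relation t = 2d/c, so the granularity of the programmable delay sets the granularity of the simulated range. A back-of-the-envelope sketch (the numbers are illustrative, not the paper's circuit parameters):

```python
# Time-of-flight arithmetic behind delay-based range simulation.
C = 299_792_458.0  # speed of light, m/s

def delay_for_distance(d_m):
    """Round-trip delay (seconds) that simulates a target at d_m metres."""
    return 2.0 * d_m / C

def distance_step(delay_step_s):
    """Distance granularity (metres) achievable with a given delay step."""
    return C * delay_step_s / 2.0

if __name__ == "__main__":
    print(delay_for_distance(1500.0))   # ~10 microseconds for a 1.5 km target
    print(distance_step(1e-9))          # a 1 ns delay step is ~0.15 m of range
```

Note that a 1 ns delay step corresponds to roughly 0.15 m of simulated range, the same order as the +/- 0.15 m accuracy reported above.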

  10. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies such as Latin Hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a given number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
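Selecting a space-filling subset from already-acquired samples is commonly done with a greedy maximin rule: repeatedly pick the candidate farthest from its nearest already-chosen point. The sketch below illustrates that rule in 1-D; the data and subset size are hypothetical, and the paper's own selection criterion may differ in detail.

```python
# Greedy maximin ("most space-filling") subset selection over acquired samples.

def maximin_subset(points, k):
    """Greedily pick k points that spread out the minimum pairwise distance."""
    chosen = [min(points)]                  # seed with an extreme point
    remaining = [p for p in points if p != chosen[0]]
    while len(chosen) < k:
        # candidate farthest from its nearest chosen point
        best = max(remaining, key=lambda p: min(abs(p - c) for c in chosen))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

if __name__ == "__main__":
    data = [0.05, 0.1, 0.12, 0.4, 0.41, 0.7, 0.95, 0.97]
    print(maximin_subset(data, 4))   # picks well-separated representatives
```

The same greedy rule generalizes to higher dimensions by replacing `abs(p - c)` with a Euclidean distance over the input vector.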

  11. Impacts of land use/cover classification accuracy on regional climate simulations

    NASA Astrophysics Data System (ADS)

    Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.

    2007-03-01

Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, even though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3 month period. This study found that land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface had greater control of the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely attains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.

  12. A new topology and control method for electromagnetic transmitter power supplies

    NASA Astrophysics Data System (ADS)

    Zhang, Yiming; Zhang, Jialin; Yuan, Dakang

    2017-04-01

As essential equipment for electromagnetic exploration, an electromagnetic transmitter inverts a steady power supply to a desired frequency and transmits the power through grounding electrodes. To obtain effective geophysical data during deep exploration, the transmitter needs to be high-voltage and high-current with high-accuracy output, yet compact and light. Research on power supply technologies for high-voltage, high-power electromagnetic transmitters is therefore of significant importance to deep geophysical exploration. The performance of an electromagnetic transmitter is mainly subject to two aspects: the quality of the emitted current and voltage, and the power density. These requirements bring technical difficulties to the development of power supplies. Conventionally, high-frequency switching power supplies are applied in the design of a high-power transmitter power supply. However, the topology is complicated, which may reduce the controllability of the output voltage and the reliability of the system. Without power factor control, the power factor of this structure is relatively low, and the high switching frequency causes high losses. With the development of the PWM (pulse width modulation) technique, its merits of simple structure, low loss, convenient control, and unity power factor have made it popular in electrical energy feedback, active filtering, and power factor compensation. Studies have shown that PWM converters and space vector modulation have become the trend in designing transmitter power supplies. However, the earth load exhibits different impedances at different frequencies, so ensuring a high-accuracy, stable output from a transmitter power supply in a harsh environment has become a key topic in the design of geophysical exploration instruments. Based on SVPWM technology, an electromagnetic transmitter power supply has been designed and its control strategy has been studied. 
The transmitting system is composed of a power supply, an SVPWM converter, and a power inverter unit. The functions of the units are as follows: (1) power supply: a generator providing three-phase power; (2) SVPWM converter: converts the AC input to a DC output; (3) power inverter unit: converts the DC to an AC output whose frequency, amplitude, and waveform are variable. In the SVPWM technique, the active current and the reactive current are controlled separately, and each variable is analyzed individually, so the power factor of the system is improved. By controlling the PWM converter at the generation side, any power factor can be obtained; usually the power factor of the generation side is set to 1. Finally, simulation and experimental results validate both the correctness of the established model and the effectiveness of the control method: unity power factor is achieved at the input and a steady current at the output. They also demonstrate that the electromagnetic transmitter power supply designed in this study can meet the practical needs of field geological exploration and improve the utilization of the transmitter system.
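The space-vector modulation step mentioned above can be sketched with the standard SVPWM dwell-time formulas: for a reference voltage vector in the alpha-beta plane, find its 60-degree sector and the on-times of the two adjacent active vectors. The DC-link voltage and switching period below are assumed values, not this design's.

```python
# Standard SVPWM sector and dwell-time calculation (textbook formulas;
# v_dc and t_s are illustrative assumptions).
import math

def svpwm_dwell_times(v_ref, theta, v_dc=600.0, t_s=1e-4):
    """Return (sector, t1, t2, t0) for a reference of magnitude v_ref at angle theta."""
    sector = int(theta // (math.pi / 3)) % 6 + 1
    phi = theta % (math.pi / 3)                 # angle within the sector
    m = math.sqrt(3.0) * v_ref / v_dc           # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - phi)  # first adjacent active vector
    t2 = t_s * m * math.sin(phi)                # second adjacent active vector
    t0 = t_s - t1 - t2                          # zero-vector (freewheeling) time
    return sector, t1, t2, t0

if __name__ == "__main__":
    # mid-sector reference: the two active-vector times come out equal
    sector, t1, t2, t0 = svpwm_dwell_times(300.0, math.pi / 6)
    print(sector, round(t1 * 1e6, 1), round(t2 * 1e6, 1))
```

Sweeping `theta` over a full electrical cycle and applying these dwell times is what produces the variable-frequency output the inverter unit delivers to the electrodes.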

  13. Simulation of changes in heavy metal contamination in farmland soils of a typical manufacturing center through logistic-based cellular automata modeling.

    PubMed

    Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming

    2016-01-01

    A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included elevation, distance from the road, distance from the key polluting enterprises, distance from the town center, soil pH, and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the current polluted area to effectively mitigate HMC.
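The transition rule at the heart of a logistic-based CA can be sketched briefly: each cell's probability of becoming contaminated is a logistic function of its explanatory variables plus a neighbourhood term. The coefficients, drivers, and weights below are hypothetical illustrations, not the calibrated Dongguan model.

```python
# Illustrative logistic transition rule for a cellular automaton cell.
# All coefficients and inputs are hypothetical.
import math

def transition_probability(drivers, coefficients, intercept, neighbour_frac):
    """Logistic probability that a farmland cell becomes contaminated."""
    z = intercept + sum(b * x for b, x in zip(coefficients, drivers))
    z += 2.0 * neighbour_frac   # contaminated share of the cell's neighbourhood
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    # e.g. scaled pollution index, drainage density, distance to city centre
    p = transition_probability([0.8, 0.5, -0.3], [1.2, 0.7, 0.9], -1.5, 0.25)
    print(round(p, 3))
```

In a full CA run, this probability would be compared against a threshold (or a random draw) each time step to update the contamination map, and the fitted coefficients correspond to the odds ratios reported above.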

  14. Reconstruction of Long-Term Discharge Data in a Snow Dominant Region considering Uncertainty in Snow Measurement

    NASA Astrophysics Data System (ADS)

    Kim, S.

    2016-12-01

This study aims to improve the accuracy of discharge simulation at the headwaters of the Tone River Basin (Yagisawa Dam Basin, 167 km2, and Naramata Dam Basin, 67 km2), Japan, where the river discharge is governed by snowmelt and thus much uncertainty originated in our previous study (Kim et al., 2011). To decrease the uncertainty in our hydrological modeling and simulation, snowmelt amounts are estimated rigorously using an improved degree-day method. The degree-day method, which is the simplest method to estimate snowmelt, is adopted with an improved degree-day factor estimation method. The degree-day factor for the target area is estimated using the observed temperature and the observed river discharge of the snowmelt season. Using long-term observed data, the unique relationship between the degree-day factor and temperature is extracted, and the estimated degree-day factor, as a function of temperature, is applied for the winter season discharge simulation. Rainfall-runoff simulation for the rest of the season is done by a kinematic wave model based on the stage-discharge relationship, considering surface-subsurface flow generation. Finally, the long-term (1979-2008) simulation output for the dam inflow is reconstructed and compared with the observed one. (Kim, S., Tachikawa, Y., Nakakita, E., Yorozu, K. and Shiiba, M. 2011. Climate change impact on river flow of the Tone river basin, Japan, Annual Journal of Hydraulic Engineering, JSCE, 55:S_85-S_90.)
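The degree-day method itself is compact: daily melt is a degree-day factor times the positive temperature excess over a base temperature. The sketch below uses an assumed linear temperature dependence for the factor; the calibrated relationship extracted for the Tone River basins is not given in the abstract.

```python
# Minimal degree-day snowmelt sketch. The base factor, slope, and base
# temperature are illustrative assumptions, not the study's calibration.

def daily_snowmelt(temp_c, ddf_base=3.0, ddf_slope=0.2, t_base=0.0):
    """Snowmelt depth (mm/day) from mean daily air temperature (deg C)."""
    if temp_c <= t_base:
        return 0.0
    ddf = ddf_base + ddf_slope * temp_c   # degree-day factor, mm/(deg C * day)
    return ddf * (temp_c - t_base)

if __name__ == "__main__":
    print(daily_snowmelt(-2.0))   # below freezing: no melt
    print(daily_snowmelt(5.0))    # (3.0 + 0.2 * 5) * 5 = 20 mm/day
```

Making the factor a function of temperature, as the study does, lets a single calibration track the seasonal change in snowpack ripeness without a more complex energy-balance model.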

  15. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
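
    The OLS variant of NIPC can be sketched in a few lines: sample the random input, evaluate the model (here a cheap stand-in for the ultrasonic simulation), regress onto probabilists' Hermite polynomials, and read output statistics directly from the coefficients. The stand-in model and truncation order are assumptions for illustration:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)

def expensive_model(x):
    # cheap stand-in for the physics-based measurement simulation
    return np.exp(0.3 * x)

xi = rng.standard_normal(200)        # samples of the standard-normal input
y = expensive_model(xi)

order = 4
# Design matrix: column j holds He_j evaluated at the samples
Phi = np.stack([He.hermeval(xi, np.eye(order + 1)[j])
                for j in range(order + 1)], axis=1)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # OLS fit of PCE coefficients

mean_pce = coef[0]                   # mean is the zeroth coefficient
# Var = sum_{j>=1} j! * c_j^2 for probabilists' Hermite polynomials
var_pce = sum(math.factorial(j) * coef[j] ** 2 for j in range(1, order + 1))
```

Once fitted, statistics come from the coefficients alone, with no further model evaluations, which is the source of the speedup over direct Monte Carlo sampling.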

  16. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan encounters heavy rainfall frequently, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time-series correction techniques. Predicted rainfall is used as input to the rainfall-runoff model, and the discharges it calculates are passed to a 1-D river routing model, which outputs simulated water stages at 487 cross sections for the next 48 hours. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time-series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The resulting simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfall can also be shown on this platform. If the simulated water stage exceeds the embankments of the Tamsui River, the alerting lights of FEWS flash on the screen. This platform runs periodically and automatically to generate graphic simulations of flood water stages for flood disaster prevention and decision making.
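
    The AR correction step can be sketched as follows, assuming an AR(1) error model fitted to the recent observed-minus-simulated stage series (a simplification of whatever AR order the operational system actually uses):

```python
import numpy as np

def ar1_correct(simulated_future, observed_past, simulated_past):
    """Fit err[t] = phi * err[t-1] to the recent error series and
    propagate the last error forward to correct the stage forecast."""
    err = np.asarray(observed_past, float) - np.asarray(simulated_past, float)
    phi = np.dot(err[:-1], err[1:]) / np.dot(err[:-1], err[:-1])
    corrected, e = [], err[-1]
    for s in simulated_future:
        e = phi * e                       # project the error one step ahead
        corrected.append(s + e)
    return np.array(corrected)

# Toy example: the error halves each step (phi = 0.5)
corrected = ar1_correct([2.0, 2.0],
                        observed_past=[11.0, 10.5, 10.25, 10.125],
                        simulated_past=[10.0, 10.0, 10.0, 10.0])
```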

  17. Simulation of the Impact of New Aircraft- and Satellite-based Ocean Surface Wind Measurements on Estimates of Hurricane Intensity

    NASA Technical Reports Server (NTRS)

    Uhlhorn, Eric; Atlas, Robert; Black, Peter; Buckley, Courtney; Chen, Shuyi; El-Nimri, Salem; Hood, Robbie; Johnson, James; Jones, Linwood; Miller, Timothy; hide

    2009-01-01

    The Hurricane Imaging Radiometer (HIRAD) is a new airborne microwave remote sensor currently under development to enhance real-time hurricane ocean surface wind observations. HIRAD builds on the capabilities of the Stepped Frequency Microwave Radiometer (SFMR), which now operates on NOAA P-3, G-4, and AFRC C-130 aircraft. Unlike the SFMR, which measures wind speed and rain rate along the ground track directly beneath the aircraft, HIRAD will provide images of the surface wind and rain field over a wide swath (approximately 3 times the aircraft altitude). To demonstrate the potential improvement in the measurement of peak hurricane winds, we present a set of Observing System Simulation Experiments (OSSEs) in which measurements from the new instrument, as well as those from existing platforms (air, surface, and space-based), are simulated from the output of a high-resolution (approximately 1.7 km) numerical model. Simulated retrieval errors due to both instrument noise and model-function accuracy are considered over the expected range of incidence angles, wind speeds, and rain rates. Based on numerous simulated flight patterns and data source combinations, statistics are developed to describe relationships between the observed and true (from the model's perspective) peak wind speed. These results have implications for improving the estimation of hurricane intensity (as defined by the peak sustained wind anywhere in the storm), which may often go unobserved due to sampling limitations.
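
    The flavor of such an OSSE can be sketched in miniature: treat a model wind profile as truth, perturb it with an assumed retrieval-error level, and compare observed versus true peak winds. All numbers here are illustrative, not HIRAD specifications:

```python
import numpy as np

rng = np.random.default_rng(1)
x_km = np.linspace(-50, 50, 101)                 # cross-storm transect
truth = 40 + 15 * np.exp(-(x_km / 20) ** 2)      # "model truth" wind (m/s)
noise_std = 2.0                                  # assumed retrieval error (m/s)
retrieved = truth + rng.normal(0.0, noise_std, truth.shape)

peak_true, peak_obs = truth.max(), retrieved.max()
bias = peak_obs - peak_true                      # peak-wind estimation error
```

Repeating this over many noise realizations and simulated flight patterns builds up the observed-versus-true peak-wind statistics the abstract describes.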

  18. Digital computer simulation of inductor-energy-storage dc-to-dc converters with closed-loop regulators

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.; Owen, H. A.; Wilson, T. G.; Rodriguez, G. E.

    1974-01-01

    The simulation of converter-controller combinations by means of a flexible digital computer program which produces output to a graphic display is discussed. The procedure is an alternative to mathematical analysis of converter systems. The types of computer programming involved in the simulation are described. Schematic diagrams, state equations, and output equations are displayed for four basic forms of inductor-energy-storage dc to dc converters. Mathematical models are developed to show the relationship of the parameters.

  19. User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model

    NASA Technical Reports Server (NTRS)

    Paul, D. D., Jr.

    1972-01-01

    The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.

  20. AP-IO: asynchronous pipeline I/O for hiding periodic output cost in CFD simulation.

    PubMed

    Xiaoguang, Ren; Xinhai, Xu

    2014-01-01

    Computational fluid dynamics (CFD) simulations often need to periodically output intermediate results to files as snapshots for visualization or restart, which seriously impacts performance. In this paper, we present an asynchronous pipeline I/O (AP-IO) optimization scheme for periodic snapshot output, built on asynchronous I/O and CFD application characteristics. In AP-IO, dedicated background I/O processes or threads handle file writes in pipeline mode, so the write overhead can be hidden behind more computation than with classic asynchronous I/O. We design the framework of AP-IO and implement it in OpenFOAM, providing CFD users with a user-friendly interface. Experimental results on the Tianhe-2 supercomputer demonstrate that AP-IO achieves a good optimization effect for periodic snapshot output in CFD applications, and the effect is especially pronounced for massively parallel CFD simulations, where it can reduce total execution time by up to about 40%.
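
    The core asynchronous-writer idea (without the OpenFOAM integration or the pipelining details) can be sketched with a background thread draining a snapshot queue, so the compute loop never blocks on file writes:

```python
import os
import queue
import tempfile
import threading

snapshots = queue.Queue()

def writer():
    """Background I/O worker: drain the queue and write snapshots."""
    while True:
        item = snapshots.get()
        if item is None:              # sentinel: shut down
            break
        path, data = item
        with open(path, "w") as f:    # the only place that touches disk
            f.write(data)
        snapshots.task_done()

t = threading.Thread(target=writer, daemon=True)
t.start()

outdir = tempfile.mkdtemp()
for step in range(3):                 # "solver" loop: enqueue and move on
    snapshots.put((os.path.join(outdir, f"snap_{step}.dat"), f"step {step}\n"))

snapshots.join()                      # wait for outstanding writes
snapshots.put(None)
t.join()
```

In the real scheme the writer runs concurrently with further computation, which is what hides the write overhead.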

  1. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes the outputs of interest are spatial fields, leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, it has mostly been applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
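
    A toy version of the output-side reduction can be sketched with an SVD projection and a per-component regression standing in for the Gaussian process; the synthetic two-mode field is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_out = 50, 200
x = rng.uniform(-1, 1, n)                        # scalar input (for brevity)
basis = rng.standard_normal((2, d_out))          # two spatial modes
Y = np.outer(x, basis[0]) + np.outer(x ** 2, basis[1])  # field outputs

Ym = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - Ym, full_matrices=False)
k = 2                                            # retained components
scores = U[:, :k] * S[:k]                        # low-dimensional outputs

# Emulate each retained score separately (quadratic fit as a GP stand-in)
coefs = [np.polyfit(x, scores[:, j], 2) for j in range(k)]

def predict_field(x_new):
    """Predict the full spatial field from the reduced emulators."""
    s = np.array([np.polyval(c, x_new) for c in coefs])
    return Ym + s @ Vt[:k]
```

The emulators operate entirely in the k-dimensional score space; the final matrix product lifts the prediction back to the full spatial field.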

  2. AP-IO: Asynchronous Pipeline I/O for Hiding Periodic Output Cost in CFD Simulation

    PubMed Central

    Xiaoguang, Ren; Xinhai, Xu

    2014-01-01

    Computational fluid dynamics (CFD) simulations often need to periodically output intermediate results to files as snapshots for visualization or restart, which seriously impacts performance. In this paper, we present an asynchronous pipeline I/O (AP-IO) optimization scheme for periodic snapshot output, built on asynchronous I/O and CFD application characteristics. In AP-IO, dedicated background I/O processes or threads handle file writes in pipeline mode, so the write overhead can be hidden behind more computation than with classic asynchronous I/O. We design the framework of AP-IO and implement it in OpenFOAM, providing CFD users with a user-friendly interface. Experimental results on the Tianhe-2 supercomputer demonstrate that AP-IO achieves a good optimization effect for periodic snapshot output in CFD applications, and the effect is especially pronounced for massively parallel CFD simulations, where it can reduce total execution time by up to about 40%. PMID:24955390

  3. Real-time simulation of an F110/STOVL turbofan engine

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.; Ouzts, Peter J.

    1989-01-01

    A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach for component performance maps, the low-pressure-turbine exit mixing region, and the tailpipe dynamic approximation. The simulation was validated by comparing output from the ADSIM simulation with output from a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated the basic engine component characteristics through factory testing and full-scale ejector data.

  4. TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-04-01

    Despite a growing interest in LEMs, accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM, an open-access landscape evolution package designed for developing and testing your own scenarios and hypotheses. TTLEM uses a higher-order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through the use of a simple GUI, the software produces visual output of the evolving landscape throughout the model run. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility, and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open-source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on that website and in the dedicated GitHub repository.

  5. Underwater terrain-aided navigation system based on combination matching algorithm.

    PubMed

    Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao

    2018-07-01

    Considering that the terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error in the indicative track of the strapdown inertial navigation system (SINS) is large, a Kalman filter is added to the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the filter measurement, the cumulative error of the SINS is corrected in time through filter feedback, and the indicative track used in ICCP is thereby improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A proper number of matching points is designated by comparing simulation results for matching time and matching precision. Simulation experiments are carried out with the ICCP algorithm and the mathematical model. They show that navigation accuracy and stability are improved by the proposed combinational algorithm, provided a proper number of matching points is used. The integrated navigation system is effective in preventing divergence of the indicative track and can meet the underwater, long-term, high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
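
    The feedback-correction idea can be sketched in one dimension: treat the matching-result-minus-SINS difference as a Kalman measurement of the accumulated SINS error and subtract the estimate from the track. The noise parameters are illustrative:

```python
import numpy as np

def kalman_drift_correction(sins_track, match_fixes, q=0.01, r=0.25):
    """Estimate the accumulated SINS position error (1-D) from
    terrain-matching fixes and feed it back as a correction.
    q: error random-walk variance, r: matching-fix noise variance."""
    x, p = 0.0, 1.0                   # error estimate and its variance
    corrected = []
    for sins, fix in zip(sins_track, match_fixes):
        p += q                        # predict: the error random-walks
        z = sins - fix                # measurement of the current SINS error
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update the error estimate
        p *= 1.0 - k
        corrected.append(sins - x)    # feedback correction of the track
    return np.array(corrected)

drift = 0.1 * np.arange(50.0)         # SINS position drifting linearly
fixes = np.zeros(50)                  # matching fixes at the true position
corrected = kalman_drift_correction(drift, fixes)
```

The uncorrected track drifts without bound, while the corrected track stays bounded near the matching fixes, which is the divergence-prevention effect the abstract describes.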

  6. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.

  7. Computer simulation and design of a three degree-of-freedom shoulder module

    NASA Technical Reports Server (NTRS)

    Marco, David; Torfason, L.; Tesar, Delbert

    1989-01-01

    An in-depth kinematic analysis of a three degree of freedom fully-parallel robotic shoulder module is presented. The major goal of the analysis is to determine appropriate link dimensions which will provide a maximized workspace along with desirable input to output velocity and torque amplification. First order kinematic influence coefficients which describe the output velocity properties in terms of actuator motions provide a means to determine suitable geometric dimensions for the device. Through the use of computer simulation, optimal or near optimal link dimensions based on predetermined design criteria are provided for two different structural designs of the mechanism. The first uses three rotational inputs to control the output motion. The second design involves the use of four inputs, actuating any three inputs for a given position of the output link. Alternative actuator placements are examined to determine the most effective approach to control the output motion.
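
    The role of first-order influence coefficients can be sketched as a Jacobian mapping from actuator rates to output velocity, with velocity amplification read from its singular values; the matrix below is illustrative, not the shoulder module's actual geometry:

```python
import numpy as np

# Hypothetical first-order influence-coefficient (Jacobian) matrix G:
# output velocity = G @ actuator rates.
G = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.1, 0.0, 0.7]])

qdot = np.array([0.5, -0.2, 0.3])    # actuator rates
wout = G @ qdot                      # resulting output velocity

# Velocity amplification across directions is bounded by G's singular values;
# by duality, actuator torques satisfy tau_act = G.T @ tau_out.
sig = np.linalg.svd(G, compute_uv=False)
vel_amp_max, vel_amp_min = sig[0], sig[-1]
```

Optimizing link dimensions amounts to shaping G over the workspace so these amplification bounds stay favorable.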

  8. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
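
    The scheduling step of such a piecewise linear model can be sketched as interpolation of stored trim vectors and system matrices over the operating point (all numbers illustrative):

```python
import numpy as np

# Hypothetical scheduling database: one trim point and one 1-state
# A matrix stored per operating point (e.g. normalized power setting).
op_points = np.array([0.0, 0.5, 1.0])
A_tab = np.array([[[-1.0]], [[-2.0]], [[-4.0]]])
x_trim = np.array([[10.0], [20.0], [35.0]])

def schedule(op):
    """Linearly interpolate the trim vector and A matrix at operating
    point `op`; all stored tables share the same breakpoints, which is
    what makes shared scheduling cheap."""
    i = int(np.clip(np.searchsorted(op_points, op) - 1, 0,
                    len(op_points) - 2))
    w = (op - op_points[i]) / (op_points[i + 1] - op_points[i])
    A = (1 - w) * A_tab[i] + w * A_tab[i + 1]
    xt = (1 - w) * x_trim[i] + w * x_trim[i + 1]
    return A, xt

A, xt = schedule(0.75)
```

Because every table shares the same breakpoints and weights, one interpolation pass serves all trim vectors and matrices at once.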

  9. Improved Accuracy of Automated Estimation of Cardiac Output Using Circulation Time in Patients with Heart Failure.

    PubMed

    Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi

    2016-11-01

    Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
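
    The multivariable estimation idea can be sketched as an ordinary least-squares fit of CO on the three predictors; the data and coefficients below are synthetic stand-ins, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 25
lfct = rng.uniform(15, 45, n)        # lung-to-finger circulation time (s)
age = rng.uniform(50, 85, n)         # years
hr = rng.uniform(55, 95, n)          # average overnight heart rate (bpm)

# Synthetic "invasive" CO: negatively related to LFCT, plus noise
co_true = 8.0 - 0.08 * lfct - 0.02 * age + 0.01 * hr + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), lfct, age, hr])
beta, *_ = np.linalg.lstsq(X, co_true, rcond=None)
co_est = X @ beta
pct_error = 100 * np.std(co_est - co_true) / np.mean(co_true)
```

Adding age and heart rate as covariates is what shrinks the percentage error relative to using circulation time alone.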

  10. E-beam generated holographic masks for optical vector-matrix multiplication

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Case, S. K.

    1981-01-01

    An optical vector-matrix multiplication scheme that encodes the matrix elements as a holographic mask consisting of linear diffraction gratings is proposed. The binary, chrome-on-glass masks are fabricated by e-beam lithography. This approach results in a fairly simple optical system that promises both a large numerical range and high accuracy. A partitioned computer-generated hologram mask was fabricated and tested. This hologram was designed with diagonally separated outputs, compact facets, and symmetry about the axis. The resultant diffraction pattern at the output plane is shown. Since the grating fringes are written at 45 deg relative to the facet boundaries, the many on-axis sidelobes from each output are diagonally separated from the adjacent output signals.

  11. Multivariable control of a forward swept wing aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Quinn, W. W.

    1986-01-01

    The impact of independent canard and flaperon control of the longitudinal axis of a generic forward-swept-wing aircraft is examined. The Linear Quadratic Gaussian (LQG)/Loop Transfer Recovery (LTR) method is used to design three compensators: two single-input-single-output (SISO) systems, one with angle of attack as output and canard as control, the other with pitch attitude as output and canard as control, and a two-input-two-output system with both canard and flaperon controlling both pitch attitude and angle of attack. The performances of the three systems are compared, showing that the addition of flaperon control allows the aircraft to perform in the precision control modes with very little loss of command-following accuracy.

  12. Grid-based Meteorological and Crisis Applications

    NASA Astrophysics Data System (ADS)

    Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin

    2010-05-01

    We present several applications from the domains of meteorology and crisis management that we have developed or plan to develop. In particular, we present IMS Model Suite, a complex software system designed to address the needs of accurate forecasting of weather and hazardous weather phenomena, environmental pollution assessment, and prediction of the consequences of a nuclear accident or radiological emergency. We discuss the requirements on computational means and our experience in meeting them with grid computing. The process of pollution assessment and prediction of the consequences of a radiological emergency results in complex data flows and workflows among databases, models, and simulation tools (geographical databases, meteorological and dispersion models, etc.). A pollution assessment and prediction requires running a 3D meteorological model (4 nests with resolution from 50 km to 1.8 km centered on the nuclear power plant site, 38 vertical levels) as well as running a dispersion model that simulates the transport and deposition of the released pollutant with respect to the numerical weather prediction data, released material description, topography, land use description, and a user-defined simulation scenario. Several post-processing options can be selected according to the particular situation (e.g., dose calculation). Another example is the forecasting of fog, a meteorological phenomenon hazardous to aviation as well as road traffic. It requires a complicated physical model and high-resolution meteorological modeling due to its dependence on local conditions (precise topography, shorelines, and land use classes). An installed fog modeling system requires a four-level nested, parallelized 3D meteorological model with 1.8 km horizontal resolution and 42 vertical levels (approximately 1 million points in 3D space) to be run four times daily. The 3D model outputs and a multitude of local measurements are utilized by an SPMD-parallelized 1D fog model run every hour.
    The fog forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model's accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with observed data. The architecture and inherent heterogeneity of both examples, their computational complexity, and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we participated or participate in cooperation with the academic sphere, namely int.eu.grid (dispersion model deployed as a pilot application to an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant-meteorological-phenomena prediction system based on data mining), VEGA 2009-2011, and EGEE III. We present useful and practical applications of high-performance computing technologies. The use of grid technology provides access to much higher computational power not only for modeling and simulation but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Given that the simulations are used for aviation, road traffic, and crisis management, even a small improvement in prediction accuracy may result in a significant improvement in safety as well as cost reduction. We have found grid computing useful for our applications; we are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM), we plan to include processing of satellite images, which will extend our computational requirements very rapidly. We believe that, thanks to grid computing, we will be able to handle the job almost in real time.

  13. Influence of simulation parameters on the speed and accuracy of Monte Carlo calculations using PENEPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2018-01-01

    The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, the efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication, we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.

  14. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
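
    Two of the metrics used above, number of patches and largest patch index, can be sketched on a binary land-use grid with a simple 4-connected component search:

```python
from collections import deque

import numpy as np

def patch_metrics(grid):
    """Return (number of patches, largest patch index in %) for a
    binary grid, using 4-connectivity."""
    grid = np.asarray(grid, dtype=bool)
    seen = np.zeros_like(grid, dtype=bool)
    rows, cols = grid.shape
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] and not seen[r, c]:
                size, q = 0, deque([(r, c)])
                seen[r, c] = True
                while q:                      # BFS over one patch
                    i, j = q.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            q.append((ni, nj))
                sizes.append(size)
    num_patches = len(sizes)
    lpi = 100.0 * max(sizes) / grid.size if sizes else 0.0
    return num_patches, lpi

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
n_patches, lpi = patch_metrics(grid)
```

A model that produces an overly compact landscape yields fewer, larger patches than the ground truth, which is exactly the discrepancy these two metrics expose.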

  15. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test at a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
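
    The reliability screening described above can be sketched with a two-sample KS test at the 0.05 level. The synthetic gamma-distributed "precipitation" samples below are illustrative, not GSD or CMIP output; the KS statistic is implemented directly for self-containment (scipy.stats.ks_2samp is the usual tool).

```python
# Two-sample Kolmogorov-Smirnov check: is the simulated distribution
# statistically distinguishable from the observed one? Synthetic data.
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max distance between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    pts = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pts, side="right") / len(a)
    cdf_b = np.searchsorted(b, pts, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(42)
observed = rng.gamma(shape=2.0, scale=50.0, size=600)   # monthly PPT, mm
good_sim = rng.gamma(shape=2.0, scale=50.0, size=600)   # same distribution
dry_sim = rng.gamma(shape=2.0, scale=35.0, size=600)    # dry-biased model

n = m = 600
d_crit = 1.358 * np.sqrt((n + m) / (n * m))  # ~0.078 at alpha = 0.05
for name, sim in [("good", good_sim), ("dry-biased", dry_sim)]:
    d = ks_statistic(observed, sim)
    print(f"{name}: D = {d:.3f} ({'reject' if d > d_crit else 'cannot reject'})")
```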

  16. Inexpensive Pyranometer

    NASA Technical Reports Server (NTRS)

    Yanow, Gilbert

    1996-01-01

    Pyranometer generates output potential of about 300 mV in maximum sunlight. Designed to monitor insolation with accuracy within 5 percent of that of instruments ordinarily used for this purpose. Suitable for use in school laboratories and perhaps in commercial facilities where expense of more precise instrument not justified. Slightly more complex pyranometer intended primarily for use in agricultural setting described in "Inexpensive Meter For Total Solar Radiation" (NPO-16741).

  17. SU-F-T-479: Estimation of the Accuracy in Respiratory-Gated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurosawa, T; Miyakawa, S; Sato, M

    Purpose: Irregular respiratory patterns affect dose outputs in respiratory-gated radiotherapy, and there is no commercially available quality assurance (QA) system for it. We designed and developed a patient-specific QA system for respiratory-gated radiotherapy to estimate the irradiated output. Methods: Our in-house QA system for gating was composed of a personal computer with a USB-FSIO electronic circuit connected to the linear accelerator (ONCOR-K, Toshiba Medical Systems). The linac implements a respiratory gating system (AZ-733V, Anzai Medical). While the beam was on, 4.2 V square-wave pulses were continually sent to the system, and our system received and counted the pulses. First, our system was compared against an oscilloscope to check its performance. Next, basic estimation models were generated from ionization-chamber measurements performed in gating using regular sinusoidal wave patterns (2.0, 2.5, 4.0, 8.0, 15 sec/cycle). During gated irradiation with the regular patterns, the number of pulses per gating window was measured using our system. The correlation between the number of pulses per gating window and the dose per gating window was assessed to generate the estimation model. Finally, two irregular respiratory patterns were created and the accuracy of the estimation was evaluated. Results: Our system performed similarly to the oscilloscope. The basic models were generated with an accuracy within 0.1%. The gated irradiations with the two irregular respiratory patterns showed good agreement, within 0.4% estimation accuracy. Conclusion: Our developed system shows good estimation even for irregular respiration patterns. The system would be a useful tool to verify the output for respiratory-gated radiotherapy.
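
    The calibration step described above, correlating pulse counts per gating window with measured dose per window, can be sketched as a least-squares line fit. All numbers below are invented for illustration, not measured ONCOR-K data.

```python
# Linear calibration: pulses counted per gating window -> dose per window.
# Synthetic (pulses, dose) pairs standing in for the regular-pattern runs.
import numpy as np

pulses = np.array([120.0, 150.0, 240.0, 480.0, 900.0])  # pulses per window
dose = np.array([2.0, 2.5, 4.0, 8.0, 15.0])             # chamber dose, cGy

slope, intercept = np.polyfit(pulses, dose, 1)  # least-squares line

def estimate_dose(pulse_count):
    """Estimated dose per gating window from the pulse count."""
    return slope * pulse_count + intercept

print(estimate_dose(300.0))  # dose estimate for an irregular-pattern window
```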

  18. Neural-Network Approach to Hyperspectral Data Analysis for Volcanic Ash Clouds Monitoring

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Ventress, Lucy; Carboni, Elisa; Grainger, Roy Gordon; Del Frate, Fabio

    2015-11-01

    In this study three artificial neural networks (ANNs) were implemented in order to emulate a retrieval model and to estimate the ash aerosol optical depth (AOD), particle effective radius (reff) and cloud height of a volcanic eruption using hyperspectral remotely sensed data. The ANNs were trained using a selection of Infrared Atmospheric Sounding Interferometer (IASI) channels in the thermal infrared (TIR) as inputs, and the corresponding ash parameters obtained from the Oxford retrievals as target outputs. The retrieval is demonstrated for the eruption of the Eyjafjallajökull volcano (Iceland) that occurred in 2010. The validation results gave root mean square error (RMSE) values between neural network outputs and targets lower than the standard deviation (STD) of the corresponding target outputs, demonstrating the feasibility of estimating volcanic ash parameters with an ANN approach, and its importance in near-real-time monitoring activities owing to its fast application. A high accuracy was achieved for reff and cloud height estimation, while a decrease in accuracy was obtained when applying the NN approach to AOD estimation, in particular for values not well characterized during the NN training phase.
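
    The validation criterion used above, RMSE between network outputs and targets below the targets' standard deviation, amounts to requiring that the ANN beat a trivial predictor that always outputs the target mean. A sketch with invented numbers:

```python
# RMSE-vs-STD skill check. The target/prediction arrays are synthetic
# stand-ins, not actual IASI retrievals or ANN outputs.
import numpy as np

targets = np.array([1.0, 2.0, 4.0, 3.0, 5.0])      # e.g. retrieved reff
nn_outputs = np.array([1.2, 1.8, 4.3, 2.9, 4.6])   # network predictions

rmse = np.sqrt(np.mean((nn_outputs - targets) ** 2))
std = np.std(targets)  # RMSE of the always-predict-the-mean baseline
print(f"RMSE = {rmse:.3f}, target STD = {std:.3f}, skilful: {rmse < std}")
```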

  19. Deep learning based hand gesture recognition in complex scenes

    NASA Astrophysics Data System (ADS)

    Ni, Zihan; Sang, Nong; Tan, Cheng

    2018-03-01

    Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is not very high for small and similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNN's outputs, and regard outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use the region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples through a VGG-19 classification network to obtain the final output of the gesture recognition system. Contrast experiments with other methods show that the cascaded network combined with OHET reaches a state-of-the-art result of 99.3% mAP on small and similar gestures in complex scenes.

  20. A novel optical fiber displacement sensor of wider measurement range based on neural network

    NASA Astrophysics Data System (ADS)

    Guo, Yuan; Dai, Xue Feng; Wang, Yu Tian

    2006-02-01

    By studying the output characteristics of a random-type optical fiber sensor and a semicircular-type optical fiber sensor, the ratio of the two output signals was used as the output signal of the whole system. The measurement range was thereby enlarged, the linearity was improved, and errors caused by changes in the reflectivity and absorptivity of the target surface were automatically compensated. Meanwhile, an optical fiber sensor model correcting static error based on a BP artificial neural network (ANN) was set up, eliminating intrinsic errors such as fluctuations in the light source, circuit drift, intensity losses in the fiber lines, and the additional losses in the receiving fiber caused by bends. Theory and experiment show that the nonlinearity error is 2.9%, the measuring range reaches 5-6 mm, and the relative accuracy is 2%. The sensor has such characteristics as immunity to electromagnetic interference, simple construction, high sensitivity, and good accuracy and stability. The multi-point sensor system can also be used for online, non-contact monitoring in working locales.
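
    The compensation idea, taking the ratio of the two output signals so that factors common to both channels cancel, can be sketched as follows. The response functions f1/f2 are invented stand-ins, not the real characteristics of the random- and semicircular-type probes.

```python
# Ratio compensation: source power and surface reflectivity multiply both
# channels identically, so they cancel in the ratio, leaving only the
# displacement dependence. Hypothetical response functions.
import math

def f1(d):
    """Hypothetical displacement response of probe 1."""
    return d * math.exp(-d)

def f2(d):
    """Hypothetical displacement response of probe 2."""
    return d ** 2 * math.exp(-d)

def outputs(d, source_power, reflectivity):
    common = source_power * reflectivity   # shared multiplicative factor
    return common * f1(d), common * f2(d)

d = 3.0
i1_a, i2_a = outputs(d, source_power=1.00, reflectivity=0.80)
i1_b, i2_b = outputs(d, source_power=0.85, reflectivity=0.40)  # drifted
print(i1_a / i2_a, i1_b / i2_b)  # ratios agree despite the drift
```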

  1. Extracting atomic numbers and electron densities from a dual source dual energy CT scanner: experiments and a simulation model.

    PubMed

    Landry, Guillaume; Reniers, Brigitte; Granton, Patrick Vincent; van Rooijen, Bart; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank

    2011-09-01

    Dual energy CT (DECT) imaging can provide both the electron density ρ(e) and effective atomic number Z(eff), thus facilitating tissue type identification. This paper investigates the accuracy of a dual source DECT scanner by means of measurements and simulations. Previous simulation work suggested improved Monte Carlo dose calculation accuracy when compared to single energy CT for low energy photon brachytherapy, but lacked validation. As such, we aim to validate our DECT simulation model in this work. A cylindrical phantom containing tissue mimicking inserts was scanned with a second generation dual source scanner (SOMATOM Definition FLASH) to obtain Z(eff) and ρ(e). A model of the scanner was designed in ImaSim, a CT simulation program, and was used to simulate the experiment. Accuracy of measured Z(eff) (labelled Z) was found to vary from -10% to 10% from low to high Z tissue substitutes while the accuracy on ρ(e) from DECT was about 2.5%. Our simulation reproduced the experiments within ±5% for both Z and ρ(e). A clinical DECT scanner was able to extract Z and ρ(e) of tissue substitutes. Our simulation tool replicates the experiments within a reasonable accuracy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Design and simulation of a sub-terahertz folded-waveguide extended interaction oscillator

    NASA Astrophysics Data System (ADS)

    Liu, Wenxin; Zhang, Zhaochuan; Zhao, Chao; Guo, Xin; Liao, Suying

    2017-06-01

    In this paper, a two-section folded-waveguide (TSFW) slow wave structure (SWS) for the development of a sub-terahertz (sub-THz) extended interaction oscillator (EIO) is proposed. In this sub-THz device, the prebunched electron beam produced by the TSFW SWS results in an enhancement of the output power. To verify this concept, the TSFW for a sub-THz EIO is developed, including design, simulation, and some fabrication. A compact electron optics system (EOS), the TSFW SWS for beam-wave interaction, and the output structure are studied with simulations. The EOS is designed and optimized with the codes Egun and Superfish. With the help of CST Studio and the 3D particle-in-cell (PIC) simulation code CHIPIC, the characteristics of the beam-wave interaction generated by the TSFW are studied. The PIC simulation results show that the output power is remarkably enhanced, by a factor of 3, exceeding 200 W at a frequency of 108 GHz. Based on the optimum parameters, the TSFW is manufactured with a high-speed numerical mill, and the measured transmission characteristic |S21| is 13 dB. Finally, the output structure with a pill-box window is optimized, fabricated, integrated, and tested; the voltage standing-wave ratio of the window is about 2.2 at an operating frequency of 108 GHz. This design and simulation provide an effective method to develop high power THz sources.

  3. Accuracy in inference of nursing diagnoses in heart failure patients.

    PubMed

    Pereira, Juliana de Melo Vellozo; Cavalcanti, Ana Carla Dantas; Lopes, Marcos Venícios de Oliveira; da Silva, Valéria Gonçalves; de Souza, Rosana Oliveira; Gonçalves, Ludmila Cuzatis

    2015-01-01

    Heart failure (HF) is a common cause of hospitalization and requires accuracy in clinical judgment and appropriate nursing diagnoses. The objective was to determine the accuracy of the nursing diagnoses of fatigue, activity intolerance and decreased cardiac output in hospitalized HF patients. This was a descriptive study applied to nurses with experience in NANDA-I and/or HF nursing diagnoses. Evaluation and accuracy were determined by calculating efficacy (E), false negative (FN), false positive (FP) and trend (T) measures. Nurses who showed acceptable inspection for two diagnoses were selected. The nursing diagnosis of fatigue was the diagnosis most commonly mistaken by the nursing evaluators. The search for improving diagnostic accuracy reaffirms the need for continuous and specific training to improve the diagnostic capability of nurses; the training allowed the exercise of clinical judgment and better accuracy by nurses.

  4. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    PubMed

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  5. Non-parametric identification of multivariable systems: A local rational modeling approach with application to a vibration isolation benchmark

    NASA Astrophysics Data System (ADS)

    Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom

    2018-05-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.

  6. Model-Observation "Data Cubes" for the DOE Atmospheric Radiation Measurement Program's LES ARM Symbiotic Simulation and Observation (LASSO) Workflow

    NASA Astrophysics Data System (ADS)

    Vogelmann, A. M.; Gustafson, W. I., Jr.; Toto, T.; Endo, S.; Cheng, X.; Li, Z.; Xiao, H.

    2015-12-01

    The Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility's Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) Workflow is currently being designed to provide output from routine LES to complement the facility's extensive observations. The modeling portion of the LASSO workflow, presented by Gustafson et al., will initially focus on shallow convection over the ARM megasite in Oklahoma, USA. This presentation describes how the LES output will be combined with observations to construct multi-dimensional and dynamically consistent "data cubes", aimed at providing the best description of the atmospheric state for use in analyses by the community. The megasite observations are used to constrain large-eddy simulations that provide complete spatial and temporal coverage of observables; further, the simulations also provide information on processes that cannot be observed. Statistical comparisons of model output with their observables are used to assess the quality of a given simulated realization and its associated uncertainties. A data cube is a model-observation package that provides: (1) metrics of model-observation statistical summaries to assess the simulations and the ensemble spread; (2) statistical summaries of additional model property output that cannot be or are very difficult to observe; and (3) snapshots of the 4-D simulated fields from the integration period. Searchable metrics are provided that characterize the general atmospheric state to assist users in finding cases of interest, such as categorization of daily weather conditions and their specific attributes. The data cubes will be accompanied by tools designed for easy access to cube contents from within the ARM archive and externally, the ability to compare multiple data streams within an event as well as across events, and the ability to use common grids and time sampling, where appropriate.

  7. The GOES-R Rebroadcast (GRB) Data Stream Simulator

    NASA Astrophysics Data System (ADS)

    Dittberner, G. J.; Gibbons, K.; Czopkiewicz, E.; Miller, C.; Brown-Bergtold, B.; Haman, B.; Marley, S.

    2013-12-01

    GOES Rebroadcast (GRB) signals in the GOES-R era will replace the current legacy GOES Variable (GVAR) signal and will have substantially different characteristics, including a change in data rate from a single 2.1 Mbps stream to two digital streams of 15.5 Mbps each. Five GRB Simulators were developed as portable systems that output a high-fidelity stream of Consultative Committee for Space Data Systems (CCSDS) formatted GRB packet data equivalent to live GRB data. The data are used for on-site testing of user ingest and data handling systems known as field terminal sites. The GRB Simulator is a fully self-contained system which includes all software and hardware units needed for operation. The operator manages configurations to edit preferences, define individual test scenarios, and manage event logs and reports. Simulations are controlled by test scenarios, which are scripts that specify the test data and provide a series of actions for the GRB Simulator to perform when generating GRB output. Scenarios allow for the insertion of errors or modification of GRB packet headers for testing purposes. The GRB Simulator provides a built-in editor for managing scenarios. The GRB Simulator provides GRB data as either baseband (digital) or Intermediate Frequency (IF) output to the test system. GRB packet data are sent in the same two output streams used in the operational system: one for Left Hand Circular Polarization (LHCP) and one for Right Hand Circular Polarization (RHCP). Use of circular polarization in the operational system allows the transmitting antenna to multiplex the two digital streams into the same signal, thereby doubling the available bandwidth. The GRB Simulator is designed to be used at sites that receive the GRB downlink.

  8. Pediatric Disaster Triage: Multiple Simulation Curriculum Improves Prehospital Care Providers' Assessment Skills.

    PubMed

    Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc

    2017-01-01

    Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0, Time 1 [two weeks later], and Time 2 [6 months later]) during a disaster simulation, in which high- and low-fidelity manikins and actors portrayed 10 victims. Accuracy was determined by participant triage decision concordance with the predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, when RED and YELLOW victims were triaged BLACK. The study enrolled 331 participants, and the analysis included 261 (78.9%) participants who completed the study: 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed that between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Analysis of triage accuracy showed the greatest improvement in overall accuracy for YELLOW triage patients (Time 0 50% accurate, Time 1 100%), followed by RED patients (Time 0 80%, Time 1 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073).
This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in simulated triage accuracy. Future iterations of the curriculum can target greater improvements in EMT triage accuracy.
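
    The accuracy measure described above, concordance with the expected triage level plus flagging of the most dangerous mistriage (a RED or YELLOW victim triaged BLACK), can be sketched directly; the ten-victim scenario below is fabricated for illustration.

```python
# Triage scoring: fraction of decisions matching the expected level, and
# a count of most-dangerous mistriages. Victim lists are made up.
EXPECTED = ["RED", "RED", "YELLOW", "YELLOW", "GREEN",
            "GREEN", "GREEN", "BLACK", "RED", "YELLOW"]
DECIDED  = ["RED", "YELLOW", "YELLOW", "BLACK", "GREEN",
            "GREEN", "GREEN", "BLACK", "RED", "YELLOW"]

def score(expected, decided):
    """Return (accuracy, number of most-dangerous mistriages)."""
    correct = sum(e == d for e, d in zip(expected, decided))
    dangerous = sum(d == "BLACK" and e in ("RED", "YELLOW")
                    for e, d in zip(expected, decided))
    return correct / len(expected), dangerous

accuracy, dangerous = score(EXPECTED, DECIDED)
print(accuracy, dangerous)  # 0.8 1
```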

  9. Parcels v0.9: prototyping a Lagrangian ocean analysis framework for the petascale age

    NASA Astrophysics Data System (ADS)

    Lange, Michael; van Sebille, Erik

    2017-11-01

    As ocean general circulation models (OGCMs) move into the petascale age, where the output of single simulations exceeds petabytes of storage space, tools to analyse the output of these models will need to scale up too. Lagrangian ocean analysis, where virtual particles are tracked through hydrodynamic fields, is an increasingly popular way to analyse OGCM output, by mapping pathways and connectivity of biotic and abiotic particulates. However, the current software stack of Lagrangian ocean analysis codes is not dynamic enough to cope with the increasing complexity, scale and need for customization of use-cases. Furthermore, most community codes are developed for stand-alone use, making it a nontrivial task to integrate virtual particles at runtime of the OGCM. Here, we introduce the new Parcels code, which was designed from the ground up to be sufficiently scalable to cope with petascale computing. We highlight its API design that combines flexibility and customization with the ability to optimize for HPC workflows, following the paradigm of domain-specific languages. Parcels is primarily written in Python, utilizing the wide range of tools available in the scientific Python ecosystem, while generating low-level C code and using just-in-time compilation for performance-critical computation. We show a worked-out example of its API, and validate the accuracy of the code against seven idealized test cases. This version 0.9 of Parcels is focused on laying out the API, with future work concentrating on support for curvilinear grids, optimization, efficiency and at-runtime coupling with OGCMs.
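
    The core operation such a framework performs, integrating virtual particle positions through a velocity field, can be sketched in a few lines. This is a minimal explicit-Euler toy with an analytic solid-body-rotation field, not the Parcels API (which generates C code and supports higher-order schemes).

```python
# Minimal Lagrangian particle advection: explicit Euler steps through an
# analytic velocity field. After one full rotation period the particle
# should return near its start (Euler slightly inflates the radius).
import math

def velocity(x, y):
    """Solid-body rotation about the origin, unit angular rate."""
    return -y, x

def advect(x, y, dt, steps):
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
    return x, y

x, y = advect(1.0, 0.0, dt=2 * math.pi / 10000, steps=10000)
print(x, y)  # close to the start point (1, 0)
```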

  10. MPI-AMRVAC 2.0 for Solar and Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Xia, C.; Teunissen, J.; El Mellah, I.; Chané, E.; Keppens, R.

    2018-02-01

    We report on the development of MPI-AMRVAC version 2.0, which is an open-source framework for parallel, grid-adaptive simulations of hydrodynamic and magnetohydrodynamic (MHD) astrophysical applications. The framework now supports radial grid stretching in combination with adaptive mesh refinement (AMR). The advantages of this combined approach are demonstrated with one-dimensional, two-dimensional, and three-dimensional examples of spherically symmetric Bondi accretion, steady planar Bondi–Hoyle–Lyttleton flows, and wind accretion in supergiant X-ray binaries. Another improvement is support for the generic splitting of any background magnetic field. We present several tests relevant for solar physics applications to demonstrate the advantages of field splitting on accuracy and robustness in extremely low-plasma β environments: a static magnetic flux rope, a magnetic null-point, and magnetic reconnection in a current sheet with either uniform or anomalous resistivity. Our implementation for treating anisotropic thermal conduction in multi-dimensional MHD applications is also described, which generalizes the original slope-limited symmetric scheme from two to three dimensions. We perform ring diffusion tests that demonstrate its accuracy and robustness, and show that it prevents the unphysical thermal flux present in traditional schemes. The improved parallel scaling of the code is demonstrated with three-dimensional AMR simulations of solar coronal rain, which show satisfactory strong scaling up to 2000 cores. Other framework improvements are also reported: the modernization and reorganization into a library, the handling of automatic regression tests, the use of inline/online Doxygen documentation, and a new future-proof data format for input/output.

  11. Sobol‧ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol‧ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol‧ method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol‧ sensitivity analysis results demonstrated that remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects among these six factors were practically insignificant. The proposed surrogate-based Sobol‧ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimizing the groundwater remediation process.
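
    First-order Sobol‧ indices of the kind used above can be estimated with a pick-freeze (Saltelli-type) Monte Carlo estimator. The toy two-variable function below stands in for the cheap surrogate and is not the authors' RBFANN/Kriging setup; for the additive function used, the x1 index should be near 1/1.01 ≈ 0.99.

```python
# Pick-freeze Monte Carlo estimate of first-order Sobol' indices:
# S_i ~= mean(f(B) * (f(A_B^i) - f(A))) / Var(f), where A_B^i is A with
# column i swapped in from B. Toy surrogate, uniform inputs on [0, 1].
import numpy as np

def model(x):
    """Toy surrogate: output dominated by the first input."""
    return x[:, 0] + 0.1 * x[:, 1]

rng = np.random.default_rng(0)
n, d = 20000, 2
A = rng.random((n, d))
B = rng.random((n, d))

f_A, f_B = model(A), model(B)
var = np.var(f_A)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # resample only input i
    s_i = np.mean(f_B * (model(AB) - f_A)) / var
    print(f"S_{i + 1} = {s_i:.3f}")
```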

  12. Nebulisation of corticosteroid suspensions and solutions with a beta(2) agonist.

    PubMed

    O'Callaghan, Christopher L; White, Judy A; Jackson, Judith M; Barry, Peter W; Kantar, Ahmad

    2008-05-01

    The aim of this study was to determine the output of salbutamol nebulised in combination with either flunisolide or beclometasone dipropionate (BDP) from two different nebulisers under simulated breathing conditions. The BimboNeb and Nebula nebulisers were used to nebulise 3.0 mL of the two drug mixtures (salbutamol, 5000 µg, plus either flunisolide, 600 µg, or BDP, 800 µg). Particle size was determined by inertial impaction. Total outputs of all drugs from both nebulisers were measured using a sinus flow pump under simulated paediatric and adult breathing patterns. The mass median aerodynamic diameter (MMAD) of BDP particles from the mixture was 6.34 µm using the BimboNeb and 5.34 µm using the Nebula; values for salbutamol in this mixture were 3.93 and 3.32 µm, respectively. The MMADs of flunisolide particles from the BimboNeb and Nebula were 3.74 and 3.65 µm, respectively, while those of salbutamol were 3.79 and 3.74 µm. With the simulated adult breathing pattern, all drug outputs from both mixtures were greater from the BimboNeb than from the Nebula after 5 and 10 min of nebulisation. Drug delivery from the BimboNeb, but not the Nebula, was affected by the simulated breathing pattern: outputs with the BimboNeb were lower with the paediatric breathing pattern than with the adult pattern. In the majority of cases, nebulising for 10 min produced significantly greater drug output than 5 min, whereas for the Nebula, outputs were generally similar at 5 and 10 min, irrespective of the breathing pattern. These results highlight the need to assess the amount of aerosolised drug available when drugs are combined, when different nebulisers are used and when they are used with patients of different ages.

  13. Climate impacts on palm oil yields in the Nigerian Niger Delta

    NASA Astrophysics Data System (ADS)

    Okoro, Stanley U.; Schickhoff, Udo; Boehner, Juergen; Schneider, Uwe A.; Huth, Neil

    2016-04-01

    Palm oil production has increased in recent decades and is estimated to increase further. The optimal role of palm oil production, however, is controversial because of resource conflicts with alternative land uses. Local conditions and climate change affect resource competition and the desirability of palm oil production. Given this, crop yield simulations using different climate model outputs under different climate scenarios could be an important tool for quantifying the uncertainty among climate model outputs. Previous studies of this region have focused mostly on single experimental fields, without considering variations in agro-ecological zones, climatic conditions, varieties and management practices; in most cases they did not extend to the various IPCC climate scenarios and were based on a single climate model output. Furthermore, the uncertainty quantification of the climate-impact model has rarely been investigated for this region. To this end, we use the biophysical simulation model APSIM (Agricultural Production Systems Simulator) to simulate the regional climate impact on oil palm yield over the Nigerian Niger Delta. We also examine whether the use of a crop yield model output ensemble reduces the uncertainty more than the use of a climate model output ensemble. The results could serve as a baseline for policy makers in this region in understanding the interaction between the region's energy crop production potential, its food security, and other negative feedbacks that could be associated with bioenergy from oil palm. Keywords: climate change, climate impacts, land use, crop yields.

  14. First-Order-hold interpolation digital-to-analog converter with application to aircraft simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, W. B.

    1976-01-01

    Those who design piloted aircraft simulations must contend with the finite size and speed of the available digital computer and the requirement for simulation realism. With a fixed computational plant, the more complex the model, the more computing cycle time is required. While increasing the cycle time may not degrade the fidelity of the simulated aircraft dynamics, the larger steps in the pilot cue feedback variables (such as the visual scene cues) may be disconcerting to the pilot. The first-order-hold interpolation (FOHI) digital-to-analog converter (DAC) is presented as a device which offers smooth output regardless of cycle time. The Laplace transforms of the three conversion types (zero-order hold (ZOH), first-order-hold extrapolation (FOHE), and FOHI) are developed, and their frequency response characteristics and output smoothness are compared. The FOHI DAC exhibits a pure one-cycle delay. Whenever the FOHI DAC input comes from a second-order (or higher) system, a simple computer software technique can be used to compensate for the DAC phase lag. When so compensated, the FOHI DAC has (1) an output signal that is very smooth, (2) a flat frequency response in frequency ranges of interest, and (3) no phase error. When the input comes from a first-order system, software compensation may cause the FOHI DAC to perform as an FOHE DAC, which, although its output is not as smooth as that of the FOHI DAC, has a smoother output than that of the ZOH DAC.
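    A minimal sketch (not from the report) of the two hold behaviours described above: a zero-order hold repeats the latest sample, while first-order-hold interpolation linearly interpolates between the two most recent samples, trading smoothness for a pure one-cycle delay.

    ```python
    # Illustrative sketch: ZOH vs FOHI output within one update cycle.
    # `n` is the current cycle index, `frac` the fractional time in [0, 1).

    def zoh(samples, n, frac):
        """Zero-order hold: repeat the latest sample (steppy output)."""
        return samples[n]

    def fohi(samples, n, frac):
        """First-order-hold interpolation: ramp from sample n-1 toward
        sample n, giving smooth output delayed by one full cycle."""
        return samples[n - 1] + (samples[n] - samples[n - 1]) * frac

    # A unit ramp makes the one-cycle delay visible: at time n + frac the
    # FOHI output equals the ramp value one cycle earlier, (n - 1) + frac.
    ramp = list(range(10))
    print(zoh(ramp, 5, 0.5))   # 5
    print(fohi(ramp, 5, 0.5))  # 4.5
    ```

    On a second-order (or higher) input, the paper's software compensation would shift the interpolation to cancel this one-cycle phase lag.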

  15. Integrated Flight/Structural Mode Control for Very Flexible Aircraft Using L1 Adaptive Output Feedback Controller

    NASA Technical Reports Server (NTRS)

    Che, Jiaxing; Cao, Chengyu; Gregory, Irene M.

    2012-01-01

    This paper explores application of adaptive control architecture to a light, high-aspect ratio, flexible aircraft configuration that exhibits strong rigid body/flexible mode coupling. Specifically, an L(sub 1) adaptive output feedback controller is developed for a semi-span wind tunnel model capable of motion. The wind tunnel mount allows the semi-span model to translate vertically and pitch at the wing root, resulting in better simulation of an aircraft's rigid body motion. The control objective is to design a pitch control with altitude hold while suppressing body freedom flutter. The controller is an output feedback nominal controller (LQG) augmented by an L(sub 1) adaptive loop. A modification to the L(sub 1) output feedback is proposed to make it more suitable for flexible structures. The new control law relaxes the required bounds on the unmatched uncertainty and allows dependence on the state as well as time, i.e. a more general unmatched nonlinearity. The paper presents controller development and simulated performance responses. Simulation is conducted by using full state flexible wing models derived from test data at 10 different dynamic pressure conditions. An L(sub 1) adaptive output feedback controller is designed for a single test point and is then applied to all the test cases. The simulation results show that the L(sub 1) augmented controller can stabilize and meet the performance requirements for all 10 test conditions ranging from 30 psf to 130 psf dynamic pressure.

  16. WebbPSF: Updated PSF Models Based on JWST Ground Testing Results

    NASA Astrophysics Data System (ADS)

    Osborne, Shannon; Perrin, Marshall D.; Melendez Hernandez, Marcio

    2018-06-01

    WebbPSF is a widely-used package that allows astronomers to create simulated point spread functions (PSFs) for the James Webb Space Telescope (JWST). WebbPSF provides the user with the flexibility to produce PSFs for direct imaging and coronagraphic modes, for a range of filters and masks, and across all the JWST instruments. These PSFs can then be analyzed with built-in evaluation tools or can be output to be used with users’ own tools. In the most recent round of updates, the accuracy of the PSFs has been improved with updated analyses of the instrument test data from NASA Goddard and with the new data from the testing of the combined Optical Telescope Element and Integrated Science Instrument Module (OTIS) at NASA Johnson. A post-processing function applying detector effects and pupil distortions to input PSFs has also been added to the WebbPSF package.

  17. Estimating a Markovian Epidemic Model Using Household Serial Interval Data from the Early Phase of an Epidemic

    PubMed Central

    Black, Andrew J.; Ross, Joshua V.

    2013-01-01

    The clinical serial interval of an infectious disease is the time between date of symptom onset in an index case and the date of symptom onset in one of its secondary cases. It is a quantity which is commonly collected during a pandemic and is of fundamental importance to public health policy and mathematical modelling. In this paper we present a novel method for calculating the serial interval distribution for a Markovian model of household transmission dynamics. This allows the use of Bayesian MCMC methods, with explicit evaluation of the likelihood, to fit to serial interval data and infer parameters of the underlying model. We use simulated and real data to verify the accuracy of our methodology and illustrate the importance of accounting for household size. The output of our approach can be used to produce posterior distributions of population level epidemic characteristics. PMID:24023679
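    The paper derives the serial interval distribution exactly for its Markovian household model; as a purely illustrative sketch (not the authors' method, with invented rate parameters), a Markovian two-person household can be simulated directly: all waiting times are exponential, and the serial interval is the difference of the two symptom-onset times, which can even be negative when the secondary case's incubation is short.

    ```python
    # Illustrative sketch: draw serial intervals from a toy Markovian
    # two-person household. Rates beta (transmission), gamma (recovery)
    # and sigma (incubation) are invented for illustration.
    import random

    def serial_interval(beta=0.8, gamma=0.5, sigma=0.7, rng=random):
        """Index case infected at t=0; return onset2 - onset1, or None
        if the index recovers before transmitting."""
        onset1 = rng.expovariate(sigma)        # index incubation period
        t_transmit = rng.expovariate(beta)     # time to transmission attempt
        t_recover = rng.expovariate(gamma)     # end of infectious period
        if t_transmit >= t_recover:
            return None                        # no secondary case
        onset2 = t_transmit + rng.expovariate(sigma)
        return onset2 - onset1

    random.seed(1)
    draws = (serial_interval() for _ in range(2000))
    intervals = [s for s in draws if s is not None]
    print(len(intervals), sum(intervals) / len(intervals))
    ```

    In the exponential setting some intervals come out negative, which is one reason fitting an explicit household model beats treating observed serial intervals as a simple positive-valued distribution.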

  18. Effect of the influence function of deformable mirrors on laser beam shaping.

    PubMed

    González-Núñez, Héctor; Béchet, Clémentine; Ayancán, Boris; Neichel, Benoit; Guesalaga, Andrés

    2017-02-20

    The continuous membrane stiffness of a deformable mirror propagates the deformation of the actuators beyond their neighbors. When phase-retrieval algorithms are used to determine the desired shape of these mirrors, this cross-coupling, also known as the influence function (IF), is generally disregarded. We study this problem via simulations and bench tests for different target shapes to gain further insight into the phenomenon. Sound modeling of the IF effect is achieved, as highlighted by the agreement between the modeled and experimental results. In addition, we observe that the actuators' IF is a key parameter that determines the accuracy of the output light pattern. Finally, it is shown that in some cases it is possible to achieve better shaping by modifying the input irradiance of the phase-retrieval algorithm. The results obtained from this analysis open the door to further improvements in this type of beam-shaping system.
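    A common first approximation of the cross-coupling described above (not the paper's model; widths and positions here are invented) is a Gaussian influence function centred on each actuator, so the mirror surface is the weighted sum of all actuators' influence functions:

    ```python
    # Illustrative sketch: mirror surface as a superposition of Gaussian
    # influence functions, so a single actuator poke leaks onto neighbours.
    import math

    def mirror_surface(x, actuator_pos, commands, coupling_width=1.0):
        """Surface height at position x from actuator commands,
        assuming a Gaussian influence function per actuator."""
        return sum(c * math.exp(-((x - p) / coupling_width) ** 2)
                   for p, c in zip(actuator_pos, commands))

    positions = [0.0, 1.0, 2.0, 3.0]
    commands = [0.0, 1.0, 0.0, 0.0]     # poke a single actuator
    # The poke leaks onto the neighbouring actuator position by exp(-1).
    print(mirror_surface(1.0, positions, commands))             # 1.0
    print(round(mirror_surface(2.0, positions, commands), 3))   # 0.368
    ```

    Ignoring this leakage when inverting a phase-retrieval target into actuator commands is exactly the error source the paper quantifies.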

  19. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    PubMed

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss of accuracy additive faults are considered along with the loss of effectiveness multiplicative fault. A fuzzy adaptive controller based on back-stepping design is developed to deal with actuator failures and unknown system dynamics. However, an additional robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Spectral behavior of integrated optics asymmetric y-junction used for optimizing a planar optics telescope beam combiner

    NASA Astrophysics Data System (ADS)

    Schanen-Duport, Isabelle; Persegol, Dominique; Collomb, Virginie; Minier, Vincent; Haguenauer, Pierre

    2017-11-01

    Astronomical aperture synthesis requires combining beams coming from separate telescopes, with constraints on mechanical and thermal stability and on the accuracy of the measurement of the interference visibility. One well-adapted way of solving the problem is integrated planar optics. A first two-telescope beam combiner, made by the ion exchange technique on a glass substrate and built with symmetric Y-junctions, provides laboratory white-light interferograms simultaneously with photometric calibration. In order to increase the interferometric signal without loss of photometric output, we propose to replace the symmetric Y-junctions by asymmetric ones. In this paper, we report the design, manufacturing and characterization of an asymmetric Y-junction realized by ion exchange on a glass substrate. The specific application of astronomical interferometry requires the characterization of such a component in terms of spectral behavior, so we report the simulation and the measurement of the asymmetric Y-junction response versus wavelength.

  1. The recovery of weak impulsive signals based on stochastic resonance and moving least squares fitting.

    PubMed

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-07-29

    In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter, based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
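    A minimal sketch of the moving least squares step (not the authors' implementation; the Gaussian weight width is an invented parameter): around each evaluation point, a weighted linear model is fitted, with weights decaying with distance, so the reconstruction follows local trends without smearing sharp features globally.

    ```python
    # Illustrative sketch: moving least squares with Gaussian weights.
    # At each query point x, solve the 2x2 weighted normal equations for
    # a local line a + b*t and evaluate it at x.
    import math

    def mls_fit(xs, ys, x, h=1.0):
        s0 = s1 = s2 = t0 = t1 = 0.0
        for xi, yi in zip(xs, ys):
            w = math.exp(-((x - xi) / h) ** 2)   # locality weight
            s0 += w; s1 += w * xi; s2 += w * xi * xi
            t0 += w * yi; t1 += w * xi * yi
        det = s0 * s2 - s1 * s1
        a = (s2 * t0 - s1 * t1) / det            # local intercept
        b = (s0 * t1 - s1 * t0) / det            # local slope
        return a + b * x

    xs = [0.0, 0.5, 1.0, 1.5, 2.0]
    ys = [2 * x + 1 for x in xs]   # noiseless line: MLS reproduces it
    print(round(mls_fit(xs, ys, 1.25), 6))  # 3.5
    ```

    On SR output the same local fit smooths the noise-driven distortion while tracking the underlying impulsive waveform.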

  2. Evaluation of SMAP Level 2 Soil Moisture Algorithms Using SMOS Data

    NASA Technical Reports Server (NTRS)

    Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann; Shi, J. C.

    2011-01-01

    The objectives of the SMAP (Soil Moisture Active Passive) mission are global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolution, respectively. SMAP will provide soil moisture with a spatial resolution of 10 km with a 3-day revisit time at an accuracy of 0.04 m^3/m^3 [1]. In this paper we contribute to the development of the Level 2 soil moisture algorithm that is based on passive microwave observations by exploiting Soil Moisture Ocean Salinity (SMOS) satellite observations and products. SMOS brightness temperatures provide a global real-world, rather than simulated, test input for the SMAP radiometer-only soil moisture algorithm. Output of the potential SMAP algorithms will be compared to both in situ measurements and SMOS soil moisture products. The investigation will result in enhanced SMAP pre-launch algorithms for soil moisture.

  3. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking control of an unmanned surface vehicle suffering from unknown dynamics and external disturbances. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which requires neither a priori system knowledge nor tuning of input weights. Only the output weights of the SLFN need to be updated, by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
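    The core extreme-learning idea above can be sketched in a few lines (not the paper's adaptive update law; a plain batch least-squares fit with invented sizes and a toy target): hidden weights are drawn at random and frozen, and only the linear output weights are solved for.

    ```python
    # Illustrative sketch: an extreme learning machine (ELM) regressor.
    # Random, fixed tanh hidden layer; output weights from the ridge-
    # regularised normal equations, solved by Gaussian elimination.
    import math, random

    def solve(A, b):
        """Gaussian elimination with partial pivoting (small dense system)."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    def elm_fit(xs, ys, n_hidden=20, seed=0):
        rng = random.Random(seed)
        params = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n_hidden)]
        H = [[math.tanh(w * x + b) for (w, b) in params] for x in xs]
        # Normal equations H^T H beta = H^T y, with a tiny ridge term.
        HtH = [[sum(H[i][p] * H[i][q] for i in range(len(xs)))
                + (1e-8 if p == q else 0.0)
                for q in range(n_hidden)] for p in range(n_hidden)]
        Hty = [sum(H[i][p] * ys[i] for i in range(len(xs))) for p in range(n_hidden)]
        beta = solve(HtH, Hty)
        return lambda x: sum(bq * math.tanh(w * x + b)
                             for (w, b), bq in zip(params, beta))

    xs = [i / 20 for i in range(41)]          # 0 .. 2
    ys = [math.sin(3 * x) for x in xs]        # toy unknown nonlinearity
    f = elm_fit(xs, ys)
    err = max(abs(f(x) - y) for x, y in zip(xs, ys))
    print(round(err, 4))
    ```

    In the paper the same structure is used online: the hidden layer stays random and fixed, and only `beta` is adapted by Lyapunov-derived projection laws instead of batch least squares.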

  4. Influence of the level of description of the indoor environment on the characteristic parameters of a MIMO channel

    NASA Astrophysics Data System (ADS)

    Pereira, Carlos; Chartois, Yannick; Pousset, Yannis; Vauzelle, Rodolphe

    2006-09-01

    Modelling of the environment is an important factor in electromagnetic wave propagation simulation performed by a 3D ray-tracing method. The aim of this work is to study the effect of indoor environment modelling accuracy on MIMO (Multiple Input Multiple Output) channel characterisation. The first of the two environments investigated is the hall of our building, while the second one is a more confined environment and represents the floor of our laboratory. For these two indoor environments, three description levels are proposed in order to establish the impact of geometrical and electrical modelling on MIMO channel characterisation. Results are obtained by analysing the capacity and the variation in correlation in relation to the polarisation, the presence of LOS (line of sight) or NLOS (non-line of sight) configurations, the spacing between antennae and the number of transmitter and receiver antennae. To cite this article: C. Pereira et al., C. R. Physique 7 (2006).

  5. Design and evaluation of a GaAs MMIC X-band active RC quadrature power divider

    NASA Astrophysics Data System (ADS)

    Henkus, J. C.

    1991-03-01

    The design and evaluation of a GaAs MMIC (Microwave Monolithic Integrated Circuit) X-band active RC Quadrature Power Divider (QPD) is addressed. This QPD can be used as part of a vector modulator. The chosen QPD topology consists of two active first order RC all pass networks and was converted into an MMIC design. The design is completely symmetrical except for two key resistors. On-wafer S parameter measurements were carried out; a special probe head configuration was composed in order to avoid measurement accuracy degradation associated with the reversal of the active output of the QPD. The measured nominal RF behavior of the chips complies with the simulated behavior to a very high degree. The optical, DC, and RF yield is very large (97, 83, and 74 percent respectively). A modification to Takashi's all pass network was proposed which offers gain/frequency slope control and compensation ability.

  6. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for the dose distribution induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy. An application was reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport in the Monte Carlo simulation was checked by comparing the dose distributions in the gel dosimeter between simulation and experiment.

  7. The Harp probe - An in situ Bragg scattering sensor

    NASA Technical Reports Server (NTRS)

    Mollo-Christensen, E.; Huang, N. E.; Long, S. R.; Bliven, L. F.

    1984-01-01

    A wave sensor, consisting of parallel, evenly spaced capacitance wires, whose output is the sum of the water surface deflections at the wires, has been built and tested in a wave tank. The probe output simulates Bragg scattering of electromagnetic waves from a water surface with waves; it can be used to simulate electromagnetic probing of the sea surface by radar. The study establishes that the wave probe, called the 'Harp' for short, will simulate Bragg scattering and that it can also be used to study nonlinear wave processes.

  8. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is called "partial" because adjustments are made for the linear effects of all the other input values in the calculation of the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss-of-crew-life (LOCL) events. The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
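    The PRCC computation described above can be sketched for the two-input case (not the IMM code; toy data invented for illustration): rank-transform all values, regress the "other" input's ranks out of both the input of interest and the output, and correlate the residuals.

    ```python
    # Illustrative sketch: partial rank correlation coefficient (PRCC)
    # for two inputs, built from ranks, simple linear regression residuals,
    # and Pearson correlation.

    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    def residuals(y, x):
        """Residuals of the simple linear regression of y on x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    def prcc(x, other, y):
        rx, ro, ry = ranks(x), ranks(other), ranks(y)
        return corr(residuals(rx, ro), residuals(ry, ro))

    # y is monotone in x1 (x1 squared dominates), so PRCC of x1 is ~1
    # even though the x1-y relationship is nonlinear.
    x1 = [1, 3, 2, 5, 4, 7, 6, 9, 8, 10]
    x2 = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
    y = [a ** 2 + 0.1 * b for a, b in zip(x1, x2)]
    print(round(prcc(x1, x2, y), 3))  # 1.0
    ```

    Working on ranks rather than raw values is what lets PRCC handle the nonlinear input-output relationships of models like the IMM.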

  9. The MICE grand challenge lightcone simulation - I. Dark matter clustering

    NASA Astrophysics Data System (ADS)

    Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.

    2015-04-01

    We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h^-1)^3 comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.

  10. Study of electrode slice forming of bicycle dynamo hub power connector

    NASA Astrophysics Data System (ADS)

    Chen, Dyi-Cheng; Jao, Chih-Hsuan

    2013-12-01

    Taiwan's bicycle industry has earned the country an international reputation as a bicycle kingdom, and with global warming driving the rise of green energy, the development of the hub dynamo electrode slice and its power output connector brings new opportunity to the bicycle industry. In this study, patents related to power output connectors were gathered, and the collected documents served as the basis for a design that realizes the power output with the simplest connector using the fewest structural components. The design objectives for the connector were lowest cost, strongest structure and highest output efficiency. The computer-aided drawing software SolidWorks was used to establish 3D models of the power output connector parts; the overall assembly considered the assembly concept, weather resistance, water resistance, corrosion resistance, vibration resistance and power flow stability. The 3D model was then imported into computer-aided finite element analysis software to simulate the expected manufacturing process of the power output connector parts. A series of simulation analyses, in which the variables were first-stage and second-stage forming, were run to examine the effective stress, effective strain, press speed, and die radial load distribution when forming the electrode slice of a bicycle dynamo hub.

  11. The novel application of Benford's second order analysis for monitoring radiation output in interventional radiology.

    PubMed

    Cournane, S; Sheehy, N; Cooke, J

    2014-06-01

    Benford's law is an empirical observation which predicts the expected frequency of digits in naturally occurring datasets spanning multiple orders of magnitude, with the law having been most successfully applied as an audit tool in accountancy. This study investigated the sensitivity of the technique in identifying system output changes using simulated changes in interventional radiology Dose-Area-Product (DAP) data, with any deviations from Benford's distribution identified using z-statistics. The radiation output of interventional radiology X-ray equipment is monitored annually during quality control testing; however, for a considerable portion of the year an increased output of the system, potentially caused by engineering adjustments or spontaneous system faults, may go unnoticed, leading to a potential increase in the radiation dose to patients. In normal operation, recorded examination radiation outputs vary over multiple orders of magnitude, rendering normal statistics ineffective for detecting systematic changes in the output. In this work, the annual DAP datasets complied with Benford's first-order law for first, second and combinations of the first and second digits. Further, a continuous 'rolling' second-order technique was devised for trending simulated changes over shorter timescales. This distribution analysis, the first employment of the method for radiation output trending, detected significant changes simulated on the original data, proving the technique useful in this case. The potential is demonstrated for implementation of this novel analysis for monitoring and identifying change in suitable datasets for the purpose of system process control. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
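    The first-order building blocks of the analysis above are standard and can be sketched directly (a minimal sketch, not the paper's code; the continuity-corrected z-statistic follows the form commonly used in Benford auditing):

    ```python
    # Illustrative sketch: Benford expected first-digit frequencies and a
    # z-statistic for the deviation of an observed digit proportion.
    import math

    def benford_expected(d):
        """P(first significant digit = d) = log10(1 + 1/d), d in 1..9."""
        return math.log10(1 + 1 / d)

    def z_stat(observed_prop, d, n):
        """z-statistic with continuity correction for n observations."""
        p = benford_expected(d)
        se = math.sqrt(p * (1 - p) / n)
        return (abs(observed_prop - p) - 1 / (2 * n)) / se

    def first_digit(x):
        """Leading digit of a positive number >= 1."""
        while x >= 10:
            x //= 10
        return int(x)

    print(round(benford_expected(1), 3))  # 0.301
    print(first_digit(6340))              # 6
    ```

    DAP values span several orders of magnitude, which is precisely the regime where the log10(1 + 1/d) distribution applies; a large z-statistic for any digit then flags a possible output change.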

  12. Operational performance of a low cost, air mass 2 solar simulator

    NASA Technical Reports Server (NTRS)

    Yass, K.; Curtis, H. B.

    1975-01-01

    Modifications and improvements on a low cost air mass 2 solar simulator are discussed. The performance characteristics of total irradiance, uniformity of irradiance, spectral distribution, and beam subtense angle are presented. The simulator consists of an array of tungsten halogen lamps hexagonally spaced in a plane. A corresponding array of plastic Fresnel lenses shapes the output beam such that the simulator irradiates a 1.2 m by 1.2 m area with uniform collimated irradiance. Details are given concerning individual lamp output measurements and placement of the lamps. Originally, only the direct component of solar irradiance was simulated. Since the diffuse component may affect the performance of some collectors, the capability to simulate it is being added. An approach to this diffuse addition is discussed.

  13. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: the horizontal (azimuth) and vertical (elevation) viewing angles of the pilot; refraction by the eye and cornea; changes in the distance from eye to camera; the illumination intensity of light on the eye; the counting sensitivity of the scan lines used to measure diameter; and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  14. Application of a stochastic snowmelt model for probabilistic decisionmaking

    NASA Technical Reports Server (NTRS)

    Mccuen, R. H.

    1983-01-01

    A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single-valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decision-making, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.

  15. Laboratory Performance Evaluation Report of SEL 421 Phasor Measurement Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; faris, Anthony J.; Martin, Kenneth E.

    2007-12-01

    PNNL and BPA have been in close collaboration on laboratory performance evaluation of phasor measurement units for over ten years. A series of evaluation tests are designed to confirm accuracy and determine measurement performance under a variety of conditions that may be encountered in actual use. Ultimately the testing conducted should provide parameters that can be used to adjust all measurements to a standardized basis. These tests are performed with a standard relay test set using recorded files of precisely generated test signals. The test set provides test signals at a level and in a format suitable for input to a PMU that accurately reproduces the signals in both signal amplitude and timing. Test set outputs are checked to confirm the accuracy of the output signal. The recorded signals include both current and voltage waveforms and a digital timing track used to relate the PMU measured value with the test signal. Test signals include steady-state waveforms to test amplitude, phase, and frequency accuracy, modulated signals to determine measurement and rejection bands, and step tests to determine timing and response accuracy. Additional tests are included as necessary to fully describe the PMU operation. Testing is done with a BPA phasor data concentrator (PDC) which provides communication support and monitors data input for dropouts and data errors.

  16. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- has been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
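    The local-accuracy idea can be sketched in one dimension (an illustrative sketch only: the brief solves a quadratic program for the weights, whereas here the weights are simply the normalised local accuracies, and the data are invented):

    ```python
    # Illustrative sketch: weight each classifier by its accuracy on the
    # query's k nearest validation neighbours.

    def local_weights(query, val_x, val_y, classifiers, k=3):
        near = sorted(range(len(val_x)), key=lambda i: abs(val_x[i] - query))[:k]
        accs = [sum(clf(val_x[i]) == val_y[i] for i in near) / k
                for clf in classifiers]
        total = sum(accs) or 1.0          # guard against all-zero accuracies
        return [a / total for a in accs]

    # Two toy 1-D classifiers, each an expert on one side of the origin.
    clf_lo = lambda x: 0                  # always predicts class 0
    clf_hi = lambda x: 1                  # always predicts class 1
    val_x = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
    val_y = [0, 0, 0, 1, 1, 1]

    print(local_weights(-2.5, val_x, val_y, [clf_lo, clf_hi]))  # [1.0, 0.0]
    print(local_weights(2.5, val_x, val_y, [clf_lo, clf_hi]))   # [0.0, 1.0]
    ```

    The QP formulation in the brief plays the same role as the normalisation here, but additionally guarantees optimal nonnegative weights under the chosen objective.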

  17. A hardware-in-the-loop simulation program for ground-based radar

    NASA Astrophysics Data System (ADS)

    Lam, Eric P.; Black, Dennis W.; Ebisu, Jason S.; Magallon, Julianna

    2011-06-01

    A radar system created using an embedded computer system needs testing. The way to test an embedded computer system is different from the debugging approaches used on desktop computers. One way to test a radar system is to feed it artificial inputs and analyze the outputs of the radar. More often than not, not all of the building blocks of the radar system are available to test. This requires the engineer to test parts of the radar system using a "black box" approach. A common way to test software code in a desktop simulation is to use breakpoints so that it pauses after each cycle through its calculations. The outputs are compared against the values that are expected. This requires the engineer to use valid test scenarios. We will present a hardware-in-the-loop simulator that allows the embedded system to think it is operating with real-world inputs and outputs. From the embedded system's point of view, it is operating in real time. The hardware-in-the-loop simulation is based on our Desktop PC Simulation (PCS) testbed. In the past, PCS was used for ground-based radars. This embedded simulation, called Embedded PCS, allows a rapid simulated evaluation of ground-based radar performance in a laboratory environment.

  18. A circuit-based photovoltaic module simulator with shadow and fault settings

    NASA Astrophysics Data System (ADS)

    Chao, Kuei-Hsiang; Chao, Yuan-Wei; Chen, Jyun-Ping

    2016-03-01

    The main purpose of this study was to develop a photovoltaic (PV) module simulator. The proposed simulator, using electrical parameters from solar cells, can simulate output characteristics not only under normal operating conditions but also under partial shadow and fault conditions. Such a simulator offers the advantages of low cost, small size, and easy realization. Experiments have shown that results from the proposed PV simulator are very close to those from simulation software during partial shadow conditions, with negligible differences during fault occurrence. Meanwhile, the PV module simulator, as developed, can be used with various types of series-parallel connections to form PV arrays, to conduct experiments on partial shadow and fault events occurring in some of the modules. Such experiments are designed to explore the impact of shadow and fault conditions on the output characteristics of the system as a whole.
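
    As a rough illustration of the kind of characteristic such a simulator reproduces, the textbook single-diode cell model below computes an I-V curve and the power curve whose maximum is the maximum power point. All parameter values are assumed for illustration and are not taken from the paper:

    ```python
    import numpy as np

    def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.01, Rsh=100.0, n=1.3, Vt=0.02585):
        """Solve the implicit single-diode equation
           I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
        by damped fixed-point iteration (damping keeps it stable near Voc)."""
        V = np.asarray(V, dtype=float)
        I = np.zeros_like(V)
        for _ in range(200):
            f = Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
            I += 0.3 * (f - I)
        return I

    V = np.linspace(0.0, 0.7, 200)   # cell voltage sweep [V]
    I = pv_current(V)                # cell current [A]; ~Iph at short circuit
    P = V * I                        # power curve; its maximum is the MPP
    ```

    Partial shading can be mimicked by reducing `Iph` for the shaded cells of a series string, which is what produces the multi-peaked array curves such simulators are built to reproduce.
    
    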

  19. Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach

    DOE PAGES

    Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly

    2017-03-20

    Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed, and they must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using commonly used economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.
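
    For context on the non-survey approaches the abstract contrasts itself with: a classic ingredient is the simple location quotient (SLQ) adjustment of national technical coefficients. The sketch below, with hypothetical employment data, shows that background step only; it is not the paper's integrated estimator:

    ```python
    import numpy as np

    def slq_regionalize(A_nat, region_emp, nation_emp):
        """Scale national technical coefficients A_nat[i, j] (input from
        industry i per unit output of industry j) by the simple location
        quotient SLQ_i = (regional share of industry i) / (national share).
        SLQ_i >= 1 leaves a row unchanged; SLQ_i < 1 shrinks it, with the
        shortfall interpreted as imports from other regions."""
        r_share = region_emp / region_emp.sum()
        n_share = nation_emp / nation_emp.sum()
        slq = r_share / n_share
        scale = np.minimum(slq, 1.0)      # never scale above the national value
        return A_nat * scale[:, None]
    ```

    The paper's point is that estimating the table and the interregional trade flows jointly, rather than via independent row-scaling steps like this one, improves accuracy.
    
    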

  20. Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly

    Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed, and they must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using commonly used economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.

  1. Derivation of global vegetation biophysical parameters from EUMETSAT Polar System

    NASA Astrophysics Data System (ADS)

    García-Haro, Francisco Javier; Campos-Taberner, Manuel; Muñoz-Marí, Jordi; Laparra, Valero; Camacho, Fernando; Sánchez-Zapero, Jorge; Camps-Valls, Gustau

    2018-05-01

    This paper presents the algorithm developed in LSA-SAF (Satellite Application Facility for Land Surface Analysis) for the derivation of global vegetation parameters from the AVHRR (Advanced Very High Resolution Radiometer) sensor on board MetOp (Meteorological-Operational) satellites forming the EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Polar System (EPS). The suite of LSA-SAF EPS vegetation products includes the leaf area index (LAI), the fractional vegetation cover (FVC), and the fraction of absorbed photosynthetically active radiation (FAPAR). LAI, FAPAR, and FVC characterize the structure and the functioning of vegetation and are key parameters for a wide range of land-biosphere applications. The algorithm is based on a hybrid approach that blends the generalization capabilities offered by physical radiative transfer models with the accuracy and computational efficiency of machine learning methods. One major feature is the implementation of multi-output retrieval methods able to jointly and more consistently estimate all the biophysical parameters at the same time. We propose a multi-output Gaussian process regression (GPRmulti), which outperforms other considered methods over PROSAIL (coupling of PROSPECT and SAIL (Scattering by Arbitrary Inclined Leaves) radiative transfer models) EPS simulations. The global EPS products include uncertainty estimates taking into account the uncertainty captured by the retrieval method and input error propagation. A sensitivity analysis is performed to assess several sources of uncertainty in retrievals and maximize the positive impact of modeling the noise in training simulations. The paper discusses initial validation studies and provides details about the characteristics and overall quality of the products, which can be of interest to assist the successful use of the data by a broad user community. 
The consistent generation and distribution of the EPS vegetation products will constitute a valuable tool for monitoring of earth surface dynamic processes.

  2. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with the heavy rainfall of 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and yields reliable results, owing to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
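
    The Monte Carlo link between parameter uncertainty and landslide probability can be sketched with a basic infinite-slope stability model. The parameter intervals and the slope model below are illustrative assumptions, far simpler than the paper's physics (which includes rainfall-driven hydrology):

    ```python
    import numpy as np

    def failure_probability(slope_deg, soil_depth, n=10000, seed=0,
                            cohesion_rng=(5e3, 15e3),   # Pa, assumed interval
                            phi_rng=(25.0, 35.0),       # deg, assumed interval
                            gamma=18e3):                # unit weight [N/m^3]
        """Infinite-slope factor of safety
            Fs = (c + W*cos(b)^2*tan(phi)) / (W*sin(b)*cos(b)),  W = gamma*z,
        sampled over uniform cohesion/friction intervals. The landslide
        probability is the fraction of draws with Fs < 1."""
        rng = np.random.default_rng(seed)
        b = np.radians(slope_deg)
        c = rng.uniform(*cohesion_rng, n)
        phi = np.radians(rng.uniform(*phi_rng, n))
        W = gamma * soil_depth
        fs = (c + W * np.cos(b) ** 2 * np.tan(phi)) / (W * np.sin(b) * np.cos(b))
        return np.mean(fs < 1.0)
    ```

    A gentle, thin slope returns probability 0, a steep, deep one returns 1, and intermediate cases return fractional probabilities, which is exactly the graded output the abstract argues for in place of a single deterministic Fs.
    
    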

  3. Surrogate modeling of deformable joint contact using artificial neural networks.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2015-09-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
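
    A miniature version of the surrogate idea: sample input-output pairs from a stand-in for the slow contact model, then train a one-hidden-layer feed-forward network by plain gradient descent. The force law, scaling constants, and network size are all invented for illustration; the paper uses special sampling over multiple input domains and standard ANN training tools:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def contact_force(depth):
        """Toy stand-in for the slow elastic-foundation model: a
        Hertz-like force law (illustrative only)."""
        return 2.0e3 * np.maximum(depth, 0.0) ** 1.5

    # Gather input-output samples from the "slow" model, scaled to [0, 1].
    depth = rng.uniform(0.0, 0.01, (400, 1))   # penetration depth [m]
    force = contact_force(depth)               # contact force [N]
    Xn, yn = depth / 0.01, force / force.max()

    # One-hidden-layer net, trained with full-batch gradient descent.
    H, lr = 16, 0.1
    W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)
    for _ in range(5000):
        h = np.tanh(Xn @ W1 + b1)
        err = (h @ W2 + b2) - yn
        gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    def surrogate_force(d):
        """Fast surrogate for inputs of shape (n, 1): two small matrix
        products replace a contact solve."""
        h = np.tanh((np.asarray(d, dtype=float) / 0.01) @ W1 + b1)
        return (h @ W2 + b2) * force.max()
    ```

    The payoff in the paper's setting is speed: once trained, evaluating the network is orders of magnitude cheaper than the fine-grid elastic foundation solve it replaces.
    
    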

  4. Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks

    PubMed Central

    Eskinazi, Ilan; Fregly, Benjamin J.

    2016-01-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591

  5. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  6. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which was superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  7. Multi-class biological tissue classification based on a multi-classifier: Preliminary study of an automatic output power control for ultrasonic surgical units.

    PubMed

    Youn, Su Hyun; Sim, Taeyong; Choi, Ahnryul; Song, Jinsung; Shin, Ki Young; Lee, Il Kwon; Heo, Hyun Mu; Lee, Daeweon; Mun, Joung Hwan

    2015-06-01

    Ultrasonic surgical units (USUs) have the advantage of minimizing tissue damage during surgeries that require tissue dissection by reducing problems such as coagulation and unwanted carbonization, but the disadvantage of requiring manual adjustment of power output according to the target tissue. In order to overcome this limitation, it is necessary to determine the properties of in vivo tissues automatically. We propose a multi-classifier that can accurately classify tissues based on the unique impedance of each tissue. For this purpose, a multi-classifier was built from single classifiers with high classification rates, and the classification accuracy of the proposed model was compared with that of single classifiers for various electrode types (Type-I: 6 mm invasive; Type-II: 3 mm invasive; Type-III: surface). The sensitivity and positive predictive value (PPV) of the multi-classifier were determined by cross-checks. According to the 10-fold cross validation results, the classification accuracy of the proposed model was significantly higher (p<0.05 or <0.01) than that of existing single classifiers for all electrode types. In particular, the classification accuracy of the proposed model was highest when the 3 mm invasive electrode (Type-II) was used (sensitivity = 97.33-100.00%; PPV = 96.71-100.00%). The results of this study are an important contribution to achieving automatic optimal output power adjustment of USUs according to the properties of individual tissues. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Examining impulse-variability in overarm throwing.

    PubMed

    Urbin, M A; Stodden, David; Boros, Rhonda; Shannon, David

    2012-01-01

    The purpose of this study was to examine variability in overarm throwing velocity and spatial output error at various percentages of maximum to test the prediction of an inverted-U function as predicted by impulse-variability theory and a speed-accuracy trade-off as predicted by Fitts' law. Thirty subjects (16 skilled, 14 unskilled) were instructed to throw a tennis ball at seven percentages of their maximum velocity (40-100%) in random order (9 trials per condition) at a target 30 feet away. Throwing velocity was measured with a radar gun and interpreted as an index of overall systemic power output. Within-subject throwing velocity variability was examined using within-subjects repeated-measures ANOVAs (7 repeated conditions) with built-in polynomial contrasts. Spatial error was analyzed using mixed model regression. Results indicated a quadratic fit, with variability in throwing velocity increasing from 40% up to 60%, where it peaked, and then decreasing at each subsequent interval to maximum (p < .001, η² = .555). There was no linear relationship between speed and accuracy. Overall, these data support the notion of an inverted-U function in overarm throwing velocity variability as both skilled and unskilled subjects approach maximum effort. However, these data do not support the notion of a speed-accuracy trade-off. The consistent demonstration of an inverted-U function associated with systemic power output variability indicates an enhanced capability to regulate aspects of force production and relative timing between segments as individuals approach maximum effort, even in a complex ballistic skill.

  9. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  10. Simulation of medical Q-switch flash-pumped Er:YAG laser

    NASA Astrophysics Data System (ADS)

    Wang, Yan-lin; Huang, Chuyun; Yao, Yucheng; Zou, Xiaolin

    2011-01-01

    The Er:YAG laser emits at 2940 nm, a wavelength that is strongly absorbed by water, with an absorption coefficient as high as 13000 cm⁻¹. Because of this strong water absorption, the erbium laser achieves a shallow penetration depth and causes little injury to surrounding tissue in most soft and hard tissues. At the same time, the interaction between 2940 nm radiation and water-saturated biological tissue is equivalent to instantaneous heating within a limited volume, producing micro-explosions that remove tissue. Different parameters can be set to cut enamel, dentin, caries, and soft tissue. For the development and optimization of laser systems, laser modeling is a practical way to predict the influence of various parameters on laser performance. To address the low output power of current erbium lasers, the performance of a flash-pumped Er:YAG laser was simulated to predict optical output theoretically. A rate-equation model was derived and used to predict the evolution of population densities in the various manifolds, and Q-switched laser output was simulated for different design parameters. Results showed that the Er:YAG laser can achieve a maximum average output power of 9.8 W under the given parameters. The model can be used to identify laser systems that meet application requirements.
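
    The Q-switched rate-equation approach can be illustrated in dimensionless form with two coupled equations for inversion and photon density. The paper's Er:YAG model tracks several manifolds with real material constants; everything below (units, initial values, time step) is illustrative:

    ```python
    # Normalized Q-switched rate equations (threshold inversion = 1):
    #   dn/dt   = -n*phi          (inversion depleted by stimulated emission)
    #   dphi/dt =  n*phi - phi    (photon gain minus cavity loss)
    n, phi = 3.0, 1e-6            # start above threshold with a photon seed
    dt = 1e-3
    phi_hist = []
    for _ in range(20000):        # forward-Euler integration to t = 20
        n, phi = n - n * phi * dt, phi + (n * phi - phi) * dt
        phi_hist.append(phi)
    phi_max = max(phi_hist)       # giant-pulse peak (analytically n0 - 1 - ln(n0))
    ```

    The run reproduces the classic giant-pulse behavior: the photon density spikes once the stored inversion is released, then the cavity empties and the inversion freezes below threshold.
    
    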

  11. Integrating pixel- and polygon-based approaches to wildfire risk assessment: Application to a high-value watershed on the Pike and San Isabel National Forests, Colorado, USA

    Treesearch

    Matthew P. Thompson; Julie W. Gilbertson-Day; Joe H. Scott

    2015-01-01

    We develop a novel risk assessment approach that integrates complementary, yet distinct, spatial modeling approaches currently used in wildfire risk assessment. Motivation for this work stems largely from limitations of existing stochastic wildfire simulation systems, which can generate pixel-based outputs of fire behavior as well as polygon-based outputs of simulated...

  12. Analytic model for academic research productivity having factors, interactions and implications

    PubMed Central

    2011-01-01

    Financial support is dear in academia and will tighten further. How can the research mission be accomplished within new restraints? A model is presented for evaluating source components of academic research productivity. It comprises six factors: funding; investigator quality; efficiency of the research institution; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Their interactions produce output and patterned influences between factors. Strategies for optimizing output are enabled. PMID:22130145

  13. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

    The high resolution satellite with the longer focal length and the larger aperture has been widely used in georeferencing of the observed scene in recent years. The consistent end to end model of high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three line array camera, the platform including attitude and position information, the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered and the simulation method of the geolocation accuracy is put forward by introduce the new index of the angle between the camera and the star tracker. The model is validated by the geolocation accuracy simulation according to the test method of the ZY-3 satellite imagery rigorously. The simulation results show that the geolocation accuracy is within 25m, which is highly consistent with the test results. The geolocation accuracy can be improved about 7 m by the integrated design. The model combined with the simulation method is applicable to the geolocation accuracy estimate before the satellite launching.

  14. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks such as plotting and timeline generation with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, K; Yu, Z; Chen, H

    Purpose: To implement VMAT in RayStation with the Elekta Synergy linac with the new Agility MLC, and to utilize the same vendor software to determine the optimum Elekta VMAT machine parameters in RayStation for accurate modeling and robust delivery. Methods: iCOMCat is utilized to create various beam patterns with user-defined dose rate, gantry, MLC, and jaw speed for each control point. The accuracy and stability of the output and beam profile are quantified for each isolated functional component of VMAT delivery using an ion chamber and Profiler2 with an isocentric mounting fixture. Service graphing on the linac console is used to verify the mechanical motion accuracy. The determined optimum Elekta VMAT machine parameters were configured in RayStation v4.5.1. To evaluate the overall system performance, TG-119 test cases and nine retrospective VMAT patients were planned on RayStation and validated using both ArcCHECK (with plug and ion chamber) and MapCHECK2. Results: Machine output and profile vary <0.3% when the only variable is dose rate (35-600 MU/min). <0.9% output and <0.3% profile variation are observed with additional gantry motion (0.53-5.8 deg/s in both directions). The output and profile variation are still <1% with additional slow leaf motion (<1.5 cm/s in both directions). However, the profile becomes less symmetric, and >1.5% output and 7% profile deviation are seen with >2.5 cm/s leaf motion. All clinical cases achieved plan quality comparable to the treated IMRT plans. The gamma passing rate is 99.5±0.5% on ArcCHECK (<3% isocenter dose deviation) and 99.1±0.8% on MapCHECK2 using 3%/3mm gamma (10% lower threshold). Mechanical motion accuracy in all VMAT deliveries is <1°/1mm. Conclusion: Accurate RayStation modeling and robust VMAT delivery are achievable on the Elekta Agility for <2.5 cm/s leaf motion and the full range of dose rate and gantry speed determined by the same vendor software. Our TG-119 and patient results have provided us with the confidence to use VMAT clinically.

  16. Development and validation of equations utilizing lamb vision system output to predict lamb carcass fabrication yields.

    PubMed

    Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C

    2004-07-01

    This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC also was assessed, and results suggested that use of LVS-CCC provided reasonably accurate (R² = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area. 
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.

  17. The light output and the detection efficiency of the liquid scintillator EJ-309.

    PubMed

    Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G

    2014-07-01

    The light output response and the neutron and gamma-ray detection efficiency are determined for liquid scintillator EJ-309. The light output function is compared to those of previous studies. Experimental efficiency results are compared to predictions from GEANT4, MCNPX and PENELOPE Monte Carlo simulations. The differences associated with the use of different light output functions are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Time-dependent multi-dimensional simulation studies of the electron output scheme for high power FELs

    NASA Astrophysics Data System (ADS)

    Hahn, S. J.; Fawley, W. M.; Kim, K. J.; Edighoffer, J. A.

    1994-12-01

    The authors examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. The authors have carried out an extensive simulation study by using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator is necessary for the optimum output power.

  19. Bayesian History Matching of Complex Infectious Disease Models Using Emulation: A Tutorial and a Case Study on HIV in Uganda

    PubMed Central

    Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.

    2015-01-01

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22-input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was substantially smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
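
    The implausibility calculation at the heart of history matching can be shown in a few lines. The cutoff of 3 is the conventional choice; for simplicity the one-dimensional "emulator" below is exact (a Gaussian process emulator would supply the mean and variance in practice):

    ```python
    import numpy as np

    def implausibility(emu_mean, emu_var, z, obs_var, disc_var=0.0):
        """I(x) = |z - E[f(x)]| / sqrt(Var_emulator + Var_obs + Var_discrepancy).
        Inputs with I(x) above the cutoff (commonly 3) are discarded as
        implausible; the rest form the non-implausible region carried into
        the next history-matching wave."""
        return np.abs(z - emu_mean) / np.sqrt(emu_var + obs_var + disc_var)

    # One wave over a 1-D toy simulator f(x) = x**2.
    x = np.linspace(-3.0, 3.0, 61)
    emu_mean = x ** 2                        # emulator expectation (exact here)
    emu_var = np.full_like(x, 0.01)          # emulator uncertainty
    z, obs_var = 4.0, 0.04                   # observed output and its variance
    I = implausibility(emu_mean, emu_var, z, obs_var)
    nonimp = x[I < 3.0]                      # retained (non-implausible) inputs
    ```

    Even in this toy case the retained region shrinks to two small neighborhoods around x = ±2; successive waves with refined emulators repeat the cut, which is how the case study's input space was reduced so dramatically.
    
    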

  20. Dual Brushless Resolver Rate Sensor

    NASA Technical Reports Server (NTRS)

    Howard, David E. (Inventor)

    1997-01-01

    A resolver rate sensor is disclosed in which dual brushless resolvers are mechanically coupled to the same output shaft. Diverse inputs are provided to each resolver by providing the first resolver with a DC input and the second resolver with an AC sinusoidal input. A trigonometric identity, in which the sum of the squares of the sine and cosine components equals one, is used to advantage in providing a sensor of increased accuracy. The first resolver may have a fixed or variable DC input to permit dynamic adjustment of resolver sensitivity, thus permitting a wide range of coverage. In one embodiment of the invention, the outputs of the first resolver are inputted directly into two separate multipliers, and the outputs of the second resolver are inputted into the two separate multipliers after being demodulated in a pair of demodulator circuits. The multiplied signals are then added in an adder circuit to provide a direction-sensitive output. In another embodiment, the outputs from the first resolver are modulated in separate modulator circuits, and the outputs from the modulator circuits are used to excite the second resolver. The outputs from the second resolver are demodulated in separate demodulator circuits and added in an adder circuit to provide a direction-sensitive rate output.
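
    The trigonometric identity the sensor exploits can be checked numerically: with resolver channels sin θ and cos θ, the combination cos θ · d(sin θ)/dt - sin θ · d(cos θ)/dt equals dθ/dt exactly, independent of the shaft angle. Finite differences stand in for the demodulated rate channels in this sketch; the signal shapes are invented for illustration:

    ```python
    import numpy as np

    t = np.linspace(0.0, 1.0, 10001)
    dt = t[1] - t[0]
    theta = 2.0 * t + 0.5 * np.sin(2.0 * np.pi * t)     # shaft angle [rad]
    omega_true = 2.0 + np.pi * np.cos(2.0 * np.pi * t)  # true rate d(theta)/dt

    s, c = np.sin(theta), np.cos(theta)                 # resolver channels
    ds, dc = np.gradient(s, dt), np.gradient(c, dt)     # their time derivatives

    # cos*d(sin)/dt - sin*d(cos)/dt = omega*(sin^2 + cos^2) = omega
    omega_est = c * ds - s * dc
    ```

    Because the sin² + cos² factor is identically one, the estimate tracks a time-varying rate with sign information and no dependence on the shaft position, which is the accuracy advantage the patent claims.
    
    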
