DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, He; Lv, Hongliang; Guo, Hui, E-mail: hguan@stu.xidian.edu.cn
2015-11-21
Impact ionization affects the radio-frequency (RF) behavior of high-electron-mobility transistors (HEMTs) with narrow-bandgap semiconductor channels, and this necessitates complex parameter extraction procedures for HEMT modeling. In this paper, an enhanced small-signal equivalent circuit model is developed to investigate impact ionization, and an improved method is presented in detail for direct extraction of the intrinsic parameters using two-step measurements in the low-frequency and high-frequency regimes. The practicability of the enhanced model and the proposed direct parameter extraction method is verified by comparing the simulated S-parameters with published experimental data from an InAs/AlSb HEMT operating over a wide frequency range. The results demonstrate that the enhanced model, with optimal intrinsic parameter values obtained by the direct extraction approach, can effectively characterize the effects of impact ionization on the RF performance of HEMTs.
NASA Technical Reports Server (NTRS)
Dudkin, V. E.; Kovalev, E. E.; Nefedov, N. A.; Antonchik, V. A.; Bogdanov, S. D.; Kosmach, V. F.; Likhachev, A. YU.; Benton, E. V.; Crawford, H. J.
1995-01-01
A method is proposed for finding the dependence of mean multiplicities of secondaries on the nucleus-collision impact parameter from the data on the total interaction ensemble. The impact parameter has been shown to completely define the mean characteristics of an individual interaction event. A difference has been found between experimental results and the data calculated in terms of the cascade-evaporation model at impact-parameter values below 3 fm.
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.
Molybdenum electron impact width parameter measurement by laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Sternberg, E. M. A.; Rodrigues, N. A. S.; Amorim, J.
2016-01-01
In this work, we suggest a method for electron impact width parameter calculation based on the Stark broadening of emission lines of a laser-ablated plasma plume. First, the electron density and temperature must be evaluated by means of the Saha-Boltzmann plot method for the neutral and ionized species of the plasma. The method was applied to a laser-ablated molybdenum plasma plume. For the molybdenum plasma, whose electron temperature varies around 10,000 K and whose electron density reaches values around 10¹⁸ cm⁻³, and considering that the total measured line broadening was due mainly to experimental and Stark broadening, the electron impact width parameter of the molybdenum emission lines was determined to be (0.01 ± 0.02) nm. To validate the presented method, the laser-ablated aluminum plasma plume was analyzed, and the results obtained were in agreement with values predicted in the literature.
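The extraction step described above can be sketched with the common first-order LIBS relation between the measured Stark FWHM and the electron density, Δλ ≈ 2w(Nₑ/N_ref) with N_ref = 10¹⁶ cm⁻³. This is a minimal sketch under stated assumptions (the function name is illustrative, and the quadratic ion-broadening term is neglected); it is not the authors' code:

```python
def electron_impact_width(stark_fwhm_nm, n_e_cm3, n_ref_cm3=1.0e16):
    """Electron impact width parameter w from the first-order Stark relation
    FWHM ~ 2 * w * (n_e / n_ref), neglecting the ion-broadening term."""
    return stark_fwhm_nm / (2.0 * (n_e_cm3 / n_ref_cm3))

# Hypothetical example: for n_e ~ 1e18 cm^-3 (the density scale reported for
# the Mo plume), a 2 nm Stark FWHM would correspond to w = 0.01 nm.
w = electron_impact_width(2.0, 1.0e18)
```

In practice the Stark contribution must first be deconvolved from the instrumental broadening, as the abstract notes.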
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Study on validation method for femur finite element model under multiple loading conditions
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu
2018-03-01
Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, the material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results agreed well with the force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method was able to effectively reduce the computation time in the inverse identification of material parameters. Meanwhile, the material parameters obtained by the proposed method achieved higher accuracy.
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
Bullwhip effect is an increase of the variance of demand fluctuation from the downstream to the upstream of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study, a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
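A toy simulation in the spirit of such spreadsheet scenarios: an order-up-to policy with a moving-average forecast, where the bullwhip ratio is Var(orders)/Var(demand). This is a hypothetical minimal sketch, not the Bullwhip Explorer implementation; the policy form and parameter names are assumptions:

```python
import random
import statistics

def moving_average_forecast(history, p):
    """Mean of the last p demand observations."""
    return sum(history[-p:]) / p

def simulate_bullwhip(demands, p=4, lead_time=2):
    """Order-up-to policy with a moving-average forecast: each order covers
    current demand plus the forecast change amplified over the lead time.
    Returns the bullwhip ratio Var(orders) / Var(demand)."""
    orders = []
    for t in range(p, len(demands)):
        f_now = moving_average_forecast(demands[:t + 1], p)
        f_prev = moving_average_forecast(demands[:t], p)
        orders.append(demands[t] + lead_time * (f_now - f_prev))
    return statistics.variance(orders) / statistics.variance(demands[p:])

random.seed(1)
demand = [random.gauss(100.0, 10.0) for _ in range(5000)]
ratio = simulate_bullwhip(demand, p=4)  # amplification: ratio comes out above 1
```

Consistent with the trend reported above, shortening the moving-average period increases the ratio in this sketch.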
The multistate impact parameter method for molecular charge exchange in nitrogen
NASA Technical Reports Server (NTRS)
Ioup, J. W.
1980-01-01
The multistate impact parameter method is applied to the calculation of total cross sections for low energy change transfer between nitrogen ions and nitrogen molecules. Experimental data showing the relationships between total cross section and ion energy for various pressures and electron ionization energies were obtained. Calculated and experimental cross section values from the work are compared with the experimental and theoretical results of other investigators.
Sideways fall-induced impact force and its effect on hip fracture risk: a review.
Nasiri Sarvi, M; Luo, Y
2017-10-01
Osteoporotic hip fracture, mostly induced by falls among the elderly, is a major health burden worldwide. The impact force applied to the hip is an important factor in determining the risk of hip fracture. However, biomechanical studies have yielded conflicting conclusions about whether the fall-induced impact force can be accurately predicted by the available models. It has also been debated whether the effect of impact force has been considered appropriately in hip fracture risk assessment tools. This study aimed to provide a state-of-the-art review of the available methods for predicting the impact force, investigate their strengths and limitations, and suggest further improvements in the modeling of human body falling. We divided the parameters affecting impact force into two categories: (1) parameters that can be determined subject-specifically and (2) parameters that may vary significantly from fall to fall for an individual and cannot be considered subject-specifically. The parameters in the first category can be investigated in human body fall experiments. Video capture of real-life falls was reported as a valuable method for investigating the parameters in the second category, which significantly affect the impact force and cannot be determined in human body fall experiments. The analysis of the gathered data revealed a need to develop modified biomechanical models for more accurate prediction of the impact force and to adopt them appropriately in hip fracture risk assessment tools in order to achieve better precision in identifying high-risk patients. Graphical abstract: Impact force to the hip induced in sideways falls is affected by many parameters and may vary remarkably from subject to subject.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
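The core idea of GSA — vary all model inputs simultaneously, then rank them with a regression-style importance measure — can be sketched as follows. This is a hypothetical toy, using absolute Pearson correlations in place of the full standardized-regression step (valid for independently sampled inputs); the demographic model and parameter names are illustrative, not the Snowy Plover model:

```python
import random

def _corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def global_sensitivity(model, param_ranges, n_samples=200, seed=0):
    """Draw all parameters simultaneously from their ranges, run the model,
    and rank importance by |correlation| of each input with the outcome."""
    rng = random.Random(seed)
    names = list(param_ranges)
    samples = [{k: rng.uniform(*param_ranges[k]) for k in names}
               for _ in range(n_samples)]
    outcomes = [model(**s) for s in samples]
    return {k: abs(_corr([s[k] for s in samples], outcomes)) for k in names}
```

Replicating this over both parameter samples and impact scenarios, as the study does, is what lets the two sources of outcome uncertainty be separated.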
NASA Astrophysics Data System (ADS)
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus, however, can be advantageous since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, penetration and separation depths are also studied.
Environmental impact assessment of coal power plants in operation
NASA Astrophysics Data System (ADS)
Bartan, Ayfer; Kucukali, Serhat; Ar, Irfan
2017-11-01
Coal power plants constitute an important component of the energy mix in many countries. However, coal power plants can cause several environmental risks, such as climate change and biodiversity loss. In this study, a tool has been proposed to calculate the environmental impact of a coal-fired thermal power plant in operation by using multi-criteria scoring and a fuzzy logic method. We take into account the following environmental parameters in our tool: CO, SO2, NOx, particulate matter, fly ash, bottom ash, the cooling water intake impact on aquatic biota, and thermal pollution. In the proposed tool, the boundaries of the fuzzy logic membership functions were established taking into account the threshold values of the environmental parameters defined in the environmental legislation. Scoring of these environmental parameters was done through statistical analysis of the environmental monitoring data of the power plant and by using the documented evidence obtained during site visits. The proposed method estimates each environmental impact factor level separately and then aggregates them by calculating the Environmental Impact Score (EIS). The proposed method uses environmental monitoring data and documented evidence instead of simulation models. It has been applied to four coal-fired power plants that have been in operation in Turkey. The Environmental Impact Score was obtained for each power plant and their environmental performances were compared. It is expected that these environmental impact assessments will contribute to the decision-making process for environmental investments in those plants. The main advantage of the proposed method is its flexibility and ease of use.
Whitney, Jon; Carswell, William; Rylander, Nichole
2013-06-01
Predictions of injury in response to photothermal therapy in vivo are frequently made using Arrhenius parameters obtained from cell monolayers exposed to laser or water bath heating. However, the impact of different heating methods and cellular microenvironments on Arrhenius predictions has not been thoroughly investigated. This study determined the influence of heating method (water bath and laser irradiation) and cellular microenvironment (cell monolayers and tissue phantoms) on Arrhenius parameters and spatial viability. MDA-MB-231 cells seeded in monolayers and sodium alginate phantoms were heated with a water bath for 3-20 min at 46, 50, and 54 °C or laser irradiated (wavelength of 1064 nm and fluences of 40 W/cm² or 3.8 W/cm² for 0-4 min) in combination with photoabsorptive carbon nanohorns. Spatial viability was measured using digital image analysis of cells stained with calcein AM and propidium iodide and used to determine Arrhenius parameters. The influence of microenvironment and heating method on Arrhenius parameters and capability of parameters derived from more simplistic experimental conditions (e.g. water bath heating of monolayers) to predict more physiologically relevant systems (e.g. laser heating of phantoms) were assessed. Arrhenius predictions of the treated area (<1% viable) under-predicted the measured areas in photothermally treated phantoms by 23 mm² using water bath treated cell monolayer parameters, 26 mm² using water bath treated phantom parameters, 27 mm² using photothermally treated monolayer parameters, and 0.7 mm² using photothermally treated phantom parameters. Heating method and cellular microenvironment influenced Arrhenius parameters, with heating method having the greater impact.
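The underlying Arrhenius prediction is a damage integral over the temperature history, Ω = A·Σ exp(−Ea/(R·T))·Δt, with predicted viable fraction exp(−Ω), so Ω ≥ 1 marks the "treated" (roughly &lt;37% viable) region. A minimal sketch; the frequency factor A and activation energy Ea used below are placeholder values for illustration, not fitted parameters from this study:

```python
import math

def arrhenius_damage(temps_K, dt_s, A, Ea, R=8.314):
    """Arrhenius damage integral Omega = A * sum(exp(-Ea/(R*T)) * dt) over a
    sampled temperature history (temps_K in kelvin, dt_s seconds per sample)."""
    return A * sum(math.exp(-Ea / (R * T)) * dt_s for T in temps_K)

# Hypothetical coefficients (real cell-line fits give far larger A and Ea):
hot = arrhenius_damage([327.15] * 300, 1.0, A=1e10, Ea=7.0e4)   # 54 degC, 5 min
warm = arrhenius_damage([319.15] * 300, 1.0, A=1e10, Ea=7.0e4)  # 46 degC, 5 min
```

The strong exponential sensitivity to T is what makes the fitted (A, Ea) pair, and hence the heating method used to fit it, matter so much for the predicted treated area.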
The use of impact force as a scale parameter for the impact response of composite laminates
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Poe, C. C., Jr.
1992-01-01
The building block approach is currently used to design composite structures. With this approach, the data from coupon tests are scaled up to determine the design of a structure. Current standard impact tests and methods of relating test data to other structures are not generally understood and are often used improperly. A methodology is outlined for using impact force as a scale parameter for delamination damage for impacts of simple plates. Dynamic analyses were used to define ranges of plate parameters and impact parameters where quasi-static analyses are valid. These ranges include most low velocity impacts where the mass of the impacter is large and the size of the specimen is small. For large mass impacts of moderately thick (0.35 to 0.70 cm) laminates, the maximum extent of delamination damage increased with increasing impact force and decreasing specimen thickness. For large mass impact tests at a given kinetic energy, impact force and hence delamination size depends on specimen size, specimen thickness, boundary conditions, and indenter size and shape. If damage is reported in terms of impact force instead of kinetic energy, large mass test results can be applied directly to other plates of the same size.
Parameters estimation of sandwich beam model with rigid polyurethane foam core
NASA Astrophysics Data System (ADS)
Barbieri, Nilson; Barbieri, Renato; Winikes, Luiz Carlos
2010-02-01
In this work, the physical parameters of sandwich beams made from the association of hot-rolled steel, rigid polyurethane foam and high-impact polystyrene, used in the assembly of household refrigerators and food freezers, are estimated using measured and numerical frequency response functions (FRFs). The mathematical models are obtained using the finite element method (FEM) and the Timoshenko beam theory. The physical parameters are estimated using the amplitude correlation coefficient and a genetic algorithm (GA). The experimental data are obtained using an impact hammer and four accelerometers distributed along the sample (a cantilever beam). The parameters estimated are the Young's modulus and the loss factor of the rigid polyurethane foam and the high-impact polystyrene.
Li, Tongqing; Peng, Yuxing; Zhu, Zhencai; Zou, Shengyong; Yin, Zixin
2017-05-11
Aiming at predicting what happens in reality inside mills, the contact parameters of iron ore particles for discrete element method (DEM) simulations should be determined accurately. To allow the irregular shape to be accurately determined, the sphere clump method was employed in modelling the particle shape. The inter-particle contact parameters were systematically altered whilst the contact parameters between the particle and wall were arbitrarily assumed, in order to purely assess its impact on the angle of repose for the mono-sized iron ore particles. Results show that varying the restitution coefficient over the range considered does not lead to any obvious difference in the angle of repose, but the angle of repose has strong sensitivity to the rolling/static friction coefficient. The impacts of the rolling/static friction coefficient on the angle of repose are interrelated, and increasing the inter-particle rolling/static friction coefficient can evidently increase the angle of repose. However, the impact of the static friction coefficient is more profound than that of the rolling friction coefficient. Finally, a predictive equation is established and a very close agreement between the predicted and simulated angle of repose is attained. This predictive equation can enormously shorten the inter-particle contact parameters calibration time that can help in the implementation of DEM simulations.
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce the computational time and thus improve efficiency.
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Low Velocity Impact Behavior of Basalt Fiber-Reinforced Polymer Composites
NASA Astrophysics Data System (ADS)
Shishevan, Farzin Azimpour; Akbulut, Hamid; Mohtadi-Bonab, M. A.
2017-06-01
In this research, we studied the low velocity impact response of homogeneous basalt fiber-reinforced polymer (BFRP) composites and then compared the key impact parameters with those of homogeneous carbon fiber-reinforced polymer (CFRP) composites. BFRPs and CFRPs were fabricated by the vacuum-assisted resin transfer molding (VARTM) method. The fabricated composites comprised 60% fiber and 40% epoxy matrix. The basalt and carbon fibers used as reinforcement materials were woven in a 2/2 twill textile pattern in the structures of the BFRP and CFRP composites. We also utilized the energy profile method to determine penetration and perforation threshold energies. The low velocity impact tests were carried out at 30, 60, 80, 100, 120 and 160 J energy levels, and the impact response of BFRPs was investigated using the related force-deflection, force-time, deflection-time and absorbed energy-time graphs. The related key impact parameters, such as maximum contact force, absorbed energy, deflection and duration time, were compared with those of CFRPs for various impact energy levels. As a result, due to the higher toughness of basalt fibers, a better low velocity impact performance of BFRP than that of CFRP was observed. The effects of fabrication parameters, such as the curing process, on the low velocity impact behavior of BFRP were studied. The results for the newly fabricated materials show that changing the fabrication process and curing conditions improves the impact behavior of BFRPs by up to 13%.
ERIC Educational Resources Information Center
Potter, Norman R.; Dieterly, Duncan L.
The literature review was undertaken to establish the current status of the methodology for forecasting and assessing technology and for quantizing human resource parameters with respect to the impact of incoming technologies. The review of 140 selected documents applicable to the study was undertaken with emphasis on the identification of methods…
NASA Astrophysics Data System (ADS)
Van Zeebroeck, M.; Tijskens, E.; Van Liedekerke, P.; Deli, V.; De Baerdemaeker, J.; Ramon, H.
2003-09-01
A pendulum device has been developed to measure contact force, displacement and displacement rate of an impactor during its impact on the sample. Displacement, classically measured by double integration of an accelerometer, was determined in an alternative way using a more accurate incremental optical encoder. The parameters of the Kuwabara-Kono contact force model for impact of spheres have been estimated using an optimization method, taking the experimentally measured displacement, displacement rate and contact force into account. The accuracy of the method was verified using a rubber ball. Contact force parameters for the Kuwabara-Kono model have been estimated with success for three biological materials, i.e., apples, tomatoes and potatoes. The variability in the parameter estimations for the biological materials was quite high and can be explained by geometric differences (radius of curvature) and by biological variation of mechanical tissue properties.
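For reference, the Kuwabara-Kono model augments the Hertzian elastic term k·δ^(3/2) with a dissipative term proportional to δ^(1/2)·δ̇, and it is the pair (k, c) that the optimization estimates from the measured force, displacement and displacement rate. A minimal sketch of the force law (the function name is illustrative):

```python
def kuwabara_kono_force(delta, delta_dot, k, c):
    """Kuwabara-Kono viscoelastic contact force for impacting spheres:
    F = k * delta**1.5 + c * delta**0.5 * delta_dot,
    i.e. a Hertzian elastic term plus a velocity-proportional dissipative
    term. delta is the indentation depth, delta_dot its rate; k and c are
    the stiffness and damping parameters to be estimated."""
    return k * delta ** 1.5 + c * delta ** 0.5 * delta_dot
```

Because the dissipative term changes sign with δ̇, the model naturally produces the loading/unloading hysteresis seen in impact tests on fruit tissue.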
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
Significance of the impact of motion compensation on the variability of PET image features
NASA Astrophysics Data System (ADS)
Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.
2018-03-01
In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients.
For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
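The two main SUV discretization schemes compared above can be sketched as follows. This is a simplified illustration; the function names and the default 0.5 SUV bin width and 64-bin count are assumptions, not the study's exact settings:

```python
def discretize_rw(suv_values, bin_width=0.5):
    """Method RW: constant bin width in SUV units; bin index = floor(SUV/width).
    The SUV meaning of a bin is fixed across patients and lesions."""
    return [int(v // bin_width) for v in suv_values]

def discretize_rn(suv_values, n_bins=64):
    """Method RN: constant number of bins spanning the lesion's own SUV range,
    so bin boundaries adapt to each lesion's min/max uptake."""
    lo, hi = min(suv_values), max(suv_values)
    if hi == lo:
        return [1] * len(suv_values)
    return [min(n_bins, int((v - lo) / (hi - lo) * n_bins) + 1)
            for v in suv_values]
```

Because RW preserves absolute SUV resolution while RN normalizes per lesion, texture features computed after the two schemes carry different information relative to conventional SUV indices, which is why the complementarity result above depends on the discretization method.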
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
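The first of the three steps — screening out insensitive parameters before handing the rest to the downhill simplex — can be sketched as below. This is a hypothetical simplification; the metric, parameter names and threshold are illustrative, not the paper's actual evaluation metrics:

```python
def screen_sensitive_params(metric, defaults, spans, rel_threshold=0.01):
    """Step one of the 'three-step' tuning: perturb each parameter to the
    endpoints of its plausible range and keep only those whose effect on the
    evaluation metric exceeds a relative threshold. The downhill simplex
    (step three) then searches only over this reduced set."""
    base = metric(defaults)
    sensitive = []
    for name, (lo, hi) in spans.items():
        effect = max(abs(metric({**defaults, name: v}) - base) for v in (lo, hi))
        if effect > rel_threshold * abs(base):
            sensitive.append(name)
    return sensitive
```

Reducing the dimensionality this way is what accelerates the simplex convergence, since each extra tuned parameter adds a vertex to the simplex and more metric evaluations per iteration.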
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Mi-Young; Yoon, Jung-Sik; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr
2015-04-15
The renormalization shielding effects on the electron-impact ionization of hydrogen atoms are investigated in dense partially ionized plasmas. The effective projectile-target interaction Hamiltonian and the semiclassical trajectory method are employed to obtain the transition amplitude as well as the ionization probability as functions of the impact parameter, the collision energy, and the renormalization parameter. It is found that the renormalization shielding effect suppresses the transition amplitude for the electron-impact ionization process in dense partially ionized plasmas. It is also found that the renormalization effect suppresses the differential ionization cross section in the peak impact parameter region. In addition, it is found that the influence of renormalization shielding on the ionization cross section decreases with an increase of the relative collision energy. The variations of the renormalization shielding effects on the electron-impact ionization cross section are also discussed.
Li, Tongqing; Peng, Yuxing; Zhu, Zhencai; Zou, Shengyong; Yin, Zixin
2017-01-01
To predict what happens in reality inside mills, the contact parameters of iron ore particles for discrete element method (DEM) simulations should be determined accurately. To capture the irregular particle shape accurately, the sphere clump method was employed in modelling the particle shape. The inter-particle contact parameters were systematically altered whilst the contact parameters between the particle and wall were arbitrarily assumed, in order to purely assess their impact on the angle of repose for the mono-sized iron ore particles. Results show that varying the restitution coefficient over the range considered does not lead to any obvious difference in the angle of repose, but the angle of repose is strongly sensitive to the rolling/static friction coefficient. The impacts of the rolling and static friction coefficients on the angle of repose are interrelated, and increasing the inter-particle rolling/static friction coefficient evidently increases the angle of repose. However, the impact of the static friction coefficient is more profound than that of the rolling friction coefficient. Finally, a predictive equation is established, and very close agreement between the predicted and simulated angles of repose is attained. This predictive equation can greatly shorten the calibration time for inter-particle contact parameters, which helps in the implementation of DEM simulations. PMID:28772880
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
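The idea of projecting future parameter performance from past observations can be illustrated with a minimal least-squares forecaster. The parameter values, reward histories, and proportional selection rule below are invented for illustration, not taken from the paper's EA experiments:

```python
import random

# Toy sketch of predictive parameter control: each candidate parameter
# value (here, hypothetical mutation rates) has a history of observed
# rewards; an ordinary least-squares line projects the next reward, and
# the next value is chosen with probability proportional to the forecasts.
def linear_forecast(history):
    """One-step-ahead linear regression forecast (needs >= 2 points)."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    denom = sum((x - mean_x) ** 2 for x in range(n))
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(history)) / denom
    return mean_y + slope * (n - mean_x)  # value of the fitted line at x = n

def choose_value(histories, rng):
    preds = {v: max(linear_forecast(h), 1e-9) for v, h in histories.items()}
    r = rng.random() * sum(preds.values())
    for v, p in preds.items():
        r -= p
        if r <= 0:
            return v
    return v  # fallback for floating-point leftovers

rng = random.Random(0)
histories = {0.01: [0.2, 0.3, 0.4], 0.10: [0.5, 0.4, 0.3]}
print(linear_forecast(histories[0.01]))  # rising trend -> forecast 0.5
```

The rising history is forecast above its last observation and the falling one below it, which is exactly the behaviour that lets prediction-based control anticipate, rather than lag, parameter performance.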
Parameters for assessing the aquatic environmental impact of cosmetic products.
Vita, N A; Brohem, C A; Canavez, A D P M; Oliveira, C F S; Kruger, O; Lorencini, M; Carvalho, C M
2018-05-01
The cosmetic industry's growing concern about the impact of its supply chain on the environment, the sustainability of raw materials, and biodiversity increases the need to ensure that the final product has a lower environmental impact. The objective of this review is to summarize and compare the information available from international organizations and legislation regarding the main criteria used to assess raw materials for aquatic toxicity, as well as the most suitable alternative methods for obtaining assessment parameters. Drawing on the scientific literature available in databases and on international legislation, this work discusses and compares the parameters established by international organizations such as the Environmental Protection Agency (EPA) and Cradle to Cradle (C2C), as well as European legislation, namely European Regulation 1272/2008, for assessing environmental impact. Defining the ecotoxicity parameters of the main classes of raw materials in rinse-off cosmetic products can enable the development of products that are more environmentally sustainable, prioritizing substances with less environmental impact. Copyright © 2018 Elsevier B.V. All rights reserved.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through the replacement of multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown, in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Investigation of IRT-Based Equating Methods in the Presence of Outlier Common Items
ERIC Educational Resources Information Center
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko
2008-01-01
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions for the walking scenario were significantly lower than those for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Unlike traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
Binary logistic regression-Instrument for assessing museum indoor air impact on exhibits.
Bucur, Elena; Danet, Andrei Florin; Lehr, Carol Blaziu; Lehr, Elena; Nita-Lazar, Mihai
2017-04-01
This paper presents a new way to assess the environmental impact on historical artifacts using binary logistic regression. The prediction of the impact on the exhibits under certain pollution scenarios (environmental impact) was calculated by a mathematical model based on binary logistic regression; it allows the identification of those environmental parameters, from a multitude of possible parameters, with a significant impact on exhibits, and ranks them according to the severity of their effect. Air quality (NO2, SO2, O3 and PM2.5) and microclimate parameters (temperature, humidity) monitoring data from a case study conducted within exhibition and storage spaces of the Romanian National Aviation Museum Bucharest were used for developing and validating the binary logistic regression method and the mathematical model. The logistic regression analysis was applied to 794 data combinations (715 to develop the model and 79 to validate it) using the Statistical Package for the Social Sciences (SPSS 20.0). The results from the binary logistic regression analysis demonstrated that, of the six parameters taken into consideration, four present a significant effect upon exhibits in the following order: O3 > PM2.5 > NO2 > humidity, followed at a significant distance by the effects of SO2 and temperature. The mathematical model developed in this study correctly predicted 95.1% of the cumulated effect of the environmental parameters upon the exhibits. Moreover, this model could also be used in the decision-making process regarding preventive preservation measures to be implemented within the exhibition space, helping to establish the best measures for pollution reduction and preventive preservation of exhibits.
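A minimal sketch of this style of analysis, using synthetic data rather than the museum measurements, and scikit-learn in place of SPSS. The planted coefficients and column meanings are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the monitoring data: a binary "effect on exhibit"
# label driven most strongly by column 0, then 1, 2, 3 (hypothetically
# O3, PM2.5, NO2, humidity). Standardizing lets |coefficient| rank severity.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 4))
logit = 2.0 * X[:, 0] + 1.2 * X[:, 1] + 0.6 * X[:, 2] + 0.3 * X[:, 3]
y = (logit + rng.normal(scale=0.5, size=800) > 0).astype(int)

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
ranking = np.argsort(-np.abs(model.coef_[0]))
print(ranking)  # should recover the planted order of influence
```

Ranking standardized coefficients by absolute value is the usual quick way to order predictors by severity of effect in a logistic model; the paper's odds-ratio style interpretation follows from exponentiating the same coefficients.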
An analysis of the impact of burn-off on the formation of the microporous structure of activated carbons
NASA Astrophysics Data System (ADS)
Kwiatkowski, Mirosław; Kopac, Türkan
2017-12-01
The paper presents the results on the application of the LBET numerical method as a tool for analysis of the microporous structure of activated carbons obtained from a bituminous coal. The LBET method was employed particularly to evaluate the impact of the burn-off on the obtained microporous structure parameters of activated carbons.
The Impact of Different Missing Data Handling Methods on DINA Model
ERIC Educational Resources Information Center
Sünbül, Seçil Ömür
2018-01-01
In this study, it was aimed to investigate the impact of different missing data handling methods on DINA model parameter estimation and classification accuracy. In the study, simulated data were used and the data were generated by manipulating the number of items and sample size. In the generated data, two different missing data mechanisms…
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameters themselves are known only inaccurately, and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
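The external and internal coefficients mentioned above are the innovation and imitation parameters of a Bass-style diffusion process. A toy sketch (illustrative parameter values, not the paper's calibrated model) shows their roles:

```python
# Minimal Bass-style diffusion sketch: p is the external (innovation)
# coefficient, q the internal (imitation) coefficient, m the potential
# market. Each period, a fraction of the remaining market adopts.
def bass_adoption(p, q, m, periods):
    cumulative = 0.0
    path = []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(cumulative)
    return path

low = bass_adoption(p=0.03, q=0.4, m=10000, periods=20)[-1]
high = bass_adoption(p=0.06, q=0.4, m=10000, periods=20)[-1]
print(round(low), round(high))  # stronger external influence -> faster diffusion
```

Free samples effectively raise the external coefficient p early on, which is why the optimal sampling level trades off seeding cost against this acceleration of diffusion.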
Rowson, Steven; Duma, Stefan M
2013-05-01
Recent research has suggested possible long-term effects due to repetitive concussions, highlighting the importance of developing methods to accurately quantify concussion risk. This study introduces a new injury metric, the combined probability of concussion, which computes the overall risk of concussion based on the peak linear and rotational accelerations experienced by the head during impact. The combined probability of concussion is unique in that it determines the likelihood of sustaining a concussion for a given impact, regardless of whether the injury would be reported or not. The risk curve was derived from data collected from instrumented football players (63,011 impacts including 37 concussions), which was adjusted to account for the underreporting of concussion. The predictive capability of this new metric is compared to that of single biomechanical parameters. The capabilities of these parameters to accurately predict concussion incidence were evaluated using two separate datasets: the Head Impact Telemetry System (HITS) data and National Football League (NFL) data collected from impact reconstructions using dummies (58 impacts including 25 concussions). Receiver operating characteristic curves were generated, and all parameters were significantly better at predicting injury than random guessing. The combined probability of concussion had the greatest area under the curve for all datasets. In the HITS dataset, the combined probability of concussion and linear acceleration were significantly better predictors of concussion than rotational acceleration alone, but not different from each other. In the NFL dataset, there were no significant differences between parameters. The combined probability of concussion is a valuable method to assess concussion risk in a laboratory setting for evaluating product safety.
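The combined metric has the shape of a bivariate logistic risk function of peak linear and rotational acceleration with an interaction term. The sketch below shows that functional form only; the coefficients are placeholders, not the values fitted to the instrumented-player data:

```python
import math

# Bivariate logistic risk of the kind described above: a is peak linear
# acceleration (g), r is peak rotational acceleration (rad/s^2). The
# default coefficients are illustrative placeholders, not the paper's fit.
def combined_probability(a, r, b0=-10.2, b1=0.0433, b2=8.73e-4, b3=-9.2e-7):
    z = b0 + b1 * a + b2 * r + b3 * a * r
    return 1.0 / (1.0 + math.exp(-z))

print(round(combined_probability(60, 2500), 3))   # modest impact, low risk
print(round(combined_probability(120, 6000), 3))  # severe impact, higher risk
```

Because both accelerations enter one logistic function, an impact that is moderate on each axis separately can still carry substantial combined risk, which is the metric's advantage over thresholding linear or rotational acceleration alone.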
NASA Astrophysics Data System (ADS)
Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel
2017-04-01
Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. 
We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
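The Monte Carlo style of uncertainty propagation described above can be sketched with a toy response function standing in for the f.ETISh model (the function, distributions, and numbers below are invented for illustration):

```python
import random
import statistics

# Toy Monte Carlo propagation: uncertain melt and calving inputs are
# sampled from assumed distributions, pushed through a made-up response
# function, and the output distribution is summarized by bounds.
def toy_sea_level(melt, calving):
    return 0.8 * melt + 0.2 * calving + 0.05 * melt * calving

rng = random.Random(1)
samples = sorted(toy_sea_level(rng.gauss(1.0, 0.3), rng.uniform(0.5, 1.5))
                 for _ in range(20000))
mean = statistics.fmean(samples)
lo, hi = samples[1000], samples[18999]   # ~5th and 95th percentiles
print(f"mean={mean:.2f}, 90% interval=({lo:.2f}, {hi:.2f})")
```

Stochastic expansion methods aim at the same output statistics with far fewer model evaluations, which matters when each "evaluation" is a multi-century ice-sheet run rather than a one-line formula.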
a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
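The core of a discrete optimal-transport step, mapping prior particle weights to posterior weights via linear programming, can be sketched on a toy 1-D problem. This is not the AET implementation; the particle positions and weights below are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny discrete optimal transport: find the coupling T >= 0 minimizing
# sum(T * cost) subject to row sums = prior weights, column sums =
# posterior weights. T then maps prior samples to posterior samples.
prior_x = np.array([0.0, 1.0, 2.0])
post_x = np.array([0.5, 1.5, 2.5])
a = np.full(3, 1 / 3)          # prior particle weights
b = np.array([0.5, 0.3, 0.2])  # target (posterior) weights
cost = (prior_x[:, None] - post_x[None, :]) ** 2

# Equality constraints: six marginal sums over the flattened 3x3 coupling.
A_eq = np.zeros((6, 9))
for i in range(3):
    A_eq[i, i * 3:(i + 1) * 3] = 1   # row sums (flattened row-major)
    A_eq[3 + i, i::3] = 1            # column sums
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None))
T = res.x.reshape(3, 3)
print(np.round(T, 3))
```

The AET method's efficiency gains come from replacing a general LP solve like this with matrix-scaling methods; the sketch only shows what problem is being solved, not how it is accelerated.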
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas, the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
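The hybrid treatment, sampling the random variables while scanning the interval variables at their bounds, can be sketched with a made-up response function. This is not the crane model; monotonicity of the response in the interval variable is assumed here, as in the paper's monotonic analysis technique:

```python
import random
import statistics

# Toy hybrid uncertain model: F is a random variable (Gaussian thrust),
# a is an interval variable (displacement, known only within bounds).
# Bounds on the mean response are taken at the interval endpoints,
# which is valid because this toy response is monotone in a.
def response(F, a):
    return 0.01 * F / a

rng = random.Random(0)
F_samples = [rng.gauss(5000.0, 200.0) for _ in range(5000)]
a_interval = (0.9, 1.1)

bounds = [statistics.fmean(response(F, a) for F in F_samples)
          for a in a_interval]
low, high = min(bounds), max(bounds)
print(f"mean response lies in [{low:.2f}, {high:.2f}]")
```

The output is an interval of statistics rather than a single statistic: the random inputs yield distributions, and the interval inputs smear each statistic into a range, exactly the structure the MHUAM bounds analytically.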
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models of physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output and to identify the parameters with the least influence on model output.
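Replacing fixed parameters with probability distributions and ranking their influence can be sketched with a crude first-order (Sobol-style) sensitivity estimate on a toy model. The model and distributions below are invented, not the shock-noise prediction code:

```python
import random
import statistics

# Crude global sensitivity sketch: the variance of the conditional mean
# over each input, divided by the total variance, approximates that
# input's first-order sensitivity index.
def toy_spectrum_level(p1, p2):
    return 10.0 * p1 + 2.0 * p2 ** 2

rng = random.Random(3)
def sample():
    return rng.uniform(0, 1), rng.uniform(0, 1)

runs = [toy_spectrum_level(*sample()) for _ in range(20000)]
total_var = statistics.pvariance(runs)

def first_order(index, n_outer=200, n_inner=200):
    means = []
    for _ in range(n_outer):
        fixed = sample()[index]          # freeze one input
        vals = []
        for _ in range(n_inner):
            s = list(sample())
            s[index] = fixed
            vals.append(toy_spectrum_level(*s))
        means.append(statistics.fmean(vals))
    return statistics.pvariance(means) / total_var

s0, s1 = first_order(0), first_order(1)
print(round(s0, 2), round(s1, 2))  # p1 dominates in this toy model
```

The low-influence parameter identified this way is the one that can safely be left deterministic, which is the practical payoff of the global sensitivity step described in the abstract.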
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.
Watson, Erkai; Steinhauser, Martin O
2017-04-02
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulating of impact events for velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tocchini-Valentini, Domenico; Barnard, Michael; Bennett, Charles L.
2012-10-01
We present a method to extract the redshift-space distortion β parameter in configuration space with a minimal set of cosmological assumptions. We show that a novel combination of the observed monopole and quadrupole correlation functions can efficiently remove the impact of mild nonlinearities and redshift errors. The method offers a series of convenient properties: it does not depend on the theoretical linear correlation function, the mean galaxy density is irrelevant, only convolutions are used, and there is no explicit dependence on linear bias. Analyses based on dark matter N-body simulations and Fisher matrices demonstrate that errors of a few percent on β are possible with a full-sky, 1 (h^-1 Gpc)^3 survey centered at a redshift of unity and with negligible shot noise. We also find a baryonic feature in the normalized quadrupole in configuration space that should complicate the extraction of the growth parameter from the linear theory asymptote, but that does not have a major impact on our method.
Chawla, A; Mukherjee, S; Karthikeyan, B
2009-02-01
The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s^-1 to 262 s^-1 are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s^-1, 183 s^-1 and 262 s^-1 for modelling muscles. The extracted optimal parameters from this study are comparable with reported parameters in the literature. The bulk modulus and short-term shear modulus are found to be more influential in predicting the stress-strain response than the long-term shear modulus for the considered strain rates. Variations within the set of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
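The GA-based inverse identification loop can be sketched with a toy constitutive model standing in for the PAMCRASH finite element runs. All function names, target values, and GA settings below are invented for illustration:

```python
import random

# Minimal GA sketch of inverse parameter identification: candidate
# (k, g) pairs are scored by how well a toy force model reproduces
# synthetic "measured" forces generated from a known target pair.
def simulated_force(k, g, strain):
    return k * strain + g * strain ** 2   # toy constitutive stand-in

strains = [0.1 * i for i in range(1, 6)]
measured = [simulated_force(2.0, 0.5, s) for s in strains]  # target (2.0, 0.5)

def fitness(ind):
    k, g = ind
    return -sum((simulated_force(k, g, s) - m) ** 2
                for s, m in zip(strains, measured))

rng = random.Random(42)
pop = [(rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # elitist selection
    pop = parents + [
        (rng.gauss(rng.choice(parents)[0], 0.1),
         rng.gauss(rng.choice(parents)[1], 0.1))   # Gaussian mutation
        for _ in range(30)
    ]
best = max(pop, key=fitness)
print(best)
```

In the real study each fitness evaluation is a full explicit finite element simulation, which is what makes the choice of a derivative-free, population-based optimizer like a GA appropriate.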
Determination techniques of Archie’s parameters: a, m and n in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Mohamad, A. M.; Hamada, G. M.
2017-12-01
The determination of water saturation in a heterogeneous reservoir is becoming more challenging, as Archie's equation is only suitable for clean homogeneous formations and Archie's parameters are highly dependent on the properties of the rock. This study focuses on the measurement of Archie's parameters in carbonate and sandstone core samples from Malaysian heterogeneous carbonate and sandstone reservoirs. Three techniques for the determination of Archie's parameters a, m and n were implemented: the conventional technique, core Archie parameter estimation (CAPE) and the three-dimensional regression technique (3D). Using the results obtained by the three techniques, water saturation graphs were produced to observe the differences in Archie's parameters and their impact on water saturation values. These differences primarily reflect the uncertainty level of Archie's parameters, mainly in carbonate and sandstone rock samples. It is evident that the accuracy of Archie's parameters has a profound impact on the calculated water saturation values in carbonate and sandstone reservoirs, since regions of high stress reduce electrical conduction as a result of the raised electrical heterogeneity of the heterogeneous carbonate core samples. Given the unrealistic assumptions involved in the conventional method, it is better to use either the CAPE or 3D method to accurately determine Archie's parameters in heterogeneous as well as homogeneous reservoirs.
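For reference, water saturation from Archie's equation, which all three techniques parameterize, can be computed as below; the example values (a = 1, m = 2, n = 2, and the resistivities) are generic textbook-style inputs, not data from this study.

```python
def archie_water_saturation(a, m, n, rw, rt, porosity):
    """Archie's equation: Sw = ((a / phi**m) * (Rw / Rt)) ** (1/n).

    a: tortuosity factor, m: cementation exponent, n: saturation exponent,
    rw: formation water resistivity, rt: true formation resistivity.
    """
    formation_factor = a / porosity ** m
    return (formation_factor * rw / rt) ** (1.0 / n)

# Commonly quoted "clean sandstone" defaults: a = 1, m = 2, n = 2
sw = archie_water_saturation(1.0, 2.0, 2.0, rw=0.05, rt=10.0, porosity=0.20)
print(round(sw, 3))  # 0.354
```

Because Sw depends on a, m and n through a power law, even modest uncertainty in the exponents propagates strongly into the saturation estimate, which is the sensitivity the abstract describes.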
Symmetric Resonance Charge Exchange Cross Section Based on Impact Parameter Treatment
NASA Technical Reports Server (NTRS)
Omidvar, Kazem; Murphy, Kendrah; Atlas, Robert (Technical Monitor)
2002-01-01
Using a two-state impact parameter approximation, a calculation has been carried out to obtain symmetric resonance charge transfer cross sections between nine ions and their parent atoms or molecules. The calculation is based on a two-dimensional numerical integration. The method is best suited to hydrogenic and some closed-shell atoms. Good agreement has been obtained with the results of laboratory measurements for the ion-atom pairs H+-H, He+-He, and Ar+-Ar. Several approximations made in a similar published calculation have been eliminated.
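In the impact parameter treatment, the cross section reduces to sigma = 2*pi * integral of b*P(b) db for a transfer probability P(b). A minimal numerical sketch, with an entirely illustrative P(b) (oscillations at small b averaged to 1/2, exponential fall-off beyond a hypothetical crossover radius), not the two-state probability of the paper:

```python
import numpy as np

def charge_exchange_cross_section(prob, b_max, nb=2000):
    """sigma = 2*pi * integral_0^b_max b * P(b) db, by the trapezoid rule."""
    b = np.linspace(0.0, b_max, nb)
    f = b * prob(b)
    return 2.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(b))

# Illustrative transfer probability (atomic units): rapid two-state
# oscillations at small b average to 1/2; exponential decay beyond a
# hypothetical crossover radius bc. Shape and numbers are assumptions.
bc = 5.0
prob = lambda b: np.where(b < bc, 0.5, 0.5 * np.exp(-(b - bc)))

sigma = charge_exchange_cross_section(prob, b_max=30.0)
# Analytically: 2*pi*(0.5*bc^2/2 + 0.5*(bc + 1)) = 2*pi*9.25 here
```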
The unitary convolution approximation for heavy ions
NASA Astrophysics Data System (ADS)
Grande, P. L.; Schiwietz, G.
2002-10-01
The convolution approximation for the impact-parameter dependent energy loss is reviewed with emphasis on the determination of the stopping force for heavy projectiles. In this method, the energy loss in different impact-parameter regions is well determined and interpolated smoothly. The physical inputs of the model are the projectile-screening function (in the case of dressed ions), the electron density and the oscillator strengths of the target atoms. Moreover, the convolution approximation, in the perturbative mode (called PCA), yields remarkable agreement with full semi-classical-approximation (SCA) results for bare as well as for screened ions at all impact parameters. In the unitary mode (called UCA), the method contains some higher-order effects (yielding in some cases rather good agreement with full coupled-channel calculations) and approaches the classical regime similarly to the Bohr model for large perturbations (Z/v ≫ 1). The results are then used for comparison with experimental values of the non-equilibrium stopping force as a function of the projectile charge, as well as with the equilibrium energy loss under non-aligned and channeling conditions.
NASA Astrophysics Data System (ADS)
Wang, Ting-Ting; Ma, Yu-Gang; Zhang, Chun-Jian; Zhang, Zheng-Qiao
2018-03-01
The proton-proton momentum correlation function in different rapidity regions is systematically investigated for Au + Au collisions at different impact parameters and at energies from 400 A MeV to 1500 A MeV, in the framework of the isospin-dependent quantum molecular dynamics model complemented by the Lednický-Lyuboshitz analytical method. In particular, the dependence of the correlation function on the in-medium nucleon-nucleon cross section is brought into focus, while the impact parameter and energy dependence of the momentum correlation function are also explored. The sizes of the emission source are extracted by fitting the momentum correlation functions using the Gaussian source method. We find that the in-medium nucleon-nucleon cross section clearly influences the proton-proton momentum correlation function constructed from the whole-rapidity, projectile-rapidity, or target-rapidity region at smaller impact parameters, but has no effect on the mid-rapidity correlation function, which indicates that the emission mechanism differs between projectile or target rapidity protons and mid-rapidity protons.
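The Gaussian-source extraction step can be illustrated with a toy fit. The sketch below uses a quantum-statistics-only correlation form for identical fermions, C(q) = 1 - 0.5*exp(-(q*r0)^2); the Coulomb and strong final-state interactions central to the Lednický-Lyuboshitz treatment are deliberately omitted, and the numbers (including the convention for q and r0) are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Quantum-statistics-only toy correlation for identical spin-averaged fermions
# with a Gaussian source of size r0 (final-state interactions omitted).
def corr_model(q, r0):
    return 1.0 - 0.5 * np.exp(-(q * r0) ** 2)

q = np.linspace(5.0, 200.0, 60)     # relative momentum grid, MeV/c
r0_true = 0.015                     # ~3 fm expressed in (MeV/c)^-1 (hbar*c ~ 197 MeV fm)
c_obs = corr_model(q, r0_true)      # synthetic "measured" correlation function

# Extract the source size by least-squares fitting, as in the Gaussian source method
(r0_fit,), _ = curve_fit(corr_model, q, c_obs, p0=[0.01])
```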
A discrete element modelling approach for block impacts on trees
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
These past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models nevertheless remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated against laboratory experiments, and the key parameters were then assessed using a global sensitivity analysis. Modelling the impact of a block on a tree with DEM allows large displacements, material non-linearities and contacts between the block and the tree to be taken into account. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account by an additional mass distributed uniformly over the upper part of the tree. The block is represented by a sphere, and the contact between the block and the stem is described by an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. The stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was carried out to identify the input parameters controlling the block kinematics after the impact. 20 input parameters were considered in the DEM simulation model: 12 related to the tree and 8 to the block.
The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
Constraints on the pre-impact orbits of Solar system giant impactors
NASA Astrophysics Data System (ADS)
Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.
2018-03-01
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
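One ingredient of such constraints is the relation between the impact velocity and the encounter velocity at infinity, v_imp = sqrt(v_esc^2 + v_inf^2): impact speeds far above the mutual escape speed require dynamically excited, lower-probability pre-impact orbits. A minimal sketch (the numbers are illustrative, not taken from the paper):

```python
import math

def impact_velocity(v_esc, v_inf):
    """Energy conservation for a two-body encounter: v_imp = sqrt(v_esc^2 + v_inf^2)."""
    return math.hypot(v_esc, v_inf)

def encounter_velocity(v_imp, v_esc):
    """Inverse relation: the v_inf implied by an observed/assumed impact speed."""
    return math.sqrt(max(v_imp ** 2 - v_esc ** 2, 0.0))

# Illustrative numbers: Earth's escape speed (~11.2 km/s) and a modest
# 4 km/s encounter speed at infinity give a near-escape-speed impact.
v = impact_velocity(11.2, 4.0)   # ~11.9 km/s
```

A proposed collision with v_imp much larger than v_esc thus maps to a large v_inf, and the orbit population able to supply such encounter speeds can be assessed for plausibility.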
Chen, Yushun; Viadero, Roger C; Wei, Xinchao; Fortney, Ronald; Hedrick, Lara B; Welsh, Stuart A; Anderson, James T; Lin, Lian-Shin
2009-01-01
Refining best management practices (BMPs) for future highway construction depends on a comprehensive understanding of the environmental impacts of current construction methods. Based on a before-after-control-impact (BACI) experimental design, long-term stream monitoring (1997-2006) was conducted at upstream (control, n = 3) and downstream (impact, n = 6) sites in the Lost River watershed of the Mid-Atlantic Highlands region, West Virginia. Monitoring data were analyzed to assess impacts during and after highway construction on 15 water quality parameters and on macroinvertebrate condition using the West Virginia stream condition index (WVSCI). Principal components analysis (PCA) identified the primary regional water quality variances, and paired t-tests and time series analysis detected seven highway construction-impacted water quality parameters, which were mainly associated with the second principal component. In particular, impacts on turbidity, total suspended solids, and total iron during construction, impacts on chloride and sulfate during and after construction, and impacts on acidity and nitrate after construction were observed at the downstream sites. The construction had statistically significant impacts on macroinvertebrate index scores (i.e., WVSCI) after construction, but did not change the overall good biological condition. Implementing BMPs that address those construction-impacted water quality parameters can be an effective mitigation strategy for future highway construction in this highlands region.
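The PCA-plus-paired-t-test pipeline can be sketched on synthetic data as below. The monitoring matrix, the injected "construction impact" on one column, and all numbers are invented for illustration; only the analysis steps mirror the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic monitoring matrix: rows = paired sampling events, columns =
# water quality parameters (stand-ins for the study's 15 measured parameters).
n, p = 120, 5
upstream = rng.normal(0.0, 1.0, size=(n, p))            # control sites
downstream = upstream + rng.normal(0.0, 0.3, size=(n, p))
downstream[:, 0] += 1.0                                  # impact on e.g. turbidity

# PCA via SVD of the pooled, centered data: identify dominant variance modes
X = np.vstack([upstream, downstream])
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)                      # variance fractions

# Paired t-test per parameter: downstream (impact) vs upstream (control)
t_stat, p_val = stats.ttest_rel(downstream, upstream, axis=0)
impacted = np.where(p_val < 0.05)[0]                     # flagged parameters
```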
On the estimation of the reproduction number based on misreported epidemic data.
Azmon, Amin; Faes, Christel; Hens, Niel
2014-03-30
Epidemic data often suffer from underreporting and delay in reporting. In this paper, we investigated the impact of delays and underreporting on estimates of the reproduction number. We used a thinned version of the epidemic renewal equation to describe the epidemic process while accounting for the underlying reporting system. Assuming a constant reporting parameter, we used different delay patterns to represent the delay structure in our model. Instead of assuming a fixed delay distribution, we estimated the delay parameters while assuming a smooth function for the reproduction number over time. In order to estimate the parameters, we used a Bayesian semiparametric approach with penalized splines, allowing both the flexibility and the exact inference provided by MCMC. To show the performance of our method, we performed different simulation studies. We conducted sensitivity analyses to investigate the impact of misspecification of the delay pattern and the impact of assuming nonconstant reporting parameters on the estimates of the reproduction numbers. We showed that, whenever available, additional information about time-dependent underreporting can be taken into account. As an application of our method, we analyzed confirmed daily A(H1N1)v2009 cases made publicly available by the World Health Organization for Mexico and the USA. Copyright © 2013 John Wiley & Sons, Ltd.
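The role of a constant reporting parameter can be illustrated with the discrete renewal equation, R_t = C_t / sum_s w_s C_{t-s}: a constant reporting fraction rho multiplies the numerator and denominator alike and cancels, which is why constant underreporting alone does not bias this simple estimate. A minimal sketch (generation-interval weights and case counts are invented, and this plug-in estimator is far simpler than the paper's Bayesian approach):

```python
import numpy as np

def renewal_rt(cases, gi_weights):
    """R_t = C_t / sum_s w_s * C_{t-s} (discrete renewal equation).

    With a constant reporting fraction rho, C_t = rho * I_t and rho cancels,
    so reported counts alone suffice for this estimator.
    """
    w = np.asarray(gi_weights, dtype=float)
    w = w / w.sum()
    cases = np.asarray(cases, dtype=float)
    R = np.full(len(cases), np.nan)
    for t in range(len(w), len(cases)):
        denom = np.dot(w, cases[t - len(w):t][::-1])   # lags 1..len(w)
        if denom > 0:
            R[t] = cases[t] / denom
    return R

# Synthetic outbreak with constant true R = 1.5, thinned by constant rho = 0.6
gi = [0.3, 0.4, 0.2, 0.1]                 # assumed generation-interval weights
I = [10.0, 12.0, 15.0, 18.0]              # seed incidence
for t in range(4, 60):
    I.append(1.5 * np.dot(gi, I[t - 4:t][::-1]))
reported = 0.6 * np.array(I)              # constant underreporting
R_est = renewal_rt(reported, gi)          # recovers 1.5 despite thinning
```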
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel, and the results can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive, and typically do not provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions, and can retrieve joint distributions of parameters, which allows the connection between parameters to be studied. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation, given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter to radiance space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields and vary considerably depending upon the horizontal structure of the rain.
The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
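The transform from a radiance histogram to a parameter PDF can be sketched as below. The forward model, the gamma rain-rate prior, and the nearest-database-entry (sorted lookup) inversion are stand-ins chosen for illustration, not the dissertation's radiative transfer or SSM/I specifics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward-model database: rain rate -> brightness temperature. A toy monotone
# mapping stands in for the radiative transfer; names and numbers are
# illustrative only.
rain_db = rng.gamma(shape=2.0, scale=3.0, size=50000)   # mm/h, prior states
forward = lambda rain: 180.0 + 8.0 * np.sqrt(rain)      # Tb in K (toy model)
tb_db = forward(rain_db)

# "Observed" radiances: true scenes pushed through the model plus 1 K noise
tb_obs = forward(rng.gamma(2.0, 3.0, size=20000)) + rng.normal(0.0, 1.0, 20000)

# Transform: map each observed radiance to the closest database radiance
# (sorted insertion lookup) and accumulate the matching rain rate; the
# histogram of rain_ret approximates the retrieved parameter PDF.
order = np.argsort(tb_db)
tb_sorted, rain_sorted = tb_db[order], rain_db[order]
pos = np.clip(np.searchsorted(tb_sorted, tb_obs), 0, len(tb_sorted) - 1)
rain_ret = rain_sorted[pos]
pdf, edges = np.histogram(rain_ret, bins=50, range=(0.0, 30.0), density=True)
```

Repeating the lookup with re-perturbed observations and resampled databases would give the spread of the retrieved PDF, i.e. the error estimate the method advertises.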
Ephemeral penalty functions for contact-impact dynamics
NASA Technical Reports Server (NTRS)
De La Fuente, Horacio M.; Felippa, Carlos A.
1991-01-01
The use of penalty functions to treat a class of structural contact-impact problems is investigated, with emphasis on problems in which the impact phenomena are primarily nondestructive in nature and in which only a gross characterization of the response is required. The dynamic equations of motion are integrated by the difference method. The penalty is represented as an ephemeral fictitious nonlinear spring that is inserted in anticipation of contact. The magnitude and variation of the penalty force are determined through energy balancing considerations. The 'bell shape' of the penalty force function for positive gap was found to be satisfactory, as it depends on only two parameters that can be directly assigned the physical meaning of force and distance. The determination of force law parameters by energy balance worked well. The incorporation of restitution coefficients by the area balancing method yielded excellent results, and no substantial modifications are anticipated. Extensional penalty springs are sufficient for the simple examples treated.
Impact of various operating modes on performance and emission parameters of small heat source
NASA Astrophysics Data System (ADS)
Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef
2016-06-01
This thesis deals with the measurement of the performance and emission parameters of a small heat source for the combustion of biomass in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in the air supply, and a method for controlling the performance and emission parameters of heat sources for the combustion of woody biomass. The work describes the main factors that affect the combustion process and analyzes the emission measurements at the heat source. The results of the experiment give the performance and emission parameter values for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
NASA Astrophysics Data System (ADS)
Shah, S.; Hussain, S.; Sagheer, M.
2018-06-01
This article explores the problem of two-dimensional, laminar, steady boundary layer stagnation point slip flow over a Riga plate. The incompressible upper-convected Maxwell fluid has been considered as the rheological fluid model. The heat transfer characteristics are investigated with the generalized Fourier's law. The fluid thermal conductivity is assumed to be temperature dependent in this study. A system of partial differential equations governing the flow of an upper-convected Maxwell fluid, and heat and mass transfer using the generalized Fourier's law, is developed. The main objective of the article is to inspect the impacts of pertinent physical parameters such as the stretching ratio parameter (0 ⩽ A ⩽ 0.3), Deborah number (0 ⩽ β ⩽ 0.6), thermal relaxation parameter (0 ⩽ γ ⩽ 0.5), wall thickness parameter (0.1 ⩽ α ⩽ 3.5), slip parameter (0 ⩽ R ⩽ 1.5), thermal conductivity parameter (0.1 ⩽ δ ⩽ 1.0) and modified Hartmann number (0 ⩽ Q ⩽ 3) on the velocity and temperature profiles. Suitable local similarity transformations have been used to obtain a system of non-linear ODEs from the governing PDEs. The numerical solutions for the dimensionless velocity and temperature distributions have been obtained by employing an effective numerical method, the shooting method. The velocity profile shows a reduction in velocity for higher values of the viscoelastic parameter and the thermal relaxation parameter. In addition, to maximize the reliability of the numerical results obtained by the shooting method, the MATLAB built-in solver bvp4c has also been utilized.
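The shooting method named above converts a boundary value problem into an initial value problem plus root finding: guess the unknown initial slope, integrate, and adjust the guess until the far boundary condition is met. A minimal sketch on a simple stand-in BVP (y'' = -y, y(0) = 0, y(1) = 1), not the paper's similarity equations:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting: for the BVP y'' = -y, y(0) = 0, y(1) = 1, guess the missing
# initial slope s = y'(0), integrate the IVP, and root-find on the
# boundary mismatch y(1; s) - 1.
def boundary_mismatch(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

s_star = brentq(boundary_mismatch, 0.0, 5.0)
# Exact solution y = sin(t)/sin(1), so the true slope is y'(0) = 1/sin(1)
```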
Evaluation of trade influence on economic growth rate by computational intelligence approach
NASA Astrophysics Data System (ADS)
Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor
2017-01-01
In this study, the influence of trade parameters on economic growth forecasting accuracy was analyzed. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled on the basis of different trade parameters. In this study, five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All these parameters were expressed as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on economic growth forecasting accuracy. GDP was used as the economic growth indicator. The results show that the imports of goods and services have the highest influence on economic growth forecasting accuracy.
Tan, Xia; Ji, Zhong; Zhang, Yadan
2018-04-25
Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
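The mean impact value screening step can be sketched as follows: each input is perturbed by ±10%, and the average change in model output ranks the inputs. The toy linear "model" and data below are placeholders for the trained BP neural network, chosen so the expected ranking is obvious.

```python
import numpy as np

def mean_impact_value(model, X, delta=0.1):
    """MIV feature screening: perturb each input column by +/-delta (relative)
    and average the difference in model output; larger |MIV| = more influential."""
    miv = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1.0 + delta
        down[:, j] *= 1.0 - delta
        miv[j] = np.mean(model(up) - model(down))
    return miv

# Toy stand-in for the trained blood pressure network: feature 0 (say, pulse
# transit time) dominates, feature 2 is irrelevant by construction.
rng = np.random.default_rng(3)
X = rng.uniform(0.5, 1.5, size=(200, 3))
model = lambda X: 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]
miv = mean_impact_value(model, X)
ranking = np.argsort(-np.abs(miv))   # most to least influential input
```

Only the inputs ranked highest by |MIV| would then be fed into the GA-optimized network, which is the dimensionality reduction role MIV plays in the abstract.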
Coupling SPH and thermochemical models of planets: Methodology and example of a Mars-sized body
NASA Astrophysics Data System (ADS)
Golabek, G. J.; Emsenhuber, A.; Jutzi, M.; Asphaug, E. I.; Gerya, T. V.
2018-02-01
Giant impacts have been suggested to explain various characteristics of terrestrial planets and their moons. However, so far in most models only the immediate effects of the collisions have been considered, while the long-term interior evolution of the impacted planets was not studied. Here we present a new approach, combining 3-D shock physics collision calculations with 3-D thermochemical interior evolution models. We apply the combined methods to a demonstration example of a giant impact on a Mars-sized body, using typical collisional parameters from previous studies. While the material parameters (equation of state, rheology model) used in the impact simulations can have some effect on the long-term evolution, we find that the impact angle is the most crucial parameter for the resulting spatial distribution of the newly formed crust. The results indicate that a dichotomous crustal pattern can form after a head-on collision, while this is not the case when considering a more likely grazing collision. Our results underline that end-to-end 3-D calculations of the entire process are required to study in the future the effects of large-scale impacts on the evolution of planetary interiors.
A numerical calculation method of environmental impacts for the deep sea mining industry - a review.
Ma, Wenbin; van Rhee, Cees; Schott, Dingena
2018-03-01
Given the gradual decrease of mineral resources on land, deep sea mining (DSM) is becoming an urgent and important emerging activity worldwide. However, until now there has been no commercial-scale DSM project in progress. Together with technological feasibility and economic profitability, environmental impact is one of the major factors hindering its industrialization. Most DSM environmental impact research focuses on only one particular aspect, ignoring that all the DSM environmental impacts are related to each other. The objective of this work is to propose a framework for the numerical calculation methods of the integrated DSM environmental impacts through a literature review. This paper covers three parts: (i) definition and description of the importance of the different DSM environmental impacts; (ii) description of the existing numerical calculation methods for the different environmental impacts; (iii) selection of a numerical calculation method based on the selected criteria. The research conducted in this paper provides a clear numerical calculation framework for DSM environmental impacts and could help to speed up the industrialization process of the DSM industry.
Impact-parameter dependence of the energy loss of fast molecular clusters in hydrogen
NASA Astrophysics Data System (ADS)
Fadanelli, R. C.; Grande, P. L.; Schiwietz, G.
2008-03-01
The electronic energy loss of molecular clusters as a function of impact parameter is far less understood than atomic energy losses. For instance, there are no analytical expressions for the energy loss as a function of impact parameter for cluster ions. In this work, we describe two procedures to evaluate the combined energy loss of molecules: ab initio calculations within the semiclassical approximation and the coupled-channels method using atomic orbitals; and simplified models for the electronic cluster energy loss as a function of the impact parameter, namely the molecular perturbative convolution approximation (MPCA, an extension of the corresponding atomic model PCA) and the molecular unitary convolution approximation (MUCA, a molecular extension of the previous unitary convolution approximation UCA). In this work, an improved ansatz for MPCA is proposed, extending its validity to very compact clusters. For the simplified models, the physical inputs are the oscillator strengths of the target atoms and the target-electron density. The results from these models applied to an atomic hydrogen target yield remarkable agreement with their corresponding ab initio counterparts for different angles between the cluster axis and the velocity direction, at specific energies of 150 and 300 keV/u.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if the parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach could be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
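State updating by particle filtering can be illustrated with a minimal bootstrap (SIR) filter on a toy linear-Gaussian system; the paper's lagged, multi-parametric variant is more involved, and all dynamics and noise levels below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy scalar "storage" dynamics with forcing, observed with noise
a, q_sd, r_sd = 0.9, 0.3, 0.5            # dynamics, process noise, obs noise
n_steps, n_part = 60, 1000
truth = np.zeros(n_steps)
obs = np.zeros(n_steps)
for t in range(1, n_steps):
    truth[t] = a * truth[t - 1] + 1.0 + rng.normal(0.0, q_sd)
    obs[t] = truth[t] + rng.normal(0.0, r_sd)

# Bootstrap particle filter: propagate, weight by observation likelihood,
# resample (multinomial) to avoid weight degeneracy
particles = rng.normal(0.0, 1.0, n_part)
estimates = np.zeros(n_steps)
for t in range(1, n_steps):
    particles = a * particles + 1.0 + rng.normal(0.0, q_sd, n_part)
    logw = -0.5 * ((obs[t] - particles) / r_sd) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]
    estimates[t] = particles.mean()      # filtered state estimate
```

The filtered estimate should track the truth more closely than the raw noisy observations, which is the gain state updating is meant to deliver.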
Parameters sensitivity on mooring loads of ship-shaped FPSOs
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Saidee
2017-12-01
The work in this paper is focused on the assessment and evaluation of the mooring system of a ship-shaped FPSO unit. In particular, the purpose of the study is to find the impact of variations in different parameters on mooring loads using the MIMOSA software. First, a base case was designed for an intact mooring system in a typical ultimate limit state (ULS) condition, and then the sensitivity of mooring loads to parameters such as turret location, analysis method (quasi-static vs. dynamic), low-frequency damping level in surge, pretension, and drag coefficients of the chain and steel wire was examined. It is found that mooring loads change with these parameters. In particular, pretension has a large impact on the maximum tension of the mooring lines, and low-frequency damping can change the surge offset significantly.
Light airplane crash tests at impact velocities of 13 and 27 m/sec
NASA Technical Reports Server (NTRS)
Alfaro-Bou, E.; Vaughan, V. L., Jr.
1977-01-01
Two similar general aviation airplanes were crash tested at the Langley impact dynamics research facility at velocities of 13 and 27 m/sec. Other flight parameters were held constant. The facility, instrumentation, test specimens, and test method are briefly described. Structural damage and accelerometer data are discussed.
USDA-ARS?s Scientific Manuscript database
Photosynthetic potential in C3 plants is largely limited by CO2 diffusion through stomata (Ls) and mesophyll (Lm) and photo-biochemical (Lb) processes. Accurate estimation of mesophyll conductance (gm) using gas exchange (GE) and chlorophyll fluorescence (CF) parameters of the photosynthetic proces...
Investigation for Molecular Attraction Impact Between Contacting Surfaces in Micro-Gears
NASA Astrophysics Data System (ADS)
Yang, Ping; Li, Xialong; Zhao, Yanfang; Yang, Haiying; Wang, Shuting; Yang, Jianming
2013-10-01
The aim of this research work is to provide a systematic method for assessing the impact of molecular attraction between contacting surfaces in a micro-gear train. The method is established by integrating involute profile analysis and molecular dynamics simulation. A mathematical computation of the micro-gear involute is presented based on geometrical properties, a Taylor expansion, and the Hamaker assumption. In addition, the Morse potential function and a cut-off radius are introduced into the molecular dynamics simulation. A hybrid computational method for the van der Waals force between the contacting faces in a micro-gear train is thus developed. An example is presented to show the performance of this method. The results show that the van der Waals force in a micro-gear train varies nonlinearly as parameters such as the gear module and tooth number change. The procedure suggests that the van der Waals force can be controlled by adjusting the manufacturing parameters during gear train design.
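To make the simulation ingredients concrete, here is a minimal sketch of a Morse pair potential with a hard cut-off radius, with the radial force obtained by numerical differentiation. The well depth, width, equilibrium distance, and cut-off values are illustrative assumptions, not the parameters used in the paper.

```python
import math

def morse_energy(r, d_e=0.8, a=1.6, r_e=2.5, r_cut=8.0):
    """Morse pair potential with a hard cut-off radius (values illustrative)."""
    if r >= r_cut:
        return 0.0
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2 - d_e

def morse_force(r, d_e=0.8, a=1.6, r_e=2.5, r_cut=8.0, h=1e-6):
    """Radial force as the negative centered-difference derivative of the potential."""
    if r >= r_cut:
        return 0.0
    return -(morse_energy(r + h, d_e, a, r_e, r_cut)
             - morse_energy(r - h, d_e, a, r_e, r_cut)) / (2.0 * h)
```

The force vanishes at the equilibrium distance, is repulsive (positive) at shorter separations, and is zero beyond the cut-off, which is the qualitative behavior exploited in cut-off-radius molecular dynamics.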
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in identifying the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
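The Morris screening used above can be sketched in a few lines. This hedged implementation estimates the mean absolute elementary effect (mu*) of each input of a model scaled to the unit hypercube; the trajectory count, base grid, and step size are illustrative choices, not the APSIM study settings.

```python
import random

def morris_mu_star(f, k, n_traj=30, delta=0.5, seed=1):
    """Morris screening: mean absolute elementary effect (mu*) of each
    of k inputs on f, with inputs scaled to the unit hypercube."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        # Random base point on a coarse grid chosen so x + delta <= 1.
        x = [rng.choice([0.0, 0.25, 0.5]) for _ in range(k)]
        fx = f(x)
        order = list(range(k))
        rng.shuffle(order)
        for i in order:  # perturb one factor at a time along a trajectory
            x_new = list(x)
            x_new[i] = x[i] + delta
            f_new = f(x_new)
            effects[i].append(abs(f_new - fx) / delta)
            x, fx = x_new, f_new
    return [sum(e) / len(e) for e in effects]
```

For a linear test model the mu* values recover the coefficient magnitudes exactly, which is why Morris is an economical first screen before more expensive variance-based methods.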
A model for explaining fusion suppression using classical trajectory method
NASA Astrophysics Data System (ADS)
Phookan, C. K.; Kalita, K.
2015-01-01
We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter, the fraction of projectiles undergoing breakup is determined using the classical trajectory method in two dimensions. For obtaining the initial conditions of the equations of motion, a simplified model of the 6Li nucleus is proposed. We introduce a simple formula to explain fusion suppression and find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the above formula for fusion suppression is also proposed for a three-dimensional model.
A comparative study of electrochemical machining process parameters by using GA and Taguchi method
NASA Astrophysics Data System (ADS)
Soni, S. K.; Thomas, B.
2017-11-01
In electrochemical machining, the quality of the machined surface depends strongly on the selection of optimal parameter settings. This work applies the Taguchi method and a genetic algorithm in MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for the drilling of LM6 Al/B4C composites, examining the impact of machining process parameters such as electrolyte concentration (g/l), machining voltage (V), and frequency (Hz) on the response parameters (surface roughness, material removal rate, and overcut). A Taguchi L27 orthogonal array was chosen in the Minitab 17 software for the investigation of the experimental results, and multiobjective optimization was performed with a genetic algorithm implemented in MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, comparative results are presented.
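For readers unfamiliar with the genetic-algorithm side, the following is a minimal, hedged sketch of a real-coded GA with elitism, tournament selection, arithmetic crossover, and reset mutation. The response function, bounds, and operator settings are illustrative assumptions, not the MATLAB setup used in the study.

```python
import random

def ga_optimize(response, bounds, pop_size=30, generations=60, seed=7):
    """Minimal real-coded GA: elitism, tournament selection,
    arithmetic crossover, and uniform-reset mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=response, reverse=True)
        new_pop = pop[:2]  # keep the two best individuals (elitism)
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=response)  # tournament
            p2 = max(rng.sample(pop, 3), key=response)
            w = rng.random()
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            if rng.random() < 0.2:  # mutation: reset one gene
                i = rng.randrange(len(bounds))
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=response)
```

On a smooth toy response such as -(x-2)^2 - (y-3)^2 over [0, 5]^2, the population converges toward the optimum; in practice the response would be a regression model fitted to the L27 experimental results.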
Analysis and optimization of machining parameters of laser cutting for polypropylene composite
NASA Astrophysics Data System (ADS)
Deepa, A.; Padmanabhan, K.; Kuppan, P.
2017-11-01
The present work describes the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural test results as responses. The Taguchi method is used for the experimental design. Grey relational analysis (GRA) is used for multiple process parameter optimization, and ANOVA (analysis of variance) is used to assess the impact of each process parameter. Polypropylene has broad applications in various fields: as foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and in hernia and pelvic organ repair to prevent new hernias at the same site.
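The grey relational analysis step can be sketched as follows: normalize each response (larger-the-better or smaller-the-better), compute grey relational coefficients against the ideal sequence, and average them into a grade per experiment. The distinguishing coefficient zeta = 0.5 is the conventional default; the function name and the toy data in the test are illustrative, not the paper's measurements.

```python
def grey_relational_grades(data, larger_better, zeta=0.5):
    """Grey relational grade of each experiment (row) against the ideal
    alternative; columns are responses, normalized so that 1 is best."""
    cols = list(zip(*data))
    norm_cols = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        norm_cols.append([(v - lo) / (hi - lo) if lb else (hi - v) / (hi - lo)
                          for v in col])
    grades = []
    for row in zip(*norm_cols):
        # Deviation from the ideal (all-ones) sequence; after normalization
        # the global minimum/maximum deviations are 0 and 1, so the grey
        # relational coefficient simplifies to zeta / (deviation + zeta).
        coeffs = [zeta / (abs(1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

The experiment with the highest grade is the best compromise across all responses, which is how GRA turns a multi-response Taguchi design into a single ranking.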
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
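As a hedged illustration of the sampling machinery underlying DRAM/DREAM-style calibration, here is a plain random-walk Metropolis sampler for a one-dimensional log-posterior, without the delayed-rejection or adaptive steps of the actual algorithms; the step size and sample count are illustrative choices.

```python
import math
import random

def metropolis(log_post, x0, n_samples=8000, step=0.8, seed=3):
    """Random-walk Metropolis sampler for a one-dimensional log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

Verification in the spirit of the dissertation would compare the chain's empirical density against a direct evaluation of Bayes' formula; for a standard normal target the post-burn-in mean and variance should approach 0 and 1.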
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters of other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of a fault detection and isolation (FDI) algorithm associated with an aircraft longitudinal flight-control system.
Tasato, Hiroshi; Kida, Noriyuki
2018-01-01
[Purpose] The purpose of this study was to investigate a measurement method and parameters for simply evaluating the condition of the knee, which is necessary for preventing locomotive syndrome as advocated by the Japan Orthopedic Association. [Subjects and Methods] Acceleration sensors were attached over the lateral condyles of the subjects' tibiae, and acceleration and load were measured while walking on flat ground and on stairs; the difference between the impulses of impact force (acceleration × load) of the two knees was defined as a simple evaluation parameter. [Results] The simple evaluation parameter was not correlated with age during walking on flat ground. During stair walking, it was almost flat from age 20 to 40 years, while after age 49 years a correlation with age could be confirmed based on a quadratic curve approximation (R² = 0.99). [Conclusion] The simple evaluation parameter during stair walking was highly correlated with age, suggesting it can contribute to preventing locomotive syndrome. In the future, we plan to improve reliability by increasing the data and to establish it as a simple evaluation parameter that can be used for preventing locomotive syndrome in elderly people and those with Kellgren-Lawrence (KL) classification grades 0–1. PMID:29706699
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependence on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, sole influential contribution) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing the dependence of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact.
The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
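The variance-based sensitivity ranking described above can be sketched in a few lines. This is a hedged pick-freeze Monte Carlo estimator of first-order indices (a Saltelli-style estimator) for a model on the unit hypercube; the sample size and the linear toy model in the test are illustrative, not the treatment-planning implementation.

```python
import random

def sobol_first_order(f, k, n=20000, seed=5):
    """Pick-freeze Monte Carlo estimate of first-order variance-based
    sensitivity indices for f on the unit hypercube."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [f(a) for a in A]
    yB = [f(b) for b in B]
    ys = yA + yB
    mean = sum(ys) / (2 * n)
    var = sum((y - mean) ** 2 for y in ys) / (2 * n)
    S = []
    for i in range(k):
        # Evaluate f on A with its i-th column taken from B ("pick-freeze").
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya)
                     for yb, yab, ya in zip(yB, yABi, yA)) / (n * var))
    return S
```

An index near 1 marks the parameter whose uncertainty dominates the output variance, i.e. the one for which an uncertainty reduction is most rewarding.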
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results show that (i) the Soil Conservation Service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have obvious effects, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand from lower soil layers.
These findings can help enhance the hydrological model's capability for simulating/predicting water resources.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship and the drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures that the study catchments explore a wide range of recession behaviors and wetness states, which is ideal for a sensitivity analysis.
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. In light of these results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves and to minimize bias in the related populations of fitted recession parameters.
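A minimal, hedged sketch of the event-scale fitting task discussed above: estimating the power-law recession parameters in -dQ/dt = a * Q^b by ordinary least squares in log-log space, using centered differences on a streamflow series. The function name, differencing scheme, and fitting choice are illustrative assumptions — indeed, the paper's point is that exactly such choices matter.

```python
import math

def fit_recession(q, dt=1.0):
    """Least-squares fit of the power-law recession -dQ/dt = a * Q^b
    in log-log space, using centered differences on a streamflow series."""
    xs, ys = [], []
    for i in range(1, len(q) - 1):
        dqdt = (q[i + 1] - q[i - 1]) / (2.0 * dt)
        if dqdt < 0.0 and q[i] > 0.0:  # keep only receding points
            xs.append(math.log(q[i]))
            ys.append(math.log(-dqdt))
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope in log-log space is the exponent b
    a = math.exp(ybar - b * xbar)
    return a, b
```

On a synthetic recession Q(t) = Q0 / (1 + a*Q0*t), which solves dQ/dt = -a*Q^2 exactly, the fit recovers b near 2 and the prescribed a, which is a useful sanity check before applying any such method to noisy daily data.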
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan
2004-01-01
The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.
Chen, Y.; Viadero, R.C.; Wei, X.; Fortney, Ronald H.; Hedrick, Lara B.; Welsh, S.A.; Anderson, James T.; Lin, L.-S.
2009-01-01
Refining best management practices (BMPs) for future highway construction depends on a comprehensive understanding of the environmental impacts of current construction methods. Based on a before-after-control-impact (BACI) experimental design, long-term stream monitoring (1997-2006) was conducted at upstream (control, n = 3) and downstream (impact, n = 6) sites in the Lost River watershed of the Mid-Atlantic Highlands region, West Virginia. Monitoring data were analyzed to assess impacts during and after highway construction on 15 water quality parameters and on macroinvertebrate condition using the West Virginia stream condition index (WVSCI). Principal components analysis (PCA) identified the primary regional water quality variances, and paired t-tests and time series analysis detected seven construction-impacted water quality parameters, which were mainly associated with the second principal component. In particular, impacts on turbidity, total suspended solids, and total iron during construction, impacts on chloride and sulfate during and after construction, and impacts on acidity and nitrate after construction were observed at the downstream sites. The construction had statistically significant impacts on macroinvertebrate index scores (i.e., WVSCI) after construction, but did not change the overall good biological condition. Implementing BMPs that address those construction-impacted water quality parameters can be an effective mitigation strategy for future highway construction in this highlands region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
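As a hedged sketch of the PCA step used above, the following extracts the leading principal component of a small multi-parameter data set via power iteration on the correlation matrix (columns standardized, as is usual for water quality variables with mixed units). The function name and the toy data in the test are illustrative, not the monitoring data set.

```python
import math

def leading_pc(data, n_iter=300):
    """First principal component (loading vector) of column-standardized
    data, via power iteration on the correlation matrix."""
    n, p = len(data), len(data[0])
    cols = list(zip(*data))
    means = [sum(c) / n for c in cols]
    sds = [math.sqrt(sum((v - m) ** 2 for v in c) / (n - 1))
           for c, m in zip(cols, means)]
    # Standardize columns to zero mean and unit variance.
    X = [[(row[j] - means[j]) / sds[j] for j in range(p)] for row in data]
    # Correlation matrix of the standardized columns.
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(n_iter):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

With two strongly correlated variables and one weakly related variable, the leading loadings concentrate on the correlated pair — the same way PCA groups co-varying water quality parameters onto a shared component.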
USDA-ARS?s Scientific Manuscript database
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method to screen out sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of the L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result with a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using more effective optimization methods. However, whether the optimal parameter values still hold when the model simulation conditions vary remains a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary data sets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict occupant injury parameters corresponding to different experimental parameters, and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work, based on a finite set of experimental observations obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated, and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which unites the individual strengths of the back propagation artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model is well suited to studies of occupant impact injuries and allows multiple-parameter forecasts of occupant impact injury given assumed values of the various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting-accuracy evaluation methods. All of the results showed good accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were within ±6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors were within ±5.2 percent and the ARE values did not exceed 3.1 percent. When the occupant faced another occupant, the relative errors were within ±6.3 percent and the ARE values did not exceed 3.8 percent.
The injury forecast model established in this article reduces the number of repeated experiments and improves the design efficiency of the compartment's internal structure parameters, and it provides a new way of assessing the safety performance of interior structural parameters in existing, and newly designed, railway vehicle compartments.
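The GA half of a GA-BP hybrid can be sketched as below; the back-propagation fine-tuning stage that would normally refine the best individual is omitted, and the toy fitness function, population size, and genetic operators are assumptions, not the paper's actual configuration.

```python
import random

def ga_optimize(fitness, dim, pop_size=40, gens=60, lo=-5.0, hi=5.0, seed=1):
    """Genetic-algorithm weight search, the 'GA' half of a GA-BP hybrid:
    in the full method the best individual would seed back-propagation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            if rng.random() < 0.3:                        # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy 'injury parameter' regression: fit y = w0 + w1*x to synthetic data.
data = [(x / 10.0, 1.0 + 2.0 * (x / 10.0)) for x in range(21)]

def mse(w):
    return sum((w[0] + w[1] * x - y) ** 2 for x, y in data) / len(data)

best = ga_optimize(mse, dim=2)
```

The GA's global search avoids the poor local minima that plain back-propagation can fall into from a bad random initialization; that is the rationale for the hybrid.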
NASA Astrophysics Data System (ADS)
Shah, Zahir; Islam, Saeed; Gul, Taza; Bonyah, Ebenezer; Altaf Khan, Muhammad
2018-06-01
The current research aims to examine the combined effect of magnetic and electric fields on a micropolar nanofluid between two parallel plates in a rotating system. The nanofluid flow between the two parallel plates is taken under the influence of the Hall current, and the flow of the micropolar nanofluid is assumed to be steady. The governing equations have been reduced to a set of coupled nonlinear differential equations using suitable similarity variables. An optimal approach has been used to obtain the solution of the modelled problems, and the convergence of the method has been shown numerically. The impact of skin friction on the velocity profile, the Nusselt number on the temperature profile, and the Sherwood number on the concentration profile have been studied. The influences of Hall currents, rotation, Brownian motion, and thermophoresis on the micropolar nanofluid are the main focus of this work. Moreover, to illustrate the physical behavior of the embedded parameters, that is, the coupling parameter N1, viscosity parameter Re, spin gradient viscosity parameter N2, rotation parameter Kr, micropolar fluid constant N3, magnetic parameter M, Prandtl number Pr, thermophoretic parameter Nt, Brownian motion parameter Nb, and Schmidt number Sc, results have been plotted and discussed graphically.
NASA Astrophysics Data System (ADS)
Gandolfi, S.; Poluzzi, L.; Tavasci, L.
2012-12-01
Precise Point Positioning (PPP) is one of the possible approaches to GNSS data processing. This technique is faster and more flexible than those based on a differenced approach and constitutes a reliable method for accurate positioning of remote GNSS stations, even in remote areas such as Antarctica. Until a few years ago, one of the major limits of the method was the impossibility of resolving ambiguities as integers, but nowadays several methods are available to address this. The first software package permitting a PPP solution was GIPSY-OASIS, developed and maintained by JPL (NASA). JPL also produces orbits and files ready to be used with GIPSY, and recently, using these products, it became possible to resolve ambiguities, improving the stability of solutions. PPP estimates positions in the reference frame of the orbits (IGS); when coordinates in other reference frames, such as ITRF, are needed, a transformation must be applied. Among its products, JPL offers, for each day, a global 7-parameter transformation that locates the survey in the ITRF reference frame. In some cases it is also possible to set up a customized process and obtain analogous parameters using a local/regional network of stations whose coordinates are available in the desired reference frame. In this work, accuracy tests were carried out comparing different PPP solutions obtained with the same software package (GIPSY) while varying the ambiguity resolution and the global versus regional transformation parameters. Two test areas were considered, the first located in Antarctica and the second in Italy. The aim of the work is to evaluate the impact of ambiguity resolution and of the use of local/regional transformation parameters on the final solutions. Tests showed that ambiguity resolution improves the precision, especially in the East component, with a scatter reduction of about 8%.
The use of global transformation parameters improves the accuracy by about 59%, 63%, and 29% in the three components (N, E, U), while further tests showed that the accuracy can be improved by 67%, 71%, and 53% using regional transformation parameters. An example of the impact of global versus regional parameter transformations on a GPS time series is given.
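A 7-parameter (Helmert) similarity transformation of the kind JPL distributes daily can be sketched as follows, using the common small-angle formulation. Sign conventions for the rotations differ between providers, and the parameter values in the docstring are invented, so treat this as illustrative only.

```python
def helmert_transform(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Apply a 7-parameter (Helmert) similarity transformation.

    Small-angle approximation: rotations rx, ry, rz in radians,
    translations tx, ty, tz in metres, scale in parts per million.
    This is the kind of daily global transformation used to map PPP
    solutions from the orbit frame into ITRF (values here are made up;
    one sign convention of several is assumed).
    """
    x, y, z = xyz
    s = 1.0 + scale_ppm * 1e-6
    xt = tx + s * (x + rz * y - ry * z)
    yt = ty + s * (-rz * x + y + rx * z)
    zt = tz + s * (ry * x - rx * y + z)
    return (xt, yt, zt)
```

With all seven parameters zero the transformation is the identity; a regional set of parameters would simply replace the global values in the same formula.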
Astaraie-Imani, Maryam; Kapelan, Zoran; Fu, Guangtao; Butler, David
2012-12-15
Climate change and urbanisation are key factors affecting the future of water quality and quantity in urbanised catchments and are associated with significant uncertainty. The work reported in this paper is an evaluation of the combined and relative impacts of climate change and urbanisation on the receiving water quality in the context of an Integrated Urban Wastewater System (IUWS) in the UK. The impacts of intervening system operational control parameters are also investigated. Impact is determined by a detailed modelling study using both local and global sensitivity analysis methods together with correlation analysis. The results obtained from the case-study analysed clearly demonstrate that climate change combined with increasing urbanisation is likely to lead to worsening river water quality in terms of both frequency and magnitude of breaching threshold dissolved oxygen and ammonium concentrations. The results obtained also reveal the key climate change and urbanisation parameters that have the largest negative impact as well as the most responsive IUWS operational control parameters including major dependencies between all these parameters. This information can be further utilised to adapt future IUWS operation and/or design which, in turn, should make these systems more resilient to future climate and urbanisation changes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained in under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles the analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
The Impact of Uncertain Physical Parameters on HVAC Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai
HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include floor size, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. A significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.
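Quasi-Monte Carlo sampling of parameter distributions is typically done with a low-discrepancy sequence. A minimal Halton-sequence sketch follows; the paper does not state which sequence it uses, so this choice is an assumption.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_samples(n, bases=(2, 3)):
    """First n points of a 2-D Halton low-discrepancy sequence in [0,1)^2.

    Each coordinate can then be mapped through the inverse CDF of a
    physical parameter's distribution (e.g. floor size, insulation
    efficiency) to obtain a quasi-random population of HVAC units.
    """
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]
```

Compared with plain Monte Carlo, the low-discrepancy points cover the parameter space more evenly, which is why far fewer model runs are needed for a stable sensitivity estimate.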
The discrete adjoint method for parameter identification in multibody system dynamics.
Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin
2018-01-01
The adjoint method is an elegant approach for computing the gradient of a cost function in order to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are then used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, in which the adjoint differential equations are replaced by algebraic equations. A finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
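The idea of differentiating the time-stepping scheme itself, rather than discretizing the continuous adjoint ODE, can be illustrated on a toy problem. The model below (explicit Euler for dx/dt = -p·x) and the terminal cost are hypothetical stand-ins, not the multibody systems of the paper; the point is that the adjoint recursion is built from the Euler step, so the gradient matches finite differences of the discretized cost to rounding error.

```python
def forward(p, x0=1.0, h=0.01, n=100):
    """Explicit Euler for dx/dt = -p*x (toy stand-in for equations of motion)."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] * (1.0 - h * p))
    return xs

def cost(p, x_obs=0.5):
    """Discretized cost: squared mismatch at the final time step."""
    return (forward(p)[-1] - x_obs) ** 2

def adjoint_gradient(p, x_obs=0.5, h=0.01, n=100):
    """Exact gradient of the *discretized* cost via the discrete adjoint.

    The backward recursion lam *= (1 - h*p) is the transpose of the
    Euler step's state Jacobian, so no separate adjoint ODE is solved.
    """
    xs = forward(p, h=h, n=n)
    lam = 2.0 * (xs[-1] - x_obs)            # terminal adjoint value
    grad = 0.0
    for k in range(n - 1, -1, -1):
        grad += lam * (-h * xs[k])          # d(step k)/dp contribution
        lam *= (1.0 - h * p)                # adjoint step backwards
    return grad
```

Because the adjoint recursion is derived from the integrator rather than from the continuous adjoint equation, the gradient is exact for the discretized problem, which is the property the abstract emphasizes.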
NASA Astrophysics Data System (ADS)
Duan, B.; Bari, M. A.; Wu, Z. Q.; Jun, Y.; Li, Y. M.; Wang, J. G.
2012-11-01
Aims: We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from N IV to Ne VII. Methods: In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results: A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.
Asteroid (21) Lutetia: Semi-Automatic Impact Craters Detection and Classification
NASA Astrophysics Data System (ADS)
Jenerowicz, M.; Banaszkiewicz, M.
2018-05-01
The need to develop an automated method, independent of lighting and surface conditions, for the identification and measurement of impact craters, as well as the creation of a reliable and efficient tool, motivated our studies. This paper presents a methodology for the detection of impact craters based on their spectral and spatial features. The analysis aims at evaluating the algorithm's capability to determine the spatial parameters of impact craters present in a time series. In this way, time-consuming visual interpretation of images would be reduced to special cases. The developed algorithm is tested on a set of OSIRIS high-resolution images of the asteroid Lutetia's surface, which is characterized by varied landforms and an abundance of craters created by collisions with smaller bodies of the solar system. The proposed methodology consists of three main steps: characterisation of objects of interest on a limited set of data; semi-automatic extraction of impact craters performed on the total set of data by applying Mathematical Morphology image processing (Serra, 1988; Soille, 2003); and finally, creating libraries of spatial and spectral parameters for the extracted impact craters, i.e., the coordinates of the crater center, semi-major and semi-minor axes, shadow length, and cross-section. The overall accuracy of the proposed method is 98%, the Kappa coefficient is 0.84, the correlation coefficient is ∼0.80, the omission error is 24.11%, and the commission error is 3.45%. The obtained results show that methods based on Mathematical Morphology operators are effective even with a limited number of data and low-contrast images.
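The Mathematical Morphology operators at the core of the extraction step can be sketched on a binary image. The 3×3 structuring element and the opening operation below are generic illustrations (zero padding at the border is assumed), not the paper's actual pipeline.

```python
def erode(img, size=1):
    """Binary erosion with a (2*size+1)^2 square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj]
                     for di in range(-size, size + 1)
                     for dj in range(-size, size + 1)))
             for j in range(w)] for i in range(h)]

def dilate(img, size=1):
    """Binary dilation with the same structuring element."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj]
                     for di in range(-size, size + 1)
                     for dj in range(-size, size + 1)))
             for j in range(w)] for i in range(h)]

def opening(img, size=1):
    """Morphological opening: erosion then dilation. Bright or shadowed
    blobs smaller than the structuring element (noise) are removed,
    while larger crater-scale features survive."""
    return dilate(erode(img, size), size)
```

On a thresholded crater-shadow mask, opening suppresses single-pixel noise so that the surviving connected regions can be measured (center, axes, shadow length).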
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
Influence of heating experiments on parameters of Schumann resonances
NASA Astrophysics Data System (ADS)
Agranat, Irina; Sivokon, Vladimir
2017-10-01
Recently, a significant increase in the intensity of research on active modification of geophysical processes in various environments has been noted. Special attention is paid to studies of the impact on the ionosphere of high-power short-wave radio emission from heating facilities. Today, experiments on modifying the ionosphere are conducted mainly at the HAARP facility and at the EISCAT facility in Tromsø (Norway). Within the Russian campaign (Tomsk) EISCAT/Heating (AARI_HFOX), carried out from October 19 to October 30, 2016, experiments were performed on heating the ionospheric F-layer with high-power HF radiation. To assess the impact of these experiments on geophysical processes, changes in the parameters of the Schumann resonances were analyzed by mathematical methods, based on data from the station of continuous observation of Schumann resonances at Tomsk State University, Tomsk (Russia).
Young addicted men hormone profile detection
NASA Astrophysics Data System (ADS)
Zieliński, Paweł; Wasiewicz, Piotr; Leszczyńska, Bożena; Gromadzka-Ostrowska, Joanna
2010-09-01
Hormone parameters were determined in the serum of young addicted men in order to compare them with those obtained from a group of healthy subjects. Three groups were investigated, named the opiates, mixed, and control groups. Statistical and data mining methods were applied to identify significant differences; the R package was used for all computations. The determination of hormone parameters provides important information on the impact of addiction.
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
Impact of dynamic distribution of floc particles on flocculation effect.
Nan, Jun; He, Weipeng; Song, Xinin; Li, Guibai
2009-01-01
Polyaluminum chloride (PAC) was used as coagulant with suspended kaolin particles in water. Online instruments, including a turbidimeter and a particle counter, were used to monitor the flocculation process. An evaluation model demonstrating the impact on flocculation effect was established based on the multiple linear regression analysis method. The parameter of the index weight of channels quantitatively described how variations of the floc particle population in different size ranges cause the decrease in turbidity. The study showed that floc particles in different size ranges contributed differently to the decrease in turbidity and that the index weight of a channel effectively indicates the degree of impact of the dynamic distribution of floc particles on flocculation effect. The parameter may therefore significantly benefit the development of coagulation and sedimentation techniques as well as optimal coagulant selection.
Impact of induced magnetic field on synovial fluid with peristaltic flow in an asymmetric channel
NASA Astrophysics Data System (ADS)
Afsar Khan, Ambreen; Farooq, Arfa; Vafai, Kambiz
2018-01-01
In this paper, we investigate the impact of an induced magnetic field on the peristaltic motion of a non-Newtonian, incompressible synovial fluid in an asymmetric channel. We solve the problem for two models: Model 1, which behaves as a shear-thinning fluid, and Model 2, which behaves as a shear-thickening fluid. The problem is solved using the modified Adomian decomposition method. The two models are seen to behave in quite opposite ways for some parameters. The impact of various parameters on u, dp/dx, Δp, and the induced magnetic field bx has been studied graphically. A significant finding of this study is that the size of the trapped bolus and the pressure gradient increase with increasing M for both models.
Determination of effective thoracic mass.
DOT National Transportation Integrated Search
1996-02-01
Effective thoracic mass is a critical parameter in specifying mathematical and mechanical models (such as crash dummies) of humans exposed to impact conditions. A method is developed using a numerical optimizer to determine effective thoracic mass (a...
NASA Astrophysics Data System (ADS)
Slimani, Y.; Hannachi, E.; Azzouz, F. Ben; Salem, M. Ben
2018-06-01
We report the influence of planetary high-energy ball milling parameters on the morphology, microstructure, and flux pinning capability of polycrystalline Y3Ba5Cu8Oy. Samples were prepared through the standard solid-state reaction using two different milling methods: ball milling in a planetary crusher and hand grinding in a mortar. Phase analysis by the X-ray diffraction (XRD) method, microstructural examination by scanning electron microscopy (SEM), and measurements of electrical resistivity and the global and intra-granular critical current densities were performed to characterize the samples. The processing parameters of planetary milling have a considerable impact on the final product properties. SEM observations show the presence of nanoscale entities submerged within the Y3Ba5Cu8Oy crystallites. The results show that the fine-grain microstructure of the Y3Ba5Cu8Oy bulk induced by the ball milling process contributes to critical current density enhancement in magnetic field and promotes an optimized flux pinning ability.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David; Johnson, Kenneth
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.
2017-12-01
Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes such as spall and fragmentation in metals to detection of gas flow through static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to the model parameters that influence how fractures propagate through a material of interest. These parameters control the softening curve that the model relies on to determine fracture within each element of the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method that explores how each parameter influences model fracture and determines the key model parameters with the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.
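First-order global sensitivity indices of the kind FAST estimates can be illustrated with a simple Monte Carlo binning estimator. Note that FAST itself works in the frequency domain, so this is a stand-in for the same quantity S_i = Var(E[Y|X_i]) / Var(Y), and the test function is hypothetical.

```python
import random

def first_order_sensitivity(f, dim, n=20000, bins=20, seed=0):
    """Estimate first-order sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y)
    by binning plain Monte Carlo samples of uniform [0,1) inputs.

    (FAST estimates the same indices via frequency analysis; the Monte
    Carlo route here is an assumption made for brevity.)
    """
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(n)]
    Y = [f(x) for x in X]
    mean = sum(Y) / n
    var = sum((y - mean) ** 2 for y in Y) / n
    S = []
    for i in range(dim):
        sums, counts = [0.0] * bins, [0] * bins
        for x, y in zip(X, Y):
            b = min(int(x[i] * bins), bins - 1)
            sums[b] += y
            counts[b] += 1
        cm = [s / c for s, c in zip(sums, counts) if c]   # conditional means
        m = sum(cm) / len(cm)
        S.append(sum((v - m) ** 2 for v in cm) / len(cm) / var)
    return S
```

A parameter with a large S_i (here the heavily weighted first input) would correspond to a softening-curve parameter that dominates the fracture response.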
Simulation of uphill/downhill running on a level treadmill using additional horizontal force.
Gimenez, Philippe; Arnal, Pierrick J; Samozino, Pierre; Millet, Guillaume Y; Morin, Jean-Benoit
2014-07-18
Tilting treadmills allow a convenient study of biomechanics during uphill/downhill running, but they are not commonly available, and tilting force-measuring treadmills are even rarer. The aim of the present study was to compare uphill/downhill running on a treadmill (inclination of ±8%) with running on a level treadmill using additional backward or forward pulling forces to simulate the effect of gravity. This comparison specifically focused on the energy cost of running, stride frequency (SF), electromyographic activity (EMG), leg and foot angles at foot strike, and ground impact shock. The main results are that SF, impact shock, and leg and foot angle parameters were very similar and significantly correlated between the two methods, the intercept and slope of the linear regression not differing significantly from zero and unity, respectively. The correlation of oxygen uptake (V̇O2) data between the methods was not significant during uphill running (r=0.42; P>0.05). V̇O2 data were correlated during downhill running (r=0.74; P<0.01), but there was a significant difference between the methods (bias=-2.51 ± 1.94 ml min(-1) kg(-1)). Linear regressions for EMG of vastus lateralis, biceps femoris, gastrocnemius lateralis, soleus, and tibialis anterior were not different from the identity line, but the systematic bias was elevated for this parameter. In conclusion, this method seems appropriate for the study of SF, leg and foot angle, and impact shock parameters, but is less applicable to physiological variables (EMG and energy cost) during uphill/downhill running when using a tilting force-measuring treadmill is not possible. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling for waste management associated with environmental-impact abatement under uncertainty.
Li, P; Li, Y P; Huang, G H; Zhang, J L
2015-04-01
Municipal solid waste (MSW) treatment can generate significant amounts of pollutants and thus pose a risk to human health. Besides, in MSW management, various uncertainties exist in the related costs, impact factors, and objectives, which can affect the optimization process and the decision schemes generated. In this study, a life cycle assessment-based interval-parameter programming (LCA-IPP) method is developed for MSW management associated with environmental-impact abatement under uncertainty. The LCA-IPP can effectively examine environmental consequences based on a number of environmental impact categories (i.e., greenhouse gas equivalent, acid gas emissions, and respiratory inorganics) by analyzing each life cycle stage and/or major contributing process related to various MSW management activities. It can also tackle uncertainties in the related costs, impact factors, and objectives, expressed as interval numbers. The LCA-IPP method is applied to MSW management for the City of Beijing, the capital of China, where energy consumption and six environmental parameters [i.e., CO2, CO, CH4, NOX, SO2, and inhalable particles (PM10)] are used to quantify environmental releases over the entire life cycle of waste collection, transportation, treatment, and disposal. Results associated with system cost, environmental impact, and the related policy implications are generated and analyzed. The results can help identify desirable alternatives for managing MSW flows, which has advantages in providing compromise schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty.
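Propagating interval-valued coefficients, as in interval-parameter programming, rests on interval arithmetic. A minimal sketch with hypothetical unit costs and waste flows (not the paper's data):

```python
class Interval:
    """Closed interval [lo, hi]. Interval-parameter programming propagates
    such numbers through cost and emission coefficients so that the
    objective is itself an interval rather than a single value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, k):
        # Scaling by a crisp (non-interval) nonnegative or negative amount.
        return Interval(self.lo * k, self.hi * k)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Hypothetical example: interval unit costs ($/tonne) for two treatment
# routes, scaled by crisp waste flows (tonnes) and summed.
total_cost = Interval(20, 30) * 100 + Interval(50, 70) * 40
```

The resulting cost interval gives decision makers the optimistic and pessimistic bounds of a waste-allocation scheme without assuming any particular distribution inside the bounds.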
User-customized brain computer interfaces using Bayesian optimization
NASA Astrophysics Data System (ADS)
Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali
2016-04-01
Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
NASA Astrophysics Data System (ADS)
Asgari, Ali; Dehestani, Pouya; Poruraminaie, Iman
2018-02-01
Shot peening is a well-known process for inducing residual stress in the surface of industrial parts. The induced residual stress improves fatigue life. In this study, the effects of shot peening parameters such as shot diameter, shot speed, friction coefficient, and the number of impacts on the induced residual stress are evaluated. To assess these effects, the shot peening process was first simulated by the finite element method. The effects of the process parameters on the residual stress were then evaluated by the response surface method as a statistical approach. Finally, a robust model is presented to predict the maximum residual stress induced by the shot peening process in AISI 4340 steel, and the optimum parameters for the maximum residual stress are obtained. The results indicate that the effect of shot diameter on the induced residual stress increases with increasing shot speed, and that increasing the friction coefficient does not always increase the residual stress.
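The response-surface step can be sketched as a least-squares fit of a second-order polynomial followed by locating its stationary point. The data below are synthetic, generated from an assumed quadratic in shot diameter and shot speed only, so this is a method sketch rather than the paper's AISI 4340 model.

```python
import numpy as np

# Second-order response surface:
# stress ~ b0 + b1*d + b2*v + b3*d*v + b4*d^2 + b5*v^2
def design_matrix(d, v):
    return np.column_stack([np.ones_like(d), d, v, d * v, d ** 2, v ** 2])

# Hypothetical peening "experiments": shot diameter d (mm), shot speed v (m/s),
# generated from a known quadratic so the fit can be checked.
rng = np.random.default_rng(1)
d = rng.uniform(0.3, 1.2, 30)
v = rng.uniform(40.0, 100.0, 30)
beta_true = np.array([500.0, 80.0, 3.0, 0.0, -50.0, -0.02])
stress = design_matrix(d, v) @ beta_true

# Fit the surface by ordinary least squares.
beta, *_ = np.linalg.lstsq(design_matrix(d, v), stress, rcond=None)

# Stationary point of the fitted quadratic: solve grad f = 0.
H = np.array([[2 * beta[4], beta[3]],
              [beta[3], 2 * beta[5]]])
d_opt, v_opt = np.linalg.solve(H, -beta[1:3])
```

With the coefficients above the recovered optimum is near d = 0.8 mm, v = 75 m/s; a real study would add the friction coefficient and impact count as further factors and check the Hessian for a maximum.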
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi
ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.
USDA-ARS?s Scientific Manuscript database
There has been growing concern about the methods used to measure the CO2 photocompensation point, a vital parameter for modeling leaf photosynthesis. The CO2 photocompensation point is often measured as the common intercept of several CO2 response curves, but this method may over-estimate the CO2 photocompe...
Damping torque analysis of VSC-based system utilizing power synchronization control
NASA Astrophysics Data System (ADS)
Fu, Q.; Du, W. J.; Zheng, K. Y.; Wang, H. F.
2017-05-01
Power synchronization control is a new control strategy of VSC-HVDC for connection to a weak power system. Unlike the vector control method, this control method utilizes the internal synchronization mechanism of ac systems, in principle similar to the operation of a synchronous machine. Consequently, the parameters of the controllers in power synchronization control change the electromechanical oscillation modes and have an impact on the transient stability of the power system. This paper presents a mathematical model for small-signal stability analysis of a VSC station using power synchronization control and analyses the impact of the dynamic interactions by calculating the contribution of the damping torque from the power synchronization control; in addition, the controller parameters corresponding to damping torque and synchronous torque in power synchronization control are defined. At the end of the paper, an example power system is presented to demonstrate and validate the theoretical analysis, and associated conclusions are drawn.
Mining method selection by integrated AHP and PROMETHEE method.
Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana
2012-03-01
Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The problem includes five possible mining methods and eleven criteria by which to evaluate them. The criteria were carefully chosen to cover the most important parameters that impact the mining method selection, such as geological and geotechnical properties, economic parameters, and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine the weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to perform a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
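A minimal PROMETHEE II ranking with the usual (strict-dominance) preference function might look as follows; the alternatives, criteria, and AHP-style weights are invented for illustration and do not reproduce the Coka Marin study.

```python
import numpy as np

def promethee_ii(scores, weights, benefit):
    """Net outranking flows (PROMETHEE II) with the usual preference function.

    scores:  (alternatives x criteria) performance table
    weights: criterion weights (e.g., from AHP), summing to 1
    benefit: True where larger is better, False where smaller is better
    """
    X = np.where(benefit, scores, -scores)   # turn cost criteria into benefits
    n = len(X)
    pi = np.zeros((n, n))                    # aggregated preference of a over b
    for j, w in enumerate(weights):
        d = X[:, None, j] - X[None, :, j]
        pi += w * (d > 0)                    # usual criterion: 1 if strictly better
    phi_plus = pi.sum(axis=1) / (n - 1)      # how strongly a outranks the rest
    phi_minus = pi.sum(axis=0) / (n - 1)     # how strongly a is outranked
    return phi_plus - phi_minus              # net flow; rank descending

# Toy example: 3 candidate mining methods, 3 criteria
# (ore recovery %, cost, dilution %); cost and dilution are to be minimized.
scores = np.array([[85.0, 12.0, 8.0],
                   [70.0, 9.0, 15.0],
                   [60.0, 20.0, 20.0]])
weights = np.array([0.5, 0.3, 0.2])
phi = promethee_ii(scores, weights, benefit=np.array([True, False, False]))
ranking = np.argsort(-phi)
```

A sensitivity analysis like the one in the paper amounts to re-running `promethee_ii` while perturbing `weights` and watching whether `ranking` changes.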
An Evaluation of Compressed Work Schedules and Their Impact on Electricity Use
2010-03-01
problems by introducing uncertainty to the known parameters of a given process (Sobol, 1975). The MCS output represents approximate values of the...process within the observed parameters; the output is provided within a statistical distribution of likely outcomes (Sobol, 1975). In this...The Monte Carlo method is appropriate for “any process whose development is affected by random factors” (Sobol, 1975:10). MCS introduces
Traveltime inversion and error analysis for layered anisotropy
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhou, Hua-wei
2011-02-01
While tilted transverse isotropy (TTI) is a good approximation of the velocity structure for many dipping and fractured strata, it is still challenging to estimate anisotropic depth models even when the tilt angle is known. Under the assumption of weak anisotropy, we present a TTI traveltime inversion approach for models consisting of several thickness-varying layers where the anisotropic parameters are constant for each layer. For each model layer the inversion variables consist of the anisotropic parameters ɛ and δ, the tilt angle φ of its symmetry axis, the layer velocity along the symmetry axis, and the thickness variation of the layer. Using this method and synthetic data, we evaluate the effects of errors in some of the model parameters on the inverted values of the other parameters in crosswell and Vertical Seismic Profile (VSP) acquisition geometries. The analyses show that errors in the layer symmetry axes strongly affect the inverted values of the other parameters, especially δ. However, the impact of errors in δ on the inversion of the other parameters is much smaller than the impact on δ of errors in the other parameters. Hence, a practical strategy is first to invert for the most error-tolerant parameter, layer velocity, and then progressively invert for ɛ in crosswell geometry or δ in VSP geometry.
Drop impact into a deep pool: vortex shedding and jet formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agbaglah, G.; Thoraval, M. -J.; Thoroddsen, S. T.
2015-02-01
One of the simplest splashing scenarios results from the impact of a single drop on a deep pool. The traditional understanding of this process is that the impact generates an axisymmetric sheet-like jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected because multiple jets can be generated from a single impact event and there are transitions in the multiplicity of jets as the experimental parameters are varied. Here, we use experiments and numerical simulations of a single drop impacting on a deep pool to examine the transition from impacts that produce a single jet to those that produce two jets. Using high-speed X-ray imaging methods we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet. Using numerical simulations we develop a phase diagram for this transition and show that the capillary number is the most appropriate order parameter for the transition.
Thermal oil recovery method using self-contained wind-electric sets
NASA Astrophysics Data System (ADS)
Belsky, A. A.; Korolyov, I. A.
2018-05-01
The paper reviews challenges associated with the efficiency of thermal methods of stimulation of productive oil strata. The concept of using electrothermal complexes with WEG power supply for this purpose is proposed and justified; their operating principles, main advantages and disadvantages, as well as a schematic solution for implementing the intensification of oil extraction, are considered. A mathematical model for finding the operating characteristics of the WEG is presented and its main energy parameters are determined. The adequacy of the mathematical model is confirmed by laboratory bench tests at nominal parameters.
Temperature Effect on Low-Velocity Impact Resistance of Glass/Epoxy Laminates
NASA Astrophysics Data System (ADS)
Kang, Ki-Weon; Kim, Heung-Seob; Chung, Tae-Jin; Koh, Seung-Kee
This paper aims to evaluate the effect of temperature on the impact damage resistance of glass/epoxy laminates. A series of impact tests was performed using an instrumented impact-testing machine at temperatures ranging from -40°C to +80°C. The resulting impact damage was measured using the back-light method. Impact resistance parameters were employed to understand the damage resistance. It was observed that temperature has little effect on the impact responses of the composite laminates. The damage resistance of glass/epoxy laminates is somewhat deteriorated at the two opposite extremes of the studied temperature range, and this behavior is likely due to property changes of the glass/epoxy laminates under extreme temperatures.
NASA Astrophysics Data System (ADS)
Russo, T. A.; Devineni, N.; Lall, U.
2015-12-01
Lasting success of the Green Revolution in Punjab, India relies on continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.
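The MCMC step can be illustrated with a deliberately simplified one-parameter mass balance (storage change = recharge fraction x precipitation - pumping), sampled with random-walk Metropolis; the forcing data, noise level, and uniform prior are assumptions for the sketch, not the Punjab model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic district-scale balance: dS = alpha * P - Q (all in cm of water).
# alpha (recharge fraction) is the unknown; P and Q are "observed" forcings.
alpha_true, sigma_obs = 0.35, 0.5
P = rng.uniform(20.0, 120.0, 40)        # seasonal precipitation
Q = rng.uniform(5.0, 30.0, 40)          # net pumping
dS_obs = alpha_true * P - Q + rng.normal(0.0, sigma_obs, 40)

def log_post(alpha):
    """Gaussian likelihood with a uniform prior on (0, 1)."""
    if not 0.0 < alpha < 1.0:
        return -np.inf
    r = dS_obs - (alpha * P - Q)
    return -0.5 * np.sum(r ** 2) / sigma_obs ** 2

# Random-walk Metropolis sampling of the posterior for alpha.
chain, alpha = [], 0.5
lp = log_post(alpha)
for _ in range(5000):
    prop = alpha + rng.normal(0.0, 0.01)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        alpha, lp = prop, lp_prop
    chain.append(alpha)
posterior = np.array(chain[1000:])             # discard burn-in
```

The full study solves for distributions over many coupled parameters with spatial pooling across districts; this sketch only shows the sampler mechanics.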
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2001-01-01
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Woo-Pyo; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Applied Physics and Department of Bionanotechnology, Hanyang University, Ansan, Kyunggi-Do 426-791
2015-01-15
The influence of quantum diffraction and shielding on the electron-ion collision process is investigated in two-component semiclassical plasmas. The eikonal method and a micropotential taking into account the quantum diffraction and shielding are used to obtain the eikonal scattering phase shift and the eikonal collision cross section as functions of the collision energy, density parameter, Debye length, electron de Broglie wavelength, and the impact parameter. The result shows that the quantum diffraction and shielding effects suppress the eikonal scattering phase shift as well as the differential eikonal collision cross section, especially in small impact parameter regions. It is also shown that the quantum shielding effect on the eikonal collision cross section is more important at low collision energies. In addition, it is found that the eikonal collision cross section increases with an increase in the density parameter. The variations of the eikonal cross section due to the quantum diffraction and shielding effects are also discussed.
Fuss, Franz Konstantin; Düking, Peter; Weizman, Yehuda
2018-01-01
This paper provides evidence of a sweet spot on the boot/foot, as well as a method for detecting it with a wearable pressure-sensitive device. This study confirmed the hypothesized existence of sweet and dead spots on a soccer boot or foot when kicking a ball. For a stationary curved kick, kicking the ball at the sweet spot maximized the probability of scoring a goal (58-86%), whereas having the impact point at the dead zone minimized the probability (11-22%). The sweet spot was found based on hypothesized favorable parameter ranges (center of pressure in x/y-directions and/or peak impact force) and the dead zone based on hypothesized unfavorable parameter ranges. The sweet spot was rather concentrated, independent of which parameter combination was used (two- or three-parameter combination), whereas the dead zone, located 21 mm from the sweet spot, was more widespread.
Kapitanov, Georgi I; Ayati, Bruce P; Martin, James A
2017-01-01
Osteoarthritis (OA) is a disease characterized by degeneration of joint cartilage. It is associated with pain and disability and is the result of either age- and activity-related joint wear or an injury. Non-invasive treatment options are scarce, and prevention and early intervention methods are practically non-existent. The modeling effort presented in this article is constructed based on an emerging biological hypothesis: post-impact oxidative stress leads to cartilage cell apoptosis and hence to the degeneration observed with the disease. The objective is to quantitatively describe the loss of cell viability and function in cartilage after an injurious impact and to identify the key parameters and variables that contribute to this phenomenon. We constructed a system of differential equations that tracks cell viability, mitochondrial function, and concentrations of reactive oxygen species (ROS), adenosine triphosphate (ATP), and glycosaminoglycans (GAG). The system was solved using MATLAB and the equations' parameters were fit to existing data using a particle swarm algorithm. The model fits the available data for cell viability, ATP production, and GAG content well. Local sensitivity analysis shows that the initial amount of ROS is the most important parameter. The model we constructed is a viable method for producing in silico studies and, with a few modifications together with data calibration and validation, may become a powerful predictive tool in the search for a non-invasive treatment for post-traumatic osteoarthritis.
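A bare-bones particle swarm fit, in the spirit of the calibration described above, might look like this in Python (the paper used MATLAB); the exponential viability decay and its parameters are hypothetical stand-ins for the article's ODE system.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, seed=3):
    """Minimal particle swarm: inertia + cognitive + social velocity terms."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical calibration target: exponential loss of cell viability after
# impact, V(t) = V0 * exp(-k t); recover (V0, k) from noise-free synthetic data.
t = np.linspace(0.0, 48.0, 25)                     # hours post-impact
v0_true, k_true = 0.95, 0.08
data = v0_true * np.exp(-k_true * t)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
params, err = pso(sse, bounds=[(0.1, 1.5), (0.0, 0.5)])
```

For the real model the objective would integrate the ODE system and compare against the viability, ATP, and GAG measurements simultaneously.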
Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bush, Lance B.
1992-01-01
An aerobrake concept for a lunar transfer vehicle was weight-optimized through the use of the Taguchi design method, structural finite element analyses, and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration: honeycomb core thickness, diameter-to-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum-weight aerobrake configuration resulting from the study was approximately half the weight of the average of all twenty-seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.
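The level-averaging at the heart of a Taguchi-style analysis can be sketched as follows; for brevity this uses a full 3x3 factorial over two invented factors with additive weight effects, whereas the study screened all six parameters at three levels with an orthogonal array.

```python
import numpy as np
from itertools import product

# Hypothetical screening study: two 3-level factors (core-thickness index,
# diameter-to-depth index) affecting structural weight additively.
levels = [0, 1, 2]
effect_thickness = np.array([12.0, 5.0, 9.0])   # assumed weight penalties (kg)
effect_ratio = np.array([7.0, 3.0, 11.0])
runs = list(product(levels, levels))            # full 3x3 factorial
weight = np.array([300.0 + effect_thickness[a] + effect_ratio[b]
                   for a, b in runs])

# Main-effect analysis: average the response at each level of each factor,
# then pick the level that minimizes the mean weight.
def best_level(factor_index):
    means = [weight[[i for i, r in enumerate(runs)
                     if r[factor_index] == lvl]].mean()
             for lvl in levels]
    return int(np.argmin(means))

optimum = (best_level(0), best_level(1))        # predicted lightest combination
```

The same level-averaging applied to an L27 array over six factors gives the predicted minimum-weight configuration without running all 3^6 cases.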
Viscous dissipation impact on MHD free convection radiating fluid flow past a vertical porous plate
NASA Astrophysics Data System (ADS)
Raju, R. Srinivasa; Reddy, G. Jithender; Kumar, M. Anil
2018-05-01
An attempt has been made to study the radiation effects on unsteady MHD free convective flow of an incompressible fluid past an infinite vertical porous plate in the presence of viscous dissipation. The governing partial differential equations are solved numerically using the Galerkin finite element method. Computations were performed for a wide range of governing flow parameters, viz., magnetic parameter, Schmidt number, thermal radiation parameter, Prandtl number, Eckert number, and permeability parameter. The effects of these flow parameters on velocity and temperature are shown graphically. In addition, the local values of the skin friction coefficient are shown in tabular form.
Development of Dimensionless Index Assessing Low Impact Development in Urban Areas
NASA Astrophysics Data System (ADS)
Jun, S. H.; Lee, E. H.; Kim, J. H.
2017-12-01
Because rapid urbanization and industrialization have increased the impervious area of watersheds, inundation in urban areas and river water pollution by non-point pollutants have long caused serious problems. Low Impact Development (LID) techniques have been implemented to address these problems because of their cost effectiveness in mitigating the water quality and quantity impacts on urban areas. There have been many studies on the effectiveness of LID, but there is a lack of research on developing an index for the assessment of LID performance. In this study, a dimensionless reliability index for LID is proposed. The index is developed using the Distance Measure Method (DMM), which can handle parameters that have different units. The parameters for the reliability of LID are the amount of pollutant at the outfall and the flooding volume; both are made dimensionless by the DMM. Weighting factors in the dimensionless index are used to capture how the reliability behaves as the relative importance of the parameters varies. LID is applied to an actual area, Gasan in Seoul, South Korea, where inundation frequently occurs. The reliability is estimated for 16 different rainfall events. For each rainfall event, the parameters with LID installation are compared with those without LID installation. The results differ depending on which parameter is considered more important. In conclusion, the optimal locations of LID are suggested as the weighting factors change.
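A minimal version of the distance-measure normalization could look like this; the range-based scaling, the equal default weighting, and the event data are assumptions for illustration, since the abstract does not give the exact DMM formula.

```python
import numpy as np

def dmm_index(pollutant, flood, w_pollutant=0.5):
    """Dimensionless reliability-style index: normalize each parameter by its
    range (a simple distance-measure normalization), then combine with weights.
    Lower pollutant load and flood volume give an index closer to 1."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    penalty = w_pollutant * norm(pollutant) + (1.0 - w_pollutant) * norm(flood)
    return 1.0 - penalty

# Hypothetical results for 4 rainfall events at one candidate LID location.
pollutant = [120.0, 80.0, 150.0, 60.0]   # kg at the outfall
flood = [900.0, 400.0, 1300.0, 200.0]    # m^3 flooded
index = dmm_index(pollutant, flood)
```

Sweeping `w_pollutant` between 0 and 1 reproduces the paper's idea of examining how the reliability ranking shifts as the importance assigned to each parameter changes.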
2010-01-01
Background Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT is the more appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent for item parameters (from good to poor precision), or unknown and therefore had to be estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
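The comparison of simulated and formula-based power for observed (CTT-style) scores can be sketched as follows, assuming normally distributed group scores and a large-sample z-test; the numbers are illustrative and do not reproduce the paper's simulation design.

```python
import math
import numpy as np

def simulated_power(n, delta, n_sims=2000, seed=7):
    """Monte Carlo power of a two-group comparison of observed scores,
    using a large-sample z-test on the group means (alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    z_crit = 1.959964
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)           # control group scores
        b = rng.normal(delta, 1.0, n)         # treated group, shifted by delta
        se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        rejections += abs((b.mean() - a.mean()) / se) > z_crit
    return rejections / n_sims

def analytic_power(n, delta):
    """Classical sample-size formula for normally distributed endpoints."""
    z = delta * math.sqrt(n / 2.0) - 1.959964
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_sim = simulated_power(100, 0.4)
p_formula = analytic_power(100, 0.4)
```

The paper's point is that the IRT analogue of `simulated_power` falls below `analytic_power`, and that any accurate IRT formula must bring in the number of items.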
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE-She 2016) based on high-resolution input data and detailed catchment characterization. Point source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone, and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows their impact on recharge, and indirectly on vulnerability, to be quantified. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
2016-01-01
Purpose The objective of this study was to investigate the relationships between primary implant stability, as measured by impact response frequency, and the structural parameters of trabecular bone using cone-beam computed tomography (CBCT), excluding the effect of cortical bone thickness. Methods We measured the impact response of a dental implant placed into swine bone specimens composed of only trabecular bone without the cortical bone layer using an inductive sensor. The peak frequency of the impact response spectrum was determined as an implant stability criterion (SPF). The 3D microstructural parameters were calculated from CT images of the bone specimens obtained using both micro-CT and CBCT. Results SPF had significant positive correlations with trabecular bone structural parameters (BV/TV, BV, BS, BSD, Tb.Th, Tb.N, FD, and BS/BV) (P<0.01), while SPF demonstrated significant negative correlations with other microstructural parameters (Tb.Sp, Tb.Pf, and SMI) using micro-CT and CBCT (P<0.01). Conclusions There was an increase in implant stability prediction by combining BV/TV and SMI in the stepwise forward regression analysis. Bone with high volume density and low surface density shows high implant stability. Well-connected thick bone with small marrow spaces also shows high implant stability. The combination of bone density and architectural parameters measured using CBCT can predict implant stability more accurately than density alone in clinical diagnoses. PMID:27127692
Global behavior of a vibro-impact system with asymmetric clearances
NASA Astrophysics Data System (ADS)
Li, Guofang; Ding, Wangcai
2018-06-01
A simple dynamic model of a vibro-impact system subjected to harmonic excitation with two asymmetric clearances is considered. A semi-analytical method for obtaining periodic solutions of the vibro-impact system is proposed. The diversity and evolution of the fundamental periodic impact motions are analyzed. The formation mechanism of the complete chattering-impact periodic motion with sticking motion under the influence of grazing bifurcation is analyzed. The transition law of periodic motions in the periodic inclusion regions is presented. The coexistence of periodic motions and the extreme sensitivity to initial values within the high-frequency region are studied. The global distribution of the periodic and chaotic motions of the system is obtained by the state-parameter space co-simulation method, which has rarely been considered before. The distribution of the attractors and the corresponding attracting domains for different periodic motions are also studied.
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
NASA Astrophysics Data System (ADS)
Lee, Myoung-Jae; Jung, Young-Dae
2017-05-01
The influence of nonisothermal and quantum shielding on the electron-ion collision process is investigated in strongly coupled two-temperature plasmas. The eikonal method is employed to obtain the eikonal scattering phase shift and eikonal cross section as functions of the impact parameter, collision energy, electron temperature, ion temperature, Debye length, and de Broglie wavelength. The results show that the quantum effect suppresses the eikonal scattering phase shift for the electron-ion collision in two-temperature dense plasmas. It is also found that the differential eikonal cross section decreases for small impact parameters; however, it increases for large impact parameters with increasing de Broglie wavelength. It is also found that the maximum position of the differential eikonal cross section recedes from the collision center with an increase in the nonisothermal character of the plasma. In addition, it is found that the total eikonal cross sections in isothermal plasmas are always greater than those in two-temperature plasmas. The variations of the eikonal cross section due to the two-temperature and quantum shielding effects are also discussed.
Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking
NASA Technical Reports Server (NTRS)
Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward
2011-01-01
To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. 
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
Wind Plant Performance Prediction (WP3) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Anna
The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias-correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that selection of the filters applied to the data and the filter parameters can have significant impacts on the final computed assessment metrics.
NASA Astrophysics Data System (ADS)
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
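The core idea of the Chebyshev-decomposition approach — expanding the solution in Chebyshev polynomials and solving for the coefficients — can be sketched in one dimension. This toy interpolation (not the weighted-residual solve of the Braginskii system) uses hypothetical helper names:

```python
import math

def chebyshev_coeffs(f, n):
    """Coefficients of the degree-(n-1) Chebyshev interpolant of f on [-1, 1],
    computed at the n Chebyshev-Gauss nodes via the discrete cosine relation."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0  # the j = 0 term enters the series with weight 1/2
    return coeffs

def chebyshev_eval(coeffs, x):
    """Evaluate sum_j c_j * T_j(x) by the numerically stable Clenshaw recurrence."""
    b1, b2 = 0.0, 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2
```

In the paper's method the coefficients are instead unknowns of a nonlinear algebraic system obtained from a weighted residual condition; the expansion and evaluation machinery is the same.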
The cloud radiation impact from optics simulation and airborne observation
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Kuznetsov, Anatoly; Gatebe, Charles
2017-02-01
The analytical approach of inverse asymptotic formulas of the radiative transfer theory is used for solving inverse problems of cloud optics. The method is advantageous because it does not impose strict constraints and is not tied to an a priori solution. Observations were carried out in extended stratus cloudiness, above a homogeneous ocean surface. Data from NASA's Cloud Absorption Radiometer (CAR) during two airborne experiments (SAFARI-2000 and ARCTAS-2008) were analyzed. The analytical method of inverse asymptotic formulas was used to retrieve cloud optical parameters (optical thickness, single scattering albedo and asymmetry parameter of the phase function) and ground albedo in all 8 spectral channels independently. The method is free from a priori restrictions, has no links to prescribed parameters, and has been applied to data sets of different origins and observation geometries. Results obtained from different airborne, satellite and ground radiative experiments appeared consistent and showed common features in the values of cloud parameters and their spectral dependence (Vasiluev, Melnikova, 2004; Gatebe et al., 2014). The optical parameters retrieved here are used for calculation of radiative divergence, reflected and transmitted irradiance and heating rates in the cloudy atmosphere, which agree with previous observational data.
PMMA/PS coaxial electrospinning: a statistical analysis on processing parameters
NASA Astrophysics Data System (ADS)
Rahmani, Shahrzad; Arefazar, Ahmad; Latifi, Masoud
2017-08-01
Coaxial electrospinning, as a versatile method for producing core-shell fibers, is known to be very sensitive to two classes of influential factors: material and processing parameters. Although coaxial electrospinning has been the focus of many studies, the effects of processing parameters on the outcomes of this method have not yet been well investigated. A good knowledge of the impacts of processing parameters and their interactions on coaxial electrospinning can make it possible to better control and optimize this process. Hence, in this study, the statistical technique of the response surface method (RSM), using a design of experiments on four processing factors (voltage, distance, and core and shell flow rates), was applied. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), oil immersion and fluorescence microscopy were used to characterize fiber morphology. The core and shell diameters of the fibers were measured, and the effects of all factors and their interactions were discussed. Two polynomial models with acceptable R-squared values were proposed to describe the core and shell diameters as functions of the processing parameters. Voltage and distance were recognized as the most significant and influential factors on shell diameter, while core diameter was mainly under the influence of the core and shell flow rates, besides the voltage.
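A second-order response-surface fit of the kind RSM produces can be sketched with ordinary least squares via the normal equations. The single-factor quadratic model and the function names here are illustrative, not the authors' four-factor model:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of the response surface y = b0 + b1*x + b2*x^2
    by forming and solving the 3x3 normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(X[k][i] * X[k][j] for k in range(len(xs))) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
    return solve(A, b)

# data generated from y = 2 + 3x - x^2, so the fit recovers ~[2.0, 3.0, -1.0]
coefs = fit_quadratic([0.0, 1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 4.0, 2.0, -2.0])
```

A full RSM analysis would add cross-product (interaction) columns for each pair of factors; the normal-equation machinery is unchanged.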
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining
2017-11-01
Assessment of the impact of climate change on crop production while accounting for uncertainties is essential for properly identifying sustainable agricultural practices and making decisions about them. In this study, we employed 24 climate projections consisting of the combinations of eight GCMs and three emission scenarios, representing the climate projection uncertainty, and two crop statistical models with 100 sets of parameters in each model, representing parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with the uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5 % in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance revealed the increasing uncertainty in the yield simulation in the future periods.
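For a balanced ensemble, the variance decomposition mentioned above separates the total yield variance into a between-group term (climate projections) and a within-group term (crop-model parameters) — the law of total variance. A sketch with hypothetical data:

```python
from statistics import mean, pvariance

def variance_decomposition(yields_by_projection):
    """Decompose ensemble variance into a between-group component (spread of
    the per-projection mean yields) and a within-group component (average
    spread across crop-model parameter sets within each projection).
    For equal-sized groups, total = between + within exactly."""
    all_vals = [y for group in yields_by_projection for y in group]
    between = pvariance([mean(g) for g in yields_by_projection])
    within = mean([pvariance(g) for g in yields_by_projection])
    return {"total": pvariance(all_vals), "between": between, "within": within}

# two hypothetical climate projections, two parameter sets each
parts = variance_decomposition([[1.0, 2.0], [3.0, 4.0]])
```

Here the between-projection term dominates (1.0 vs 0.25), mirroring the paper's finding that climate projection uncertainty outweighs crop-parameter uncertainty.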
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Lee, Seungsoo; An, Hyunuk; Kawaike, Kenji; Nakagawa, Hajime
2016-11-01
An urban flood is an integrated phenomenon that is affected by various uncertainty sources such as input forcing, model parameters, complex geometry, and exchanges of flow among different domains in surfaces and subsurfaces. Despite considerable advances in urban flood modeling techniques, limited knowledge is currently available with regard to the impact of dynamic interaction among different flow domains on urban floods. In this paper, an ensemble method for urban flood modeling is presented to consider the parameter uncertainty of interaction models among a manhole, a sewer pipe, and surface flow. Laboratory-scale experiments on urban flood and inundation are performed under various flow conditions to investigate the parameter uncertainty of interaction models. The results show that ensemble simulation using interaction models based on weir and orifice formulas reproduces experimental data with high accuracy and detects the identifiability of model parameters. Among interaction-related parameters, the parameters of the sewer-manhole interaction show lower uncertainty than those of the sewer-surface interaction. Experimental data obtained under unsteady-state conditions are more informative than those obtained under steady-state conditions to assess the parameter uncertainty of interaction models. Although the optimal parameters vary according to the flow conditions, the difference is marginal. Simulation results also confirm the capability of the interaction models and the potential of the ensemble-based approaches to facilitate urban flood simulation.
Effect of Vaccine Administration Modality on Immunogenicity and Efficacy
Zhang, Lu; Wang, Wei; Wang, Shixia
2016-01-01
The many factors impacting the efficacy of a vaccine can be broadly divided into three categories: (1) features of the vaccine itself, including immunogen design, vaccine type, formulation, adjuvant, and dosing; (2) individual variations among vaccine recipients; and (3) vaccine administration-related parameters. While much literature exists related to vaccines, and recently systems biology has started to dissect the impact of individual subject variation on vaccine efficacy, few studies have focused on the role of vaccine administration-related parameters on vaccine efficacy. Parenteral and mucosal vaccinations are traditional approaches for licensed vaccines; novel vaccine delivery approaches, including needle-free injection and adjuvant formulations, are being developed to further improve vaccine safety and efficacy. This review provides a brief summary of vaccine administration-related factors, including vaccination approach, delivery route, and method of administration, to gain a better understanding of their potential impact on the safety and immunogenicity of candidate vaccines. PMID:26313239
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Kellas, Sotiris
2006-01-01
Static 3-point bend tests of Reinforced Carbon-Carbon (RCC) were conducted to failure to provide data for additional validation of an LS-DYNA RCC model suitable for predicting the threshold of impact damage to shuttle orbiter wing leading edges. LS-DYNA predictions correlated well with the average RCC failure load and matched the load vs. deflection response well. However, correlating the detectable damage using NDE methods with the cumulative damage parameter in LS-DYNA material model 58 was not readily achievable. The difficulty of finding internal RCC damage with NDE and the high sensitivity of the mat58 damage parameter to the load near failure made the task very challenging. In addition, damage mechanisms for RCC due to dynamic impact of debris such as foam and ice and damage mechanisms due to static loading were, as expected, not equivalent.
Control Method Stretches Suspensions by Measuring the Sag of Strands in Cable-Stayed Bridges
NASA Astrophysics Data System (ADS)
Bętkowski, Piotr
2017-10-01
The article describes a method for evaluating and validating the correctness of measurements from dynamometers (strain gauges, tension meters) used in suspension systems. Checking monitoring devices such as dynamometers is recommended during inspections of suspension bridges. The control device (dynamometer) works together with an anchor, and the quality of this cooperation can have a decisive impact on the correctness of the results. A method is described that determines the stress in a strand (cable) from the sag of the stayed cable; it can be used to verify the accuracy of measuring devices directly on the bridge. By measuring the strand sag, information is obtained about the force acting in the suspension cable, with a digital camera used to measure the sag. Ideally, a control measurement should be made independently of the controlled parameter while still verifying that parameter directly. In practice, however, the controlled parameter is often not obtained by direct measurement but by calculation from other measured parameters, as in the method described in the article. In such cases the measurement errors of the intermediate parameters overlap, and the reliability of the results must be assessed. Control calculations for measuring devices installed in a bridge are doubtful without an uncertainty-estimation procedure. Such an accuracy assessment can be performed using interval numbers, which allow a parametric analysis of how the accuracy of the individual parameters propagates into the uncertainty of the results. The measurement method, the relations and analytical formulas, and a numerical example can be found in the text of the article.
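The sag-to-force relation that such a check relies on can be sketched with the classical shallow-sag (parabolic) cable formula. This is a generic textbook relation offered as an illustration; the article's exact formulas and interval-number analysis are not reproduced here:

```python
def cable_force_from_sag(span, sag, weight_per_length):
    """Horizontal cable tension from mid-span sag using the shallow-sag
    parabolic approximation H = w * L^2 / (8 * f).

    span             -- horizontal distance between supports, L (m)
    sag              -- mid-span sag, f (m); valid only for f << L
    weight_per_length -- self-weight of the cable, w (N/m)
    Returns the horizontal tension H in newtons.
    """
    return weight_per_length * span ** 2 / (8.0 * sag)

# e.g. a 100 m span sagging 1 m under 50 N/m gives H = 62.5 kN
force = cable_force_from_sag(100.0, 1.0, 50.0)
```

The inverse reading is the useful one on site: a camera measurement of the sag f yields the tension, which can then be compared against the dynamometer reading.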
Energy-conserving impact algorithm for the heel-strike phase of gait.
Kaplan, M L; Heegaard, J H
2000-06-01
Significant ground reaction forces exceeding body weight occur during the heel-strike phase of gait. The standard methods of analytical dynamics used to solve the impact problem do not accommodate well the heel-strike collision due to the persistent contact at the front foot and presence of contact at the back foot. These methods can cause a non-physical energy gain on the order of the total kinetic energy of the system at impact. Additionally, these standard techniques do not quantify the contact force, but the impulse over the impact. We present an energy-conserving impact algorithm based on the penalty method to solve for the ground reaction forces during gait. The rigid body assumptions are relaxed and the bodies are allowed to penetrate one another to a small degree. Associated with the deformation is a potential, from which the contact forces are derived. The empirical coefficient-of-restitution used in the standard approaches is replaced by two parameters to characterize the stiffness and the damping of the materials. We solve two simple heel-strike models to illustrate the shortcomings of a standard approach and the suitability of the proposed method for use with gait.
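The penalty-method contact force described above, with a stiffness and a damping parameter replacing the empirical coefficient of restitution, can be sketched as follows. The linear force law and the parameter names are illustrative assumptions, not the authors' exact potential:

```python
def penalty_contact_force(penetration, penetration_rate, k, c):
    """Penalty-method normal contact force: a stiffness term proportional to
    the interpenetration depth plus a damping term proportional to the
    penetration rate. The force is active only while the bodies overlap and
    is clamped at zero so the contact can never pull (no tensile force)."""
    if penetration <= 0.0:
        return 0.0
    return max(0.0, k * penetration + c * penetration_rate)
```

Because the force comes from a deformation-dependent potential rather than an impulsive velocity jump, integrating it over the collision conserves energy in the limit of zero damping, avoiding the non-physical energy gain the abstract describes.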
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Portanova, Marc A.
1995-01-01
This paper summarizes three areas of research which were performed to characterize out-of-plane properties of composite materials. In the first investigation, a series of tests was run to characterize the through-the-thickness tensile strength for a variety of composites that included 2D braids, 2D and 3D weaves, and prepreg tapes. A new test method based on a curved beam was evaluated. Failures were significantly different between the 2D materials and the 3D weaves. The 2D materials delaminated between layers due to out-of-plane tensile stresses while the 3D weaves failed due to the formation of radial cracks between the surface plies caused by high circumferential stresses along the inner radius. The strength of the 2D textile composites did not increase relative to the tapes. Final failure in the 3D weaves was caused by a circumferential crack similar to the 2D materials and occurred at a lower bending moment than in other materials. The early failures in the 3D weaves were caused by radial crack formation rather than a low through-the-thickness strength. The second investigation focused on the development of a standard impact test method to measure impact damage resistance. The only impact tests that currently exist are compression after impact (CAI) tests which incorporate elements of both damage resistance and damage tolerance. A new impact test method is under development which uses a quasi-static indentation (QSI) test to directly measure damage resistance. Damage resistance is quantified in terms of the contact force to produce a unit of damage, where a metric for damage may be area in C-scan, depth of residual dent, penetration, damage growth, etc. A final draft of an impact standard that uses a QSI test method will be presented to the ASTM Impact Task Group on impact. In the third investigation, the impact damage resistance behavior of a variety of textile materials was studied using the QSI test method.
In this study, the force at which large damage initiates was measured and the delamination size as a function of force was determined. The force to initiate large damage was significantly lower in braids and weaves. The delamination diameter - impact force relationship was quantified using a damage resistance parameter, Q(*), which related delamination diameter to impact force over a range of delamination sizes. Using this Q(*) parameter to rate the materials, the stitched uniweaves, toughened epoxy tapes, and through-the-thickness orthogonal interlock weave were the most damage resistant.
Jang, Dae -Heung; Anderson-Cook, Christine Michaela
2016-11-22
With many predictors in regression, fitting the full model can induce multicollinearity problems. The Least Absolute Shrinkage and Selection Operator (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. Here, this paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results. This can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and model selection. Lastly, we provide two examples to illustrate the methods.
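A leave-one-out notion of influence on a LASSO coefficient — the kind of quantity such a plot would display — can be sketched in the simplest single-predictor case, where the LASSO solution is a closed-form soft-thresholding of the least-squares estimate. All names and data here are illustrative, not the paper's proposed diagnostic:

```python
import math

def soft_threshold(z, t):
    """Shrink z toward zero by t, setting it to zero if |z| <= t."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def univariate_lasso(xs, ys, lam):
    """Closed-form LASSO for a single centered predictor (no intercept):
    minimize 0.5*sum((y - b*x)^2) + lam*|b|, solved by soft-thresholding
    the unnormalized least-squares quantity sum(x*y)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return soft_threshold(sxy, lam) / sxx

def influence(xs, ys, lam):
    """Leave-one-out influence: change in the LASSO coefficient when each
    observation is dropped, one value per observation."""
    full = univariate_lasso(xs, ys, lam)
    return [full - univariate_lasso(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
            for i in range(len(xs))]
```

Plotting these influence values against observation index gives a crude one-variable analogue of the influence plot the abstract proposes: points with large values both shift the coefficient and can flip whether it is shrunk to zero (i.e. deselected).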
NASA Astrophysics Data System (ADS)
Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.
2017-07-01
An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.
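For orientation, the weak-deflection limit that any such approximant must reproduce is the familiar leading-order result α ≈ 4GM/(c²b). The sketch below computes only this limit — no spin dependence and no strong-deflection terms — with rounded physical constants:

```python
def weak_deflection_angle(mass_kg, impact_parameter_m):
    """Leading-order (weak-field) gravitational light-bending angle in
    radians: alpha = 4*G*M / (c^2 * b). Valid only for b much larger than
    the gravitational radius; the paper's approximant also bridges to the
    strong-deflection (photon-sphere) regime, which this does not."""
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m/s
    return 4.0 * G * mass_kg / (c ** 2 * impact_parameter_m)

# grazing the Sun (M ~ 1.989e30 kg, b ~ solar radius) gives ~1.75 arcsec
alpha = weak_deflection_angle(1.989e30, 6.96e8)
```

The method of asymptotic approximants stitches this limit to the logarithmically divergent strong-deflection expansion so that a single closed-form expression stays accurate at intermediate impact parameters.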
Charan, J; Saxena, D
2014-01-01
Biased negative studies not only reflect poor research effort but also have an impact on 'patient care', as they prevent further research with similar objectives, leading to potential research areas remaining unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader judge the validity of results and conclusions should be reported in published negative studies. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed with the aim to critically evaluate negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included in the present study. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. Primary endpoints were the reporting of "power" and "confidence interval." Power was reported in 11.8% of studies. Confidence intervals were reported in 15.7% of studies. Most parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), names of statistical tests (49.1%), adjustment for multiple endpoints (1%), and post hoc power calculation (2.1%), were poorly reported. The frequency of reporting was higher in clinical trials as compared to other study designs, and in journals with an impact factor greater than 1 as compared to journals with an impact factor less than 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, and this may create problems in the critical appraisal of the findings reported in these journals by their readers.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
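The recommended nonparametric bootstrap (approach 1) can be sketched for the simplest parametric time-to-event model, an exponential distribution whose rate has the closed-form MLE 1/mean. The data and function name are illustrative, not the study's actual survival model:

```python
import random
from statistics import mean

def bootstrap_rate(times, n_boot=1000, seed=1):
    """Nonparametric bootstrap of the rate parameter of an exponential
    time-to-event distribution. Each replicate resamples patients with
    replacement and refits the MLE (1/mean), so the spread of the returned
    rates reflects parameter uncertainty in the fitted distribution, on top
    of the stochastic uncertainty the distribution itself represents."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_boot):
        sample = rng.choices(times, k=len(times))
        rates.append(1.0 / mean(sample))
    return rates

# hypothetical event times (e.g. years to progression) for six patients
rates = bootstrap_rate([1.0, 2.0, 3.0, 0.5, 1.5, 2.5])
```

In a probabilistic sensitivity analysis, each model run would draw one bootstrap replicate's parameters rather than reusing the single point estimate, which is exactly how the parameter uncertainty propagates into the cost-effectiveness outcomes.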
DOT National Transportation Integrated Search
1988-06-01
The aim of this study was to shed some light on brain injuries following blows to the head by means of the methods of both cognitive psychology and electrophysiology. More precisely, boxers' attention mechanisms and their capacity of orienting toward...
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach, allowing the possible errors in the calculation of these measures to be determined in numerical terms, has been used. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium in the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points.
The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations result in limiting the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation under conditions similar to those prevailing in the deposit.
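The two valuation measures used in the study can be sketched directly: NPV is a discounted sum of cashflows, and IRR is the rate at which the NPV crosses zero, found here by bisection under the assumption of a single sign change. The cashflow example is hypothetical, not taken from the "X" deposit:

```python
def npv(rate, cashflows):
    """Net Present Value: cashflows[0] occurs at t=0, cashflows[t] at year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal Rate of Return by bisection. Assumes the conventional profile
    (initial outflow, later inflows) so NPV is strictly decreasing in rate
    and has exactly one root in (lo, hi)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:
            lo = mid   # NPV still positive: root lies at a higher rate
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0
```

In the paper's Monte Carlo setting, each bootstrap draw of the correlated deposit parameters generates one cashflow series, and the NPV/IRR pair is recomputed per draw to build their empirical distributions.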
Selection of noisy measurement locations for error reduction in static parameter identification
NASA Astrophysics Data System (ADS)
Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.
1992-09-01
An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.
Leishman, Timothy W; Anderson, Brian E
2013-07-01
The parameters of moving-coil loudspeaker drivers are typically determined using direct electrical excitation and measurement. However, as electro-mechano-acoustical devices, their parameters should also follow from suitable mechanical or acoustical evaluations. This paper presents the theory of an acoustical method of excitation and measurement using normal-incidence sound transmission through a baffled driver as a plane-wave tube partition. Analogous circuits enable key parameters to be extracted from measurement results in terms of open and closed-circuit driver conditions. Associated tools are presented that facilitate adjacent field decompositions and derivations of sound transmission coefficients (in terms of driver parameters) directly from the circuits. The paper also clarifies the impact of nonanechoic receiving tube terminations and the specific benefits of downstream field decompositions.
A downscaling method for the assessment of local climate change
NASA Astrophysics Data System (ADS)
Bruno, E.; Portoghese, I.; Vurro, M.
2009-04-01
The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse and not adequate to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between the climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit the climate observations. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built from the observation dataset that applies a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed, based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In the proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently, the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. The PRP parameters are then forced through the GCM scenarios to generate local scale rainfall records for the 21st century.
The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are taken from the STARDEX collection of extreme indices.
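The Q-Q (quantile-quantile) transform at the heart of the variable correction method maps each model value to the observed value at the same empirical quantile. A minimal empirical version, with linear interpolation between observed quantiles (the function name and interpolation choice are illustrative):

```python
import bisect

def quantile_map(model_values, obs_sorted, mod_sorted):
    """Empirical Q-Q bias correction: for each raw model value, find its
    empirical quantile in the model climatology (mod_sorted), then read off
    the observed value at that same quantile (obs_sorted, interpolated)."""
    corrected = []
    n = len(mod_sorted)
    for v in model_values:
        # empirical quantile of v within the model climatology, clamped to [0, 1]
        q = bisect.bisect_left(mod_sorted, v) / max(n - 1, 1)
        q = min(max(q, 0.0), 1.0)
        # linear interpolation into the observed quantile function
        idx = q * (len(obs_sorted) - 1)
        i0 = int(idx)
        i1 = min(i0 + 1, len(obs_sorted) - 1)
        frac = idx - i0
        corrected.append(obs_sorted[i0] * (1.0 - frac) + obs_sorted[i1] * frac)
    return corrected

# toy climatologies in which the model underestimates by a factor of two
out = quantile_map([2.0, 0.0], [0.0, 2.0, 4.0, 6.0, 8.0], [0.0, 1.0, 2.0, 3.0, 4.0])
```

As the abstract notes, such a marginal-distribution correction cannot repair the temporal structure (dry-wet alternation) of daily precipitation, which is why the authors instead apply it to the PRP storm variables.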
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
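A random-walk Metropolis sampler of the sort used (in far richer form) for such MCMC parameter fitting can be sketched for a one-parameter curve y = a·x with Gaussian errors. The data, step size, noise level, and flat prior are illustrative assumptions, not the survey's yield-curve fits:

```python
import math
import random

def metropolis_fit(xs, ys, n_steps=5000, step=0.05, sigma=0.1, seed=7):
    """Random-walk Metropolis sampling of a single slope parameter 'a' in
    y = a*x, with a Gaussian likelihood and a flat prior. The spread of the
    returned samples (after burn-in) characterizes the fit uncertainty,
    which is the point of the Bayesian approach in the abstract."""
    rng = random.Random(seed)

    def log_like(a):
        return -0.5 * sum(((y - a * x) / sigma) ** 2 for x, y in zip(xs, ys))

    a = 0.0
    ll = log_like(a)
    samples = []
    for _ in range(n_steps):
        a_new = a + rng.gauss(0.0, step)       # symmetric proposal
        ll_new = log_like(a_new)
        if math.log(rng.random()) < ll_new - ll:  # Metropolis accept/reject
            a, ll = a_new, ll_new
        samples.append(a)
    return samples

# hypothetical measurements lying near slope 2
samples = metropolis_fit([1.0, 2.0, 3.0], [2.0, 4.1, 5.9])
```

Discarding the first half as burn-in and summarizing the rest gives both a point estimate and a credible interval for the sputter-yield-style fitting parameter, rather than a single best-fit value.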
Transient Finite Element Analyses Developed to Model Fan Containment Impact Events
NASA Technical Reports Server (NTRS)
Pereira, J. Michael
1997-01-01
Research is underway to establish an increased level of confidence in existing numerical techniques for predicting transient behavior when the fan of a jet engine is released and impacts the fan containment system. To evaluate the predictive accuracy that can currently be obtained, researchers at the NASA Lewis Research Center used the DYNA 3D computer code to simulate large-scale subcomponent impact tests that were conducted at the University of Dayton Research Institute (UDRI) Impact Physics Lab. In these tests, 20- by 40-in. flat metal panels, contoured to the shape of a typical fan case, were impacted by the root section of a fan blade. The panels were oriented at an angle to the path of the projectile that would simulate the conditions in an actual blade-out event. The metal panels were modeled in DYNA 3D using a kinematic hardening model with the strain rate dependence of the yield stress governed by the Cowper-Symonds rule. Failure was governed by the effective plastic strain criterion. The model of the fan blade and case just after impact is shown. By varying the maximum effective plastic strain, we obtained good qualitative agreement between the model and the experiments. Both the velocity required to penetrate the case and the deflection during impact compared well. This indicates that the failure criterion and constitutive model may be appropriate, but for DYNA 3D to be useful as a predictive tool, methods to determine accurate model parameters must be established. Simple methods for measuring model parameters are currently being developed. In addition, alternative constitutive models and failure criteria are being investigated.
NASA Astrophysics Data System (ADS)
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches adequately treats the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
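As a sketch of how one of the standard-Bayesian likelihoods might look, the function below writes out the log-likelihood of an AR(1)-plus-Normal error model with time-independent parameters (in the spirit of Model 1). The exact formulation used with WASMOD is an assumption here:

```python
import numpy as np

def ar1_normal_loglik(obs, sim, rho, sigma):
    """Log-likelihood of streamflow residuals under an AR(1) error model
    with Normal innovations and time-constant parameters (|rho| < 1)."""
    eps = obs - sim                          # residual series
    innov = eps[1:] - rho * eps[:-1]         # AR(1) innovations
    n = innov.size
    ll = -0.5 * n * np.log(2 * np.pi * sigma ** 2) - 0.5 * np.sum(innov ** 2) / sigma ** 2
    # first residual drawn from the stationary AR(1) distribution
    s0 = sigma / np.sqrt(1.0 - rho ** 2)
    ll += -0.5 * np.log(2 * np.pi * s0 ** 2) - 0.5 * eps[0] ** 2 / s0 ** 2
    return ll
```

In an MH sampler, this function would be evaluated at each proposed parameter set, with `sim` regenerated by the hydrological model.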
COMPARISON OF DIRECT AND INDIRECT IMPACTS OF FECAL CONTAMINATION IN TWO DIFFERENT WATERSHEDS
There are many environmental parameters that could affect the accuracy of microbial source tracking (MST) methods. Spatial and temporal determinants are among the most common factors missing in MST studies. To understand how spatial and temporal variability affect the level of fe...
Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng
2008-11-01
The thermodynamic law is one of the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept "Urban Environment Entropy" was brought forward and the "Urban Environment Entropy" model was established for urbanization environmental impact assessment in this study. The model was then utilized in a case study for the assessment of river water quality in the Pearl River Delta Economic Zone. The results indicated that the model's assessment results are consistent with those of the equalized synthetic pollution index method. Therefore, it can be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in urbanization environmental assessment research using many different environmental parameters.
Operational stability prediction in milling based on impact tests
NASA Astrophysics Data System (ADS)
Kiss, Adam K.; Hajdu, David; Bachrathy, Daniel; Stepan, Gabor
2018-03-01
Chatter detection is usually based on the analysis of measured signals captured during cutting processes. These techniques, however, often give ambiguous results close to the stability boundaries, which is a major limitation in industrial applications. In this paper, an experimental chatter detection method is proposed based on the system's response to perturbations during the machining process; no system parameter identification is required. The proposed method identifies the dominant characteristic multiplier of the periodic dynamical system that models the milling process. By monitoring the variation of the modulus of the largest characteristic multiplier, the stability boundary can be extrapolated precisely while the manufacturing parameters are still kept in the chatter-free region. The method is derived in detail and verified experimentally in a laboratory environment.
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Shreya; de, Sunil Kumar
2014-05-01
In the present paper an attempt has been made to propose an RS-GIS based method for erosion vulnerability zonation of an entire river, based on simple techniques that require very little field investigation. The method uses 8 parameters: rainfall erosivity, lithological factor, bank slope, meander index, river gradient, soil erodibility, vegetation cover and anthropogenic impact. Meteorological data, GSI maps, LISS III imagery (30 m resolution), SRTM DEM (56 m resolution) and Google Earth images have been used to determine rainfall erosivity, lithological factor, bank slope, meander index, river gradient, vegetation cover and anthropogenic impact; the soil map of the NBSSLP, India has been used for assessing the soil erodibility index. By integrating the individual values of these parameters (the first two remained constant for this particular study area), a bank erosion vulnerability zonation map of the River Haora, Tripura, India (23°37'-23°53'N and 91°15'-91°37'E) has been prepared. The values have been compared with the existing BEHI-NBS method at 60 spots and also with field data from 30 cross sections (covering the 60 spots) taken along a 51 km stretch of the river in Indian Territory; the estimated values match the existing method as well as the field data. The whole stretch has been divided into 5 hazard zones, i.e. Very High, High, Moderate, Low and Very Low, covering 5.66 km, 16.81 km, 40.82 km, 29.67 km and 9.04 km respectively. KEY WORDS: Bank erosion, Bank Erosion Hazard Index (BEHI), Near Bank Stress (NBS), Erosivity, Bank Erosion Vulnerability Zonation.
Alshalani, Abdulrahman; Howell, Anita; Acker, Jason P
2018-02-01
Several factors have been proposed to influence the red blood cell storage lesion, including storage duration, blood component manufacturing methodology, and donor characteristics [1,18]. The objectives of this study were to determine the impact of manufacturing method and donor characteristics on water permeability and membrane quality parameters. Red blood cell units were obtained from volunteer blood donors and grouped according to the manufacturing method and the donor characteristics of sex and age. Membrane water permeability and membrane quality parameters, including deformability, hemolysis, osmotic fragility, hematologic indices, supernatant potassium, and supernatant sodium, were determined on day 5 ± 2, day 21, and day 42. Regression analysis was applied to evaluate the contributions of storage duration, manufacturing method, and donor characteristics to the storage lesion. This study found that units processed using a whole blood filtration manufacturing method exhibited significantly higher membrane water permeability throughout storage compared to units manufactured using red cell filtration. Additionally, significant differences in hemolysis, supernatant potassium, and supernatant sodium were seen between manufacturing methods; however, there were no significant differences between donor age and sex groups. The findings of this study suggest that the membrane-related storage lesion is initiated prior to the first day of storage, with contributions from both the blood manufacturing process and donor variability. They highlight the importance of characterizing membrane water permeability during storage, as it can be a predictor of the biophysical and chemical changes that affect the quality of stored red blood cells during hypothermic storage.
NASA Astrophysics Data System (ADS)
Hayat, T.; Ahmad, Salman; Ijaz Khan, M.; Alsaedi, A.
2018-05-01
In this article we investigate the flow of a Sutterby liquid due to a rotating stretchable disk. Mass and heat transport are analyzed through Brownian diffusion and thermophoresis. The effects of magnetic field, chemical reaction and heat source are also accounted for. We employ a transformation procedure to obtain a system of nonlinear ODEs, which is solved numerically by a built-in shooting method. The impacts of the different involved parameters on velocity, temperature and concentration are described. Velocity, concentration and temperature gradients are numerically computed. The results show that velocity is reduced by the material parameter, while temperature and concentration are enhanced with the thermophoresis parameter.
'Scaling' analysis of the ice accretion process on aircraft surfaces
NASA Technical Reports Server (NTRS)
Keshock, E. G.; Tabrizi, A. H.; Missimer, J. R.
1982-01-01
A comprehensive set of scaling parameters is developed for the ice accretion process by analyzing the energy equations of the dynamic freezing zone and the already frozen ice layer, together with the continuity equation associated with supercooled liquid droplets entering into and impacting within the dynamic freezing zone. No initial arbitrary judgments are made regarding the relative magnitudes of each of the terms. The method of intrinsic reference variables is employed in order to develop the appropriate scaling parameters and their relative significance in rime icing conditions in an orderly process, rather than relying on empiricism. The significance of these parameters is examined and the parameters are combined with scaling criteria related to droplet trajectory similitude.
A Modelling Method of Bolt Joints Based on Basic Characteristic Parameters of Joint Surfaces
NASA Astrophysics Data System (ADS)
Yuansheng, Li; Guangpeng, Zhang; Zhen, Zhang; Ping, Wang
2018-02-01
Bolt joints are common in machine tools and have a direct impact on the overall performance of the tools, so an understanding of bolt joint characteristics is essential for improving machine design and assembly. Firstly, a stiffness curve formula was fitted to the experimental data. Secondly, finite element models of unit bolt joints, such as bolt flange joints, bolt head joints, and thread joints, were constructed. Lastly, the stiffness parameters of the joint surfaces were implemented in the model through secondary development of ABAQUS. The finite element model of the bolt joint established by this method simulates the contact state very well.
Detection of antipersonnel (AP) mines using mechatronics approach
NASA Astrophysics Data System (ADS)
Shahri, Ali M.; Naghdy, Fazel
1998-09-01
At present there are approximately 110 million land-mines scattered around the world in 64 countries. The clearance of these mines takes place manually. Unfortunately, on average for every 5000 mines cleared one mine clearer is killed. A Mine Detector Arm (MDA) using a mechatronics approach is under development in this work. The robot arm imitates the manual hand-prodding technique for mine detection: it inserts a bayonet into the soil and models the dynamics of the manipulator and environment parameters, such as stiffness variation in the soil, to control the impact caused by contacting a stiff object. An explicit impact control scheme is applied as the main control scheme, while two different intelligent control methods are designed to deal with uncertainties and varying environmental parameters. Firstly, a neuro-fuzzy adaptive gain controller (NFAGC) is designed to adapt the force control gain according to the estimated environment stiffness. Then, an adaptive neuro-fuzzy plus PID controller is employed to switch from a conventional PID controller to neuro-fuzzy impact control (NFIC) when an impact is detected. The developed control schemes are validated through computer simulation and experimental work.
A Framework to Assess the Cumulative Hydrological Impacts of Dams on Flow Regime
NASA Astrophysics Data System (ADS)
Wang, Y.; Wang, D.
2016-12-01
In this study we propose a framework to assess the cumulative impact of dams on the hydrological regime, and use it to investigate the impacts of the Three Gorges Dam on the flow regime of the Yangtze River. We reconstructed the unregulated flow series to compare with the regulated flow series over the same period. Eco-surplus, eco-deficit and the Indicators of Hydrologic Alteration (IHA) parameters were used to examine hydrological regime change. Among the IHA parameters, the Wilcoxon signed-rank test and Principal Components Analysis identified the representative indicators of hydrological alteration. Eco-surplus and eco-deficit showed that the reservoir also changed the seasonal regime of the flows in autumn and winter. Changes in annual extreme flows and October flows lead to negative ecological implications downstream of the Three Gorges Dam. Ecological operation of the Three Gorges Dam is necessary to mitigate the negative effects on the river ecosystem in the middle reach of the Yangtze River. The framework proposed here could be a robust method to assess the cumulative impacts of reservoir operation.
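A minimal sketch of the eco-surplus/eco-deficit computation, assuming the common definition based on 25th/75th percentile flow-duration-curve envelopes (the paper's exact normalization may differ):

```python
import numpy as np

def eco_surplus_deficit(regulated, unregulated_years):
    """Eco-surplus / eco-deficit: area of the regulated flow-duration curve
    above the 75th / below the 25th percentile envelope of unregulated years,
    normalized by the mean regulated flow. All series share one length."""
    fdc = lambda q: np.sort(np.asarray(q))[::-1]        # flow-duration curve
    env = np.array([fdc(year) for year in unregulated_years])
    upper = np.percentile(env, 75, axis=0)
    lower = np.percentile(env, 25, axis=0)
    reg = fdc(regulated)
    surplus = np.mean(np.clip(reg - upper, 0.0, None)) / np.mean(reg)
    deficit = np.mean(np.clip(lower - reg, 0.0, None)) / np.mean(reg)
    return surplus, deficit
```

Applied season by season (e.g. autumn and winter daily flows), a post-dam rise in eco-deficit flags the kind of seasonal regime change the study reports.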
Objective: A repeated measures study was used to assess the effect of work tasks on select proinflammatory biomarkers in firefighters working at prescribed burns. Methods: Ten firefighters and two volunteers were monitored for particulate matter and carbon monoxide on workdays, ...
The Impact of Missing Background Data on Subpopulation Estimation
ERIC Educational Resources Information Center
Rutkowski, Leslie
2011-01-01
Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…
Conditions for synchronization in Josephson-junction arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chernikov, A.A.; Schmidt, G.
An effective perturbation-theoretical method has been developed to study the dynamics of Josephson junction series arrays. It is shown that the inclusion of junction capacitances, often ignored, has a significant impact on synchronization. Comparison of analytic with computational results over a wide range of parameters shows excellent agreement.
Numerical treatment for Carreau nanofluid flow over a porous nonlinear stretching surface
NASA Astrophysics Data System (ADS)
Eid, Mohamed R.; Mahny, Kasseb L.; Muhammad, Taseer; Sheikholeslami, Mohsen
2018-03-01
The impact of magnetic field and nanoparticles on the two-phase flow of a generalized non-Newtonian Carreau fluid over a permeable non-linearly stretching surface has been analyzed in the presence of suction/injection and thermal radiation. The governing PDEs with the corresponding boundary conditions are transformed into a system of non-linear ODEs with appropriate boundary conditions by using a similarity transformation. The system is solved numerically by a 4th-5th order Runge-Kutta-Fehlberg method based on a shooting technique. The impacts of the non-dimensional controlling parameters on velocity, temperature, and nanoparticle volume concentration profiles are scrutinized with the aid of graphs. The Nusselt and Sherwood numbers are studied for different values of the governing parameters. The numerical computations are in excellent agreement with previously reported studies. It is found that the heat transfer rate is reduced with an increment of the thermal radiation parameter, with the opposite behavior for a rising magnetic field; the opposite trend is observed for the mass transfer rate.
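The shooting technique named above can be illustrated on a simpler two-point boundary-value problem; the ODE below is a toy stand-in for the transformed boundary-layer equations:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Boundary-value problem y'' = -y with y(0) = 0, y(pi/2) = 1 (exact solution y = sin x).
# Shooting: guess the unknown initial slope s = y'(0), integrate the IVP with an
# adaptive Runge-Kutta scheme (RK45, same family as Runge-Kutta-Fehlberg),
# then root-find on the mismatch at the far boundary.

def shoot(s):
    sol = solve_ivp(lambda x, y: [y[1], -y[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0          # far-boundary mismatch

s_star = brentq(shoot, 0.1, 2.0)       # unknown slope that satisfies both boundaries
```

For the coupled momentum/energy/concentration system in the paper, `shoot` would integrate all equations at once and return a vector of far-field mismatches, closed with a multidimensional root finder instead of `brentq`.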
Aquatic environmental assessment of Lake Balaton in the light of physical-chemical water parameters.
Sebestyén, Vitkor; Németh, József; Juzsakova, Tatjana; Domokos, Endre; Kovács, Zsófia; Rédey, Ákos
2017-11-01
One of the issues of the Hungarian Water Management Strategy is the improvement and upgrading of the water of Lake Balaton. The Water Framework Directive (WFD) specifies and sets forth the achievement of good ecological status. However, the assessment of the water quality of the lake as a complex system requires a comprehensive monitoring and evaluation procedure. Measurements were carried out around Lake Balaton at ten different locations/sites, and 13 physical-chemical parameters were monitored at each measurement site. For the interpretation of the water chemistry parameters, the Aquatic Environmental Assessment (AEA) method devised by the authors was used for the water body of Lake Balaton. The AEA method can be used for all types of water bodies since it is flexible, and by using an individual weighting procedure for the water chemistry parameters, comprehensive information can be obtained. The AEA method was compared with existing EIA methods according to a predefined criterion system and proved to be the most suitable tool for evaluating the environmental impacts in our study. On the basis of the results it can be concluded that the quality of the studied area of Lake Balaton can be categorized as proper (this conclusion was reached at seven of the ten measurement sites).
Electron impact ionization of cycloalkanes, aldehydes, and ketones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Dhanoj; Antony, Bobby, E-mail: bka.ism@gmail.com
The theoretical calculations of electron impact total ionization cross sections for cycloalkane, aldehyde, and ketone group molecules are undertaken from the ionization threshold to 2 keV. The present calculations are based on the spherical complex optical potential formalism and the complex scattering potential ionization contribution method. The results for most of the targets studied compare fairly well with recent measurements, wherever available, and the cross sections for many targets are predicted for the first time. The correlation between the peak of the ionization cross sections and the number of target electrons and other target parameters is also reported. It was found that the cross sections at their maximum depend linearly on the number of target electrons and on other target parameters, confirming the consistency of the values reported here.
Reliability analysis of different structure parameters of PCBA under drop impact
NASA Astrophysics Data System (ADS)
Liu, P. S.; Fan, G. M.; Liu, Y. H.
2018-03-01
The drop-impact behavior of a PCBA is modelled with the finite element analysis software ABAQUS. Firstly, the Input-G method and the fatigue life under drop impact are introduced, and the mechanism of solder joint failure during a drop is analysed. The main reason for solder joint failure is that the PCB component suffers repeated tension and compression stress during the drop impact. Finally, the equivalent stress and peel stress of different solder joints and board-level components under different impact accelerations are also analysed. The results show that the reliability of tin-silver-copper joints is better than that of tin-lead solder joints, and the expected fatigue life of the solder joints decreases as the impact pulse amplitude increases.
Dielectric elastomer for stretchable sensors: influence of the design and material properties
NASA Astrophysics Data System (ADS)
Jean-Mistral, C.; Iglesias, S.; Pruvost, S.; Duchet-Rumeau, J.; Chesné, S.
2016-04-01
Dielectric elastomers exhibit extended capabilities as flexible sensors for the detection of load distributions, pressure or large deformations. Tracking the human movements of the fingers or the arms could be useful for the reconstruction of sporting gestures, or to control a human-like robot. New measurement methods are addressed in a number of publications, improving the sensitivity and accuracy of the sensing method. Generally, the associated modelling remains simple (RC or RC transmission line). The material parameters are considered constant or as having a negligible effect, which can lead to a serious reduction of accuracy. Comparisons between measurements and modelling require care and skill, and can be tricky. Thus, we propose here a comprehensive model taking into account the influence of the material properties on the performance of the dielectric elastomer sensor (DES). Various parameters influencing the characteristics of the sensors have been identified: dielectric constant and hyper-elasticity. The variations of these parameters as a function of strain impact the linearity and sensitivity of the sensor by a few percent. The sensitivity of the DES is also evaluated for different geometrical parameters (initial thickness) and designs (rectangular and dog-bone shapes). We discuss the impact of the shape with regard to stress. Finally, DESs consisting of a silicone elastomer sandwiched between two highly conductive stretchable electrodes were manufactured and investigated. Classic and reliable LCR measurements are detailed. Experimental results validate our numerical model of a large-strain sensor (>50%).
Robust human body model injury prediction in simulated side impact crashes.
Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D
2016-01-01
This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age had a strong correlation with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
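The Latin hypercube design of experiments for the five varied parameters can be sketched in a few lines; the parameter ranges below are hypothetical placeholders, not the values used in the study:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube design: one sample per equal-probability stratum in
    each dimension, with strata randomly paired across dimensions."""
    d = len(bounds)
    # each column of u is a random permutation of the strata, jittered within each cell
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.uniform(size=(n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# hypothetical ranges: bullet speed [km/h], impact location [mm], impact angle [deg],
# seat track position [0-1], occupant age [yr]
bounds = [(20, 60), (-200, 200), (45, 90), (0, 1), (20, 80)]
design = latin_hypercube(120, bounds, rng)   # 120 crash simulations, 5 parameters
```

Compared with plain random sampling, the stratification guarantees that all 120 runs together sweep the full range of every parameter, which is why the design is popular for expensive FE simulation campaigns.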
NASA Astrophysics Data System (ADS)
Ayub, M.; Abbas, T.; Bhatti, M. M.
2016-06-01
The boundary layer flow of an electrically conducting nanofluid over a Riga plate is considered. The Riga plate is an electromagnetic actuator comprising a spanwise-aligned array of alternating electrodes and permanent magnets mounted on a plane surface. The numerical model incorporates the Brownian motion and thermophoresis effects due to the nanofluid, and the Grinberg term for the wall-parallel Lorentz force due to the Riga plate, in the presence of slip effects. The numerical solution of the problem is obtained using the shooting method. The influences of all the physical parameters, such as the modified Hartmann number, Richardson number, nanoparticle concentration flux parameter, Prandtl number, Lewis number, thermophoresis parameter, Brownian motion parameter and slip parameter, are demonstrated graphically. Numerical values of the reduced Nusselt number and Sherwood number are discussed in detail.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
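A minimal sketch of predicting parameter uncertainties from the linearized joint-confidence approximation: the covariance of a weighted least-squares fit is approximated by (JᵀWJ)⁻¹, where J is the Jacobian of the model with respect to the parameters. The series resistance-inertance-compliance impedance model and the element values below are illustrative stand-ins for the paper's four- and six-element models:

```python
import numpy as np

def impedance(f, R, I, C):
    """Series resistance-inertance-compliance respiratory impedance model
    (illustrative stand-in for the paper's lumped-parameter models)."""
    w = 2 * np.pi * f
    return R + 1j * (w * I - 1.0 / (w * C))

def linearized_cov(f, theta, sigma):
    """Approximate parameter covariance (J^T W J)^-1 for a weighted
    least-squares fit to real and imaginary parts, weights 1/sigma^2."""
    eps = 1e-6
    cols = []
    for k in range(len(theta)):
        dt = np.array(theta, float)
        dt[k] += eps * max(abs(dt[k]), 1.0)          # forward-difference step
        dZ = (impedance(f, *dt) - impedance(f, *theta)) / (dt[k] - theta[k])
        cols.append(np.concatenate([dZ.real, dZ.imag]))
    J = np.column_stack(cols)
    W = np.diag(1.0 / np.concatenate([sigma, sigma]) ** 2)
    return np.linalg.inv(J.T @ W @ J)

f = np.linspace(0.25, 4.0, 16)                   # Hz
theta = (2.0, 0.01, 0.05)                        # R, I, C (illustrative values)
sigma = 0.1 * np.ones(f.size)                    # per-frequency noise s.d. (assumed)
cov = linearized_cov(f, theta, sigma)
rel_sd = np.sqrt(np.diag(cov)) / np.abs(theta)   # relative parameter uncertainties
```

Recomputing `rel_sd` for different minimum frequencies or for stacked input-plus-transfer data reproduces, in miniature, the experiment-design comparisons the abstract describes.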
Mo, Fuhao; Zhao, Siqi; Yu, Chuanhui; Duan, Shuyong
2018-01-01
The car front bumper system needs to meet the requirements of both pedestrian safety and low-speed impact, which are somewhat contradictory. This study aims to design a new kind of modular self-adaptive energy absorber for the front bumper system which can balance the two performances. An X-shaped energy-absorbing structure is proposed which can enhance the energy absorption capacity during impact by changing its deformation mode based on the amount of external collision energy. Then, finite element simulations with a realistic vehicle bumper system are performed to demonstrate its crashworthiness in comparison with the traditional foam energy absorber, showing a significant improvement in the two performances. Furthermore, the structural parameters of the X-shaped energy-absorbing structure, including thickness (t_u), side arc radius (R), and clamping boost beam thickness (t_b), are analyzed using a full factorial method, and a multiobjective optimization is implemented with respect to evaluation indexes of both pedestrian safety and low-speed impact. The optimal parameters are then verified, and the feasibility of the optimal results is confirmed. In conclusion, the new X-shaped energy absorber can meet both pedestrian safety and low-speed impact requirements well by altering its main deformation modes according to different impact energy levels. PMID:29581728
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on these parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available, and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as the heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population.
A novel method for characterizing the impact response of functionally graded plates
NASA Astrophysics Data System (ADS)
Larson, Reid A.
Functionally graded material (FGM) plates are advanced composites with properties that vary continuously through the thickness of the plate. Metal-ceramic FGM plates have been proposed for use in thermal protection systems where a metal-rich interior surface of the plate gradually transitions to a ceramic-rich exterior surface of the plate. The ability of FGMs to resist impact loads must be demonstrated before using them in high-temperature environments in service. This dissertation presents a novel technique by which the impact response of FGM plates is characterized for low-velocity, low- to medium-energy impact loads. An experiment was designed where strain histories in FGM plates were collected during impact events. These strain histories were used to validate a finite element simulation of the test. A parameter estimation technique was developed to estimate local material properties in the anisotropic, non-homogenous FGM plates to optimize the finite element simulations. The optimized simulations captured the physics of the impact events. The method allows research & design engineers to make informed decisions necessary to implement FGM plates in aerospace platforms.
Analysing the 21 cm signal from the epoch of reionization with artificial neural networks
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2017-07-01
The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on the new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
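The inversion step described above, training a network to map power spectra to astrophysical parameters, can be sketched with a minimal one-hidden-layer network. Everything below is a synthetic stand-in: the "spectra" come from an invented linear mapping rather than 21cmFast, and the architecture is not the one the authors explored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: 70 "power spectra" (20 bins
# each) generated from 3 underlying parameters.  The linear mapping is
# invented for illustration; the real study uses 21cmFast outputs.
n_train, n_bins, n_params = 70, 20, 3
theta = rng.uniform(0.0, 1.0, (n_train, n_params))
basis = rng.normal(size=(n_params, n_bins))
spectra = theta @ basis + 0.01 * rng.normal(size=(n_train, n_bins))

# One-hidden-layer network trained by full-batch gradient descent to
# regress the parameters from the spectra (the "inversion" step).
W1 = 0.1 * rng.normal(size=(n_bins, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, n_params)); b2 = np.zeros(n_params)
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(spectra @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2                           # parameter estimates
    err = pred - theta
    losses.append(float((err**2).mean()))
    dh = (err @ W2.T) * (1.0 - h**2)             # backpropagation
    W2 -= lr * h.T @ err / n_train;  b2 -= lr * err.mean(0)
    W1 -= lr * spectra.T @ dh / n_train;  b1 -= lr * dh.mean(0)
```

A held-out test set, like the 54 test samples in the abstract, would then be pushed through the trained network to assess reconstruction quality per parameter.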
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for this performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from an impact load often depend on the structural design, this paper proposes a new, efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
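The Lyapunov simplification mentioned in the abstract is standard: for a free response xdot = A x, the integrated quadratic index J = ∫₀^∞ xᵀQx dt equals x₀ᵀPx₀, where P solves the Lyapunov equation AᵀP + PA = −Q. A minimal sketch, with an invented mass-spring-damper as the structure:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def residual_vibration_index(A, Q, x0):
    """J = int_0^inf x'Qx dt for the free response xdot = A x with
    x(0) = x0: J = x0' P x0, where P solves A'P + P A = -Q."""
    P = solve_continuous_lyapunov(A.T, -Q)
    return float(x0 @ P @ x0)

# illustrative unit-mass spring-damper (stiffness k, damping c) in
# state-space form, excited by an initial displacement
k, c = 4.0, 0.5
A = np.array([[0.0, 1.0], [-k, -c]])
Q = np.eye(2)
J = residual_vibration_index(A, Q, np.array([1.0, 0.0]))
```

This avoids any time integration of the response, which is what makes the index cheap enough for repeated evaluation inside a gradient-based optimizer.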
The Quality of Methods Reporting in Parasitology Experiments
Flórez-Vargas, Oscar; Bramhall, Michael; Noyes, Harry; Cruickshank, Sheena; Stevens, Robert; Brass, Andy
2014-01-01
There is a growing concern both inside and outside the scientific community over the lack of reproducibility of experiments. The depth and detail of reported methods are critical to the reproducibility of findings, but also for making it possible to compare and integrate data from different studies. In this study, we evaluated in detail the methods reporting in a comprehensive set of trypanosomiasis experiments that should enable valid reproduction, integration and comparison of research findings. We evaluated a subset of other parasitic (Leishmania, Toxoplasma, Plasmodium, Trichuris and Schistosoma) and non-parasitic (Mycobacterium) experimental infections in order to compare the quality of method reporting more generally. A systematic review using PubMed (2000–2012) of all publications describing gene expression in cells and animals infected with Trypanosoma spp was undertaken based on PRISMA guidelines; 23 papers were identified and included. We defined a checklist of essential parameters that should be reported and have scored the number of those parameters that are reported for each publication. Bibliometric parameters (impact factor, citations and h-index) were used to look for association between Journal and Author status and the quality of method reporting. Trichuriasis experiments achieved the highest scores and included the only paper to score 100% in all criteria. The mean of scores achieved by Trypanosoma articles through the checklist was 65.5% (range 32–90%). Bibliometric parameters were not correlated with the quality of method reporting (Spearman's rank correlation coefficient <−0.5; p>0.05). Our results indicate that the quality of methods reporting in experimental parasitology is a cause for concern and it has not improved over time, despite there being evidence that most of the assessed parameters do influence the results. 
We propose that our set of parameters be used as guidelines to improve the quality of the reporting of experimental infection models as a pre-requisite for integrating and comparing sets of data. PMID:25076044
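The bibliometric comparison in this study reduces to a rank correlation between checklist scores and journal or author metrics. A minimal sketch with invented scores (the real study used 23 papers and specific checklist criteria):

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative (invented) data: each paper's checklist score (fraction
# of essential parameters reported) alongside its journal impact
# factor, mirroring the paper's bibliometric comparison.
checklist_score = np.array([0.32, 0.45, 0.50, 0.62, 0.65, 0.71, 0.80, 0.90])
impact_factor   = np.array([9.1,  2.3,  6.7,  3.0,  8.8,  1.9,  4.4,  5.2])

rho, pval = spearmanr(checklist_score, impact_factor)
# A small |rho| with p > 0.05 would echo the paper's finding that
# journal status does not predict the quality of methods reporting.
```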
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
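The core idea, imputing from a distribution of models rather than a single model, can be sketched for a binary variable as follows. The prior on the logit offset delta is an invented placeholder for the "subjective notions of missing data mechanism uncertainty"; a full analysis would also apply the nested multiple imputation combining rules for standard errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch: impute a binary variable under uncertainty about
# the missingness mechanism.  Each imputation model shifts the logit of
# the imputation probability by delta, drawn from a prior expressing
# how nonignorable we believe the nonresponse may be.
y = rng.binomial(1, 0.6, size=200).astype(float)
missing = rng.random(200) < 0.3
p_obs = y[~missing].mean()

logit = lambda p: np.log(p / (1.0 - p))
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

M, N = 10, 5                          # imputation models x imputations per model
estimates = np.empty((M, N))
for m in range(M):
    delta = rng.normal(0.0, 0.5)      # mechanism-uncertainty prior (assumed)
    p_mis = expit(logit(p_obs) + delta)
    for n in range(N):
        y_imp = y.copy()
        y_imp[missing] = rng.binomial(1, p_mis, missing.sum())
        estimates[m, n] = y_imp.mean()

pooled = estimates.mean()             # point estimate across nested imputations
between_model = estimates.mean(axis=1).var(ddof=1)
```

The between-model variance component is what widens the intervals relative to a single-model imputation, which is how the coverage gains in the abstract arise.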
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations of hive population trajectories are performed, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios against control simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
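The variance-based decomposition used here can be sketched with a Saltelli-style Monte Carlo estimator of first-order Sobol' indices. The three-input additive model below is an invented stand-in for VarroaPop, chosen because its indices have a known analytic form to check against.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_order_sobol(f, n, d):
    """Saltelli-style Monte Carlo estimator of first-order Sobol'
    indices for f on the unit hypercube (a minimal sketch of the
    variance-based decomposition applied to VarroaPop in the paper)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with B's values
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy stand-in for a colony model with three inputs of known importance
weights = np.array([4.0, 2.0, 1.0])
model = lambda X: X @ weights
S = first_order_sobol(model, 20000, 3)
# analytic first-order indices for this additive model: w_i^2 / sum(w_j^2)
```

For an additive model the first-order indices sum to one; interaction effects, the second-order terms the abstract mentions, would show up as a shortfall in that sum.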
On the effect of model parameters on forecast objects
NASA Astrophysics Data System (ADS)
Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott
2018-04-01
Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
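Ingredient (1), the Latin hypercube sample over the model parameters, can be sketched with SciPy's quasi-Monte Carlo module. The parameter names and ranges below are invented examples, not those of the actual weather model.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube design: 50 model runs over 3 (hypothetical) parameters.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)                 # stratified points in [0, 1)^3
lower = np.array([0.1, 1e-4, 0.5])
upper = np.array([0.9, 1e-2, 2.0])
params = qmc.scale(unit, lower, upper)      # map to the physical ranges
# each column covers its range with exactly one sample per 1/50-wide bin
```

The stratification is the point of the design: every parameter's marginal range is covered evenly even with few runs, which is why it is preferred over plain random sampling for expensive models.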
Impact of Physical Activity on Obesity and Lipid Profile of Adults with Intellectual Disability
ERIC Educational Resources Information Center
Gawlik, Krystyna; Zwierzchowska, Anna; Celebanska, Diana
2018-01-01
Introduction: This study assessed overweight, obesity and lipid profiles in adults with intellectual disability and compared these metrics with their physical activity. Materials and Method: Basic somatic parameters, lipid profile and weekly physical activity were examined in 27 adults with moderate intellectual disability. Chi-square independence…
Predicting lodgepole pine site index from climatic parameters in Alberta.
Robert A. Monserud; Shongming Huang; Yuqing Yang
2006-01-01
We sought to evaluate the impact of climatic variables on site productivity of lodgepole pine (Pinus contorta var. latifolia Engelm.) for the province of Alberta. Climatic data were obtained from the Alberta Climate Model, which is based on 30-year normals from the provincial weather station network. Mapping methods were based...
Doddridge, Greg D; Shi, Zhenqi
2015-01-01
Since near infrared spectroscopy (NIRS) was introduced to the pharmaceutical industry, effort has been spent on leveraging the power of chemometrics to extract the best possible signal to correlate with the analyte of interest. In contrast, only a few studies have addressed the potential impact of instrument parameters, such as resolution and co-adds (i.e., the number of averaged replicate spectra), on method performance in terms of error statistics. In this study, a holistic approach was used to evaluate the effect of the instrument parameters of an FT-NIR spectrometer on the performance of a content uniformity method with respect to a list of figures of merit. The figures of merit included error statistics, signal-to-noise ratio (S/N), sensitivity, analytical sensitivity, effective resolution, selectivity, limit of detection (LOD), and noise. A Bruker MPA FT-NIR spectrometer was used to investigate an experimental design in resolution (4 cm⁻¹ and 32 cm⁻¹) and co-adds (256 and 16), plus a center point at 8 cm⁻¹ and 32 co-adds. Given the balance among underlying chemistry, instrument parameters, chemometrics, and measurement time, 8 cm⁻¹ and 32 co-adds in combination with appropriate 2nd derivative preprocessing was found to fit best for the intended purpose as a content uniformity method. Considerations for optimizing both instrument parameters and chemometrics are proposed and discussed in order to maximize method performance for future NIRS method development in R&D. Copyright © 2014 Elsevier B.V. All rights reserved.
Assessing the Value of Biosimilars: A Review of the Role of Budget Impact Analysis.
Simoens, Steven; Jacobs, Ira; Popovian, Robert; Isakov, Leah; Shane, Lesley G
2017-10-01
Biosimilar drugs are highly similar to an originator (reference) biologic, with no clinically meaningful differences in terms of safety or efficacy. As biosimilars offer the potential for lower acquisition costs versus the originator biologic, evaluating the economic implications of the introduction of biosimilars is of interest. Budget impact analysis (BIA) is a commonly used methodology. This review of published BIAs of biosimilar fusion proteins and/or monoclonal antibodies identified 12 unique publications (three full papers and nine congress posters). When evaluated alongside professional guidance on conducting BIA, the majority of BIAs identified were generally in line with international recommendations. However, a lack of peer-reviewed journal articles and considerable shortcomings in the publications were identified. Deficiencies included a limited range of cost parameters, a reliance on assumptions for parameters such as uptake and drug pricing, a lack of expert validation, and a limited range of sensitivity analyses that were based on arbitrary ranges. The rationale for the methods employed, limitations of the BIA approach, and instructions for local adaptation often were inadequately discussed. To understand fully the potential economic impact and value of biosimilars, the impact of biosimilar supply, manufacturer-provided supporting services, and price competition should be included in BIAs. Alternative approaches, such as cost minimization, which requires evidence demonstrating similarity to the originator biologic, and those that integrate a range of economic assessment methods, are needed to assess the value of biosimilars.
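The arithmetic at the core of a budget impact analysis is simple: budget change equals the eligible population times the uptake-weighted price difference. The sketch below uses invented placeholder numbers; a real BIA would add population dynamics, dose differences, supporting-service costs, and the sensitivity analyses the review finds lacking.

```python
# Deliberately simple budget impact sketch for biosimilar entry.
# All figures (market size, prices, uptake) are invented placeholders.
patients = 10_000                      # eligible treated population
price_originator = 20_000.0            # annual cost per patient
price_biosimilar = 14_000.0
uptake_by_year = [0.10, 0.25, 0.40]    # biosimilar market share, years 1-3

def annual_budget_impact(uptake):
    baseline = patients * price_originator
    with_biosimilar = patients * ((1 - uptake) * price_originator
                                  + uptake * price_biosimilar)
    return with_biosimilar - baseline  # negative values are savings

impacts = [annual_budget_impact(u) for u in uptake_by_year]
```

Even this toy version shows why uptake and price assumptions dominate the result, which is exactly where the review found published BIAs relying on unvalidated assumptions.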
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
NASA Astrophysics Data System (ADS)
Harmon, Michael; Gamba, Irene M.; Ren, Kui
2016-12-01
This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
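An inverse problem of the kind described, estimating an unknown epidemiological parameter from surveillance-style observations, can be sketched with a toy SIR model in place of the paper's pneumococcal models. The transmission rate, noise level, and data below are all synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Minimal inverse-problem sketch: recover a transmission rate beta from
# noisy prevalence observations of a simple SIR model (a stand-in for
# the age-structured pneumococcal models in the paper).
def sir(t, y, beta, gamma=0.1):
    s, i = y
    return [-beta * s * i, beta * s * i - gamma * i]

t_obs = np.linspace(0.0, 50.0, 20)
true_beta = 0.4
sol = solve_ivp(sir, (0.0, 50.0), [0.99, 0.01], t_eval=t_obs, args=(true_beta,))
rng = np.random.default_rng(3)
data = sol.y[1] + rng.normal(0.0, 0.005, t_obs.size)   # synthetic surveillance

def residuals(theta):
    s = solve_ivp(sir, (0.0, 50.0), [0.99, 0.01], t_eval=t_obs, args=(theta[0],))
    return s.y[1] - data

fit = least_squares(residuals, x0=[0.2], bounds=(0.01, 2.0))
beta_hat = fit.x[0]
```

The same least-squares structure extends to multiple compartments and age classes; what changes is the size of the state and parameter vectors, not the estimation machinery.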
An Impulse-Momentum Method for Calculating Landing-Gear Contact Conditions in Eccentric Landings
NASA Technical Reports Server (NTRS)
Yntema, Robert T; Milwitzky, Benjamin
1952-01-01
An impulse-momentum method for determining impact conditions for landing gears in eccentric landings is presented. The analysis is primarily concerned with the determination of contact velocities for impacts subsequent to initial touchdown in eccentric landings and with the determination of the effective mass acting on each landing gear. These parameters determine the energy-absorption requirements for the landing gear and, in conjunction with the particular characteristics of the landing gear, govern the magnitude of the ground loads. Changes in airplane angular and linear velocities and the magnitude of landing-gear vertical, drag, and side impulses resulting from a landing impact are determined by means of impulse-momentum relationships without the necessity for considering detailed force-time variations. The effective mass acting on each gear is also determined from the calculated landing-gear impulses. General equations applicable to any type of eccentric landing are written and solutions are obtained for the particular cases of an impact on one gear, a simultaneous impact on any two gears, and a symmetrical impact. In addition, a solution is presented for a simplified two-degree-of-freedom system which allows rapid qualitative evaluation of the effects of certain principal parameters. The general analysis permits evaluation of the importance of such initial conditions at ground contact as vertical, horizontal, and side drift velocities, wing lift, roll and pitch angles, and rolling and pitching velocities, as well as the effects of such factors as landing gear location, airplane inertia, landing-gear length, energy-absorption efficiency, and wheel angular inertia on the severity of landing impacts. A brief supplementary study which permits a limited evaluation of variable aerodynamic effects neglected in the analysis is presented in the appendix.
Application of the analysis indicates that landing-gear impacts in eccentric landings can be appreciably more severe than impacts in symmetrical landings with the same sinking speed. The results also indicate the effects of landing-gear location, airplane inertia, initial wing lift, side drift velocity, attitude, and initial rolling velocity on the severity of both initial and subsequent landing-gear impacts. A comparison of the severity of impacts on auxiliary gears for tricycle and quadricycle configurations is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, Jim; Flicker, Dawn; Ide, Kayo
2006-05-20
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. Model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are subject only to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data with the corresponding values generated from the model, and lies within a small range, of less than 2%, of the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
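The augmented-state trick the abstract describes can be sketched on a scalar system: fold the unknown parameter into the state vector and let the EKF estimate both from a single measurement stream. The system below is an invented stand-in for the shock-wave hydrodynamic model; only the parameter's stochastic (random-walk) evolution mirrors the paper's formalism.

```python
import numpy as np

rng = np.random.default_rng(4)

# Augmented-state EKF sketch: z = [x, a] with unknown parameter a.
# Truth: x_{k+1} = a_true * x_k + u + process noise; measurement y = x + noise.
a_true, u, q, r = 0.8, 0.1, 0.05, 0.1
x = 1.0
z = np.array([0.0, 0.3])              # filter's initial state and parameter guess
P = np.diag([1.0, 1.0])
Q = np.diag([q**2, 1e-6])             # the parameter evolves only stochastically
H = np.array([[1.0, 0.0]])            # only x is measured

for _ in range(1000):
    x = a_true * x + u + rng.normal(0.0, q)      # truth step
    y = x + rng.normal(0.0, r)                   # single measurement
    F = np.array([[z[1], z[0]], [0.0, 1.0]])     # Jacobian of f(z) = [a x + u, a]
    z = np.array([z[1] * z[0] + u, z[1]])        # predict
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T)[0, 0] + r**2               # innovation variance
    K = (P @ H.T)[:, 0] / S                      # Kalman gain
    z = z + K * (y - z[0])                       # update
    P = P - np.outer(K, (H @ P)[0])

a_hat = z[1]
```

The cross-covariance built up through the Jacobian F is what lets a measurement of x alone correct the parameter estimate, the "unified assimilation operation" of the abstract.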
Possible impact solutions of asteroid (99942) Apophis
NASA Astrophysics Data System (ADS)
Wlodarczyk, Ireneusz
2017-07-01
We computed impact solutions of the potentially dangerous asteroid (99942) Apophis based on 4469 optical observations from March 15.10789 UTC, 2004 through January 03.26308 UTC, 2015, and 20 radar observations from January 27.97986 UTC, 2005 through March 15.99931 UTC, 2013. Possible impact solutions were computed using the Line Of Variation (LOV) method out to σ_LOV = 5, with 3000 virtual asteroids (VAs) on each side of the LOV (6001 VAs in total, including the nominal solution), whose orbits were propagated to JD 2495000.5 TDT (December 24, 2118). We computed the non-gravitational parameter A2 = −5.586×10⁻¹⁴ au/d², with 1-σ uncertainty 2.965×10⁻¹⁴ au/d², and found possible impacts until 2096. The possible impact corridor for 2068 is presented.
NASA Astrophysics Data System (ADS)
Mavroidis, Panayiotis; Lind, Bengt K.; Theodorou, Kyriaki; Laurell, Göran; Fernberg, Jan-Olof; Lefkopoulos, Dimitrios; Kappas, Constantin; Brahme, Anders
2004-08-01
The purpose of this work is to provide some statistical methods for evaluating the predictive strength of radiobiological models and the validity of dose-response parameters for tumour control and normal tissue complications. This is accomplished by associating the expected complication rates, which are calculated using different models, with the clinical follow-up records. These methods are applied to 77 patients who received radiation treatment for head and neck cancer and 85 patients who were treated for arteriovenous malformation (AVM). The three-dimensional dose distribution delivered to esophagus and AVM nidus and the clinical follow-up results were available for each patient. Dose-response parameters derived by a maximum likelihood fitting were used as a reference to evaluate their compatibility with the examined treatment methodologies. The impact of the parameter uncertainties on the dose-response curves is demonstrated. The clinical utilization of the radiobiological parameters is illustrated. The radiobiological models (relative seriality and linear Poisson) and the reference parameters are validated to prove their suitability in reproducing the treatment outcome pattern of the patient material studied (through the probability of finding a worse fit, the area under the ROC curve and the χ² test). The analysis was carried out for the upper 5 cm of the esophagus (proximal esophagus), where all the strictures are formed, and the total volume of the AVM. The estimated confidence intervals of the dose-response curves appear to have a significant supporting role in their clinical implementation and use.
Quantifying Selection with Pool-Seq Time Series Data.
Taus, Thomas; Futschik, Andreas; Schlötterer, Christian
2017-11-01
Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations several studies proposed experimental parameters to optimize the identification of the selection targets. No such recommendations are available for the underlying parameters selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
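The estimation problem described here, recovering a selection coefficient from allele frequency time series, can be sketched with a haploid Wright-Fisher model. This is only an illustration of the principle: the paper's method is genome-scale and also estimates dominance, which the haploid sketch cannot represent.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_wf(p0, s, ne, gens):
    """Haploid Wright-Fisher allele-frequency trajectory with selection
    coefficient s and binomial drift in a population of size ne."""
    p, traj = p0, [p0]
    for _ in range(gens):
        p = p * (1 + s) / (1 + p * s)            # deterministic selection step
        p = rng.binomial(2 * ne, p) / (2 * ne)   # binomial sampling (drift)
        traj.append(p)
    return np.array(traj)

def estimate_s(traj):
    """In this haploid model logit(p_t) increases by log(1+s) per
    generation, so a least-squares slope on the logit scale recovers s."""
    t = np.arange(traj.size)
    logit = np.log(traj / (1.0 - traj))
    slope = np.polyfit(t, logit, 1)[0]
    return float(np.expm1(slope))

# replicated trajectories, as in an Evolve and Resequence experiment
trajs = [simulate_wf(0.2, 0.05, 10_000, 30) for _ in range(10)]
s_hat = np.mean([estimate_s(tr) for tr in trajs])
```

Averaging over replicates is what suppresses drift noise, consistent with the abstract's finding that Ne and the number of replicates matter most for estimate quality.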
A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.
Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E
2016-06-21
We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared, the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact) can be identified. This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA, and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
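The probabilistic comparison at the heart of the methodology can be sketched by drawing the uncertain impacts of two alternatives and reporting the fraction of draws in which one is lower. The lognormal distributions below are invented for illustration, not taken from the pavement case study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Probabilistic comparison sketch: uncertain impact scores for two
# alternatives (distributions are invented placeholders).
n = 10_000
impact_a = rng.lognormal(mean=np.log(100.0), sigma=0.10, size=n)
impact_b = rng.lognormal(mean=np.log(115.0), sigma=0.15, size=n)
p_a_lower = float((impact_a < impact_b).mean())
# A value near 1 marks a "resolvable" comparison in the paper's sense;
# near 0.5, influential parameters should be sought via sensitivity
# analysis and refined before deciding.
```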
Time-varying parameter models for catchments with land use change: the importance of model structure
NASA Astrophysics Data System (ADS)
Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid
2018-05-01
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
NASA Astrophysics Data System (ADS)
He, F.
2015-12-01
This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated TC characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effects. Lastly, we find that the intensity uncertainty caused by physical parameters is comparable in degree to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
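The Latin hypercube sampling step used above can be sketched in a few lines of stdlib Python (a generic illustration, not the CAM ensemble setup; the two parameter ranges are hypothetical): each parameter's range is split into equal strata, each stratum is sampled exactly once, and the stratum order is shuffled independently per parameter.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample over box-constrained parameters:
    bounds is a list of (lo, hi) pairs, one per parameter."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one random point per stratum, then shuffle the stratum order
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        columns.append(pts)
    return [tuple(col[k] for col in columns) for k in range(n_samples)]

# Hypothetical bounds for two convection-scheme parameters
samples = latin_hypercube(8, [(0.5e-3, 2.0e-3), (0.1, 1.0)])
```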
NASA Astrophysics Data System (ADS)
Sajid, T.; Sagheer, M.; Hussain, S.; Bilal, M.
2018-03-01
The present article studies Darcy-Forchheimer flow of a Maxwell nanofluid over a linear stretching surface. Effects such as variable thermal conductivity, activation energy and nonlinear thermal radiation are also incorporated in the analysis of heat and mass transfer. The governing nonlinear partial differential equations (PDEs) with convective boundary conditions are first converted into nonlinear ordinary differential equations (ODEs) with the help of a similarity transformation, and the resulting nonlinear ODEs are then solved with the shooting method and the MATLAB built-in bvp4c solver. The impact of different physical parameters, such as the Brownian motion and thermophoresis parameters, Reynolds number, magnetic parameter, nonlinear radiative heat flux, Prandtl number, Lewis number, reaction rate constant, activation energy and Biot number, on the Nusselt number and the velocity, temperature and concentration profiles is discussed. It is observed that both the thermophoresis parameter and the activation energy parameter have an increasing effect on the concentration profile.
Ahn, Jae Joon; Kim, Young Min; Yoo, Keunje; Park, Joonhong; Oh, Kyong Joo
2012-11-01
For groundwater conservation and management, it is important to accurately assess groundwater pollution vulnerability. This study proposed an integrated model using ridge regression and a genetic algorithm (GA) to effectively select the major hydro-geological parameters influencing groundwater pollution vulnerability in an aquifer. The GA-Ridge regression method determined that depth to water, net recharge, topography, and the impact of vadose zone media were the hydro-geological parameters that influenced trichloroethene pollution vulnerability in a Korean aquifer. When using these selected hydro-geological parameters, the accuracy was improved for various statistical nonlinear and artificial intelligence (AI) techniques, such as multinomial logistic regression, decision trees, artificial neural networks, and case-based reasoning. These results provide a proof of concept that the GA-Ridge regression is effective at determining influential hydro-geological parameters for the pollution vulnerability of an aquifer, and in turn, improves the AI performance in assessing groundwater pollution vulnerability.
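The selection-plus-ridge idea above can be sketched compactly (an illustrative stand-in only: random-subset search replaces the genetic algorithm, the ridge fit is solved via normal equations, and the data are synthetic rather than hydro-geological):

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ridge_sse(X, y, cols, lam=1.0):
    """Fit ridge regression on the chosen columns; return the sum of
    squared errors (used as the subset score)."""
    Xs = [[row[c] for c in cols] for row in X]
    p = len(cols)
    A = [[sum(r[i] * r[j] for r in Xs) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(Xs, y)) for i in range(p)]
    w = solve(A, b)
    return sum((yi - sum(wi * xi for wi, xi in zip(w, r))) ** 2
               for r, yi in zip(Xs, y))

def select_features(X, y, k, n_iter=200, seed=0):
    """Random-subset search (a simplified stand-in for the GA),
    scored by ridge SSE."""
    rng = random.Random(seed)
    p = len(X[0])
    best = min((rng.sample(range(p), k) for _ in range(n_iter)),
               key=lambda cols: ridge_sse(X, y, sorted(cols)))
    return sorted(best)

# Synthetic data: y depends only on columns 0 and 2 of five candidates
rng = random.Random(42)
X = [[rng.random() for _ in range(5)] for _ in range(40)]
y = [3.0 * row[0] - 2.0 * row[2] for row in X]
sel = select_features(X, y, k=2)
```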
Potential impacts of climate change on water quality in a shallow reservoir in China.
Zhang, Chen; Lai, Shiyu; Gao, Xueping; Xu, Liping
2015-10-01
To study the potential effects of climate change on water quality in a shallow reservoir in China, field data analysis is applied to data collected over a given monitoring period. Nine water quality parameters (water temperature, ammonia nitrogen, nitrate nitrogen, nitrite nitrogen, total nitrogen, total phosphorus, chemical oxygen demand, biochemical oxygen demand and dissolved oxygen) and three climate indicators over 20 years (1992-2011) are considered. Certain water quality and climate parameters exhibit significant annual trends. Five parameters exhibit significant seasonal differences in the monthly means between the two decades (1992-2001 and 2002-2011) of the monitoring period. Non-parametric regression is performed to explore potential key climate drivers of water quality in the reservoir. The results indicate that seasonal changes in temperature and rainfall may have positive impacts on water quality. However, an extremely cold spring and high wind speed are likely to affect the self-stabilising equilibrium states of the reservoir, which requires attention in the future. The results also suggest that land use changes have an important impact on the nitrogen load. This study provides useful information regarding the potential effects of climate change on water quality in developing countries.
NASA Astrophysics Data System (ADS)
Xu, Xueping; Han, Qinkai; Chu, Fulei
2018-03-01
The electromagnetic vibration of electrical machines with an eccentric rotor has been extensively investigated. However, magnetic saturation has often been neglected. Moreover, rub impact between the rotor and stator is inevitable when the amplitude of the rotor vibration exceeds the air-gap length. This paper aims to propose a general electromagnetic excitation model for electrical machines. First, a general model that takes magnetic saturation and rub impact into consideration is proposed and validated against the finite element method and the literature. The dynamic equations of a Jeffcott rotor system with electromagnetic excitation and mass imbalance are presented. Then, the effects of pole-pair number and rubbing parameters on vibration amplitude are studied, and approaches for restraining the amplitude are put forward. Finally, the influences of mass eccentricity, resultant magnetomotive force (MMF), stiffness coefficient, damping coefficient, contact stiffness and friction coefficient on the stability of the rotor system are investigated through Floquet theory. An amplitude jumping phenomenon is observed in a synchronous generator for different pole-pair numbers. Changes in the design parameters can alter the stability states of the rotor system, and the range of parameter values forms the zone of stability, which provides helpful guidance for the design and application of electrical machines.
Koçyiğit, Burhan Fatih; Gür, Ali; Altındağ, Özlem; Akyol, Ahmet; Gürsoy, Savaş
2016-04-01
Fibromyalgia is a disease characterized by chronic, widespread pain. Both pharmacological and non-pharmacological treatment methods are used. The aim of the present study was to determine the effect of balneotherapy on treatment of fibromyalgia syndrome, compared with education alone. A total of 66 patients diagnosed with fibromyalgia syndrome were randomly separated into balneotherapy and control groups. Patients in both groups were informed about fibromyalgia syndrome. In addition, the balneotherapy group received 21 sessions of spa treatment with 34.8 °C thermomineral water, attending the spa 5 days a week. Patients were evaluated by visual analogue scale, tender point count, the Fibromyalgia Impact Questionnaire, and the Modified Fatigue Impact Scale at initiation of treatment and at the 15th day, 1st month, 3rd month, and 6th month. Evaluations were performed by the same doctor. Statistically significant improvement was detected in all parameters in both groups, compared to the starting evaluation. The greatest improvements across all parameters were observed in the balneotherapy group at the 3-month follow-up. In addition, all parameters except tender point count and the modified fatigue impact score remained improved at the 6-month follow-up. It was concluded that the addition of balneotherapy to patient education has both short- and long-term beneficial effects in female patients with fibromyalgia.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of catchments, DHM is still subject to uncertainties inherent in model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of methodology for specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which can deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes.
Hindcasting experiments are implemented to evaluate the impact of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach can remain stable with limited ensemble members and is viable for practical use.
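The joint state-parameter filtering idea (without the lagging and MPR machinery) can be reduced to a toy bootstrap particle filter for a linear-reservoir model; the model, prior range and noise level below are hypothetical choices for illustration only:

```python
import math
import random

def particle_filter(q_obs, rain, n_part=500, sd=0.1, seed=3):
    """Bootstrap particle filter jointly tracking storage S and the
    recession parameter k of a linear reservoir (outflow q = k * S)."""
    rng = random.Random(seed)
    parts = [{"S": 10.0, "k": rng.uniform(0.05, 0.5)} for _ in range(n_part)]
    for qt, rt in zip(q_obs, rain):
        for p in parts:                      # propagate each particle
            q = p["k"] * p["S"]
            p["S"] += rt - q
            p["q"] = q
        # Gaussian likelihood of the observed flow, normalized in log space
        logw = [-(p["q"] - qt) ** 2 / (2 * sd * sd) for p in parts]
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        # multinomial resampling
        parts = [dict(p) for p in rng.choices(parts, weights=w, k=n_part)]
    return sum(p["k"] for p in parts) / n_part  # posterior mean of k

# Synthetic truth: linear reservoir with k = 0.2
S, k_true, rain = 10.0, 0.2, [1.0] * 30
q_obs = []
for r in rain:
    q = k_true * S
    q_obs.append(q)
    S += r - q
k_est = particle_filter(q_obs, rain)
```

A real DHM application would add jitter or lagged smoothing to avoid particle degeneracy, which this sketch deliberately omits.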
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
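The bit-collision effect noted above can be demonstrated with a minimal sketch (hypothetical feature ids, not Canvas fingerprints): folding sparse feature ids into a small bit space makes distinct features collide, which here inflates the Tanimoto similarity.

```python
def fold(bits, size):
    """Fold sparse feature ids into a fixed-size bit space (collisions possible)."""
    return {b % size for b in bits}

def tanimoto(a, b):
    """Tanimoto similarity of two bit sets: |intersection| / |union|."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Hypothetical sparse feature ids for two molecules
mol1 = {17, 4242, 99123, 500017, 123456}
mol2 = {17, 4242, 777777, 888888, 123456}

true_sim = tanimoto(mol1, mol2)                    # large, collision-free bit space
small = tanimoto(fold(mol1, 64), fold(mol2, 64))   # 64-bit space: collisions inflate similarity
```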
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
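A toy Monte Carlo experiment (not the authors' scattering model; the 3-sigma signal mean, threshold and spread are hypothetical) illustrates the effect described above: trial-to-trial uncertainty in the noise/signal scale raises the tails and inflates the false-alarm probability.

```python
import random

def roc_point(threshold, n_trials=50_000, sigma_spread=0.0, seed=7):
    """Empirical (Pfa, Pd) for a threshold detector on a Gaussian
    statistic; sigma_spread injects trial-to-trial uncertainty in the
    noise/signal scale."""
    rng = random.Random(seed)
    fa = det = 0
    for _ in range(n_trials):
        scale = 1.0 + sigma_spread * abs(rng.gauss(0.0, 1.0))
        if rng.gauss(0.0, scale) > threshold:   # noise-only trial
            fa += 1
        if rng.gauss(3.0, scale) > threshold:   # signal-present trial (mean = 3 sigma)
            det += 1
    return fa / n_trials, det / n_trials

pfa_fixed, pd_fixed = roc_point(2.5, sigma_spread=0.0)
pfa_unc, pd_unc = roc_point(2.5, sigma_spread=1.0)
```

Sweeping the threshold and plotting (Pfa, Pd) pairs traces the full ROC curve for each uncertainty level.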
Empirical estimation of school siting parameter towards improving children's safety
NASA Astrophysics Data System (ADS)
Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.
2014-02-01
Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to ensure that a particular school is located in a safe environment. These parameters are established by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. School siting parameters are crucially important as they can affect safety and school reputation, not to mention the perceptions of pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with an ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income living in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods: on-site and off-site. The on-site method is to administer questionnaires to people, and the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the questionnaire results. The output is a map of suitable safe distances from school to home. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. 
Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides a powerful yet easy-to-use tool, removing impediments to putting theory into practice. PMID:26046009
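The contrast between a naive regression-style comparison and the intervention-based causal parameter can be sketched with a single-time-point g-computation on simulated data (a stdlib Python analogue for illustration; ltmle itself is an R package, and the data-generating numbers below are hypothetical):

```python
import random

def simulate(n=20_000, seed=11):
    """Hypothetical confounded data: L affects both A and Y, and the
    true effect of exposure A on outcome Y is 2.0."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        L = rng.random() < 0.5                    # confounder (e.g. baseline adiposity)
        A = rng.random() < (0.8 if L else 0.2)    # exposure more likely when L is present
        Y = 2.0 * A + 3.0 * L + rng.gauss(0, 1)   # outcome
        data.append((L, A, Y))
    return data

def naive_effect(data):
    """Unadjusted difference in mean outcome between exposure groups."""
    m1 = [y for l, a, y in data if a]
    m0 = [y for l, a, y in data if not a]
    return sum(m1) / len(m1) - sum(m0) / len(m0)

def gcomp_effect(data):
    """G-computation: standardize E[Y | A=a, L] over the marginal law of L."""
    def cond_mean(a, l):
        ys = [y for li, ai, y in data if ai == a and li == l]
        return sum(ys) / len(ys)
    p_l = sum(1 for l, _, _ in data if l) / len(data)
    def ey(a):
        return cond_mean(a, True) * p_l + cond_mean(a, False) * (1 - p_l)
    return ey(True) - ey(False)

data = simulate()
naive = naive_effect(data)   # biased upward by confounding (approx. 3.8 here)
causal = gcomp_effect(data)  # recovers the true effect of 2.0
```

The ltmle estimators extend this standardization to longitudinal regimes with time-dependent confounding, which a single-time-point sketch cannot capture.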
Impact of brown adipose tissue on body fatness and glucose metabolism in healthy humans.
Matsushita, M; Yoneshiro, T; Aita, S; Kameya, T; Sugie, H; Saito, M
2014-06-01
Brown adipose tissue (BAT) is involved in the regulation of whole-body energy expenditure and adiposity. Some clinical studies have reported an association between BAT and blood glucose in humans. The aim was to examine the impact of BAT on glucose metabolism, independent of body fatness, age and sex, in healthy adult humans. Two hundred and sixty healthy volunteers (184 males and 76 females, 20-72 years old) underwent fluorodeoxyglucose-positron emission tomography and computed tomography after 2 h of cold exposure to assess maximal BAT activity. Blood parameters including glucose, HbA1c and low-density lipoprotein (LDL)/high-density lipoprotein-cholesterol were measured by conventional methods, and body fatness was estimated from body mass index (BMI), body fat mass and abdominal fat area. The impact of BAT on body fatness and blood parameters was determined by logistic regression using univariate and multivariate models. Cold-activated BAT was detected in 125 (48%) of 260 subjects. Compared with subjects without detectable BAT, those with detectable BAT were younger and showed lower adiposity-related parameters such as BMI, body fat mass and abdominal fat area. Although blood parameters were within the normal range in both subject groups, HbA1c, total cholesterol and LDL-cholesterol were significantly lower in the BAT-positive group. Blood glucose also tended to be lower in the BAT-positive group. Logistic regression demonstrated that BAT, in addition to age and sex, was independently associated with BMI, body fat mass, and abdominal visceral and subcutaneous fat areas. For blood parameters, multivariate analysis after adjustment for age, sex and body fatness revealed that BAT was a significant independent determinant of glucose and HbA1c. BAT, independent of age, sex and body fatness, has a significant impact on glucose metabolism in healthy adult humans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, X; Yi, J; Xie, C
Purpose: To evaluate the impact of complexity indices on the plan quality and deliverability of volumetric modulated arc therapy (VMAT), and to determine the most significant parameters in the generation of an ideal VMAT plan. Methods: A multi-dimensional exploratory statistical method, canonical correlation analysis (CCA), was adopted to study the correlations between VMAT parameters of complexity, quality and deliverability, as well as their contribution weights, using 32 two-arc VMAT nasopharyngeal cancer (NPC) patients and 31 one-arc VMAT prostate cancer patients. Results: The MU per arc (MU/Arc) and MU per control point (MU/CP) of NPC patients were 337.8±25.2 and 3.7±0.3, respectively, which were significantly lower than those of prostate cancer patients (MU/Arc: 506.9±95.4, MU/CP: 5.6±1.1). The plan complexity indices indicated that two-arc VMAT plans were more complex than one-arc VMAT plans. Plan quality comparison confirmed that one-arc VMAT plans had higher quality than two-arc VMAT plans. CCA results implied that plan complexity parameters were highly correlated with plan quality, with the first two canonical correlations of 0.96 and 0.88 (both p<0.001), and significantly correlated with deliverability, with a first canonical correlation of 0.79 (p<0.001); plan quality and deliverability were also correlated, with a first canonical correlation of 0.71 (p=0.02). Complexity parameters of MU/CP, segment area (SA) per CP, percentage of MU/CP less than 3, and planning target volume (PTV) were weighted heavily in correlation with plan quality and deliverability. Similar results were obtained from the individual NPC and prostate CCA analyses. Conclusion: Relationships between complexity, quality and deliverability parameters were investigated with CCA. MU- and SA-related parameters and PTV volume were found to have a strong effect on plan quality and deliverability. The presented correlations among the quantified parameters could be used to improve plan quality and the efficiency of the radiotherapy process when creating a complex VMAT plan.
A method for analyzing clustered interval-censored data based on Cox's model.
Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau
2013-02-28
Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimates are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
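The likelihood building block for such a model can be sketched as follows (a generic piecewise-exponential fragment, not the authors' estimating-equation code; the cut-points and hazard values are hypothetical). For an interval-censored observation (L, R], the contribution is P(L < T <= R) = S(L) - S(R) under the piecewise-constant hazard.

```python
import math

def cum_hazard(t, cuts, hazards):
    """Cumulative hazard: hazards[i] applies on [cuts[i], cuts[i+1]),
    with the last piece extending to infinity."""
    H = 0.0
    for i, h in enumerate(hazards):
        lo = cuts[i]
        hi = cuts[i + 1] if i + 1 < len(cuts) else float("inf")
        if t <= lo:
            break
        H += h * (min(t, hi) - lo)
    return H

def survival(t, cuts, hazards):
    return math.exp(-cum_hazard(t, cuts, hazards))

def interval_loglik(intervals, cuts, hazards):
    """Log-likelihood of interval-censored observations (L, R]:
    P(L < T <= R) = S(L) - S(R); R = inf means right-censored."""
    ll = 0.0
    for L, R in intervals:
        sL = survival(L, cuts, hazards)
        sR = survival(R, cuts, hazards) if R != float("inf") else 0.0
        ll += math.log(sL - sR)
    return ll

cuts = [0.0, 1.0]                          # hazard 1.0 on [0,1), 2.0 on [1,inf)
haz = [1.0, 2.0]
obs = [(0.5, 1.5), (2.0, float("inf"))]    # one interval-censored, one right-censored
ll = interval_loglik(obs, cuts, haz)
```

The full method would additionally multiply in covariate effects via the Cox relative risk and couple cluster members through the copula.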
Small scale green infrastructure design to meet different urban hydrological criteria.
Jia, Z; Tang, S; Luo, W; Li, S; Zhou, M
2016-04-15
As small-scale green infrastructure, rain gardens have been widely advocated for urban stormwater management in the contemporary low impact development (LID) era. This paper presents a simple method that combines hydrological models with matching nomograph plots to provide an informative and practical tool for rain garden sizing and hydrological evaluation. The proposed method considers design storms, infiltration rates and the runoff contribution area ratio of the rain garden, allowing users to size a rain garden for a specific site with hydrological reference and to predict overflow of the rain garden under different storms. The nomographs provide a visual presentation of the sensitivity of different design parameters. Subsequent application of the proposed method to a case study conducted in a sub-humid region in China showed that the method accurately predicted the design storms for the existing rain garden, and the predicted overflows under large storm events were within 13-50% of the measured volumes. The results suggest that the nomograph approach is a practical tool for quick selection or assessment of design options that incorporate key hydrological parameters of rain gardens or other infiltration-type green infrastructure. The graphic approach displayed by the nomographs allows urban planners to demonstrate the hydrological effect of small-scale green infrastructure and gain more support for promoting low impact development. Copyright © 2016 Elsevier Ltd. All rights reserved.
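The storm-sizing logic behind such nomographs can be captured in a tiny event water balance (an illustrative sketch only; all depths, rates and coefficients are hypothetical, not the paper's calibrated values):

```python
def rain_garden_overflow(storm_mm, duration_hr, area_ratio,
                         ponding_mm=150.0, infil_mm_hr=10.0, runoff_coef=0.9):
    """Overflow depth (mm over garden area) from a simple storm-event
    water balance; area_ratio = contributing area / garden area."""
    inflow = runoff_coef * storm_mm * area_ratio + storm_mm  # run-on + direct rain
    capacity = ponding_mm + infil_mm_hr * duration_hr        # storage + infiltration
    return max(0.0, inflow - capacity)

def design_storm_capacity(duration_hr, area_ratio, **kw):
    """Largest storm depth (mm) fully captured, found by bisection."""
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if rain_garden_overflow(mid, duration_hr, area_ratio, **kw) > 0:
            hi = mid
        else:
            lo = mid
    return lo
```

Plotting design_storm_capacity against area_ratio for several durations reproduces the kind of nomograph the paper uses for quick sizing decisions.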
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (A_r/A_a) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed. Three summit extraction methods were examined: Nayak's theory, 8-point analysis and watershed segmentation. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of A_r/A_a from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
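The 8-point analysis mentioned above reduces to a simple local-maximum test on the height grid; a minimal sketch (synthetic toy height map, not the paper's AR-generated topographies):

```python
def summits_8point(z):
    """8-point analysis: a grid point is a summit if strictly higher
    than all eight neighbours (border points excluded)."""
    rows, cols = len(z), len(z[0])
    peaks = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nb = [z[i + di][j + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
            if all(z[i][j] > v for v in nb):
                peaks.append((i, j))
    return peaks

# Tiny synthetic height map with two local summits
z = [
    [0, 0, 0, 0, 0],
    [0, 5, 0, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
]
```

The extracted summit heights and curvatures would then feed the statistical contact model being compared against BEM.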
Online estimation of the wavefront outer scale profile from adaptive optics telemetry
NASA Astrophysics Data System (ADS)
Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.
2017-02-01
We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters (turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are then fitted to a linear combination of simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (i) its insensitivity to large values of L0(h), as the telescope becomes blind to outer scales larger than its diameter; (ii) the maximum number of observable layers, given the limited number of independent inputs that the cross-correlation functions provide; and (iii) the minimum length of data required for satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system, which comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during three years of campaigns show interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
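The cross-correlation building block of this technique (without the layer-fitting step or any wavefront-sensor geometry) can be sketched with synthetic one-dimensional slope sequences; the 3-sample shift below is a hypothetical stand-in for a shared turbulent layer seen by two sensors:

```python
import math
import random

def cross_corr(x, y, max_lag):
    """Normalized cross-correlation of two sequences for lags
    -max_lag .. max_lag (mean removed, unit peak for identical data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    norm = math.sqrt(sum(v * v for v in xc) * sum(v * v for v in yc))
    return {lag: sum(xc[i] * yc[i + lag] for i in range(n)
                     if 0 <= i + lag < n) / norm
            for lag in range(-max_lag, max_lag + 1)}

# Synthetic slope sequences from two "sensors": y lags x by 3 samples
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(400)]
y = [0.0] * 3 + x[:-3]
cc = cross_corr(x, y, max_lag=10)
peak_lag = max(cc, key=cc.get)
```

In the actual method, the measured correlation maps are fitted to a linear combination of simulated layer responses to recover altitude and outer scale per layer.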
The interplay between QSAR/QSPR studies and partial order ranking and formal concept analyses.
Carlsen, Lars
2009-04-17
The often observed scarcity of physical-chemical as well as toxicological data hampers the assessment of potentially hazardous chemicals released to the environment. In such cases, Quantitative Structure-Activity Relationships/Quantitative Structure-Property Relationships (QSAR/QSPR) constitute an obvious alternative for rapidly, effectively and inexpensively generating missing experimental values. However, further treatment of the data typically appears necessary, e.g., to elucidate the possible relations between the single compounds as well as the implications and associations between the various parameters used for the combined characterization of the compounds under investigation. In the present paper, the application of QSAR/QSPR in combination with Partial Order Ranking (POR) methodologies is reviewed, and new aspects using Formal Concept Analysis (FCA) are introduced. Where POR constitutes an attractive method for, e.g., prioritizing a series of chemical substances based on the simultaneous inclusion of a range of parameters, FCA gives important information on the implications and associations between the parameters. The combined approach thus constitutes an attractive method for a preliminary assessment of the impact on environmental and human health of primary pollutants, as well as of a possible suite of subsequent transformation products that may be persistent, bioaccumulating and toxic. The present review focuses on the environmental and human health impact of residuals of the rocket fuel 1,1-dimethylhydrazine (heptyl) and its transformation products as an illustrative example.
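The core of partial order ranking is a simple dominance test over parameter tuples, from which the Hasse-diagram cover relation follows; a minimal sketch (the chemicals and their scores are hypothetical, and lower is taken to mean less hazardous):

```python
def dominates(a, b):
    """a dominates b if a <= b in every parameter and < in at least one
    (hypothetical convention: lower score = less hazardous)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def hasse_pairs(items):
    """Cover relation of the partial order: dominated pairs with no
    intermediate element (the edges of a Hasse diagram)."""
    pairs = []
    for na, a in items.items():
        for nb, b in items.items():
            if dominates(a, b) and not any(
                    dominates(a, c) and dominates(c, b)
                    for nc, c in items.items() if nc not in (na, nb)):
                pairs.append((na, nb))
    return pairs

# Hypothetical (toxicity, persistence, bioaccumulation) scores
chems = {"A": (1, 1, 1), "B": (2, 2, 2), "C": (3, 1, 2), "D": (3, 3, 3)}
```

Incomparable pairs (such as B and C here, which each win on some parameter) are exactly what distinguishes partial order ranking from a single aggregated score.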
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myagkov, N. N., E-mail: nn-myagkov@mail.ru
The problem of aluminum projectile fragmentation upon high-velocity impact on a thin aluminum shield is considered. A distinctive feature of this description is that the fragmentation has been numerically simulated using the complete system of equations of deformed solid mechanics by a method of smoothed particle hydrodynamics in a three-dimensional setting. The transition from damage to fragmentation is analyzed and scaling relations are derived in terms of the impact velocity (V), the ratio of shield thickness to projectile diameter (h/D), and the ultimate strength (σ_p) in the criterion of projectile and shield fracture. Analysis shows that the critical impact velocity V_c (separating the damage and fragmentation regions) is a power function of σ_p and h/D. In the supercritical region (V > V_c), the weight-average fragment mass asymptotically tends to a power function of the impact velocity with an exponent independent of h/D and σ_p. Mean cumulative fragment mass distributions at the critical point are scale-invariant with respect to the parameters h/D and σ_p. Average masses of the largest fragments are also scale-invariant at V > V_c, but only with respect to the variable parameter σ_p.
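Power-law scaling relations of the kind reported above are typically identified by least-squares fits in log-log space; a minimal sketch with synthetic, hypothetical data standing in for the simulated fragment masses:

```python
import math

def power_fit(xs, ys):
    """Least-squares fit of y = a * x**b, performed in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check: an exact power law is recovered
v = [3.0, 4.0, 5.0, 6.0, 7.0]        # impact velocity (hypothetical units)
m = [2.0 * x ** -1.5 for x in v]     # hypothetical fragment-mass scaling
a, b = power_fit(v, m)
```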
2013-01-01
Background Youth with serious mental illness may experience improved psychiatric stability with second generation antipsychotic (SGA) medication treatment, but may unfortunately also experience unhealthy weight gain as an adverse event. Research on weight loss strategies for youth who require ongoing antipsychotic treatment is quite limited. The purpose of this paper is to present the design, methods, and rationale of the Improving Metabolic Parameters in Antipsychotic Child Treatment (IMPACT) study, a federally funded, randomized trial comparing two pharmacologic strategies against a control condition to manage SGA-related weight gain. Methods The design and methodology considerations of the IMPACT trial are described and embedded in a description of health risks associated with antipsychotic-related weight gain and the limitations of currently available research. Results The IMPACT study is a 4-site, six-month, randomized, open-label clinical trial of overweight/obese youth ages 8–19 years with pediatric schizophrenia-spectrum and bipolar-spectrum disorders, psychotic or non-psychotic major depressive disorder, or irritability associated with autistic disorder. Youth who have experienced clinically significant weight gain during antipsychotic treatment in the past 3 years are randomized to either (1) switch antipsychotic plus healthy lifestyle education (HLE); (2) add metformin plus HLE; or (3) HLE with no medication change. The primary aim is to compare weight change (body mass index z-scores) for each pharmacologic intervention with the control condition. Key secondary assessments include percentage body fat, insulin resistance, lipid profile, psychiatric symptom stability (monitored independently by the pharmacotherapist and a blinded evaluator), and all-cause and specific-cause discontinuation. This study is ongoing, and the targeted sample size is 132 youth.
Conclusion Antipsychotic-related weight gain is an important public health issue for youth requiring ongoing antipsychotic treatment to maintain psychiatric stability. The IMPACT study provides a model for pediatric research on adverse event management using state-of-the-art methods. The results of this study will provide needed data on the risks and benefits of two pharmacologic interventions that are already being used in pediatric clinical settings but have not yet been compared directly in randomized trials. Trial registration ClinicalTrials.gov NCT00806234 PMID:23947389
Ethics Requirement Score: new tool for evaluating ethics in publications
dos Santos, Lígia Gabrielle; Fonseca, Ana Carolina da Costa e; Bica, Claudia Giuliano
2014-01-01
Objective To analyze the ethical standards considered by health-related scientific journals, and to prepare the Ethics Requirement Score, a bibliometric index to be applied to scientific healthcare journals in order to evaluate criteria for ethics in scientific publication. Methods Journals related to healthcare selected from the Journal Citation Reports™ 2010 database were considered as experimental units. Parameters related to publication ethics were analyzed for each journal. These parameters were acquired by analyzing the author guidelines or instructions on each journal's website. The parameters considered were approval by an Internal Review Board, the Declaration of Helsinki or Resolution 196/96, recommendations on plagiarism, the need for Informed Consent Forms from volunteers, declaration of patient confidentiality, registration in a clinical trials database (if applicable), conflict of interest disclosure, and a funding sources statement. Each item was analyzed considering its presence or absence. Result The foreign journals had a significantly higher Impact Factor than the Brazilian journals; however, no significant differences were observed in relation to the Ethics Requirement Score. There was no correlation between the Ethics Requirement Score and the Impact Factor. Conclusion Although the Impact Factor of foreign journals was considerably higher than that of the Brazilian publications, the results showed that the Impact Factor has no correlation with the proposed score. This allows us to state that the ethical requirements for publication in biomedical journals are not related to the comprehensiveness or scope of the journal. PMID:25628189
2012-01-01
Background Poor sperm quality can negatively affect embryonic development and IVF outcome. This study is aimed at investigating the influence of various lifestyle factors on semen quality according to MSOME (motile sperm organelle morphology examination) criteria. Methods 1683 male patients undergoing assisted reproductive technologies (ART) in our clinic were surveyed about their age, BMI (body mass index), ejaculation frequency, nutrition, sports, sleeping habits and social behavior. Semen samples were collected, and evaluation of semen parameters according to MSOME and WHO criteria was performed. Results were grouped and statistically analyzed. Results Although single parameters had minor effects on sperm parameters, the combination of age, BMI, coffee intake, ejaculatory frequency and duration of sexual abstinence was identified as having a negative effect on sperm motility. Additionally, we could demonstrate that MSOME quality was reduced. The negative impact of age, BMI and coffee intake on sperm quality could be compensated if patients had a high ejaculation frequency and shorter periods of sexual abstinence. Conclusions Combinations of adverse lifestyle factors can have a detrimental impact on sperm, not only in terms of motility and sperm count but also in terms of sperm head vacuolization. This negative impact was shown to be compensated by a higher ejaculation frequency and a shorter period of sexual abstinence. The compensation is most likely due to a shorter storage time in the male gonads, reducing the duration of sperm exposure to reactive oxygen species (ROS). PMID:23265183
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper aims to deal with the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by adding approximate equivalent stiffness and damping. Relying on a non-smooth transformation of the state variables, the above system is replaced by a new system without an impact term. Then, the stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it is worth noting that some system parameters can induce the occurrence of stochastic P-bifurcation.
Hayat, Tasawar; Ashraf, Muhammad Bilal; Alsulami, Hamed H.; Alhuthali, Muhammad Shahab
2014-01-01
The objective of present research is to examine the thermal radiation effect in three-dimensional mixed convection flow of viscoelastic fluid. The boundary layer analysis has been discussed for flow by an exponentially stretching surface with convective conditions. The resulting partial differential equations are reduced into a system of nonlinear ordinary differential equations using appropriate transformations. The series solutions are developed through a modern technique known as the homotopy analysis method. The convergent expressions of velocity components and temperature are derived. The solutions obtained are dependent on seven sundry parameters including the viscoelastic parameter, mixed convection parameter, ratio parameter, temperature exponent, Prandtl number, Biot number and radiation parameter. A systematic study is performed to analyze the impacts of these influential parameters on the velocity and temperature, the skin friction coefficients and the local Nusselt number. It is observed that mixed convection parameter in momentum and thermal boundary layers has opposite role. Thermal boundary layer is found to decrease when ratio parameter, Prandtl number and temperature exponent are increased. Local Nusselt number is increasing function of viscoelastic parameter and Biot number. Radiation parameter on the Nusselt number has opposite effects when compared with viscoelastic parameter. PMID:24608594
Testing a new application for TOPSIS: monitoring drought and wet periods in Iran
NASA Astrophysics Data System (ADS)
Roshan, Gholamreza; Ghanghermeh, AbdolAzim; Grab, Stefan W.
2018-01-01
Globally, droughts are a recurring major natural disaster owing to below-normal precipitation, occasionally associated with high temperatures, which together negatively impact human health and social, economic, and cultural activities. Drought early warning and monitoring is thus essential for reducing such potential impacts on society. To this end, several experimental methods have previously been proposed for calculating drought, yet these are based almost entirely on precipitation alone. Here, for the first time, and in contrast to previous studies, we use seven climate parameters to establish drought/wet periods: Tmin, Tmax, sunshine hours, relative humidity, average rainfall, number of rain days greater than 1 mm, and the ratio of total precipitation to number of days with precipitation, using the technique for order of preference by similarity to ideal solution (TOPSIS) algorithm. To test the TOPSIS method for different climate zones, six sample stations representing a variety of climate conditions were used; weight changes were assigned to the climate parameters and applied to the model, together with multivariate regression analysis. For the six stations tested, model results indicate the lowest errors for Zabol station and the maximum errors for Kermanshah. The validation techniques strongly support our proposed new method for calculating and rating drought/wet events using TOPSIS.
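The TOPSIS algorithm referenced above has a compact standard form: normalize the decision matrix, apply the criterion weights, and score each alternative (here, a month or station record) by its relative closeness to the ideal solution. The weights and the three-parameter records below are hypothetical placeholders, not the paper's seven-parameter data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights, dtype=float)   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness in [0, 1]

# Hypothetical monthly records: [rainfall, rain days > 1 mm, relative humidity]
records = [[80, 10, 60],   # wet month
           [10,  2, 30],   # dry month
           [45,  6, 50]]   # intermediate month
scores = topsis(records, weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
```

A score near 1 marks a wet period and a score near 0 a drought, so ranking months by `scores` directly yields the drought/wet classification the abstract describes.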
Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang
2015-01-01
Grain separation loss is a key parameter for weighing the performance of combine harvesters, and also a dominant factor for automatically adjusting their major working parameters. Traditional separation loss monitoring methods rely mainly on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. First, we developed a mathematical monitoring model based on detailed comparative analysis of data for different feeding quantities. Then, we developed a grain-impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural-vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester and carried out grain separation loss monitoring experiments in North China; the results showed that the monitoring method was feasible, with a maximum relative measurement error of 4.63% when harvesting rice. PMID:25594592
Critical Deposition Condition of CoNiCrAlY Cold Spray Based on Particle Deformation Behavior
NASA Astrophysics Data System (ADS)
Ichikawa, Yuji; Ogawa, Kazuhiro
2017-02-01
Previous research has demonstrated deposition of MCrAlY coating via the cold spray process; however, the deposition mechanism of cold spraying has not been clearly explained—only empirically described by impact velocity. The purpose of this study was to elucidate the critical deposit condition. Microscale experimental measurements of individual particle deposit dimensions were incorporated with numerical simulation to investigate particle deformation behavior. Dimensional parameters were determined from scanning electron microscopy analysis of focused ion beam-fabricated cross sections of deposited particles to describe the deposition threshold. From Johnson-Cook finite element method simulation results, there is a direct correlation between the dimensional parameters and the impact velocity. Therefore, the critical velocity can describe the deposition threshold. Moreover, the maximum equivalent plastic strain is also strongly dependent on the impact velocity. Thus, the threshold condition required for particle deposition can instead be represented by the equivalent plastic strain of the particle and substrate. For particle-substrate combinations of similar materials, the substrate is more difficult to deform. Thus, this study establishes that the dominant factor of particle deposition in the cold spray process is the maximum equivalent plastic strain of the substrate, which occurs during impact and deformation.
Analysis of e-beam impact on the resist stack in e-beam lithography process
NASA Astrophysics Data System (ADS)
Indykeiwicz, K.; Paszkiewicz, B.
2013-07-01
This paper presents research on the fabrication of sub-micron-gate AlGaN/GaN HEMT-type transistors by e-beam lithography and the lift-off technique. The impact of the electron beam on the resist layers and the substrate was analyzed by the Monte Carlo (MC) method in Casino v3.2 software. The influence of technological process parameters on the resolution and quality of metal structures was studied for paths 100 nm, 300 nm and 500 nm wide and 20 μm long. Qualitative correspondence between the simulations and the conducted experiments was obtained.
Yannis, George; Laiou, Alexandra; Papantoniou, Panagiotis; Christoforou, Charalambos
2014-06-01
This research aims to investigate the impact of texting on the behavior and safety of young drivers on urban and rural roads. A driving simulator experiment was carried out in which 34 young participants drove in different driving scenarios; specifically, driving in good weather, in rainy conditions, in daylight and at night was examined. Lognormal regression methods were used to investigate the influence of texting as well as various other parameters on the mean speed and mean reaction time. Binary logistic methods were used to investigate the influence of texting as well as various other parameters on the probability of an accident. It appears that texting leads to a statistically significant decrease of the mean speed and increase of the mean reaction time in urban and rural road environments. Simultaneously, it leads to an increased accident probability due to driver distraction and delayed reaction at the moment of the incident. Drivers using mobile phones with a touch screen presented different driving behavior with respect to their speed; however, they had an even higher probability of being involved in an accident. The analysis of the distracted driving performance of drivers who text while driving may allow for the identification of measures for the improvement of driving performance (e.g., restrictive measures, training and licensing, information campaigns). The identification of some of the parameters that have an impact on the behavior and safety of young drivers concerning texting, and the consequent results, can be exploited by policy decision makers in future efforts to improve road safety. Copyright © 2014 Elsevier Ltd. All rights reserved.
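A binary logistic model of the kind used above maps predictors such as texting to an accident probability through the logit link, and each coefficient exponentiates to an odds ratio. The coefficients below are hypothetical placeholders chosen for illustration, not the study's fitted values:

```python
import math

def accident_probability(texting, speed_kmh,
                         beta0=-4.0, beta_text=1.1, beta_speed=0.03):
    """Binary-logistic accident probability. All coefficients are
    hypothetical, not estimates from the driving-simulator study."""
    z = beta0 + beta_text * texting + beta_speed * speed_kmh
    return 1.0 / (1.0 + math.exp(-z))

p_no  = accident_probability(texting=0, speed_kmh=50)
p_yes = accident_probability(texting=1, speed_kmh=50)

# In a logistic model the texting coefficient has a direct reading:
# texting multiplies the odds of an accident by exp(beta_text).
odds_ratio = math.exp(1.1)
```

The odds-ratio interpretation is why logistic (rather than linear) regression is the standard choice for the binary accident/no-accident outcome described in the abstract.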
Sun, Rubao; An, Daizhi; Lu, Wei; Shi, Yun; Wang, Lili; Zhang, Can; Zhang, Ping; Qi, Hongjuan; Wang, Qiang
2016-02-01
In this study, we present a method for identifying sources of water pollution and their relative contributions in pollution disasters. The method uses a combination of principal component analysis and factor analysis. We carried out a case study in three rural villages close to Beijing after torrential rain on July 21, 2012. Nine water samples were analyzed for eight parameters, namely turbidity, total hardness, total dissolved solids, sulfates, chlorides, nitrates, total bacterial count, and total coliform groups. All of the samples showed different degrees of pollution, and most were unsuitable for drinking water as concentrations of various parameters exceeded recommended thresholds. Principal component analysis and factor analysis showed that two factors, the degree of mineralization and agricultural runoff, and flood entrainment, explained 82.50% of the total variance. The case study demonstrates that this method is useful for evaluating and interpreting large, complex water-quality data sets.
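Principal component analysis of the kind applied above can be sketched with a standardized site-by-parameter matrix and a singular value decomposition; the explained-variance ratios show how few components (two, in the case study) summarize the water-quality parameters. The measurement values below are illustrative only, not the Beijing samples:

```python
import numpy as np

# Hypothetical measurements: rows = sampling sites, columns = parameters
# (e.g. turbidity, total hardness, chlorides, nitrates).
X = np.array([[12.0, 310.0, 45.0, 8.0],
              [14.0, 330.0, 50.0, 9.0],
              [ 3.0, 150.0, 20.0, 2.0],
              [ 2.5, 140.0, 18.0, 1.5],
              [ 8.0, 240.0, 35.0, 5.0]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each parameter
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)   # variance ratio per principal component
loadings = Vt                     # rows: components, cols: parameter weights
scores_pc = Z @ Vt.T              # site coordinates on the components
```

The loadings are what link each retained component back to a pollution source (e.g. mineralization versus runoff), which is the factor-analysis interpretation step the case study performs.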
Experimental study of nonlinear ultrasonic behavior of soil materials during the compaction.
Chen, Jun; Wang, Hao; Yao, Yangping
2016-07-01
In this paper, the nonlinear ultrasonic behavior of an unconsolidated granular medium (soil) during compaction is experimentally studied. The second harmonic generation technique is adopted to investigate the change of microstructural voids in the material during the compaction process of loose soils. The nonlinear parameter is measured with respect to changes in two important environmental factors, i.e., moisture content and impact energy of compaction. It is found that the nonlinear parameter of the soil material presents a variation pattern similar to the void ratio of the soil samples, corresponding to the increased moisture content and impact energy. The same optimum moisture content is found by observing the variation of the nonlinear parameter and the void ratio with respect to moisture content. The results indicate that unconsolidated soil exhibits strong material nonlinearity during the compaction procedure. The developed experimental technique based on second harmonic generation could be a fast and convenient testing method for determining the optimum moisture content of soil materials, which is very useful for achieving better in-situ compaction of filled embankments for civil infrastructure. Copyright © 2016 Elsevier B.V. All rights reserved.
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property... the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve... Quadratic Configuration Interaction Singles Doubles; QSAR Quantitative Structure-Activity Relationship; QSPR Quantitative Structure-Property
USDA-ARS?s Scientific Manuscript database
Boneless chicken breast fillets (pectoralis major) and tenderloins (pectoralis minor) are common poultry products in retail markets and are used extensively by restaurants and food service. Texture quality of these products could be impacted by poultry processing methods and parameters. Effects of c...
Watershed-based Morphometric Analysis: A Review
NASA Astrophysics Data System (ADS)
Sukristiyanti, S.; Maria, R.; Lestiana, H.
2018-02-01
Drainage basin/watershed analysis based on morphometric parameters is very important for watershed planning. Morphometric analysis of a watershed is the best method to identify the relationships of various aspects within the area. Although many technical papers have dealt with this area of study, there is no particular standard classification and implication for each parameter, and evaluating the value of each morphometric parameter can be confusing. This paper deals with the meaning of the values of the various morphometric parameters, with adequate contextual information. A critical review is presented of each classification, the range of values, and their implications. Besides classification and its impact, the authors are also concerned about the quality of the input data, both in data preparation and in the scale/detail level of mapping. This review paper hopefully provides a comprehensive explanation to assist upcoming research dealing with morphometric analysis.
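Several of the morphometric parameters such a review covers are simple ratios. A sketch of three standard ones (Horton's drainage density and bifurcation ratio, Schumm's elongation ratio) with hypothetical basin values; the numbers are illustrative, not from any particular watershed:

```python
import math

def drainage_density(total_stream_length_km, basin_area_km2):
    """Dd = total stream length / basin area (km per km^2)."""
    return total_stream_length_km / basin_area_km2

def bifurcation_ratio(n_streams_order_u, n_streams_order_u1):
    """Rb = number of streams of order u / number of streams of order u+1."""
    return n_streams_order_u / n_streams_order_u1

def elongation_ratio(basin_area_km2, basin_length_km):
    """Re = diameter of a circle with the basin's area / basin length."""
    return 2.0 * math.sqrt(basin_area_km2 / math.pi) / basin_length_km

dd = drainage_density(120.0, 85.0)   # km/km^2
rb = bifurcation_ratio(34, 9)        # dimensionless
re = elongation_ratio(85.0, 16.0)    # < 1 indicates an elongated basin
```

The interpretive ambiguity the review highlights sits not in these formulas but in the class boundaries applied to their values (e.g. what counts as "coarse" drainage texture), which differ between papers.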
A Method to Assess Flux Hazards at CSP Plants to Reduce Avian Mortality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke
A method to evaluate avian flux hazards at concentrating solar power plants (CSP) has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
A method to assess flux hazards at CSP plants to reduce avian mortality
NASA Astrophysics Data System (ADS)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke; Yellowhair, Julius
2017-06-01
A method to evaluate avian flux hazards at concentrating solar power plants (CSP) has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
NASA Astrophysics Data System (ADS)
Farhat, I. A. H.; Gale, E.; Alpha, C.; Isakovic, A. F.
2017-07-01
Optimizing the energy performance of Magnetic Tunnel Junctions (MTJs) is key to embedding Spin Transfer Torque-Random Access Memory (STT-RAM) in low-power circuits. Due to the complex interdependencies among the parameters and variables of the device operating energy, it is important to analyse the parameters with the most effective control of MTJ power. The impact of the threshold current density, Jco, on the energy and the impact of HK on Jco are studied analytically, following the expressions that stem from the Landau-Lifshitz-Gilbert-Slonczewski (LLGS-STT) model. In addition, the impact of other magnetic material parameters, such as Ms, and geometric parameters, such as tfree and λ, is discussed. A device modelling study was conducted to analyse the impact at the circuit level. Nano-magnetism simulation based on the NMAG™ package was conducted to analyse the impact of controlling HK on the switching dynamics of the film.
Impact erosion prediction using the finite volume particle method with improved constitutive models
NASA Astrophysics Data System (ADS)
Leguizamón, Sebastián; Jahanbakhsh, Ebrahim; Maertens, Audrey; Vessaz, Christian; Alimirzazadeh, Siamak; Avellan, François
2016-11-01
Erosion damage in hydraulic turbines is a common problem caused by the high-velocity impact of small particles entrained in the fluid. In this investigation, the Finite Volume Particle Method is used to simulate the three-dimensional impact of rigid spherical particles on a metallic surface. Three different constitutive models are compared: the linear strain-hardening (L-H), Cowper-Symonds (C-S) and Johnson-Cook (J-C) models. They are assessed in terms of the predicted erosion rate and its dependence on impact angle and velocity, as compared to experimental data. It has been shown that a model accounting for strain rate is necessary, since the response of the material is significantly tougher in the very high strain rate regime caused by impacts. High sensitivity to the friction coefficient, which models the cutting wear mechanism, has been noticed. The J-C damage model also shows a high sensitivity to the parameter related to triaxiality, whose calibration appears to be scale-dependent, not exclusively material-determined. After calibration, the J-C model is capable of capturing the material's erosion response to both impact velocity and angle, whereas both C-S and L-H fail.
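Of the three constitutive models compared, the Johnson-Cook flow stress makes the strain-rate dependence explicit, which is why a model of this family can capture the toughened response at impact strain rates. A sketch of the standard J-C flow-stress form with illustrative, uncalibrated parameter values (not the study's calibration):

```python
import math

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_ref=293.0, T_melt=1356.0):
    """Johnson-Cook flow stress [Pa]: (strain hardening) x
    (strain-rate strengthening) x (thermal softening)."""
    T_star = (T - T_ref) / (T_melt - T_ref)     # homologous temperature
    return (A + B * eps**n) \
        * (1.0 + C * math.log(eps_rate / eps_rate0)) \
        * (1.0 - T_star**m)

# Illustrative parameters loosely in the range used for ductile metals.
params = dict(A=90e6, B=292e6, n=0.31, C=0.025, m=1.09)
sigma_quasi = johnson_cook_stress(0.2, 1.0, 293.0, **params)   # quasi-static
sigma_fast  = johnson_cook_stress(0.2, 1e6, 293.0, **params)   # impact regime
```

At an impact-level strain rate of 1e6 s⁻¹ the logarithmic rate term raises the flow stress well above the quasi-static value, while the L-H model (no rate term) would predict the same stress in both cases.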
Impact of Processing Method on Recovery of Bacteria from Wipes Used in Biological Surface Sampling
Olson, Nathan D.; Filliben, James J.; Morrow, Jayne B.
2012-01-01
Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense. PMID:22706055
NASA Astrophysics Data System (ADS)
Ma, Wen; Liu, Fushun
Voids are inevitable in the fabrication of fiber-reinforced composites and have a detrimental impact on the mechanical properties of composites. Different void contents were obtained by applying different vacuum bag pressures. Ultrasonic inspection and the ablation density method were adopted to measure the ultrasonic characteristic parameters and average porosity, and the characterization of void distribution, shape and size was carried out through metallographic analysis. The effects of void content on the tensile, flexural and interlaminar shear properties and on the ultrasonic characteristic parameters are discussed. The results showed that, as the vacuum bag pressure went from -50 kPa to -98 kPa, the void content decreased from 4.36 to 0.34, the ultrasonic attenuation coefficient decreased, and the mechanical strengths all increased.
NASA Astrophysics Data System (ADS)
Ramzan, M.; Bilal, M.; Kanwal, Shamsa; Chung, Jae Dong
2017-06-01
The present analysis discusses the boundary layer flow of Eyring-Powell nanofluid past a constantly moving surface under the influence of nonlinear thermal radiation. Heat and mass transfer mechanisms are examined under a physically suitable convective boundary condition. Effects of variable thermal conductivity and chemical reaction are also considered. Series solutions of all involved distributions are obtained using the homotopy analysis method (HAM). Impacts of the dominant embedded flow parameters are discussed through graphical illustrations. It is observed that the thermal radiation parameter shows an increasing tendency in the temperature profile, whereas the chemical reaction parameter exhibits decreasing behavior in the concentration distribution. Supported by the World Class 300 Project (No. S2367878) of the SMBA (Korea).
Plasma measurement by optical visualization and triple probe method under high-speed impact
NASA Astrophysics Data System (ADS)
Sakai, T.; Umeda, K.; Kinoshita, S.; Watanabe, K.
2017-02-01
High-speed impact on spacecraft by space debris poses a threat. When a high-speed projectile collides with a target, it is conceivable that the heat created by the impact causes severe damage at the impact point. Investigation of the temperature is necessary for elucidating high-speed impact phenomena. However, it is very difficult to measure the temperature with standard methods for two main reasons. One reason is that a thermometer placed on the target is instantaneously destroyed upon impact. The other is that standard methods lack the time resolution to capture the transient temperature changes. In this study, the measurement of plasma induced by high-speed impact was investigated to estimate temperature changes near the impact point. High-speed impact experiments were performed with a vertical gas gun. The projectile speed was approximately 700 m/s, and the target material was A5052. The experimental data used to calculate the plasma parameters of electron temperature and electron density were measured by the triple probe method. In addition, the diffusion behavior of the plasma was observed by an optical visualization technique using a high-speed camera. The frame rate and the exposure time were 260 kfps and 1.0 μs, respectively. These images provide supporting evidence for the validity of the plasma measurement. The experimental results showed that plasma signals were detected for around 70 μs, and the rising phase of the waveform was in good agreement with the timing of the optical visualization image when the plasma arrived at the tip of the triple probe.
Optimization and application of blasting parameters based on the "pushing-wall" mechanism
NASA Astrophysics Data System (ADS)
Ren, Feng-yu; Sow, Thierno Amadou Mouctar; He, Rong-xing; Liu, Xin-rui
2012-10-01
The large structure parameters of the sublevel caving method were used in the Beiminghe iron mine. The ores were generally below medium hardness and easy to drill and blast. However, boulder yield, the "pushing-wall" accident rate, and the brow damage rate were not effectively controlled in practical blasting. A model test with similar material shows that the charge concentration of the bottom blastholes in the sector is too high and that the pushing wall is the fundamental reason for the poor blasting effect. One of the main ways to adjust the explosive distribution is to increase the length of the uncharged sections of the blastholes. Therefore, field tests on increasing the uncharged blasthole length were made in the 12# stope of the -95 subsection and the 6# stope of the Beiminghe iron mine. This paper takes the test result of the 12# stope as an example to analyze the impact of the charge structure on the blasting effect and to design appropriate blasting parameters for stopes similar to the 12# stope.
Electronic propensity rules in Li-H+ collisions involving initial and/or final oriented states
NASA Astrophysics Data System (ADS)
Salas, P. J.
2000-12-01
Electronic excitation and capture processes are studied in collisions involving systems with only one active electron, such as alkali-atom (Li)-proton collisions, in the medium-energy region (0.1-15 keV). Using the semiclassical impact parameter method, the probabilities and the orientation parameter are calculated for transitions between initial and/or final oriented states. The results show a strong asymmetry in the probabilities depending on the orientation of the initial and/or final states. An intuitive view of the processes is provided by means of the concepts of propensity and velocity-matching rules.
Jaciw, Andrew P; Lin, Li; Ma, Boya
2016-10-18
Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about the parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on the effects of covariates, using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed: achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator to total variance, the "moderator gap variance ratio," is important for designing studies to detect differences in impact between student subgroups; this quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, estimates of differential impact were in many cases larger than those of average impact, and after conditioning on the effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. Continued collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.
NASA Technical Reports Server (NTRS)
Omidvar, K.
1971-01-01
Expressions for the excitation cross section of the highly excited states of hydrogenlike atoms by fast charged particles have been derived in the dipole approximation of the semiclassical impact parameter and Born approximations, making use of a formula given by Menzel for the asymptotic expansion of the oscillator strength of hydrogenlike atoms. When only the leading term in the asymptotic expansion is retained, the expression for the cross section becomes identical to that obtained by the method of classical collisions and the correspondence principle given by Percival and Richards. Comparisons are made between the Bethe coefficients obtained here and the Bethe coefficients of the Born approximation for transitions where the Born calculation is available. Satisfactory agreement is obtained only for n → n + 1 transitions, with n the principal quantum number of the excited state.
Tool and Method for Testing the Resistance of the Snow Road Cover to Destruction
NASA Astrophysics Data System (ADS)
Zhelykevich, R.; Lysyannikov, A.; Kaiser, Yu; Serebrenikova, Yu; Lysyannikova, N.; Shram, V.; Kravtsova, Ye; Plakhotnikova, M.
2016-06-01
The paper presents the design of a tool for efficient determination of the hardness of snow road covering. The tool improves the vertical positioning of the tipped rod by replacing the sliding friction of the rod against the ball element with the rolling friction of its outer bearing race, thereby enhancing the accuracy of hardness determination for the snow-ice road covering. A special feature of the tool is the possibility of creating different impact energies by changing the lifting height of the rod with the tip (indenter) and the exchangeable load mass. This allows the influence of the tip shape and the impact energy on snow strength parameters to be studied over a wide range, extends the scope of application of the durometer, and makes it possible to determine the strength of snow-ice formations with indenters of various geometrical parameters depending on climatic conditions.
NASA Astrophysics Data System (ADS)
Slavata, Oldřich; Holub, Jan
2015-02-01
This paper deals with an analysis of the relation between the codec that is used, the QoS method, and the final voice transmission quality. The Cisco 2811 router is used for adjusting QoS. VoIP client Linphone is used for adjusting the codec. The criterion for transmission quality is the MOS parameter investigated with the ITU-T P.862 PESQ and P.863 POLQA algorithms.
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of a drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions, from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity are therefore critically dependent on the bioanalytical method used to test samples, in which positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false-positive rates) than normal or log-normal methods, and more precise (smaller standard errors of cut-point estimators) than the nonparametric percentile method. Under a gamma regime, normal-theory methods for estimating cut points targeting a 5% false-positive rate were found in computer simulation experiments to have, on average, false-positive rates ranging from 6.2 to 8.3% (positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false-positive rates, with negative biases as large as -2.3%, the absolute bias decreasing with the shape parameter. These results are consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameter increases.
Inflated false-positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent (confirmatory) assay. On the other hand, deflated false-positive rates in screening immunogenicity assays will not meet the minimum 5% false-positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
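The gamma-based cut-point idea above can be sketched with SciPy; the simulated assay signal, sample size, and distribution parameters below are illustrative assumptions, not data from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical drug-naive screening responses: unimodal and positively skewed.
naive = rng.gamma(shape=3.0, scale=10.0, size=500) + 40.0

# 3-parameter gamma fit (shape, location, scale) by maximum likelihood.
shape, loc, scale = stats.gamma.fit(naive)

# Cut point targeting a 5% false-positive rate: the 95th percentile of the fit.
gamma_cut = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)

# Normal-theory cut point for comparison (mean + 1.645 * SD).
normal_cut = naive.mean() + 1.645 * naive.std(ddof=1)

# Empirical false-positive rate each cut point yields on this skewed sample.
fpr_gamma = float(np.mean(naive > gamma_cut))
fpr_normal = float(np.mean(naive > normal_cut))
```

On right-skewed data the normal-theory cut point tends to sit below the true 95th percentile, inflating the false-positive rate, which mirrors the bias pattern reported in the abstract.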
Climate Impacts on Extreme Energy Consumption of Different Types of Buildings
Li, Mingcai; Shi, Jun; Guo, Jun; Cao, Jingfu; Niu, Jide; Xiong, Mingming
2015-01-01
Exploring changes in building energy consumption and its relationship with climate can provide a basis for energy saving and carbon emission reduction. Heating and cooling energy consumption of different types of buildings in Tianjin city during 1981-2010 was simulated using TRNSYS software. Daily or hourly extreme energy consumption was determined by percentile methods, and the climate impact on extreme energy consumption was analyzed. The results showed that the number of days of extreme heating consumption decreased appreciably over the recent 30 years for residential and large venue buildings, whereas days of extreme cooling consumption increased for the large venue building. No significant variations were found in the days of extreme energy consumption for the commercial building, although a decreasing trend in extreme heating energy consumption was observed. Daily extreme energy consumption for the large venue building had no relationship with climate parameters, whereas extreme energy consumption for the commercial and residential buildings was related to various climate parameters. Further multiple regression analysis suggested that heating energy consumption for the commercial building was affected by maximum temperature, dry bulb temperature, solar radiation, and minimum temperature, which together explain 71.5% of the variation in daily extreme heating energy consumption. The daily extreme cooling energy consumption for the commercial building was related only to the wet bulb temperature (R² = 0.382). The daily extreme heating energy consumption for the residential building was affected by 4 climate parameters, but the dry bulb temperature had the main impact. The impact of climate on hourly extreme heating energy consumption has a 1-3 hour delay in all three types of buildings, but no delay was found in the impact of climate on hourly extreme cooling energy consumption for the selected buildings. PMID:25923205
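The percentile definition of extreme consumption days can be sketched in a few lines; the synthetic daily series and the choice of the 95th percentile are illustrative assumptions, since the abstract does not state the exact percentile used:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily heating energy consumption (kWh) over 30 heating seasons.
daily_energy = rng.normal(loc=500.0, scale=80.0, size=30 * 90)

# A day is "extreme" when consumption exceeds a high percentile of the record.
threshold = np.percentile(daily_energy, 95)
extreme_days = daily_energy[daily_energy > threshold]
extreme_fraction = extreme_days.size / daily_energy.size
```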
Park, Yong Seo; Polovka, Martin; Ham, Kyung-Sik; Park, Yang-Kyun; Vearasilp, Suchada; Namieśnik, Jacek; Toledo, Fernando; Arancibia-Avila, Patricia; Gorinstein, Shela
2016-09-01
Organic, semiorganic, and conventional "Hayward" kiwifruits, treated with ethylene for 24 h and stored for 10 days, were assessed by UV spectrometry, fluorometry, and chemometric analysis for changes in selected characteristics of quality (firmness, dry matter and soluble solid contents, pH, and acidity) and bioactivity (concentration of polyphenols via the Folin-Ciocalteu and p-hydroxybenzoic acid assays). All of the monitored qualitative parameters and bioactivity-related characteristics were affected either by cultivation practices or by ethylene treatment and storage. The results obtained, supported by statistical evaluation (Friedman two-way ANOVA) and chemometric analysis, clearly showed that the ethylene treatment had the most significant impact on the majority of the evaluated quality and bioactivity parameters of "Hayward" kiwifruit, followed by the cultivation practices and the postharvest storage. The total concentration of polyphenols expressed via the p-hydroxybenzoic acid assay was the most sensitive to all three evaluated factors, reaching a 16.5% increase for the fresh organic sample compared to a conventional control sample. As a result of postharvest storage coupled with ethylene treatment, the difference increased to 26.3%. Three-dimensional fluorescence showed differences in the position of the main peaks and their fluorescence intensity for conventional, semiorganic, and organic kiwifruits in comparison with ethylene-nontreated samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a channelized Hotelling observer (CHO) model for guiding image-processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image-processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions of interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image-processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data were analyzed using the freely available iMRMC software of Gallas et al. Results: The image-processing parameters that were optimized for the CHO led to a statistically significant improvement (p=0.049) in human-observer AUC from 0.78 to 0.86, relative to the image-processing implementation that produced the lowest CHO performance. Conclusion: Differences in user-selectable image-processing methods on a commercially available digital radiography system were shown to have a marked impact on the performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance.
Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
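A channelized Hotelling observer with Laguerre-Gauss channels, as used above, can be sketched as follows; the image size, channel width, Gaussian "nodule", white-noise background, and resubstitution AUC estimate are all illustrative assumptions rather than details of the study:

```python
import numpy as np

def laguerre_poly(n, x):
    # Laguerre polynomial L_n(x) via the standard three-term recurrence.
    l0, l1 = np.ones_like(x), 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

def lg_channels(dim, n_ch, a):
    # Rotationally symmetric Laguerre-Gauss channels on a dim x dim grid,
    # returned as a (dim*dim, n_ch) matrix with one channel per column.
    y, x = np.mgrid[:dim, :dim] - (dim - 1) / 2.0
    g = 2.0 * np.pi * (x**2 + y**2) / a**2
    ch = [np.sqrt(2.0) / a * np.exp(-g / 2.0) * laguerre_poly(j, g) for j in range(n_ch)]
    return np.stack(ch, axis=0).reshape(n_ch, -1).T

rng = np.random.default_rng(2)
dim, n_ch = 32, 10
U = lg_channels(dim, n_ch, a=10.0)

# Signal-known-exactly task: a Gaussian blob (toy nodule) in white noise.
y, x = np.mgrid[:dim, :dim] - (dim - 1) / 2.0
signal = 0.5 * np.exp(-(x**2 + y**2) / (2.0 * 3.0**2)).ravel()

n = 200
bkg = rng.normal(0.0, 1.0, size=(n, dim * dim))
sig = bkg + signal

# Channel outputs, Hotelling template, and decision variables.
v_b, v_s = bkg @ U, sig @ U
K = 0.5 * (np.cov(v_b.T) + np.cov(v_s.T))
w = np.linalg.solve(K, v_s.mean(axis=0) - v_b.mean(axis=0))
t_b, t_s = v_b @ w, v_s @ w

# Nonparametric AUC (resubstitution estimate) by pairwise comparison.
auc = float(np.mean(t_s[:, None] > t_b[None, :]))
```

Channelization reduces each image to 10 numbers, so the Hotelling template can be estimated from a modest sample, which is the practical appeal of the CHO for parameter sweeps like the one described.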
Downstream processing from hot-melt extrusion towards tablets: A quality by design approach.
Grymonpré, W; Bostijn, N; Herck, S Van; Verstraete, G; Vanhoorne, V; Nuhn, L; Rombouts, P; Beer, T De; Remon, J P; Vervaet, C
2017-10-05
Since the concept of continuous processing is gaining momentum in pharmaceutical manufacturing, a thorough understanding of how process and formulation parameters can impact the critical quality attributes (CQA) of the end product is more than ever required. This study was designed to screen the influence of process parameters and drug load during hot-melt extrusion (HME) on both extrudate properties and tableting behaviour of an amorphous solid dispersion formulation using a quality-by-design (QbD) approach. A full factorial experimental design with 19 experiments was used to evaluate the effect of several process variables (barrel temperature: 160-200 °C, screw speed: 50-200 rpm, throughput: 0.2-0.5 kg/h) and drug load (0-20%) as formulation parameter on the HME process and on the extrudate and tablet quality of Soluplus®-Celecoxib amorphous solid dispersions. A prominent impact of the formulation parameter on the CQA of the extrudates (i.e. solid-state properties, moisture content, particle size distribution) and tablets (i.e. tabletability, compactibility, fragmentary behaviour, elastic recovery) was discovered. The resistance of the polymer matrix to thermo-mechanical stress during HME was confirmed throughout the experimental design space. In addition, the suitability of Raman spectroscopy as a verification method for the active pharmaceutical ingredient (API) concentration in solid dispersions was evaluated. Incorporation of the Raman spectroscopy data in a PLS model enabled API quantification in the extrudate powders, with none of the DOE experiments resulting in extrudates with a CEL content deviating more than 3% from the label claim. This research paper emphasizes that HME is a robust process throughout the experimental design space for obtaining amorphous glassy solutions and for tableting such formulations, since only minimal impact of the process parameters was detected on the extrudate and tablet properties.
However, the quality of extrudates and tablets can be optimized by adjusting specific formulation parameters (e.g. drug load). Copyright © 2017 Elsevier B.V. All rights reserved.
Gąsior, Jakub S.; Sacha, Jerzy; Jeleń, Piotr J.; Zieliński, Jakub; Przybylski, Jacek
2016-01-01
Background: Since heart rate variability (HRV) is associated with average heart rate (HR) and respiratory rate (RespRate), alterations in these parameters may impose changes in HRV; hence the repeatability of HRV measurements may be affected by differences in HR and RespRate. The study aimed to evaluate HRV repeatability and its association with changes in HR and RespRate. Methods: Forty healthy volunteers underwent two ECG examinations 7 days apart. Standard HRV indices were calculated from 5-min ECG recordings. The ECG-derived respiration signal was estimated to assess RespRate. To investigate the impact of HR on HRV, the HRV parameters were corrected for prevailing HR. Results: Differences in HRV parameters between the measurements were associated with the changes in HR and RespRate. However, in multiple regression analysis only HR alteration proved to be an independent determinant of the HRV differences: every change in HR by 1 bpm changed HRV values by 16.5% on average. After overall removal of the HR impact on HRV, the coefficients of variation of the HRV parameters dropped significantly, on average by 26.8% (p < 0.001), i.e., HRV reproducibility improved by the same extent. Additionally, the HRV correction for HR decreased the association between RespRate and HRV. Conclusions: In stable conditions, HR but not RespRate is the most powerful factor determining HRV reproducibility, and even a minimal change in HR may considerably alter HRV. However, the removal of the HR impact may significantly improve HRV repeatability. The association between HRV and RespRate seems to be, at least in part, HR dependent. PMID:27588006
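The dominance of prevailing HR over HRV repeatability can be illustrated with a toy RR-interval example. Normalizing each RR series by its mean (i.e., using the coefficient of variation) is shown below as one common way to remove the HR dependence; it is an illustrative choice, not necessarily the exact correction the authors applied:

```python
import numpy as np

rng = np.random.default_rng(3)

def sdnn(rr_ms):
    # SDNN: standard deviation of the RR intervals (ms), a standard HRV index.
    return rr_ms.std(ddof=1)

# Day 1: 5-min recording at mean HR ~60 bpm (RR ~1000 ms).
rr_day1 = rng.normal(1000.0, 50.0, size=300)
# Day 2: identical beat-to-beat pattern, but mean HR raised to ~70.6 bpm.
rr_day2 = 0.85 * rr_day1

# Raw SDNN differs between days purely because the prevailing HR changed.
raw_diff = abs(sdnn(rr_day1) - sdnn(rr_day2))

# Coefficient-of-variation form removes the scale (HR) dependence entirely here.
cv1 = sdnn(rr_day1) / rr_day1.mean()
cv2 = sdnn(rr_day2) / rr_day2.mean()
corrected_diff = abs(cv1 - cv2)
```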
Impact of Myopia on Corneal Biomechanics in Glaucoma and Nonglaucoma Patients
Panpruk, Rawiphan; Manassakorn, Anita; Tantisevi, Visanee; Rojanapongpun, Prin; Hurst, Cameron P.; Lin, Shan C.
2017-01-01
Purpose We evaluated the impact of myopia on corneal biomechanical properties in primary open-angle glaucoma (POAG) and nonglaucoma patients, and the modifying effect of glaucoma on myopic eyes. Methods This cross-sectional study included 66 POAG eyes (33 myopia, 33 nonmyopia) and 66 normal eyes (33 myopia, 33 nonmyopia). Seven corneal biomechanical parameters were measured by ultra-high-speed Scheimpflug imaging, including corneal deformation amplitude (CDA), inward/outward corneal applanation length (ICA, OCA), inward/outward corneal velocity (ICV, OCV), radius, and peak distance (PD). Results Mean age (SD) of the 65 male (49%) and 67 female (51%) patients was 59 (9.82) years. Myopia was associated with significantly higher CDA (adjusted effect = 0.104, P = 0.001) and lower OCV (adjusted effect = −0.105, P < 0.001) in the POAG group. Within the nonglaucoma group, myopic eyes had a significantly lower OCV (adjusted effect = −0.086, P < 0.001) and higher CDA (adjusted effect = 0.079, P = 0.001). All parameters except PD suggested that glaucoma modified the effect of myopia on corneal biomechanics. Percentage differences in the adjusted myopic effect between POAG and nonglaucoma patients were 31.65, 27.27, 31.65, 50.00, 22.09, and 60.49 for CDA, ICA, OCA, ICV, OCV, and radius, respectively. Conclusions Myopia had a significant impact on corneal biomechanical properties in both the POAG and nonglaucoma groups. The differences in corneal biomechanical parameters suggest that myopia is correlated with significantly lower ocular rigidity. POAG may enhance the effects of myopia on most of these parameters. PMID:28979996
Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats
2014-05-01
In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
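As a point of reference for the stochastic approach above, the deterministic baseline (ordinary least-squares fitting of ODE parameters to discrete, noisy observations) can be sketched as follows; the Lotka-Volterra coefficients, noise level, and starting guess are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lotka_volterra(t, z, a, b, c, d):
    # Predator-prey dynamics: x is prey, y is predator.
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

true_p = np.array([1.0, 0.5, 1.0, 0.3])
t_obs = np.linspace(0.0, 10.0, 40)

# Generate in silico observations from the true model plus measurement noise.
sol = solve_ivp(lotka_volterra, (0.0, 10.0), [2.0, 1.0],
                args=tuple(true_p), t_eval=t_obs, rtol=1e-8)
rng = np.random.default_rng(4)
data = sol.y + rng.normal(0.0, 0.02, size=sol.y.shape)

def residuals(p):
    # Residuals between the simulated trajectories and the observations.
    s = solve_ivp(lotka_volterra, (0.0, 10.0), [2.0, 1.0],
                  args=tuple(p), t_eval=t_obs, rtol=1e-8)
    return (s.y - data).ravel()

fit = least_squares(residuals, x0=[1.2, 0.4, 0.8, 0.4])
```

Starting far from the truth, this deterministic objective can stall in local minima, which is exactly the failure mode the SDE formulation is said to regularize.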
Selection of the battery pack parameters for an electric vehicle based on performance requirements
NASA Astrophysics Data System (ADS)
Koniak, M.; Czerepicki, A.
2017-06-01
Each type of vehicle has specific power requirements. Some require rapid charging, others must cover long distances between charges, but a common requirement is the longest possible battery life. Additionally, the battery is influenced by factors such as temperature, depth of discharge, and operating current. The article presents the parameters of chemical cells that should be taken into account when designing a battery for a specific application. This is particularly important because improperly matched batteries can wear prematurely and cause additional costs. The method of selecting the correct cell type should take the previously discussed features and the operating characteristics of the vehicle into account. The authors present methods of obtaining such characteristics along with their assessment and examples. An example of battery parameter selection based on the design assumptions of the vehicle and the expected performance characteristics is also described. Selecting proper battery operating parameters is important due to its impact on the economic result of investments in electric vehicles. For example, for some Li-ion technologies, premature wear-out of batteries in a fleet of cruise boats or buses with an estimated lifetime of 10 years is not acceptable, because it would cause substantial financial losses for the owner of the rolling stock. The presented method of choosing the right cell technology for the selected application can be the basis for decisions on future battery technical parameters.
NASA Technical Reports Server (NTRS)
Liu, Kuang C.; Arnold, Steven M.
2011-01-01
It is well known that failure of a material is a locally driven event. In the case of ceramic matrix composites (CMCs), significant variations exist in the microstructure of the composite, and their significance for both deformation and life response needs to be assessed. Examples of these variations include changes in the fiber tow shape, tow shifting/nesting, and voids within and between tows. In the present work, the effects of many of these architectural parameters and of material scatter on woven ceramic composite properties at the macroscale (woven RUC) are studied to assess their sensitivity. The recently developed Multiscale Generalized Method of Cells methodology is used to determine the overall deformation response, proportional elastic limit (first matrix cracking), and failure under tensile loading conditions. The macroscale responses investigated illustrate the effect of architectural and material parameters on a single RUC representing a five-harness satin weave fabric. Results show that the most critical architectural parameter is the weave void shape and content, with the other parameters being less severe in their effects. Variation of the matrix material properties was also studied to illustrate the influence of material variability on the overall features of the composite stress-strain response.
NASA Astrophysics Data System (ADS)
Cherevko, A. A.; Bord, E. E.; Khe, A. K.; Panarin, V. A.; Orlov, K. J.; Chupakhin, A. P.
2016-06-01
This article considers a method of describing the behaviour of hemodynamic parameters near vascular pathologies. We study the influence of arterial aneurysms and arteriovenous malformations on the vascular system. The proposed method uses a generalized Van der Pol-Duffing model to capture the characteristic behaviour of the blood flow parameters, namely the blood velocity and pressure in the vessel, which are obtained during neurosurgical measurements. It is noted that substituting the velocity on the right side of the equation gives a good pressure approximation; thus, the model reproduces clinical data well enough. The right side of the equation represents an external impact on the system. Harmonic functions with various frequencies and amplitudes are substituted on the right side of the equation to investigate its properties, and variation of the right-side parameters provides additional information about the pressure. A non-linear analogue of Nyquist diagrams is used to find out how the properties of the solution depend on the parameter values. We have analysed 60 cases with aneurysms and 14 cases with arteriovenous malformations. It is shown that the diagrams divide into classes, and one class replaces another in a definite order as the right-side amplitude increases.
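A harmonically forced generalized Van der Pol-Duffing oscillator of the kind described can be integrated directly; every coefficient, the forcing amplitude, and the frequency below are illustrative placeholders, not the clinically fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative model constants (not fitted to any patient data).
mu, alpha, beta = 1.0, 1.0, 0.5   # damping strength, linear and cubic stiffness
A, omega = 0.8, 1.2               # harmonic forcing amplitude and frequency

def vdp_duffing(t, z):
    x, v = z
    # Van der Pol damping plus Duffing cubic stiffness, harmonically forced.
    dv = mu * (1.0 - x**2) * v - alpha * x - beta * x**3 + A * np.cos(omega * t)
    return [v, dv]

sol = solve_ivp(vdp_duffing, (0.0, 100.0), [0.1, 0.0], max_step=0.05, rtol=1e-8)

# After the transient, the response settles onto a bounded attractor whose
# character (periodic vs. more complex) shifts with the forcing amplitude.
steady_amp = float(np.abs(sol.y[0][sol.t > 50.0]).max())
```

Sweeping A and omega and classifying the resulting trajectories is the numerical analogue of the diagram-classification procedure described in the abstract.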
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so the simulation results obtained with the optimal parameters may not satisfy conditions that the models must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric combining global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, clearly better than the annual-average FLUT obtained with the default parameters. The FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
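A minimal sketch of equality-constrained parameter tuning in the same spirit is shown below using SciPy's SLSQP solver, which handles the Lagrangian machinery internally; the skill metric, the balance function, and all numbers are toy stand-ins, not CAM5 quantities:

```python
import numpy as np
from scipy.optimize import minimize

TARGET = 240.0  # stand-in for the required global-mean flux (W m-2)

def metric(p):
    # Toy synthesized model-skill metric to be minimized.
    return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

def balance(p):
    # Toy stand-in for a global-mean radiation diagnostic such as FLUT.
    return 200.0 + 30.0 * p[0] + 10.0 * p[1]

# SLSQP enforces the equality constraint balance(p) = TARGET at the optimum.
res = minimize(metric, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda p: balance(p) - TARGET}])
```

The optimum trades a small loss in the unconstrained metric for exact satisfaction of the balance condition, which is the behaviour the abstract reports for FLUT.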
Chiu, Sam L H; Lo, Irene M C
2016-12-01
In this paper, factors that affect biogas production in the anaerobic digestion (AD) and anaerobic co-digestion (coAD) processes of food waste are reviewed with the aim to improve biogas production performance. These factors include the composition of substrates in food waste coAD as well as pre-treatment methods and anaerobic reactor system designs in both food waste AD and coAD. Due to the characteristics of the substrates used, the biogas production performance varies as different effects are exhibited on nutrient balance, inhibitory substance dilution, and trace metal element supplement. Various types of pre-treatment methods such as mechanical, chemical, thermal, and biological methods are discussed to improve the rate-limiting hydrolytic step in the digestion processes. The operation parameters of a reactor system are also reviewed with consideration of the characteristics of the substrates. Since the environmental awareness and concerns for waste management systems have been increasing, this paper also addresses possible environmental impacts of AD and coAD in food waste treatment and recommends feasible methods to reduce the impacts. In addition, uncertainties in the life cycle assessment (LCA) studies are also discussed.
Monaci, Linda; Brohée, Marcel; Tregoat, Virginie; van Hengel, Arjon
2011-07-15
Milk allergens are common allergens occurring in foods, therefore raising concern for allergic consumers. Enzyme-linked immunosorbent assay (ELISA) is, to date, the method of choice for the detection of food allergens by the food industry, although the performance of ELISA might be compromised when severe food-processing techniques are applied to allergen-containing foods. In this paper we investigated the influence of baking time on the detection of milk allergens using commercial ELISA kits. Baked cookies were chosen as a model food system, and experiments were set up to study the impact of spiking a food matrix either before or after the baking process. Results revealed clear analytical differences between the two spiking methods, which stresses the importance of choosing appropriate spiking methodologies for method validation purposes. Finally, since the narrow dynamic range of quantification of ELISA implies that dilution of samples is required, the impact of sample dilution on the quantitative results was investigated. All parameters investigated were shown to impact milk allergen detection by means of ELISA. Copyright © 2011 Elsevier Ltd. All rights reserved.
The School Contextual Effect of Sexual Debut on Sexual Risk-Taking: A Joint Parameter Approach
ERIC Educational Resources Information Center
Cai, Tianji; Zhou, Yisu; Niño, Michael D.; Driver, Nichola
2018-01-01
Background: Previous research has identified individual and school-level characteristics that are associated with sexual risk-taking, but the impact of school-level mechanisms on sexual risk-taking is not well understood. We examine the aggregated effects that early sex at the school level have on risky sexual behaviors. Methods: We use 3 waves of…
A Study of the Effect of the Front-End Styling of Sport Utility Vehicles on Pedestrian Head Injuries
Qin, Qin; Chen, Zheng; Bai, Zhonghao; Cao, Libo
2018-01-01
Background The number of sport utility vehicles (SUVs) on the Chinese market is continuously increasing. It is necessary to investigate the relationships between the front-end styling features of SUVs and head injuries at the styling design stage to improve pedestrian protection performance and product development efficiency. Methods Styling feature parameters were extracted from the SUV side contour line, and simplified finite element models were established based on the 78 SUV side contour lines. Pedestrian headform impact simulations were performed and validated. The head injury criterion of 15 ms (HIC15) at four wrap-around distances was obtained. A multiple linear regression analysis method was employed to describe the relationships between the styling feature parameters and the HIC15 at each impact point. Results The relationships between the selected styling features and the HIC15 showed reasonable correlations, and the regression models and the selected independent variables showed statistical significance. Conclusions The regression equations obtained by multiple linear regression can be used to assess the performance of SUV styling in protecting pedestrians' heads and provide styling designers with technical guidance regarding their artistic creations.
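The regression step described in this abstract can be sketched with ordinary least squares. Everything below is illustrative: the feature values, coefficients, and noise level are invented, since the actual styling parameters and HIC15 data are not given here.

```python
import numpy as np

# Hypothetical styling-feature matrix: each row is one SUV front-end contour,
# columns are made-up feature parameters (e.g. hood angle, bumper lead).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(78, 3))          # 78 contours, 3 features
true_coef = np.array([400.0, -250.0, 120.0])     # invented sensitivities
hic15 = 800.0 + X @ true_coef + rng.normal(0.0, 5.0, 78)

# Least-squares fit of HIC15 = b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, hic15, rcond=None)

pred = A @ coef
r2 = 1.0 - np.sum((hic15 - pred) ** 2) / np.sum((hic15 - hic15.mean()) ** 2)
```

The fitted coefficients play the role of the paper's regression equations: their signs indicate which styling changes raise or lower the predicted head injury criterion.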
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially, rather than exponentially, with the number of variables. This rests on the fundamental assumption underlying HDMR that only low-order correlations among the input variables are likely to have a significant impact on the outputs of most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
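The low-order-terms idea can be illustrated with a minimal first-order cut-HDMR surrogate. This is a generic sketch, not the paper's fuzzy finite element implementation: the test function, cut point, and grids are invented, and the fuzzy α-cut machinery is omitted.

```python
import numpy as np

def cut_hdmr_first_order(f, cut, grids):
    """First-order cut-HDMR surrogate: f(x) ≈ f0 + sum_i f_i(x_i),
    where f0 = f(cut) and f_i varies one input at a time from the cut point."""
    f0 = f(cut)
    comps = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = cut.copy()
            x[i] = xi
            vals.append(f(x) - f0)       # one f-evaluation per grid node
        comps.append((np.asarray(grid), np.asarray(vals)))

    def surrogate(x):
        s = f0
        for i, (g, v) in enumerate(comps):
            s += np.interp(x[i], g, v)   # tabulated 1-D component functions
        return s
    return surrogate

# Nearly additive test function, so first-order HDMR should be accurate.
f = lambda x: x[0] ** 2 + 3.0 * x[1] + 0.1 * x[0] * x[1]
cut = np.array([0.5, 0.5])
grids = [np.linspace(0.0, 1.0, 11)] * 2
s = cut_hdmr_first_order(f, cut, grids)
```

The cost is one model run per grid node per variable, i.e. linear in the number of variables, which is the scaling advantage the abstract refers to.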
Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders
2017-06-22
In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
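The quantization sensitivity the study reports can be demonstrated with a small hand-rolled grey-level co-occurrence matrix (GLCM) and a few Haralick-type features. The "ADC map" below is synthetic noise and the feature subset is minimal; this only illustrates that the number of grey levels changes the feature values.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized grey-level co-occurrence matrix for one offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1
    return P / P.sum()

def haralick_subset(P):
    i, j = np.indices(P.shape)
    eps = np.finfo(float).eps
    return {"contrast": float(np.sum(P * (i - j) ** 2)),
            "homogeneity": float(np.sum(P / (1.0 + (i - j) ** 2))),
            "entropy": float(-np.sum(P * np.log2(P + eps)))}

# Uniformly quantize a synthetic "ADC map" to different grey-level counts.
rng = np.random.default_rng(1)
adc = rng.normal(1.1e-3, 1.0e-4, size=(32, 32))
results = {}
for levels in (8, 32):
    q = ((adc - adc.min()) / (adc.max() - adc.min() + 1e-12) * levels).astype(int)
    q = np.minimum(q, levels - 1)
    results[levels] = haralick_subset(glcm(q, levels))
```

Comparing `results[8]` with `results[32]` shows the same image yielding different texture values purely because of the quantization choice, which is why the authors recommend fixing the number of grey levels across subjects.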
Slope stability analysis using limit equilibrium method in nonlinear criterion.
Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu
2014-01-01
In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome this shortcoming, this paper combined the Hoek-Brown criterion with the limit equilibrium method and proposed an equation for calculating the safety factor of a slope under the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact rock parameter mi. The relation between the friction angle and each of the Hoek-Brown parameters is nonlinear. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
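The equivalent cohesion and friction angle step can be sketched from the widely used Hoek et al. (2002) fitting formulas. This is my reconstruction of those standard equations, not the paper's own derivation, and the input values below are invented for illustration.

```python
import math

def hoek_brown_mc(GSI, mi, sigci, D, sig3max):
    """Equivalent Mohr-Coulomb c' (same units as sigci) and phi' (degrees)
    from Hoek-Brown parameters, per the 2002 edition fitting formulas."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    s3n = sig3max / sigci                      # normalized upper confinement
    t = (s + mb * s3n) ** (a - 1.0)
    phi = math.asin(6.0 * a * mb * t
                    / (2.0 * (1 + a) * (2 + a) + 6.0 * a * mb * t))
    c = (sigci * ((1 + 2 * a) * s + (1 - a) * mb * s3n) * t
         / ((1 + a) * (2 + a)
            * math.sqrt(1.0 + 6.0 * a * mb * t / ((1 + a) * (2 + a)))))
    return c, math.degrees(phi)

# Illustrative rock mass: undisturbed (D=0) versus fully disturbed (D=1).
c0, phi0 = hoek_brown_mc(GSI=50, mi=10, sigci=30.0, D=0.0, sig3max=3.0)
c1, phi1 = hoek_brown_mc(GSI=50, mi=10, sigci=30.0, D=1.0, sig3max=3.0)
```

With the equivalent c' and phi' in hand, any conventional limit equilibrium safety-factor routine can be applied, which is the coupling the abstract describes.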
Hey, Jody; Nielsen, Rasmus
2004-01-01
The genetic study of diverging, closely related populations is required for basic questions on demography and speciation, as well as for biodiversity and conservation research. However, it is often unclear whether divergence is due simply to separation or whether populations have also experienced gene flow. These questions can be addressed with a full model of population separation with gene flow, by applying a Markov chain Monte Carlo method for estimating the posterior probability distribution of model parameters. We have generalized this method and made it applicable to data from multiple unlinked loci. These loci can vary in their modes of inheritance, and inheritance scalars can be implemented either as constants or as parameters to be estimated. By treating inheritance scalars as parameters it is also possible to address variation among loci in the impact via linkage of recurrent selective sweeps or background selection. These methods are applied to a large multilocus data set from Drosophila pseudoobscura and D. persimilis. The species are estimated to have diverged approximately 500,000 years ago. Several loci have nonzero estimates of gene flow since the initial separation of the species, with considerable variation in gene flow estimates among loci, in both directions between the species. PMID:15238526
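The posterior-sampling machinery behind this kind of estimate can be sketched with a toy Metropolis sampler. The target below is a Gaussian pseudo-likelihood for a single "divergence time" parameter; the actual isolation-with-migration method uses a coalescent likelihood over multiple loci and many more parameters.

```python
import math
import random

random.seed(42)

def log_post(t, obs=0.5, sd=0.1):
    """Toy log-posterior: flat prior on t > 0, Gaussian pseudo-likelihood."""
    if t <= 0.0:
        return -math.inf
    return -0.5 * ((t - obs) / sd) ** 2

t, samples = 1.0, []
for step in range(20000):
    prop = t + random.gauss(0.0, 0.05)          # random-walk proposal
    # Metropolis accept/reject on the log scale
    if math.log(random.random()) < log_post(prop) - log_post(t):
        t = prop
    if step >= 5000:                            # discard burn-in
        samples.append(t)

mean_t = sum(samples) / len(samples)
```

The chain's post-burn-in mean approximates the posterior mean; in the real method the same loop structure is retained while the likelihood evaluation becomes the expensive coalescent computation.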
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
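The flavor of a bias-corrected empirical (sandwich) covariance can be shown in a few lines. This sketch uses an independence working structure and a Mancl-DeRouen-type leverage correction on a simulated clustered data set; the paper's estimator additionally accounts for nuisance correlation-parameter estimation, which is not reproduced here.

```python
import numpy as np

# Simulated small sample: 10 subjects, 4 repeated Gaussian measurements each.
rng = np.random.default_rng(7)
n_subj, n_rep, p = 10, 4, 2
X = rng.normal(size=(n_subj, n_rep, p))
beta_true = np.array([1.0, -0.5])
u = rng.normal(0.0, 0.5, size=(n_subj, 1))       # shared subject effect
y = X @ beta_true + u + rng.normal(0.0, 0.3, size=(n_subj, n_rep))

# GEE with independence working correlation reduces to pooled least squares.
Xf, yf = X.reshape(-1, p), y.reshape(-1)
bread = np.linalg.inv(Xf.T @ Xf)
beta = bread @ Xf.T @ yf

# Bias-corrected meat: inflate each subject's residuals by (I - H_i)^{-1}.
meat = np.zeros((p, p))
for i in range(n_subj):
    Xi, ri = X[i], y[i] - X[i] @ beta
    Hi = Xi @ bread @ Xi.T                       # subject-level leverage
    ci = np.linalg.solve(np.eye(n_rep) - Hi, ri) # corrected residual
    meat += Xi.T @ np.outer(ci, ci) @ Xi
cov_bc = bread @ meat @ bread                    # sandwich covariance of beta
```

With very few subjects the uncorrected sandwich is biased downward; the leverage correction inflates it toward its nominal level, which is the small-sample issue the abstract targets.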
NASA Astrophysics Data System (ADS)
Cooper, Elizabeth; Dance, Sarah; Garcia-Pintado, Javier; Nichols, Nancy; Smith, Polly
2017-04-01
Timely and accurate inundation forecasting provides vital information about the behaviour of fluvial flood water, enabling mitigating actions to be taken by residents and emergency services. Data assimilation is a powerful mathematical technique for combining forecasts from hydrodynamic models with observations to produce a more accurate forecast. We discuss the effect of both domain size and channel friction parameter estimation on observation impact in data assimilation for inundation forecasting. Numerical shallow water simulations are carried out in a simple, idealized river channel topography. Data assimilation is performed using an Ensemble Transform Kalman Filter (ETKF) and synthetic observations of water depth in identical twin experiments. We show that reinitialising the numerical inundation model with corrected water levels after an assimilation can cause an initialisation shock if a hydrostatic assumption is made, leading to significant degradation of the forecast for several hours immediately following an assimilation. We demonstrate an effective and novel method for dealing with this. We find that using data assimilation to combine observations of water depth with forecasts from a hydrodynamic model corrects the forecast very effectively at the time of the observations. In agreement with other authors, we find that the corrected forecast then moves quickly back to the open loop forecast, which does not take the observations into account. Our investigations show that the time taken for the forecast to decay back to the open loop case depends on the length of the domain of interest when only water levels are corrected. This is because the assimilation corrects water depths in all parts of the domain, even when observations are only available in one area. Error growth in the forecast step then starts at the upstream part of the domain and propagates downstream. The impact of the observations is therefore longer-lived in a longer domain.
We have found that the upstream-downstream pattern of error growth can be due to incorrect friction parameter specification, rather than errors in inflow as shown elsewhere. Our results show that joint state-parameter estimation can recover accurate values for the parameter controlling channel friction processes in the model, even when observations of water level are only available on part of the flood plain. Correcting water levels and the channel friction parameter together leads to a large improvement in the forecast water levels at all simulation times. The impact of the observations is therefore much greater when the channel friction parameter is corrected along with water levels. We find that domain length effects disappear for joint state-parameter estimation.
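The joint state-parameter update can be sketched with a minimal ETKF analysis step in the Hunt et al.-style formulation. The two-component "state" (one observed water depth, one unobserved friction parameter), the ensemble statistics, and the observation value are all invented for illustration; a real system would carry a full spatial field.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 20                                            # ensemble size
# Joint vector [water depth, channel friction]; only depth is observed.
ens = np.vstack([rng.normal(2.0, 0.3, m),
                 rng.normal(0.03, 0.01, m)])
H = np.array([[1.0, 0.0]])                        # observation operator
R = np.array([[0.05 ** 2]])                       # obs error covariance
y_obs = np.array([2.5])

xb = ens.mean(axis=1, keepdims=True)
A = ens - xb                                      # background perturbations
Yp = H @ A                                        # perturbations in obs space
Pa_tilde = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ np.linalg.inv(R) @ Yp)
wa = Pa_tilde @ Yp.T @ np.linalg.inv(R) @ (y_obs - (H @ xb).ravel())

# Symmetric square root gives the analysis perturbation weights.
vals, vecs = np.linalg.eigh((m - 1) * Pa_tilde)
Wa = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
ens_a = xb + A @ (wa[:, None] + Wa)               # analysis ensemble
xa = ens_a.mean(axis=1)
```

Because the unobserved friction component is updated through its ensemble covariance with the observed depth, the same analysis step performs the joint state-parameter estimation discussed in the abstract.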
Influence parameters of impact grinding mills
NASA Technical Reports Server (NTRS)
Hoeffl, K.; Husemann, K.; Goldacker, H.
1984-01-01
Significant parameters for impact grinding mills were investigated. Final particle size was used to evaluate grinding results. Adjustment of the parameters toward increased charge load results in improved efficiency; however, it was not possible to define a single, unified set of optimum grinding conditions.
Kim, Yong Sun; Choi, Hyeong Ho; Cho, Young Nam; Park, Yong Jae; Lee, Jong B; Yang, King H; King, Albert I
2005-11-01
Although biomechanical studies on the knee-thigh-hip (KTH) complex have been extensive, interactions between the KTH and various vehicular interior design parameters in frontal automotive crashes for newer models have not been reported in the open literature to the best of our knowledge. A 3D finite element (FE) model of a 50th percentile male KTH complex, which includes explicit representations of the iliac wing, acetabulum, pubic rami, sacrum, articular cartilage, femoral head, femoral neck, femoral condyles, patella, and patella tendon, has been developed to simulate injuries such as fracture of the patella, femoral neck, acetabulum, and pubic rami of the KTH complex. Model results compared favorably against regional component test data including a three-point bending test of the femur, axial loading of the isolated knee-patella, axial loading of the KTH complex, axial loading of the femoral head, and lateral loading of the isolated pelvis. The model was further integrated into a Wayne State University upper torso model and validated against data obtained from whole body sled tests. The model was validated against these experimental data over a range of impact speeds, impactor masses and boundary conditions. Using Design of Experiments (DOE) methods based on Taguchi's approach and the developed FE model of the whole body, including the KTH complex, eight vehicular interior design parameters, namely the load limiter force, seat belt elongation, pretensioner inlet amount, knee-knee bolster distance, knee bolster angle, knee bolster stiffness, toe board angle and impact speed, each with either two or three design levels, were simulated to predict their respective effects on the potential of KTH injury in frontal impacts. Simulation results suggested the best design levels for vehicular interior design parameters to reduce the injury potential of the KTH complex in frontal automotive crashes.
This study is limited by the fact that prediction of bony fracture was based on an element elimination method available in the LS-DYNA code. No validation study was conducted to determine if this method is suitable when simulating fractures of biological tissues. More work is still needed to further validate the FE model of the KTH complex to increase its reliability in the assessment of various impact loading conditions associated with vehicular crash scenarios.
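The Taguchi-style screening logic can be illustrated with a main-effects calculation on a two-level orthogonal array. The factors, array, and response values below are invented (three factors in an L4 array rather than the paper's eight parameters), purely to show how "best design levels" fall out of such a design.

```python
import numpy as np

# L4 orthogonal array: columns are coded factor levels (-1 = low, +1 = high).
# Factor names are illustrative stand-ins for the paper's design parameters.
design = np.array([[-1, -1, -1],
                   [-1, +1, +1],
                   [+1, -1, +1],
                   [+1, +1, -1]])
response = np.array([820.0, 990.0, 760.0, 700.0])   # e.g. femur load (N), made up

# Main effect of each factor: mean response at high level minus at low level.
effects = {name: response[design[:, j] == +1].mean()
                 - response[design[:, j] == -1].mean()
           for j, name in enumerate(["limiter", "bolster", "speed"])}

# For an injury measure, pick the level that lowers the response.
best = {name: (-1 if eff > 0 else +1) for name, eff in effects.items()}
```

Orthogonality of the array is what lets each factor's effect be estimated independently from so few runs, which is the efficiency the DOE approach buys in the FE simulations.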
Sensitivity study and parameter optimization of OCD tool for 14nm finFET process
NASA Astrophysics Data System (ADS)
Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping
2016-03-01
Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes in the technology node of 90 nm and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies highly on the optical hardware configuration, spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore sensitivity analysis and parameter optimization are very critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra was investigated over a series of hardware configurations of incidence angles and azimuth angles, from which the optimum hardware measurement configuration and spectrum parameters can be identified. FinFET structures in the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.
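The configuration search can be sketched as a finite-difference sensitivity sweep. The forward model below is a toy stand-in for a rigorous scatterometry solver, and the candidate angles are invented; only the selection logic mirrors the abstract.

```python
import numpy as np

# Toy forward model: "spectrum" as a function of a critical dimension (cd)
# and the measurement configuration (incidence angle aoi, azimuth angle azi).
def spectrum(cd, aoi, azi, wl):
    return np.cos(wl * cd + aoi) * np.cos(azi) + 0.1 * wl

wl = np.linspace(1.0, 2.0, 50)                     # wavelength grid (arbitrary)
configs = [(aoi, azi) for aoi in (0.3, 0.8, 1.2) for azi in (0.0, 0.5, 1.0)]

# Finite-difference sensitivity of the spectrum to the CD at each config;
# the most sensitive configuration gives the most precise CD estimate.
cd, dcd = 5.0, 1e-3
best = max(configs,
           key=lambda c: np.linalg.norm(
               (spectrum(cd + dcd, *c, wl) - spectrum(cd - dcd, *c, wl))
               / (2 * dcd)))
```

In a real OCD flow the same argmax-of-sensitivity selection is run with an electromagnetic solver in place of the toy model, per spectrum type and per parameter of interest.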
Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W.
2013-01-01
The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. Besides, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully. PMID:24253191
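The transient-matching idea can be sketched by correlating a Laplace-wavelet template against a synthetic signal and reading off the impact intervals. The sampling rate, damping ratio, impact period, and detection threshold below are all invented; the paper optimizes the model parameters via an envelope spectrum correlation, which this sketch replaces with simple peak grouping.

```python
import numpy as np

def laplace_wavelet(t, f=2000.0, zeta=0.05):
    """One-sided damped sinusoid used as the impact-transient template."""
    t = np.asarray(t, dtype=float)
    w = np.zeros_like(t)
    m = t >= 0
    decay = zeta / np.sqrt(1.0 - zeta ** 2) * 2.0 * np.pi * f
    w[m] = np.exp(-decay * t[m]) * np.sin(2.0 * np.pi * f * t[m])
    return w

fs = 50_000.0
t = np.arange(0.0, 0.5, 1.0 / fs)
rng = np.random.default_rng(2)
sig = 0.05 * rng.normal(size=t.size)
true_period = 0.1                               # one fault impact every 0.1 s
for t0 in np.arange(0.05, 0.5, true_period):
    sig += laplace_wavelet(t - t0)

# Correlate the template with the signal, group threshold crossings into
# individual impacts, and estimate the interval between them.
tmpl = laplace_wavelet(np.arange(0.0, 0.01, 1.0 / fs))
corr = np.correlate(sig, tmpl, mode="valid")
hits = np.where(corr > 3.0)[0]
impacts = [hits[0]] + [j for i, j in zip(hits, hits[1:]) if j - i > 0.02 * fs]
periods = np.diff(impacts) / fs
```

The recovered period intervals are the fault signature; in the paper they are additionally corrected for the Doppler distortion before diagnosis.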
Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks
Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh
2017-01-01
In wireless sensor networks, sensor nodes collect plenty of data in each time period. If all of the data are transmitted to a Fusion Center (FC), the power of a sensor node would run out rapidly. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model, which can save the energy of the sensor nodes while maintaining high accuracy, is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters in the quantization method are discussed, and we confirm them by an optimization method with some prior knowledge. Besides, calculation methods for important parameters are investigated which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which can improve the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with other related models, the MHEEFE shows better performance in accuracy, energy-efficiency and fault tolerance. PMID:29280950
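The core fusion idea can be shown with the simplest possible estimator: inverse-variance weighting of several noisy sensor readings. This is a generic stand-in for the paper's mixed H2/H∞ fusion, with invented readings and variances.

```python
# Hypothetical body-temperature readings from three wearable sensors,
# each with a known (invented) noise variance.
readings = [36.9, 37.2, 36.7]
variances = [0.04, 0.01, 0.09]

# Inverse-variance weighting: more reliable sensors get larger weights.
weights = [1.0 / v for v in variances]
fused = sum(w * r for w, r in zip(weights, readings)) / sum(weights)
fused_var = 1.0 / sum(weights)
```

The fused variance is smaller than any single sensor's variance, which is what makes fusion worth the communication cost; the paper's contribution is doing this under quantization and energy constraints with robustness guarantees.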
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
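The optimizer stage can be illustrated with a plain particle swarm on a toy "efficiency" surface. The objective, bounds, and swarm hyperparameters are invented; the paper's two-space PSO and the SOFC model itself are far richer.

```python
import random

random.seed(0)

# Toy efficiency surface over two operating variables (e.g. normalized fuel
# and air flow); the maximum is placed at (0.6, 0.4) for illustration.
def efficiency(x, y):
    return -((x - 0.6) ** 2 + (y - 0.4) ** 2)

n, w, c1, c2 = 20, 0.7, 1.5, 1.5            # swarm size, inertia, pulls
pos = [(random.random(), random.random()) for _ in range(n)]
vel = [(0.0, 0.0)] * n
pbest = list(pos)                           # each particle's best-so-far
gbest = max(pos, key=lambda p: efficiency(*p))

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vx = (w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0])
              + c2 * r2 * (gbest[0] - pos[i][0]))
        vy = (w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1])
              + c2 * r2 * (gbest[1] - pos[i][1]))
        vel[i] = (vx, vy)
        pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
        if efficiency(*pos[i]) > efficiency(*pbest[i]):
            pbest[i] = pos[i]
        if efficiency(*pos[i]) > efficiency(*gbest):
            gbest = pos[i]
```

The converged `gbest` plays the role of the optimal operating point handed to the controllers as a set point.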
Bassuoni, M M
2014-03-01
The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the performance results of exit parameters from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution is developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters from the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.
The Interplay between QSAR/QSPR Studies and Partial Order Ranking and Formal Concept Analyses
Carlsen, Lars
2009-01-01
The often observed scarcity of physical-chemical as well as toxicological data hampers the assessment of potentially hazardous chemicals released to the environment. In such cases Quantitative Structure-Activity Relationships/Quantitative Structure-Property Relationships (QSAR/QSPR) constitute an obvious alternative for rapidly, effectively and inexpensively generating missing experimental values. However, further treatment of the data typically appears necessary, e.g., to elucidate the possible relations between the single compounds as well as the implications and associations between the various parameters used for the combined characterization of the compounds under investigation. In the present paper the application of QSAR/QSPR in combination with Partial Order Ranking (POR) methodologies is reviewed and new aspects using Formal Concept Analysis (FCA) are introduced. Where POR constitutes an attractive method for, e.g., prioritizing a series of chemical substances based on the simultaneous inclusion of a range of parameters, FCA gives important information on the implications and associations between the parameters. The combined approach thus constitutes an attractive method for a preliminary assessment of the impact on environmental and human health of a primary pollutant, as well as of a possible suite of subsequent transformation products that may be persistent, bioaccumulating and toxic. The present review focuses on the environmental and human health impact of residuals of the rocket fuel 1,1-dimethylhydrazine (heptyl) and its transformation products as an illustrative example. PMID:19468330
2012-01-01
Background Documentation of posture measurement costs is rare and cost models that do exist are generally naïve. This paper provides a comprehensive cost model for biomechanical exposure assessment in occupational studies, documents the monetary costs of three exposure assessment methods for different stakeholders in data collection, and uses simulations to evaluate the relative importance of cost components. Methods Trunk and shoulder posture variables were assessed for 27 aircraft baggage handlers for 3 full shifts each using three methods typical to ergonomic studies: self-report via questionnaire, observation via video film, and full-shift inclinometer registration. The cost model accounted for expenses related to meetings to plan the study, administration, recruitment, equipment, training of data collectors, travel, and onsite data collection. Sensitivity analyses were conducted using simulated study parameters and cost components to investigate the impact on total study cost. Results Inclinometry was the most expensive method (with a total study cost of € 66,657), followed by observation (€ 55,369) and then self report (€ 36,865). The majority of costs (90%) were borne by researchers. Study design parameters such as sample size, measurement scheduling and spacing, concurrent measurements, location and travel, and equipment acquisition were shown to have wide-ranging impacts on costs. Conclusions This study provided a general cost modeling approach that can facilitate decision making and planning of data collection in future studies, as well as investigation into cost efficiency and cost efficient study design. Empirical cost data from a large field study demonstrated the usefulness of the proposed models. PMID:22738341
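A cost model of the kind described reduces to fixed costs plus per-subject and per-shift components. The sketch below is a back-of-envelope version with entirely invented figures; the paper's model distinguishes many more components (meetings, administration, training) and stakeholders.

```python
# Illustrative exposure-assessment study cost model; all euro figures are
# invented defaults, not the paper's empirical values.
def study_cost(n_subjects, shifts_per_subject, *,
               planning=4000.0, equipment=12000.0,
               recruit_per_subject=150.0,
               collect_per_shift=320.0, travel_per_shift=60.0):
    fixed = planning + equipment
    per_subject = recruit_per_subject * n_subjects
    per_shift = ((collect_per_shift + travel_per_shift)
                 * n_subjects * shifts_per_subject)
    return fixed + per_subject + per_shift

# The paper's design: 27 baggage handlers, 3 full shifts each.
total = study_cost(27, 3)
```

Separating fixed from variable components is what makes the sensitivity analyses in the paper possible: sample size and measurement scheduling only move the variable terms, while equipment-heavy methods like inclinometry load the fixed term.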
Effect of Impaction Sequence on Osteochondral Graft Damage: The Role of Repeated and Varying Loads
Kang, Richard W.; Friel, Nicole A.; Williams, James M.; Cole, Brian J.; Wimmer, Markus A.
2013-01-01
Background Osteochondral autografts and allografts require mechanical force for proper graft placement into the defect site; however, impaction compromises the tissue. This study aimed to determine the effect of impaction force and number of hits to seat the graft on cartilage integrity. Hypothesis Under constant impulse conditions, higher impaction load magnitudes are more detrimental to cell viability, matrix integrity and collagen network organization and will result in proteoglycan loss and nitric oxide release. Study Design Controlled laboratory study Methods Osteochondral explants, harvested from fresh bovine trochleas, were exposed to a series of consistent impact loads delivered by a pneumatically driven device. Each plug received the same overall impulse of 7 Ns, reflecting the mean of 23 clinically inserted plugs. Impaction loads of 37.5N, 75N, 150N, and 300N were matched with 74, 37, 21, and 11 hits respectively. Following impaction, the plugs were harvested and cartilage was analyzed for cell viability, histology by safranin-O and picrosirius red, and release of sulfated glycosaminoglycans and nitric oxide. Data were compared with non-impacted controls. Results Impacted plugs had significantly lower cell viability than non-impacted plugs. A dose response relationship in loss of cell viability with respect to load magnitude was seen immediately and after 4 days but lost after 8 days. Histologic analysis revealed intact cartilage surface in all samples (loaded or control), with loaded samples showing alterations in birefringence. While the sulfated GAG release was similar across varying impaction loads, release of nitric oxide increased with increasing impaction magnitudes and time. Conclusions Impaction loading parameters have a direct effect on the time course of the viability of the cartilage in the graft tissue.
Clinical Relevance Optimal loading parameters for surgical impaction of osteochondral grafts are those with lower load magnitudes and a greater number of hits to ensure proper fit. PMID:19915099
Size-Related Changes in Foot Impact Mechanics in Hoofed Mammals
Warner, Sharon Elaine; Pickering, Phillip; Panagiotopoulou, Olga; Pfau, Thilo; Ren, Lei; Hutchinson, John Richard
2013-01-01
Foot-ground impact is mechanically challenging for all animals, but how do large animals mitigate increased mass during foot impact? We hypothesized that impact force amplitude scales according to isometry in animals of increasing size through allometric scaling of related impact parameters. To test this, we measured limb kinetics and kinematics in 11 species of hoofed mammals ranging from 18–3157 kg body mass. We found impact force amplitude to be maintained proportional to size in hoofed mammals, but that other features of foot impact exhibit differential scaling patterns depending on the limb; forelimb parameters typically exhibit higher intercepts with lower scaling exponents than hind limb parameters. Our explorations of the size-related consequences of foot impact advance understanding of how body size influences limb morphology and function, foot design and locomotor behaviour. PMID:23382967
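Scaling exponents like those tested here are conventionally estimated by least-squares regression in log-log space, i.e. fitting F = a·M^b. The data below are synthetic, generated with a known isometric exponent (b = 1) over roughly the paper's body-mass range, just to show the recovery of the exponent.

```python
import numpy as np

rng = np.random.default_rng(5)
# Body masses log-uniform over roughly 18-3157 kg, as in the study's range.
mass = np.exp(rng.uniform(np.log(18.0), np.log(3157.0), 60))
# Synthetic impact-force amplitude with isometric scaling (exponent 1)
# and multiplicative noise; the prefactor 2.5 is arbitrary.
force = 2.5 * mass ** 1.0 * np.exp(rng.normal(0.0, 0.1, 60))

# Linear fit in log-log space: slope = scaling exponent b, intercept = ln a.
b, log_a = np.polyfit(np.log(mass), np.log(force), 1)
```

An estimated b statistically indistinguishable from 1 is what "impact force amplitude maintained proportional to size" means; allometry would show up as b significantly above or below 1.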
Jiang, Zhinong; Mao, Zhiwei; Wang, Zijia; Zhang, Jinjie
2017-12-15
Internal combustion engines (ICEs) are widely used in many important fields. The valve train clearance of an ICE often exceeds its normal value due to wear or faulty adjustment. This work aims at diagnosing valve clearance faults based on vibration signals measured on the engine cylinder heads. The non-stationarity of the ICE operating condition makes it difficult to obtain a nominal baseline, which is a persistent problem for fault diagnosis. This paper overcomes the problem by inspecting the timing of valve closing impacts, for which a reference baseline can be obtained from design parameters rather than extracted from measurements under healthy conditions. To accurately detect the timing of valve closing impacts from vibration signals, we develop a new method for detecting and extracting the commencement of the impacts. The results of experiments conducted on a twelve-cylinder ICE test rig show that the approach is capable of extracting the commencement of valve closing impacts accurately, and that a single feature suffices for superior monitoring of valve clearance. With this technique, valve clearance faults become detectable even without comparison to a baseline, and the changing trend of the clearance can be tracked.
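A hedged sketch of one common way to detect the commencement of an impact event in a vibration trace: threshold the short-time energy envelope and take the first crossing. The paper's actual extraction method is more involved; the window length and threshold factor here are illustrative assumptions.

```python
import numpy as np

def impact_onset(signal, fs, win=64, k=5.0):
    """Time (s) where the energy envelope first exceeds k times its
    median level, or None if it never does."""
    env = np.convolve(np.asarray(signal, float) ** 2,
                      np.ones(win) / win, mode="same")
    idx = np.flatnonzero(env > k * np.median(env))
    return idx[0] / fs if idx.size else None

# Synthetic trace: background noise with a decaying burst injected at 0.5 s.
fs = 10_000
rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(fs)
n = np.arange(400)
x[5000:5400] += np.exp(-n / 80.0) * np.sin(2 * np.pi * 3000 / fs * n)
print(f"detected onset near {impact_onset(x, fs):.3f} s (true: 0.500 s)")
```

The centered smoothing window means the detected onset can lead the true event by up to half a window, which is why onset detectors are usually followed by a local refinement step.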
Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li
2016-01-01
Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211
The impact of obesity in the kinematic parameters of gait in young women
da Silva-Hamu, Tânia Cristina Dias; Formiga, Cibelle Kayenne Martins Roberto; Gervásio, Flávia Martins; Ribeiro, Darlan Martins; Christofoletti, Gustavo; de França Barros, Jônatas
2013-01-01
Background The prevalence of obesity is increasing in the population, particularly in women. Obesity has an impact on the musculoskeletal system, leading to knee and ankle overexertion, difficulty with balance, and functional disability. The aim of this study was to identify changes in kinematic parameters of gait in obese young women. Methods A case-control study with 24 obese women (mean age 35.20 ± 9.9 years; mean body mass index 31.85 ± 2.94 kg/m2) and 24 eutrophic women (mean age 36.33 ± 11.14 years; mean body mass index 21.82 ± 1.58 kg/m2). Gait was evaluated with the Vicon Motus® 9.2 system. The linear parameters of speed, cadence, right and left step lengths, and stride length were studied, as well as the angular parameters of the knee and ankle. Results There was a decrease in the linear gait parameters (P < 0.001): speed, cadence, right and left step lengths, and stride length. The angular parameters of the knee and ankle also differed between groups (P < 0.001). At the knee joint, obese women had delayed onset of the second wave of flexion, exaggerating the movement to compensate. At the ankle, both groups showed normal plantar flexion and dorsiflexion curves, but there was a delay in the ankle trajectory of obese women, indicating a reduced range of motion and possible simultaneous over-exertion of the pretibial and soleus muscles. Conclusion The results of this study revealed that obesity negatively influences the kinematic parameters of gait in young women. PMID:23837005
Wang, Ling; Xian, Jiechen; Hong, Yanlong; Lin, Xiao; Feng, Yi
2012-05-01
This study aimed to quantify the physical characteristics of sticks of traditional Chinese medicine (TCM) honeyed pills prepared by the plastic molded method, and to correlate the adhesiveness- and plasticity-related parameters of the sticks with pill quality, in order to identify the major parameters impacting pill quality and their appropriate ranges. Sticks were tested with a texture analyzer for physical characteristic parameters such as hardness and compression behavior, and pill quality was assessed by visual evaluation. The correlation between the two data sets was determined by stepwise discriminant analysis. The stick parameter l(CD) accurately depicts adhesiveness, with the discriminant equation Y0 - Y1 = 6.415 - 41.594l(CD): when Y0 < Y1, pills separated well; when Y0 > Y1, pills adhered to each other. The stick parameters l(AC), Ar, and Tr accurately depict pill smoothness, with the discriminant equation Z0 - Z1 = -195.318 + 78.79l(AC) - 3258.982Ar + 3437.935Tr: when Z0 < Z1, pill surfaces were smooth; when Z0 > Z1, pill surfaces were rough. The stepwise discriminant analysis thus reveals a clear correlation between the key physical characteristic parameters l(CD), l(AC), Ar, and Tr of the sticks and the appearance quality of the pills, defining the molding process for preparing pills by the plastic molded method and qualifying ranges of the key physical characteristic parameters of the intermediate sticks, so as to provide a theoretical basis for prescription screening and technical parameter adjustment.
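The two discriminant rules can be written directly as executable checks (coefficients from the abstract; l(CD), l(AC), Ar and Tr are the texture-analyzer stick parameters, and the example inputs below are made-up values, not measurements from the paper):

```python
def pills_scatter_well(l_cd):
    """Y0 - Y1 = 6.415 - 41.594*l_CD; a negative value (Y0 < Y1)
    predicts well-separated, non-adhesive pills."""
    return 6.415 - 41.594 * l_cd < 0

def pills_smooth(l_ac, ar, tr):
    """Z0 - Z1 = -195.318 + 78.79*l_AC - 3258.982*Ar + 3437.935*Tr;
    a negative value (Z0 < Z1) predicts a smooth pill surface."""
    return -195.318 + 78.79 * l_ac - 3258.982 * ar + 3437.935 * tr < 0

print(pills_scatter_well(0.20))       # l_CD above 6.415/41.594 ≈ 0.154
print(pills_smooth(2.0, 0.05, 0.01))  # hypothetical stick measurements
```

The first rule reduces to a single cut-off on l(CD) at about 0.154; the second is a weighted combination of three parameters, so no single cut-off exists.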
NASA Astrophysics Data System (ADS)
Englman, R.
2016-08-01
The recent phase shift data of Takada et al. (Phys. Rev. Lett. 113 (2014) 126601) for a two-level system are reconstructed from their current intensity curves by the method of Hilbert transform, whose underlying physics is the principle of causality. An introductory algebraic model illustrates pedagogically the working of the method and leads to newly derived relationships involving phenomenological parameters, in particular for the sign of the phase slope between the resonance peaks. While parametrizing the experimental current intensity data in terms of a few model parameters yields only qualitative agreement for the phase shift, owing to the strong impact of small, detailed variations in the experimental intensity curve on the phase behavior, the numerical Hilbert transform yields a satisfactory reproduction of the phase.
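The causality argument behind such reconstructions is the Kramers-Kronig (Hilbert-pair) relation between the log-magnitude and the phase of a minimum-phase response. A toy sketch of the numerical procedure (the filter below stands in for the measured intensity curve; it is not the paper's data):

```python
import numpy as np
from scipy.signal import hilbert

# For a minimum-phase response, phase = -Hilbert{ln |H|}. Build a simple
# minimum-phase frequency response, discard its phase, and recover it from
# the log-magnitude alone via the discrete (FFT-based) Hilbert transform.
N = 1024
w = 2 * np.pi * np.arange(N) / N
H = 1 - 0.5 * np.exp(-1j * w)          # minimum-phase: zero inside unit circle

phase_true = np.angle(H)
phase_rec = -np.imag(hilbert(np.log(np.abs(H))))  # causality-based estimate

print(f"max reconstruction error: {np.max(np.abs(phase_rec - phase_true)):.2e} rad")
```

The same mechanism explains the sensitivity noted in the abstract: the Hilbert transform is a global integral of the log-magnitude, so small local changes in the intensity curve shift the reconstructed phase everywhere.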
NASA Astrophysics Data System (ADS)
Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.
2014-05-01
Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology, or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters that needs to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact the sensitivity and specificity of Raman cytology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Qsct), absorption efficiency (Qabs), and scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of the mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). 
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
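As a rough illustration of why geometrical-optics-style closed forms can track the average Mie behaviour, van de Hulst's anomalous diffraction approximation (a standard textbook approximation, not the paper's GODM) gives the extinction efficiency of a large non-absorbing sphere in a few lines; it oscillates about and converges to the asymptotic value of 2:

```python
import numpy as np

def q_ext_adt(radius_um, wavelength_um, n_rel):
    """van de Hulst anomalous-diffraction extinction efficiency for a
    non-absorbing sphere with relative refractive index n_rel."""
    x = 2 * np.pi * radius_um / wavelength_um        # size parameter
    rho = 2 * x * (n_rel - 1)                        # phase-shift parameter
    return 2 - (4 / rho) * np.sin(rho) + (4 / rho ** 2) * (1 - np.cos(rho))

# A 5 um grain at a 0.55 um (green) emission wavelength is already close to
# the large-particle limit Q_ext -> 2; the n_rel value here is assumed.
print(q_ext_adt(5.0, 0.55, 1.5))
```

Averaging such a curve over a grain-size distribution washes out the oscillations, which mirrors the paper's conclusion that only the average trend of the Mie fine structure matters for realistic screens.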
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2011-07-01
A method for searching for NEO impact orbits based on minimizing the product of two target functions is presented. These functions are the square of the asteroid-Earth distance at the moment of close approach and the sum of the squares of the angular residuals. In addition, the method includes minimizing the square of the asteroid-Earth distance as a function of time alone while the initial motion parameters are held fixed. The two minimizations are carried out alternately. The method was tested on the problem of searching for Apophis impact orbits, and the results demonstrated its effectiveness in finding impact orbits for the Apophis Earth encounters in 2036 and 2037.
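The product-of-target-functions idea can be sketched generically (toy 2-D quadratics stand in for the orbital-dynamics quantities; this is not an ephemeris computation): minimizing f1·f2 drives the solution toward a point where the close-approach distance vanishes while the fit to the observations stays acceptable.

```python
import numpy as np
from scipy.optimize import minimize

def f_distance_sq(p):    # stand-in for squared asteroid-Earth distance
    return (p[0] - 1.0) ** 2 + p[1] ** 2

def f_residuals_sq(p):   # stand-in for the sum of squared angular residuals
    return (p[0] - 0.9) ** 2 + (p[1] - 0.1) ** 2

# Minimize the product: the optimizer settles near a zero of one factor
# while the other factor keeps it close to the observation-consistent region.
res = minimize(lambda p: f_distance_sq(p) * f_residuals_sq(p), x0=[0.0, 0.0])
print(res.x, res.fun)
```

In the actual method each factor is expensive (an orbit propagation and an observation fit), which is why the paper alternates the two minimizations rather than folding everything into one call.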
NASA Astrophysics Data System (ADS)
Hina, A.
2017-12-01
Although Thar coal is recognized as one of the most abundant fossil-fuel resources available to combat Pakistan's energy crisis, there remains a challenge in tackling the associated environmental and socio-ecological changes and their linkage to the provision of ecosystem services in the region. The study highlights the importance of undertaking ecosystem service assessment in all strategic environmental and social assessments of Thar coalfield projects. A three-step approach was formulated to link project impacts to the provision of important ecosystem services: 1) identification of impact indicators and parameters by analyzing the environmental and social impacts of surface mining in the Thar coalfield through field investigation, literature review and stakeholder consultations; 2) ranking of parameters and criteria alternatives using a Multi-Criteria Decision Analysis (MCDA) tool, the AHP method; 3) use of the ranked parameters as a proxy to prioritize important ecosystem services of the region. The ecosystem services prioritized because of both the high significance of project impacts and high project dependence are as follows. Water is a key ecosystem service to be addressed and valued, owing to the area's high dependence on it for livestock, human wellbeing, agriculture and other purposes. Crop production related to agricultural services needs to be valued, in association with supporting services such as soil quality, fertility, nutrient recycling and water retention. Cultural services affected by land use change and by resettlement and rehabilitation factors are recommended to be addressed. The results of the analysis outline a framework for identifying these linkages as key constraints to foster the emergence of green growth and development in Pakistan. 
The practicality of implementing these assessments requires policy instruments and strategies that support human well-being and social inclusion while minimizing environmental degradation and loss of ecosystem services. Keywords: Ecosystem service assessment; Environmental and Social Impact Assessment; coal mining; Thar Coal Field; Sustainable development
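The AHP ranking in step 2 derives priority weights from a pairwise-comparison matrix. A minimal sketch using the geometric-mean approximation of the principal eigenvector (the comparison values below are hypothetical judgements, not the study's elicited data):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    via the row geometric-mean approximation of the principal eigenvector."""
    a = np.asarray(pairwise, float)
    gm = np.prod(a, axis=1) ** (1.0 / a.shape[1])   # row geometric means
    return gm / gm.sum()

# Hypothetical judgements: water vs. crop production vs. cultural services.
m = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(m)
print(w)   # water receives the largest weight
```

A full AHP analysis would also compute the consistency ratio of the matrix before accepting the weights.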
Full-Scale Passive Earth Entry Vehicle Landing Tests: Methods and Measurements
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Kellas, Sotiris
2018-01-01
During the summer of 2016, a series of drop tests was conducted on two passive earth entry vehicle (EEV) test articles at the Utah Test and Training Range (UTTR). The tests were conducted to evaluate the structural integrity of a realistic EEV under anticipated landing loads. The test vehicles were lifted to an altitude of approximately 400 m by helicopter and dropped via a release hook into a predesignated 61 m landing zone. Onboard accelerometers measured vehicle free flight and impact loads. High-speed cameras on the ground tracked the free-falling vehicles, and the data were used to calculate critical impact parameters during the final seconds of flight. Additional sets of high-definition and ultra-high-definition cameras supplemented the high-speed data by capturing the release and free flight of the test articles. Three tests were successfully completed and showed that the passive vehicle design was able to withstand the impact loads from nominal and off-nominal impacts at landing velocities of approximately 29 m/s. Two of the three tests resulted in off-nominal impacts due to a combination of high winds at altitude and the method used to suspend the vehicle from the helicopter. Both the video and acceleration data captured are examined and discussed. Finally, recommendations for improved release and instrumentation methods are presented.
A comparison of two methods for expert elicitation in health technology assessments.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2016-07-26
When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other method. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; (iii) to collect subjective preferences of the experts for the different elicitation methods used. Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods - the histogram and hybrid elicitation methods - presented in a random order. Individual distributions were mathematically aggregated across experts with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model and mean incremental cost-effectiveness ratios and the expected values of perfect information (EVPI) were calculated for each method, and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods and the extent to which the probability distributions obtained from each method accurately reflected the expert's opinion were also recorded. Six experts completed the task. Mean ICERs from the probabilistic analysis ranged between £162,600-£175,500 per quality-adjusted life year (QALY) depending on the elicitation and weighting methods used. 
Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI value at the £30,000 per QALY threshold decreased by 74-86% from the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but perceived the hybrid method as more accurate. Inclusion of expert elicitation can decrease decision uncertainty. Here, the choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have.
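The mathematical aggregation step can be sketched with a linear opinion pool, a standard way to combine elicited distributions (the paper's exact pooling scheme and weights are not restated here, and the histograms below are invented):

```python
import numpy as np

def pool(histograms, weights=None):
    """Weighted average of expert histograms defined over the same bins
    (a linear opinion pool), renormalized to sum to 1."""
    h = np.asarray(histograms, float)
    w = (np.full(len(h), 1 / len(h)) if weights is None
         else np.asarray(weights, float))
    pooled = w @ h
    return pooled / pooled.sum()

# Two experts' elicited probabilities over three parameter bins.
experts = [[0.1, 0.6, 0.3],
           [0.3, 0.5, 0.2]]
print(pool(experts))   # equal-weight pooled distribution
```

Weighted pooling (e.g. by calibration scores) changes only the `weights` argument, which is one way the paper's "with and without weighting" comparison can be realized.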
Impact parameter sensitive study of inner-shell atomic processes in the experimental storage ring
NASA Astrophysics Data System (ADS)
Gumberidze, A.; Kozhuharov, C.; Zhang, R. T.; Trotsenko, S.; Kozhedub, Y. S.; DuBois, R. D.; Beyer, H. F.; Blumenhagen, K.-H.; Brandau, C.; Bräuning-Demian, A.; Chen, W.; Forstner, O.; Gao, B.; Gassner, T.; Grisenti, R. E.; Hagmann, S.; Hillenbrand, P.-M.; Indelicato, P.; Kumar, A.; Lestinsky, M.; Litvinov, Yu. A.; Petridis, N.; Schury, D.; Spillmann, U.; Trageser, C.; Trassinelli, M.; Tu, X.; Stöhlker, Th.
2017-10-01
In this work, we present a pilot experiment at the experimental storage ring (ESR) at GSI devoted to impact-parameter-sensitive studies of inner-shell atomic processes in low-energy (heavy-)ion-atom collisions. The experiment was performed with bare and He-like xenon ions (Xe54+, Xe52+) colliding with neutral xenon gas atoms, resulting in a symmetric collision system. This choice of projectile charge states was made in order to compare the effect of a filled K-shell with that of an empty one. The projectile and target X-rays were measured at different observation angles, both for all impact parameters and for the impact-parameter range of ∼35-70 fm.
Video analysis of the biomechanics of a bicycle accident resulting in significant facial fractures.
Syed, Shameer H; Willing, Ryan; Jenkyn, Thomas R; Yazdani, Arjang
2013-11-01
This study aimed to use video analysis techniques to determine the velocity, impact force, angle of impact, and impulse to fracture involved in a video-recorded bicycle accident resulting in facial fractures. Computed tomographic images of the resulting facial injury are presented for correlation with the data and calculations. To our knowledge, such an analysis of an actual recorded trauma has not been reported in the literature. A video recording of the accident was split into frames and analyzed using an image editing program. Measurements of velocity and angle of impact were obtained from this analysis, and the force of impact and impulse were calculated using the inverse dynamics method with connected rigid body segments. These results were then correlated with the actual fracture pattern found on computed tomographic imaging of the subject's face. There was an impact velocity of 6.25 m/s, impact angles of 14 and 6.3 degrees of neck extension and axial rotation, respectively, an impact force of 1910.4 N, and an impulse to fracture of 47.8 N·s. These physical parameters resulted in clinically significant bilateral mid-facial Le Fort II and III pattern fractures. These data further the understanding of the biomechanics of bicycle-related accidents by correlating an actual clinical outcome with the kinematic and dynamic parameters involved in the accident itself, yielding concrete evidence of the velocity, force, and impulse necessary to cause clinically significant facial trauma. These findings can aid in the design of protective equipment for bicycle riders to help avoid this type of injury.
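A back-of-envelope sketch of the two simplest relations in such an analysis (illustrative only; the paper's inverse-dynamics model with connected rigid body segments is far more detailed, and the 0.25 m / 50 fps frame numbers are assumptions chosen to reproduce the reported speed):

```python
def velocity_from_frames(displacement_m, n_frames, fps):
    """Mean speed (m/s) across n_frames of video at fps frames per second."""
    return displacement_m * fps / n_frames

def mean_force(impulse_ns, contact_s):
    """Mean impact force (N) implied by an impulse over a contact time."""
    return impulse_ns / contact_s

v = velocity_from_frames(0.25, 2, 50)   # 0.25 m over 2 frames at 50 fps
f = mean_force(47.8, 47.8 / 1910.4)     # contact time back-calculated
print(f"v = {v:.2f} m/s, F = {f:.1f} N")
```

Note that the reported impulse and force together imply a contact duration of roughly 25 ms, a useful sanity check on frame-based measurements.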
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barron, Robert W.; McJeon, Haewon C.
2015-05-01
This paper considers the effect of several key parameters of low-carbon energy technologies on the cost of abatement. A methodology for determining the minimum level of performance required for a parameter to have a statistically significant impact on CO2 abatement cost is developed and used to evaluate the impact of eight key parameters of low-carbon energy supply technologies on the cost of CO2 abatement. The capital cost of nuclear technology is found to have the greatest impact of the parameters studied. The costs of biomass and CCS technologies also have impacts, while their efficiencies have little, if any. Sensitivity analysis of the results with respect to population, GDP, and the CO2 emission constraint shows that the minimum performance level and impact of nuclear technologies are consistent across the socioeconomic scenarios studied, while the other technology parameters show different performance under higher-population, lower-GDP scenarios. Solar technology was found to have a small impact, and then only at very low costs. These results indicate that the cost of nuclear is the single most important driver of abatement cost, and that trading efficiency for cost may make biomass and CCS technologies more competitive.
Han, Xiao-Jing; Duan, Si-Bo; Li, Zhao-Liang
2017-02-20
An analysis of the atmospheric impact on ground brightness temperature (Tg) is performed for numerous land surface types at commonly used frequencies (1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz, 23.8 GHz, 36.5 GHz and 89.0 GHz). The results indicate that the atmosphere has a negligible impact on Tg at 1.4 GHz for land surfaces with emissivities greater than 0.7, at 6.93 GHz for land surfaces with emissivities greater than 0.8, and at 10.65 GHz for land surfaces with emissivities greater than 0.9, if a root mean square error (RMSE) of less than 1 K is desired. To remove the atmospheric effect on Tg, a generalized atmospheric correction method is proposed by parameterizing the atmospheric transmittance τ and the upwelling atmospheric brightness temperature Tba↑. Better accuracies, with Tg RMSEs of less than 1 K, are achieved at 1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz and 36.5 GHz, and worse accuracies, with RMSEs of 1.34 K and 4.35 K, are obtained at 23.8 GHz and 89.0 GHz, respectively. Additionally, a simplified atmospheric correction method is developed for cases lacking sufficient input data for the generalized method, and an emissivity-based atmospheric correction method is presented for cases where the emissivity is known. Consequently, an appropriate atmospheric correction method can be selected based on the available data, frequency and required accuracy. Furthermore, this study provides a method to estimate τ and Tba↑ at different frequencies using the atmospheric parameters (total water vapor content in the observation direction Lwv, total cloud liquid water content Lclw and mean cloud temperature Tclw), which is important for simultaneously determining land surface parameters using multi-frequency passive microwave satellite data.
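The correction that τ and Tba↑ enable can be sketched with a one-layer radiative-transfer form (symbols follow the abstract; the simple linear model below is an assumption for illustration, not the paper's full scheme):

```python
def ground_tb(tb_toa, tau, tba_up):
    """Invert the one-layer model Tb_toa = tau * Tg + Tba_up to recover
    the ground brightness temperature Tg (K)."""
    return (tb_toa - tba_up) / tau

# Synthetic scene: tau = 0.95, Tg = 270 K, Tba_up = 8 K (made-up values).
tb_toa = 0.95 * 270.0 + 8.0
print(ground_tb(tb_toa, 0.95, 8.0))   # recovers 270.0 K
```

The division by τ also shows why low-transmittance channels (e.g. near the 22 GHz water-vapor line) amplify errors in the estimated atmospheric terms, consistent with the larger RMSEs reported at 23.8 GHz and 89.0 GHz.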
NASA Astrophysics Data System (ADS)
Cretcher, C. K.
1980-11-01
The various types of solar domestic hot water systems are discussed, including their advantages and disadvantages. The problems that occur in hydronic solar heating systems are reviewed with emphasis on domestic hot water applications. System problems in retrofitting residential buildings are also discussed, including structural and space constraints for various components and subsystems. System design parameters include various collector sizing methods, collector orientation, storage capacity, and heat loss from pipes and tanks. The installation costs are broken down by components and subsystems. The approach used for the utility economic impact analysis is reviewed. The simulation is described, and the results of the economic impact analysis are given. A summary assessment is included.
Measurement Techniques for Hypervelocity Impact Test Fragments
NASA Technical Reports Server (NTRS)
Hill, Nicole E.
2008-01-01
The ability to classify the size and shape of individual orbital debris fragments provides a better understanding of the orbital debris environment as a whole. The characterization of breakup fragmentation debris has gradually evolved from a simplistic spherical assumption towards describing debris in terms of size, material, and shape parameters. One of the goals of the NASA Orbital Debris Program Office is to develop high-accuracy techniques to measure these parameters and apply them to orbital debris observations. Measurement of the physical characteristics of debris resulting from ground-based, hypervelocity impact testing provides insight into the shapes and sizes of debris produced by potential impacts in orbit. Current techniques for measuring these ground-test fragments require dimensions to be determined by visual judgment, which reduces accuracy and provides little or no repeatability. With the goals of mitigating these error sources, allaying any misunderstandings, and moving forward in fragment shape determination, the NASA Orbital Debris Program Office recently began using a computerized measurement system. The goal of these new techniques is to improve knowledge of the relation between commonly used dimensions and overall shape. The immediate objective is to scan a single fragment, measure its size and shape properties, and import the fragment into a program that renders a 3D model adequately demonstrating how the object could appear in orbit. This information would then be used to aid optical methods in orbital debris shape determination. This paper describes the measurement techniques used in this initiative and shows the results of this work. The tradeoffs of the computerized methods are discussed, as well as the means of achieving repeatability in the measurements of these fragments. 
This paper serves as a general description of methods for the measurement and shape analysis of orbital debris.
[Automobile versus pedestrian accidents analysis by fixed-parameters computer simulation].
Mao, Ming-Yuan; Chen, Yi-Jiu; Liu, Ning-Guo; Zou, Dong-Hua; Liu, Jun-Yong; Jin, Xian-Long
2008-04-01
This study used computer simulation to analyze the effects of speed, type of automobile, and impact position on the crash course and injuries of pedestrians in automobile versus pedestrian accidents. Automobile (bus, minibus, car and truck) and pedestrian models were constructed using a multi-body dynamics computing method. Crashes were simulated at different impact speeds (20, 30, 40, 50 and 60 km/h) and different positions (front, lateral and rear of the pedestrian). Crash courses and the pedestrians' biomechanical responses were studied. When the type of automobile and the impact position were the same, the crash courses were similar (impact speed ≤ 60 km/h). Characteristic patterns appeared in the head acceleration, upper neck axial force and leg axial force. Multi-body dynamics computer simulation of crashes can be applied to analyze the crash course and the head, neck and leg injuries of pedestrians.
Tschauner, Christian; Fürntrath, Frank; Saba, Yasaman; Berghold, Andrea; Radl, Roman
2011-12-01
PURPOSE/BACKGROUND/INTRODUCTION: The aim of this study was to retrospectively evaluate the impact of neonatal sonographic hip screening using Graf's method on the management and outcome of orthopaedic treatment of decentered hip joints with developmental dysplasia of the hip (DDH), using three decades (1978-2007) of clinical information compiled in a medical database. Three representative cohorts of consecutive cases of decentered hip joints were selected according to different search criteria and inclusion and exclusion parameters: (1) cohort 1 (1978-1982; n = 80), without sonographic screening; (2) cohort 2.1 (1994-1996; n = 91), with nationwide general sonographic screening according to the Graf method; (3) cohort 2.2 (2003-2005; n = 91), with sonographic screening including cases referred for open reduction from non-screened populations. These three cohorts were compared on the following parameters: age at initial treatment, successful closed reduction, necessary overhead traction, necessary adductor tenotomy, rate of open reduction, rate of avascular necrosis (AVN) and rate of secondary acetabuloplasty. The age at initial treatment fell from 5.5 months in the first cohort to 2 months in the two subsequent cohorts, and the rate of successful closed reduction increased from 88.7 to 98.9 and 95.6%, respectively. There was a statistically significant improvement in six out of seven parameters with sonographic hip screening; only the rate of secondary acetabuloplasty did not improve significantly. Compared to the era before the institution of a sonographic hip screening programme according to the Graf method in Austria in 1992, ultrasound-screening-based treatment of decentered hip joints has become safer, shorter and simpler: "safer" means a lower rate of AVN, "shorter" means less treatment time due to earlier onset, and "simpler" means that the devices are now less invasive and highly standardized.
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities rely on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs; these were performed and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
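One widely used "mathematical comparison of outflow hydrographs" is the Nash-Sutcliffe efficiency (NSE), sketched below (the study's exact goodness-of-fit metrics are not specified in the abstract, so this is an illustrative stand-in):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect match; 0.0 means the
    simulation is no better than predicting the observed mean."""
    o = np.asarray(observed, float)
    s = np.asarray(simulated, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

# Toy outflow hydrograph (m^3/s); comparing a series with itself gives 1.0.
obs = [0.0, 0.4, 1.2, 0.9, 0.3, 0.1]
print(f"NSE vs itself: {nse(obs, obs):.2f}")
```

Calibration tools such as PEST++ typically drive parameters toward minimizing exactly this kind of residual-sum objective across multiple events.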
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters, the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second parameter is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
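As a toy illustration of the valuation scheme, the sketch below replaces full RDF graphs and graph edit distance with plain sets of (subject, predicate, object) triples: impact is the fraction of target triples that would change the context, and trust is the overlap between the two triple sets. The function name, the Jaccard overlap, and the equal default weighting are assumptions for illustration, not the authors' formulation:

```python
def information_value(context, target, alpha=0.5):
    """Toy subjective information value over RDF-style triple sets.

    impact: share of target triples new to the context (graph change),
    trust:  Jaccard overlap between target and context triples.
    alpha blends the two terms (0.5 weights them equally).
    """
    context, target = set(context), set(target)
    union = context | target
    impact = len(target - context) / max(len(target), 1)
    trust = len(target & context) / max(len(union), 1)
    return alpha * impact + (1.0 - alpha) * trust
```

A re-ranking component could score each candidate document's target triples against the consumer's context graph with a function like this.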
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of the uncertainties in the following set-up parameters: the angle of incidence, the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
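Non-intrusive stochastic collocation evaluates the deterministic solver only at quadrature nodes of the uncertain parameter and recombines the outputs into statistical moments. A minimal sketch for a single uniformly distributed set-up parameter (say, the angle of incidence), with a cheap analytic function standing in for a URANS run, could look like the following; the helper name and the uniform-PDF assumption are illustrative:

```python
import numpy as np

def collocation_moments(response, a, b, n=5):
    """Mean and variance of response(x) for x ~ Uniform(a, b),
    via n-point Gauss-Legendre collocation (no random sampling).
    Each call to response() would be one deterministic solver run."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)  # map [-1, 1] -> [a, b]
    w = 0.5 * weights                          # weights for the uniform PDF
    vals = np.array([response(xi) for xi in x])
    mean = float(np.sum(w * vals))
    var = float(np.sum(w * vals ** 2) - mean ** 2)
    return mean, var
```

With n solver runs, the quadrature is exact for response functions that are polynomials of degree up to 2n-1 in the uncertain parameter.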
ERIC Educational Resources Information Center
Golino, Hudson F.; Gomes, Cristiano M. A.
2016-01-01
This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
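A minimal sketch of random-forest imputation of a single column, exposing the two tuning parameters the abstract mentions (the number of trees grown and the number of predictors tried per split). The helper name is hypothetical, and scikit-learn's RandomForestRegressor stands in for whatever implementation the study used:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_impute(X, col, n_trees=100, max_features="sqrt"):
    """Impute NaNs in column `col` of X: fit a forest on rows where the
    column is observed, using the other columns as predictors, then
    predict the missing entries. Returns a filled copy of X."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X[:, col])
    other = [j for j in range(X.shape[1]) if j != col]
    rf = RandomForestRegressor(n_estimators=n_trees,
                               max_features=max_features,
                               random_state=0)
    rf.fit(X[~miss][:, other], X[~miss, col])
    X[miss, col] = rf.predict(X[miss][:, other])
    return X
```

In practice this is iterated over all incomplete columns until the imputations stabilize; the single-column version above only shows the core step.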
Design for inadvertent damage in composite laminates
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.; Chamis, Christos C.
1992-01-01
Simplified predictive methods and models to computationally simulate durability and damage in polymer matrix composite materials/structures are described. The models include (1) progressive fracture, (2) progressively damaged structural behavior, (3) progressive fracture in aggressive environments, (4) stress concentrations, and (5) impact resistance. Several examples are included to illustrate applications of the models and to identify significant parameters and sensitivities. Comparisons with limited experimental data are made.
NASA Technical Reports Server (NTRS)
Green, S.; Cochrane, D. L.; Truhlar, D. G.
1986-01-01
The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants, when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that when the critical impact parameter is about 1.75-2.0 Å, the ECS method yields excellent excited state rates on average and has an rms error of less than 20 percent.
Exploring the dynamics of balance data — movement variability in terms of drift and diffusion
NASA Astrophysics Data System (ADS)
Gottschall, Julia; Peinke, Joachim; Lippens, Volker; Nagel, Volker
2009-02-01
We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis gives not only self-consistent results but also provides a plausible model for the reconstruction of balance dynamics.
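Direct estimation of the drift and diffusion coefficients from measured data typically proceeds by binning the state space and computing conditional moments of the increments (the first two Kramers-Moyal coefficients). The sketch below shows that idea under simple assumptions (uniform bins, a minimum-count cutoff); it is not the authors' exact estimator or parametrization:

```python
import numpy as np

def drift_diffusion(x, dt, nbins=20, min_count=10):
    """Estimate Langevin drift D1(x) and diffusion D2(x) from a
    time series sampled at interval dt, via binned conditional
    moments of the increments. Bins with too few visits stay NaN."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, nbins - 1)
    D1 = np.full(nbins, np.nan)
    D2 = np.full(nbins, np.nan)
    for k in range(nbins):
        m = idx == k
        if m.sum() > min_count:
            D1[k] = dx[m].mean() / dt            # first KM coefficient
            D2[k] = (dx[m] ** 2).mean() / (2 * dt)  # second KM coefficient
    return centers, D1, D2
```

Fitting a parametrization to the binned D1 and D2 estimates (for instance a linear drift) then yields the governing parameters used to compare balance performance across conditions.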
Advancements in remote physiological measurement and applications in human-computer interaction
NASA Astrophysics Data System (ADS)
McDuff, Daniel
2017-04-01
Physiological signals are important for tracking health and emotional states. Imaging photoplethysmography (iPPG) is a set of techniques for remotely recovering cardio-pulmonary signals from video of the human body. Advances in iPPG methods over the past decade, combined with the ubiquity of digital cameras, present the possibility of many new, low-cost applications of physiological monitoring. This talk will highlight methods for recovering physiological signals, work characterizing the impact of video parameters and hardware on these measurements, and applications of this technology in human-computer interfaces.
Molecular structures and intramolecular dynamics of pentahalides
NASA Astrophysics Data System (ADS)
Ischenko, A. A.
2017-03-01
This paper reviews advances of the modern gas electron diffraction (GED) method, combined with high-resolution spectroscopy and quantum chemical calculations, in studies of the impact of intramolecular dynamics in free molecules of pentahalides. Some recently developed approaches to the interpretation of electron diffraction data, based on direct incorporation of the adiabatic potential energy surface parameters into the diffraction intensity, are described. In this way, complementary data from different experimental and computational methods can be directly combined for solving problems of molecular structure and dynamics. The possibility of evaluating some important parameters of the adiabatic potential energy surface (barriers to pseudorotation and the saddle point of the intermediate configuration) from diffraction intensities in solving the inverse GED problem is demonstrated on several examples. With increasing accuracy of the electron diffraction intensities and the development of the theoretical background of electron scattering and data interpretation, it has become possible to investigate complex nuclear dynamics in fluxional systems by the GED method. Results of other research groups are also included in the discussion.
Impact erosion model for gravity-dominated planetesimals
NASA Astrophysics Data System (ADS)
Genda, Hidenori; Fujita, Tomoaki; Kobayashi, Hiroshi; Tanaka, Hidekazu; Suetsugu, Ryo; Abe, Yutaka
2017-09-01
Disruptive collisions have been regarded as an important process for planet formation, while non-disruptive, small-scale collisions (hereafter called erosive collisions) have been underestimated or neglected by many studies. However, recent studies have suggested that erosive collisions are also important to the growth of planets, because they are much more frequent than disruptive collisions. Although the thresholds of the specific impact energy for disruptive collisions (QRD*) have been investigated well, there is no reliable model for erosive collisions. In this study, we systematically carried out impact simulations of gravity-dominated planetesimals for a wide range of specific impact energy (QR) from disruptive collisions (QR ∼ QRD*) to erosive ones (QR << QRD*) using the smoothed particle hydrodynamics method. We found that the ejected mass normalized by the total mass (Mej/Mtot) depends on the numerical resolution, the target radius (Rtar) and the impact velocity (vimp), as well as on QR, but that it can be nicely scaled by QRD* for the parameter ranges investigated (Rtar = 30-300 km, vimp = 2-5 km/s). This means that Mej/Mtot depends only on QR/QRD* in these parameter ranges. We confirmed that the collision outcomes for much less erosive collisions (QR < 0.01 QRD*) converge to the results of an impact onto a planar target for various impact angles (θ) and that Mej/Mtot ∝ QR/QRD* holds. For disruptive collisions (QR ∼ QRD*), the curvature of the target has a significant effect on Mej/Mtot. We also examined the angle-averaged value of Mej/Mtot and found that the numerically obtained relation between angle-averaged Mej/Mtot and QR/QRD* is very similar to the cases for θ = 45° impacts. We proposed a new erosion model based on our numerical simulations for future research on planet formation with collisional erosion.
Irons, Trevor P.; Hobza, Christopher M.; Steele, Gregory V.; Abraham, Jared D.; Cannia, James C.; Woodward, Duane D.
2012-01-01
Surface nuclear magnetic resonance, a noninvasive geophysical method, measures a signal directly related to the amount of water in the subsurface. This allows for low-cost quantitative estimates of hydraulic parameters. In practice, however, additional factors influence the signal, complicating interpretation. The U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, evaluated whether hydraulic parameters derived from surface nuclear magnetic resonance data could provide valuable input into groundwater models used for evaluating water-management practices. Two calibration sites in Dawson County, Nebraska, were chosen based on previous detailed hydrogeologic and geophysical investigations. At both sites, surface nuclear magnetic resonance data were collected, and derived parameters were compared with results from four constant-discharge aquifer tests previously conducted at those same sites. Additionally, borehole electromagnetic-induction flowmeter data were analyzed as a less-expensive surrogate for traditional aquifer tests. Building on recent work, a novel surface nuclear magnetic resonance modeling and inversion method was developed that incorporates electrical conductivity and effects due to magnetic-field inhomogeneities, both of which can have a substantial impact on the data. After comparing surface nuclear magnetic resonance inversions at the two calibration sites, the nuclear magnetic-resonance-derived parameters were compared with previously performed aquifer tests in the Central Platte Natural Resources District. This comparison served as a blind test for the developed method. The nuclear magnetic-resonance-derived aquifer parameters were in agreement with results of aquifer tests where the environmental noise allowed data collection and the aquifer test zones overlapped with the surface nuclear magnetic resonance testing. 
In some cases, the previously performed aquifer tests were not designed to fully characterize the aquifer, and the surface nuclear magnetic resonance was able to provide the missing data. In favorable locations, surface nuclear magnetic resonance is able to provide valuable noninvasive information about aquifer parameters and should be a useful tool for groundwater managers in Nebraska.
Xue, Lianqing; Yang, Fan; Yang, Changbing; Chen, Xinfang; Zhang, Luochen; Chi, Yixia; Yang, Guang
2017-08-15
Understanding the contributions of climate change and human activities to changes in streamflow is important for sustainable management of water resources in an arid area. This study presents a quantitative analysis of climatic and anthropogenic factors in streamflow alteration in the Tarim River Basin (TRB) using the double mass curve method (DMC) and the Budyko methods. Based on trend analysis, the time series (1960~2015) is divided into three periods: the prior impacted period (1960~1972) and two post impacted periods (1973~1986 and 1987~2015). Our results suggest that human activities played a dominant role in the reduction of streamflow in the TRB, with contributions of 144.6% to 120.68% during the post impacted period I and 228.68% to 140.38% during the post impacted period II. Climatic variables accounted for 20.68%~44.6% of the decrease during the post impacted period I and 40.38%~128.68% during the post impacted period II. Sensitivity analysis indicates that the streamflow alteration was most sensitive to changes in landscape parameters. The aridity index and all the elasticities showed an obvious increasing trend from the upstream to the downstream in the TRB. Our study suggests that it is important to take effective measures for sustainable development of the eco-hydrological and socio-economic systems in the TRB.
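A simplified version of this kind of attribution can be sketched as follows: a P-Q relation fitted to the prior (largely undisturbed) period predicts what post-period runoff would have been under climate alone, and the residual change is attributed to human activities. This also shows how individual contributions can exceed 100% when the two drivers act in opposite directions. The linear annual P-Q fit here is a deliberate simplification of the DMC and Budyko analyses, for illustration only:

```python
import numpy as np

def attribute_streamflow_change(p_prior, q_prior, p_post, q_post):
    """Split the change in mean streamflow between two periods into
    climate and human contributions (percent of the total change).
    p_*: annual precipitation series, q_*: annual runoff series."""
    slope, intercept = np.polyfit(p_prior, q_prior, 1)  # prior P-Q relation
    dq_total = np.mean(q_post) - np.mean(q_prior)
    # Runoff change expected from the post-period climate alone:
    dq_climate = (slope * np.mean(p_post) + intercept) - np.mean(q_prior)
    dq_human = dq_total - dq_climate
    return 100.0 * dq_climate / dq_total, 100.0 * dq_human / dq_total
```

The two percentages sum to 100 by construction; when climate alone would have increased streamflow while the observed total decreased, the human share exceeds 100% and the climate share is negative.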
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. 
However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as a MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
Pecha, Petr; Šmídl, Václav
2016-11-01
A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of the sensitivity analysis. The effect of the correlation strength among the input variables on the sensitivity analysis is also assessed.
Milosevic, Igor; Naunovic, Zorana
2013-10-01
This article presents a process of evaluation and selection of the most favourable location for a sanitary landfill facility from three alternative locations, by applying a multi-criteria decision-making (MCDM) method. An incorrect choice of location for a landfill facility can have a significant negative economic and environmental impact, such as the pollution of air, ground and surface waters. The aim of this article is to present several improvements in the practical process of landfill site selection using the VIKOR MCDM compromise ranking method integrated with a fuzzy analytic hierarchy process approach for determining the evaluation criteria weighting coefficients. The VIKOR method focuses on ranking and selecting from a set of alternatives in the presence of conflicting and non-commensurable (different units) criteria, and on proposing a compromise solution that is closest to the ideal solution. The work shows that valuable site ranking lists can be obtained using the VIKOR method, which is a suitable choice when there is a large number of relevant input parameters.
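The core VIKOR ranking computation is compact enough to sketch: each alternative's weighted, normalized distance to the ideal solution is aggregated into a group utility S and an individual regret R, which are blended into the compromise index Q (smaller is better). The weights would come from the fuzzy AHP step, which is not reproduced here, and this sketch omits VIKOR's acceptable-advantage and acceptable-stability checks on the final ranking:

```python
import numpy as np

def vikor_rank(X, weights, benefit, v=0.5):
    """VIKOR compromise index Q for each alternative (row of X).
    weights: criteria weights summing to 1; benefit: True where larger
    is better; v balances group utility vs individual regret."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))   # ideal
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))  # anti-ideal
    d = w * (best - X) / (best - worst)  # weighted normalized distances
    S = d.sum(axis=1)                    # group utility
    R = d.max(axis=1)                    # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q
```

Sorting the alternatives by ascending Q (and cross-checking the S and R rankings) yields the compromise ranking list.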
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Spatial Analysis of Traffic and Routing Path Methods for Tsunami Evacuation
NASA Astrophysics Data System (ADS)
Fakhrurrozi, A.; Sari, A. M.
2018-02-01
Tsunami disasters unfold very quickly and therefore have a large-scale impact on both material and non-material aspects of life. Community evacuation can cause mass panic, crowds, and traffic congestion. Further research in spatially based modelling, traffic engineering, and split-zone evacuation simulation is therefore crucial to reduce losses. This paper reviews previous research on the topic. The complex parameters involved include route selection, destination selection, the timing of both departure from the source and arrival at the destination, and other outcome parameters of the various methods. The discussion emphasizes the simulation processes and results, traffic modelling, and routing analyses that come closest to the real conditions of a tsunami evacuation. One method worth highlighting is the Clearance Time Estimate based on Location Priority, whose computational results are superior to others despite several drawbacks. The study is expected to inform the improvement and invention of new methods to be used in decision support systems for tsunami disaster risk reduction.
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
Ghasemi, Fahimeh; Fassihi, Afshin; Pérez-Sánchez, Horacio; Mehri Dehnavi, Alireza
2017-02-05
Thousands of molecules and descriptors are available to a medicinal chemist thanks to the technological advancements in different branches of chemistry. This fact, as well as the correlation between descriptors, has raised new problems in quantitative structure-activity relationship studies. Proper parameter initialization in statistical modeling has emerged as another challenge in recent years. Random selection of parameters leads to poor performance of deep neural networks (DNNs). In this research, deep belief networks (DBNs) were applied to initialize DNNs. A DBN is composed of stacked restricted Boltzmann machines, an energy-based method that requires computing the log-likelihood gradient for all samples. Three different sampling approaches were suggested to solve this gradient. In this respect, DBNs based on the three sampling approaches mentioned above were used to initialize the DNN architecture in predicting the biological activity of all fifteen Kaggle targets, which contain more than 70k molecules. As in other fields of data-processing research, the outputs of these models demonstrated significant superiority to those of DNNs with random parameters. © 2016 Wiley Periodicals, Inc.
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ammar Khodja, L'Hady
The rehabilitation and strengthening of concrete structures in shear using composite materials, whether externally bonded (EB) or near-surface mounted rebar (NSMR), are well-established techniques. However, debonding of these strengthening materials still occurs and constitutes the principal cause of shear failure in beams strengthened with composite materials. A new method, called ETS (Embedded Through-Section), was recently developed to avoid premature failures due to debonding of the composite materials. The objective of this study is to highlight the influence of important parameters on the behavior of CFRP bar anchorages subjected to pullout forces. These parameters are: concrete strength, anchorage length of the CFRP bars, hole diameter in the concrete, bar diameter, and CFRP surface type (smooth versus sanded). Understanding the influence of these parameters on the relationship between pullout force and slip is paramount, as it allows an accurate description of the behavior of all elements that contribute to resisting CFRP bar pullout. A series of 25 specimens were subjected to pullout tests. The impact of these parameters on the pullout performance of the CFRP rods is summarized in terms of failure mode, ultimate tensile strength, and load-slip relationship. The results of these investigations show that with the ETS method, anchor failure can be avoided by providing adequate anchorage length and concrete strength. The method provides greater confinement and thus leads to a substantial improvement in anchor performance. As a result, designers will be able to avoid failures due to debonding of anchors, thereby exploiting the full capacity of beams strengthened in shear with EB FRP. Keywords: ETS method, shear, strengthening, anchor, slip, FRP, NSM.
Asymmetric bubble collapse and jetting in generalized Newtonian fluids
NASA Astrophysics Data System (ADS)
Shukla, Ratnesh K.; Freund, Jonathan B.
2017-11-01
The jetting dynamics of a gas bubble near a rigid wall in a non-Newtonian fluid are investigated using an axisymmetric simulation model. The bubble gas is assumed to be homogeneous, with density and pressure related through a polytropic equation of state. An Eulerian numerical description, based on a sharp interface capturing method for the shear-free bubble-liquid interface and an incompressible Navier-Stokes flow solver for generalized fluids, is developed specifically for this problem. Detailed simulations for a range of rheological parameters in the Carreau model show both the stabilizing and destabilizing non-Newtonian effects on the jet formation and impact. In general, for fixed driving pressure ratio, stand-off distance and reference zero-shear-rate viscosity, shear-thinning and shear-thickening promote and suppress jet formation and impact, respectively. For a sufficiently large high-shear-rate limit viscosity, the jet impact is completely suppressed. Thresholds are also determined for the Carreau power-index and material time constant. The dependence of these threshold rheological parameters on the non-dimensional driving pressure ratio and wall stand-off distance is similarly established. Implications for tissue injury in therapeutic ultrasound will be discussed.
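The Carreau model referred to above interpolates between a zero-shear-rate viscosity and a high-shear-rate limit viscosity, with the power index n and the material time constant as the threshold parameters studied. Its standard form is easy to state in code (variable names are generic, not those of the paper):

```python
def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau generalized-Newtonian viscosity:
        mu = mu_inf + (mu0 - mu_inf) * (1 + (lam * gdot)^2)^((n - 1) / 2)
    n < 1: shear-thinning, n > 1: shear-thickening, n = 1: Newtonian.
    mu0 is the zero-shear-rate viscosity, mu_inf the high-shear-rate
    limit, and lam the material time constant."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
```

In a flow solver this law would be evaluated pointwise from the local strain-rate magnitude to give the spatially varying viscosity field.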
Impact parameter determination in experimental analysis using a neural network
NASA Astrophysics Data System (ADS)
Haddad, F.; Hagel, K.; Li, J.; Mdeiwayeh, N.; Natowitz, J. B.; Wada, R.; Xiao, B.; David, C.; Freslier, M.; Aichelin, J.
1997-03-01
A neural network is used to determine the impact parameter in 40Ca+40Ca reactions. The effect of the detection efficiency as well as the model dependence of the training procedure has been studied carefully. An overall improvement of the impact parameter determination of 25% is obtained using this technique. The analysis of Amphora 40Ca+40Ca data at 35 MeV per nucleon using a neural network shows two well-separated classes of events among the selected ``complete'' events.
Qiang, Zhe; Zhang, Yuanzhong; Groff, Jesse A; Cavicchi, Kevin A; Vogt, Bryan D
2014-08-28
One of the key issues associated with the utilization of block copolymer (BCP) thin films in nanoscience and nanotechnology is control of their alignment and orientation over macroscopic dimensions. We have recently reported a method, solvent vapor annealing with soft shear (SVA-SS), for fabricating unidirectional alignment of cylindrical nanostructures. This method is a simple extension of the common SVA process by adhering a flat, crosslinked poly(dimethylsiloxane) (PDMS) pad to the BCP thin film. The impact of processing parameters, including annealing time, solvent removal rate and the physical properties of the PDMS pad, on the quality of alignment quantified by the Herman's orientational factor (S) is systematically examined for a model system of polystyrene-block-polyisoprene-block-polystyrene (SIS). As annealing time increases, the SIS morphology transitions from isotropic rods to highly aligned cylinders. Decreasing the rate of solvent removal, which impacts the shear rate imposed by the contraction of the PDMS, improves the orientation factor of the cylindrical domains; this suggests the nanostructure alignment is primarily induced by contraction of PDMS during solvent removal. Moreover, the physical properties of the PDMS controlled by the crosslink density impact the orientation factor by tuning its swelling extent during SVA-SS and elastic modulus. Decreasing the PDMS crosslink density increases S; this effect appears to be primarily driven by the changes in the solubility of the SVA-SS solvent in the PDMS. With this understanding of the critical processing parameters, SVA-SS has been successfully applied to align a wide variety of BCPs including polystyrene-block-polybutadiene-block-polystyrene (SBS), polystyrene-block-poly(N,N-dimethyl-n-octadecylammonium p-styrenesulfonate) (PS-b-PSS-DMODA), polystyrene-block-polydimethylsiloxane (PS-b-PDMS) and polystyrene-block-poly(2-vinlypyridine) (PS-b-P2VP). 
These results suggest that SVA-SS is a generalizable method for the alignment of BCP thin films.
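The alignment metric used above, Herman's orientation factor S, can be computed directly from an azimuthal intensity profile. A minimal sketch (assuming a uniformly spaced azimuthal grid and the standard sin(φ) orientation weighting; the intensity profiles below are synthetic, not the paper's data):

```python
import numpy as np

def hermans_orientation_factor(phi, intensity):
    """Herman's orientation factor S = (3<cos^2 phi> - 1) / 2 from I(phi).

    Assumes a uniformly spaced phi grid so the grid spacing cancels in the
    ratio. S = 1 for perfect alignment along phi = 0 and S = 0 for an
    isotropic distribution.
    """
    w = intensity * np.sin(phi)                     # orientation-average weight
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)
    return 0.5 * (3.0 * cos2 - 1.0)

phi = np.linspace(1e-4, np.pi / 2, 2000)

# Isotropic rods: flat azimuthal intensity -> S near 0
S_iso = hermans_orientation_factor(phi, np.ones_like(phi))

# Highly aligned cylinders: intensity sharply peaked at phi = 0 -> S near 1
S_aligned = hermans_orientation_factor(phi, np.exp(-(phi / 0.05) ** 2))
```

The same routine applied to azimuthal scans at increasing annealing times would trace the isotropic-to-aligned transition described in the abstract.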
Improving reticle defect disposition via fully automated lithography simulation
NASA Astrophysics Data System (ADS)
Mann, Raunak; Goodman, Eliot; Lao, Keith; Ha, Steven; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan
2016-03-01
Most advanced wafer fabs have embraced complex pattern decoration, which creates numerous challenges during in-fab reticle qualification. These optical proximity correction (OPC) techniques create assist features that tend to be very close in size and shape to the main patterns as seen in Figure 1. A small defect on an assist feature will most likely have little or no impact on the fidelity of the wafer image, whereas the same defect on a main feature could significantly decrease device functionality. In order to properly disposition these defects, reticle inspection technicians need an efficient method that automatically separates main from assist features and predicts the resulting defect impact on the wafer image, such as the Automated Defect Analysis System (ADAS) defect simulation system [1]. Up until now, use of ADAS simulation was limited to engineers due to the complexity of the settings that must be entered manually in order to produce an accurate result. A single error in entering one of these values can cause erroneous results; therefore, full automation is necessary. In this study, we propose a new method in which all needed simulation parameters are automatically loaded into ADAS. This is accomplished in two parts. First, we have created a scanner parameter database that is automatically identified from mask product and level names. Second, we automatically determine the appropriate simulation printability threshold by using a new reference image (provided by the inspection tool) that contains a known measured value of the reticle critical dimension (CD). This new method automatically loads the correct scanner conditions, sets the appropriate simulation threshold, and automatically measures the percentage of CD change caused by the defect. This streamlines qualification and reduces the number of reticles being put on hold, waiting for engineer review.
We also present data showing the consistency and reliability of the new method, along with the impact on the efficiency of in-fab reticle qualification.
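The two automation steps described above amount to a database lookup keyed on product/level names plus a threshold anchored to a measured CD. A hypothetical sketch of that flow (the database contents, function names, and 10% spec fraction are all invented for illustration; the real ADAS interface is not shown here):

```python
# Hypothetical scanner-parameter database keyed by (product, level) names.
SCANNER_DB = {
    ("PRODUCT_A", "METAL1"): {"wavelength_nm": 193, "NA": 1.35, "sigma": 0.85},
}

def simulation_setup(product, level, reference_cd_nm, spec_fraction=0.10):
    """Return scanner conditions and a CD-change threshold (illustrative).

    The printability threshold is derived from the measured reference CD
    rather than hand-entered, removing one source of operator error.
    """
    conditions = SCANNER_DB[(product, level)]
    threshold_nm = spec_fraction * reference_cd_nm
    return conditions, threshold_nm

def percent_cd_change(defect_cd_nm, reference_cd_nm):
    """Percentage CD change of the simulated defect site vs. the reference."""
    return 100.0 * (defect_cd_nm - reference_cd_nm) / reference_cd_nm
```

A defect whose simulated CD change exceeds the derived threshold would be flagged for engineer review; all others disposition automatically.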
NASA Astrophysics Data System (ADS)
Herath, Imali Kaushalya; Ye, Xuchun; Wang, Jianli; Bouraima, Abdel-Kabirou
2018-02-01
Reference evapotranspiration (ETr) is one of the important parameters in the hydrological cycle. The spatio-temporal variation of ETr and of the meteorological parameters that influence it was investigated in the Jialing River Basin (JRB), China. The ETr was estimated using the CROPWAT 8.0 computer model based on the Penman-Monteith equation for the period 1964-2014. Mean temperature (MT), relative humidity (RH), sunshine duration (SD), and wind speed (WS) were the main input parameters of CROPWAT, with data from 12 meteorological stations evaluated. Linear regression and Mann-Kendall methods were applied to study the spatio-temporal trends, while the inverse distance weighted (IDW) method was used to identify the spatial distribution of ETr. Stepwise regression and partial correlation methods were used to identify the meteorological variables that most significantly influenced the changes in ETr. The highest annual ETr was found in the northern part of the basin, whereas the lowest rate was recorded in the western part. In autumn, the highest ETr was recorded in the southeast part of the JRB. The annual ETr showed neither significant increasing nor decreasing trends. Except for the summer, ETr is slightly increasing in the other seasons. The MT increased significantly, whereas SD and RH decreased significantly during the 50-year period. Partial correlation and stepwise regression methods found that the impact of meteorological parameters on ETr varies on an annual and seasonal basis, while SD, MT, and RH contributed to the changes of annual and seasonal ETr in the JRB.
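The Mann-Kendall trend test used above is compact enough to sketch. A minimal version (no tie correction, which the full test includes; the synthetic temperature series is invented to mimic the significant MT increase reported):

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (simplified: no tie correction).

    Returns the S statistic and the standard normal score Z; |Z| > 1.96
    indicates a significant monotonic trend at the 5% level.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

years = np.arange(1964, 2015)                      # 51-year record, as in the study
warming = 10.0 + 0.03 * (years - years[0])         # steadily rising mean temperature
s_stat, z = mann_kendall(warming)
```

For a strictly increasing series S equals the number of pairs, n(n-1)/2, and Z far exceeds 1.96, the pattern reported for MT in the JRB.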
Errors in the estimation method for the rejection of vibrations in adaptive optics systems
NASA Astrophysics Data System (ADS)
Kania, Dariusz
2017-06-01
In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has been renewed. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), in which the procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject/minimize the vibration. In the first step, the choice of estimation method is a very important problem. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used within the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
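The core idea, estimating a damped sinusoid's frequency by interpolating the spectrum around its peak bin, can be illustrated with a much simpler interpolator than the MSD-window method of the paper. A sketch assuming a Hann window and parabolic interpolation of the log-magnitude (all signal parameters below are invented):

```python
import numpy as np

fs = 2000.0                 # sampling rate, Hz (assumed)
N = 2048                    # samples in the FFT, as in the paper's example
f0, decay = 98.4, 3.0       # true frequency (Hz) and exponential decay rate (1/s)
t = np.arange(N) / fs
x = np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)   # damped sinusoid

# Windowed spectrum and its peak bin
X = np.abs(np.fft.rfft(x * np.hanning(N)))
k = int(np.argmax(X))

# Parabolic interpolation of the log-magnitude around the peak bin:
# the fractional offset refines the coarse bin-resolution estimate.
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_est = (k + delta) * fs / N
```

The refined estimate lands well inside the ~1 Hz bin spacing; the paper's interpolator achieves far smaller systematic errors than this illustrative one.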
Survey of NASA research on crash dynamics
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Carden, H. D.; Hayduk, R. J.
1984-01-01
Ten years of structural crash dynamics research activities conducted on general aviation aircraft by the National Aeronautics and Space Administration (NASA) are described. Thirty-two full-scale crash tests were performed at Langley Research Center, and pertinent data on airframe and seat behavior were obtained. Concurrent with the experimental program, analytical methods were developed to help predict structural behavior during impact. The effects of flight parameters at impact on cabin deceleration pulses at the seat/occupant interface, experimental and analytical correlation of data on load-limiting subfloor and seat configurations, airplane section test results for computer modeling validation, and data from emergency-locator-transmitter (ELT) investigations to determine probable cause of false alarms and nonactivations are assessed. Computer programs which provide designers with analytical methods for predicting accelerations, velocities, and displacements of collapsing structures are also discussed.
Biodiversity impact assessment (BIA+) - methodological framework for screening biodiversity.
Winter, Lisa; Pflugmacher, Stephan; Berger, Markus; Finkbeiner, Matthias
2018-03-01
For the past 20 years, the life cycle assessment (LCA) community has sought to integrate impacts on biodiversity into the LCA framework. However, existing impact assessment methods still fail to do so comprehensively because they quantify only a few impacts related to specific species and regions. This paper proposes a methodological framework that will allow LCA practitioners to assess currently missing impacts on biodiversity on a global scale. Building on existing models that seek to quantify the impacts of human activities on biodiversity, the herein proposed methodological framework consists of 2 components: a habitat factor for 14 major habitat types and the impact on the biodiversity status in those major habitat types. The habitat factor is calculated by means of indicators that characterize each habitat. The biodiversity status depends on parameters from impact categories. The impact functions, relating these different parameters to a given response in the biodiversity status, rely on expert judgments. To ensure the applicability for LCA practitioners, the components of the framework can be regionalized on a country scale for which LCA inventory data is more readily available. The weighting factors for the 14 major habitat types range from 0.63 to 1.82. By means of area weighting of the major habitat types in a country, country-specific weighting factors are calculated. In order to demonstrate the main part of the framework, examples of impact functions are given for the categories "freshwater eutrophication" and "freshwater ecotoxicity" in 1 major habitat type. The results confirm suitability of the methodological framework. The major advantages are the framework's user-friendliness, given that data can be used from LCA databases directly, and the complete inclusion of all levels of biodiversity (genetic, species, and ecosystem). It is applicable for the whole world and a wide range of impact categories. Integr Environ Assess Manag 2018;14:282-297. 
© 2017 SETAC.
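The country-specific weighting described above is an area-weighted average of the habitat factors of the major habitat types present in a country. A sketch of that step (the areas and the example country are invented; the factors are drawn from the reported 0.63-1.82 range):

```python
def country_weighting_factor(habitats):
    """Area-weighted country factor from (area_km2, habitat_factor) tuples.

    Habitat factors for the 14 major habitat types range from 0.63 to 1.82
    in the framework; the country factor is their area-weighted mean.
    """
    total_area = sum(area for area, _ in habitats)
    return sum(area * factor for area, factor in habitats) / total_area

# Hypothetical country covered by two major habitat types
example = [(120_000, 0.63), (80_000, 1.82)]
w = country_weighting_factor(example)
```

By construction the country factor always falls between the smallest and largest habitat factors present.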
Sangil, Carlos; Martín-García, Laura; Clemente, Sabrina
2013-11-15
In this paper we develop a tool to assess the impact of fishing on ecosystem functioning in shallow rocky reefs. The relationships between biological parameters (fishes, sea urchins, seaweeds), and fishing activities (fish traps, boats, land-based fishing, spearfishing) were tested in La Palma island (Canary Islands). Data from fishing activities and biological parameters were analyzed using principal component analyses. We produced two models using the first component of these analyses. This component was interpreted as a new variable that described the fishing pressure and the conservation status at each studied site. Subsequently the scores on the first axis were mapped using universal kriging methods and the models obtained were extrapolated across the whole island to display the expected fishing pressure and conservation status more widely. The fishing pressure and conservation status models were spatially related; zones where fishing pressure was high coincided with zones in the unhealthiest ecological state. Copyright © 2013 Elsevier Ltd. All rights reserved.
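The scoring step above, collapsing several fishing-activity variables into one pressure index via the first principal component, can be sketched with synthetic site data (the latent-pressure simulation and variable loadings are invented; the kriging extrapolation step is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 30
pressure = rng.uniform(0, 1, n_sites)    # latent fishing pressure per site

# Four observed activity variables (e.g. traps, boats, shore, spearfishing),
# each a noisy scaled version of the latent pressure.
X = np.column_stack([pressure * s + rng.normal(0, 0.05, n_sites)
                     for s in (1.0, 0.8, 1.2, 0.6)])

# First principal component via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]      # first-axis score = fishing-pressure index per site
```

The first-axis scores recover the latent pressure up to sign and scale, which is why they can serve as a single mapped variable for kriging.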
Impact of freezing and thawing on the quality of meat: review.
Leygonie, Coleen; Britz, Trevor J; Hoffman, Louwrens C
2012-06-01
This comprehensive review describes the effects of freezing and thawing on the physical quality parameters of meat. The formation of ice crystals during freezing damages the ultrastructure and concentrates the solutes in the meat which, in turn, leads to alterations in the biochemical reactions that occur at the cellular level and influence the physical quality parameters of the meat. The quality parameters that were evaluated are moisture loss, protein denaturation, lipid and protein oxidation, colour, pH, shear force and microbial spoilage. Additionally, mechanisms employed to mitigate the effects of freezing and thawing were also reviewed. These include the use of novel methods of freezing and thawing, ante- and post-mortem antifreeze protein inclusion and vitamin E supplementation, brine injection and modified atmosphere packaging. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.
2015-06-01
In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered and an asymptotic solution, based on a path integral approach, is applied to account for the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method and a lumped parameter model. Various perspectives of the problem, including the comparison with the van der Waals force regime, the variation of the instability parameters and the effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of the nanotweezers in the presence of the Casimir force.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
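Of the propagator choices listed, Cowell's method is the simplest: direct numerical integration of the equations of motion. A minimal sketch (unperturbed two-body dynamics with a fixed-step RK4 integrator; IPOST's actual propagators and force models are far more capable):

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def accel(r):
    """Two-body gravitational acceleration (no perturbations)."""
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    """One classical Runge-Kutta step of the coupled position/velocity ODEs."""
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    r_new = r + dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
    v_new = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return r_new, v_new

# Circular low orbit at radius a: speed sqrt(mu/a)
a = 7000.0
r, v = np.array([a, 0.0, 0.0]), np.array([0.0, np.sqrt(MU / a), 0.0])
for _ in range(1000):            # propagate 1000 s at 1 s steps
    r, v = rk4_step(r, v, 1.0)
```

For a circular orbit the radius should be preserved to well under a kilometer over this span, a quick consistency check on any Cowell-style propagator.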
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowińska, Małgorzata, E-mail: malgorzata.sowinska@b-tu.de; Henkel, Karsten; Schmeißer, Dieter
2016-01-15
The impact of the process parameters of the plasma-enhanced atomic layer deposition (PE-ALD) method on the oxygen to nitrogen (O/N) ratio in titanium oxynitride (TiO{sub x}N{sub y}) films was studied. Titanium(IV)isopropoxide in combination with NH{sub 3} plasma and tetrakis(dimethylamino)titanium with N{sub 2} plasma processes were investigated. Samples were characterized by in situ spectroscopic ellipsometry, x-ray photoelectron spectroscopy, and electrical characterization (current–voltage: I-V and capacitance–voltage: C-V) methods. The O/N ratio in the TiO{sub x}N{sub y} films is found to strongly affect their electrical properties, such as conductivity, dielectric breakdown, and permittivity. Our results indicate that these PE-ALD film properties can be tuned, via the O/N ratio, by the selection of the process parameters and the precursor/coreactant combination.
Cui, Shihai; Li, Haiyan; Li, Xiangnan; Ruan, Jesse
2015-01-01
Brain tissue mechanical properties are important for investigating child head injury using the finite element (FE) method. However, the properties used in child head FE models vary over a large range in the published literature because of the scarcity of child cadaver experiments. In this work, a head FE model with detailed anatomical structures is developed from the computed tomography (CT) data of a 6-year-old healthy child head. The effects of brain tissue mechanical properties on traumatic brain response are also analyzed by reconstructing a head impact on an engine hood according to the Euro-NCAP testing regulation using the FE method. The results showed that variations of the brain tissue mechanical parameters in the linear viscoelastic constitutive model had different influences on the intracranial response. Furthermore, opposite trends were obtained in the predicted shear stress and shear strain of brain tissues caused by the variations of the mentioned parameters.
On the effects of subsurface parameters on evaporite dissolution (Switzerland)
NASA Astrophysics Data System (ADS)
Zidane, Ali; Zechner, Eric; Huggenberger, Peter; Younes, Anis
2014-05-01
Uncontrolled subsurface evaporite dissolution could lead to hazards such as land subsidence. Observed subsidences in a study area of Northwestern Switzerland were mainly due to subsurface dissolution (subrosion) of evaporites such as halite and gypsum. A set of 2D density driven flow simulations were evaluated along 1000 m long and 150 m deep 2D cross sections within the study area that is characterized by tectonic horst and graben structures. The simulations were conducted to study the effect of the different subsurface parameters that could affect the dissolution process. The heterogeneity of normal faults and its impact on the dissolution of evaporites is studied by considering several permeable faults that include non-permeable areas. The mixed finite element method (MFE) is used to solve the flow equation, coupled with the multipoint flux approximation (MPFA) and the discontinuous Galerkin method (DG) to solve the diffusion and the advection parts of the transport equation.
NASA Astrophysics Data System (ADS)
Liao, Q.; Tchelepi, H.; Zhang, D.
2015-12-01
Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to their nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
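The non-intrusive collocation idea is easy to sketch in one dimension: run the model at Gaussian quadrature nodes and form moments from weighted sums. The sketch below uses plain (unnested) Gauss-Hermite nodes for brevity; the paper's Kronrod-Patterson-Hermite rules additionally reuse nodes across levels. The toy model is invented:

```python
import numpy as np

def model(x):
    """Stand-in for an expensive simulator with one Gaussian input."""
    return np.exp(0.3 * x)

# Probabilists' Gauss-Hermite rule: nodes/weights for weight exp(-x^2/2).
# Dividing by sqrt(2*pi) normalizes the weights to the standard normal density.
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / np.sqrt(2 * np.pi)

# Collocation estimate of E[model(X)] for X ~ N(0, 1):
# 7 model runs instead of Monte Carlo sampling.
mean = np.sum(weights * model(nodes))
```

For this test function the exact mean is exp(0.3²/2), and the 7-node rule already matches it to several digits; in higher dimensions nestedness is what keeps the number of model runs manageable.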
NASA Astrophysics Data System (ADS)
Weisz, Elisabeth; Smith, William L.; Smith, Nadia
2013-06-01
The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.
Impact and Collisional Processes in the Solar System
NASA Technical Reports Server (NTRS)
Ahrens, Thomas J.
2001-01-01
In the past year, we have successfully developed the techniques necessary to conduct impact experiments on ice at very low temperatures. We employ the method of embedding gauges within a target to measure the shock wave and material properties. This means that our data are not model dependent; we directly measure the essential parameters needed for numerical simulations of impact cratering. Since then we have developed a new method for temperature control of icy targets that ensures temperature equilibrium throughout a porous target. Graduate student, Sarah Stewart-Mukhopadhyay, is leading the work on ices and porous materials as the main thrust of her thesis research. Our previous work has focused on icy materials with no porosity, and we propose to extend our research to include porous ice and porous ice-silicate mixtures. There is little shockwave data for porous ice, and none of the data was acquired under conditions applicable to the outer solar system. The solid ice Hugoniot is only defined for initial temperatures above -20 C. Our program uniquely measures the properties of ice at temperatures directly applicable to the solar system. Previous experiments were conducted at ambient temperatures soon after removing the target from a cold environment, usually just below freezing, or in a room just below freezing. Since ice has an extremely complicated phase diagram, it is important to conduct experiments at lower temperatures to determine the true outcome of impacts in the outer solar system. This research is complementary to other programs on icy materials. Our work focuses on the inherent material properties by measuring the shock wave directly; this complements the macroscopic observations and immediately provides the parameters necessary to extend this research to the gravity regime. 
Our numerical simulations of impacts in porous ice under very low gravity conditions, such as found on comets, show that the final crater size and shape is very dependent on the dynamic strength of the material.
Trask, Catherine; Mathiassen, Svend Erik; Wahlström, Jens; Heiden, Marina; Rezagholi, Mahmoud
2012-06-27
Documentation of posture measurement costs is rare and the cost models that do exist are generally naïve. This paper provides a comprehensive cost model for biomechanical exposure assessment in occupational studies, documents the monetary costs of three exposure assessment methods for different stakeholders in data collection, and uses simulations to evaluate the relative importance of cost components. Trunk and shoulder posture variables were assessed for 27 aircraft baggage handlers for 3 full shifts each using three methods typical of ergonomic studies: self-report via questionnaire, observation via video film, and full-shift inclinometer registration. The cost model accounted for expenses related to meetings to plan the study, administration, recruitment, equipment, training of data collectors, travel, and onsite data collection. Sensitivity analyses were conducted using simulated study parameters and cost components to investigate the impact on total study cost. Inclinometry was the most expensive method (with a total study cost of €66,657), followed by observation (€55,369) and then self-report (€36,865). The majority of costs (90%) were borne by researchers. Study design parameters such as sample size, measurement scheduling and spacing, concurrent measurements, location and travel, and equipment acquisition were shown to have wide-ranging impacts on costs. This study provides a general cost modeling approach that can facilitate decision making and planning of data collection in future studies, as well as investigation into cost efficiency and cost-efficient study design. Empirical cost data from a large field study demonstrated the usefulness of the proposed models.
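A cost model of this kind decomposes total cost into fixed overheads plus per-subject and per-measurement components, which is what makes sensitivity analysis over design parameters possible. A hedged sketch in that spirit (the euro figures below are invented placeholders, not the paper's values):

```python
def study_cost(n_subjects, shifts_per_subject, fixed=15_000.0,
               per_subject=300.0, per_shift=450.0):
    """Total study cost (illustrative decomposition, invented unit costs).

    fixed        -- planning meetings, administration, equipment, training
    per_subject  -- recruitment and subject handling
    per_shift    -- onsite data collection per measured shift
    """
    return (fixed
            + per_subject * n_subjects
            + per_shift * n_subjects * shifts_per_subject)

base = study_cost(27, 3)             # design comparable to 27 workers x 3 shifts
double_sample = study_cost(54, 3)    # sensitivity: doubling the sample size
```

Because the fixed component is spread over more measurements, doubling the sample less than doubles the total, one of the scale effects such sensitivity analyses expose.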
Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente
2016-05-01
Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analyses based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash tests. Early approaches for BSIP identification relied on experiments conducted on cadavers or on imaging techniques applied to living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most approaches focus on the entire body, and verification of BSIP for the dynamic analysis of a distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamic identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results with parameters obtained from the regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed using robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equations, allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamic modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the regression model of De Leva (1996) for the dynamic modeling of body segments.
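The static identification step is linear: with the segment held still, the measured force gives the weight, and the sensor moment satisfies M = c × F, which is linear in the COG position c. A noise-free sketch of that least-squares identification (the mass, COG, and number of orientations are invented; the paper's robotics formulation is more general):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

g = 9.81
m_true = 4.5                                 # invented head-like mass, kg
c_true = np.array([0.02, -0.01, 0.08])       # invented COG in sensor frame, m

rng = np.random.default_rng(1)
A_rows, b_rows, force_mags = [], [], []
for _ in range(6):                           # several static orientations
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)                   # gravity direction in sensor frame
    F = m_true * g * u                       # measured force
    M = np.cross(c_true, F)                  # measured moment
    A_rows.append(-skew(F))                  # M = c x F = -skew(F) @ c
    b_rows.append(M)
    force_mags.append(np.linalg.norm(F))

m_est = np.mean(force_mags) / g              # mass from force magnitude
c_est, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
```

Multiple orientations are needed because a single pose leaves the COG component along gravity unobservable; the stacked system then becomes full rank.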
Deng, Nina; Anatchkova, Milena D; Waring, Molly E; Han, Kyung T; Ware, John E
2015-08-01
The Quality-of-life (QOL) Disease Impact Scale (QDIS®) standardizes the content and scoring of QOL impact attributed to different diseases using item response theory (IRT). This study examined the IRT invariance of the QDIS-standardized IRT parameters in an independent sample. The differential functioning of items and test (DFIT) of a static short form (QDIS-7) was examined across two independent sources: patients hospitalized for acute coronary syndrome (ACS) in the TRACE-CORE study (N = 1,544) and chronically ill US adults in the QDIS standardization sample. "ACS-specific" IRT item parameters were calibrated and linearly transformed for comparison with the "standardized" IRT item parameters. Differences in IRT model-expected item, scale and theta scores were examined. The DFIT results were also compared in a standard logistic regression differential item functioning analysis. Item parameters estimated in the ACS sample showed lower discrimination parameters than the standardized discrimination parameters, but only small differences were found for the threshold parameters. In DFIT, results on the non-compensatory differential item functioning index (range 0.005-0.074) were all below the threshold of 0.096. Item differences were further canceled out at the scale level. IRT-based theta scores for ACS patients using standardized and ACS-specific item parameters were highly correlated (r = 0.995, root-mean-square difference = 0.09). Using standardized item parameters, ACS patients scored one-half standard deviation higher (indicating greater QOL impact) compared to chronically ill adults in the standardization sample. The study showed sufficient IRT invariance to warrant the use of standardized IRT scoring of QDIS-7 for studies comparing the QOL impact attributed to acute coronary disease and other chronic conditions.
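The invariance finding, that theta scores barely move when items are rescored with a slightly different calibration, can be illustrated with a toy IRT model. A sketch using a dichotomous 2PL (QDIS items are actually polytomous; all parameter values below are invented, with the "ACS" set given lower discriminations and near-identical thresholds, mirroring the pattern reported):

```python
import numpy as np

def theta_mle(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood theta for binary responses under 2PL.

    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))) per item.
    """
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
    ll = np.sum(np.where(responses[None, :] == 1,
                         np.log(p), np.log(1.0 - p)), axis=1)
    return grid[np.argmax(ll)]

# Invented 7-item calibrations: "standardized" vs. "sample-specific"
a_std = np.array([1.6, 1.4, 1.8, 1.5, 1.7, 1.3, 1.5])
b_std = np.array([-1.0, -0.5, 0.0, 0.3, 0.6, 1.0, 1.5])
a_acs = a_std * 0.9            # uniformly lower discriminations
b_acs = b_std + 0.05           # only small threshold differences

resp = np.array([1, 1, 1, 1, 0, 0, 0])   # one respondent's item pattern
t_std = theta_mle(resp, a_std, b_std)
t_acs = theta_mle(resp, a_acs, b_acs)
```

Because discrimination differences mostly rescale the likelihood rather than shift its peak, the two theta estimates stay close, the toy analogue of the r = 0.995 agreement reported.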
Comparison of detrending methods for fluctuation analysis in hydrology
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David
2011-03-01
Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), a Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA were similar in detrending practice and, given proper parameters, these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose the fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods, i.e., the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with a relatively regular periodic trend using MF-DFA.
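The average removing technique favored above folds the series on its known period, computes the mean cycle, and subtracts that cycle from every period. A sketch on a synthetic seasonal streamflow-like series (period, amplitude and noise level are invented):

```python
import numpy as np

def remove_average_cycle(x, period):
    """Average removing detrend: subtract the mean cycle at a known period.

    Trailing samples that do not complete a full period are dropped
    in this simplified version.
    """
    x = np.asarray(x, dtype=float)
    n_cycles = len(x) // period
    folded = x[: n_cycles * period].reshape(n_cycles, period)
    mean_cycle = folded.mean(axis=0)          # average over all periods
    return (folded - mean_cycle).ravel()      # periodic trend removed

# Monthly-like series: 12-step seasonal cycle plus small noise
rng = np.random.default_rng(2)
t = np.arange(600)
season = 50.0 + 20.0 * np.sin(2 * np.pi * t / 12)
x = season + rng.normal(0, 1.0, len(t))
resid = remove_average_cycle(x, 12)
```

The residual retains the stochastic fluctuations at all scales (here just the injected noise) while the dominant periodic trend is gone, which is why this method preserves large-scale fluctuation information better than Fourier filtering in the study above.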
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shuai; Xiong, Lihua; Li, Hong-Yi
2015-05-26
Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both the cumulative curve and Pettitt's test. Results show that the combination method from SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for attributing runoff changes to climate variability and human activities with hydrological models.
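The deterministic BMA prediction used above is a weighted average of the member models' outputs, with weights reflecting each model's posterior skill. A minimal sketch of just that combination step (the runoff values and weights are invented; the SCEM calibration that would produce the weights is not shown):

```python
import numpy as np

def bma_mean(predictions, weights):
    """Deterministic BMA prediction.

    predictions -- array of shape (n_times, n_models), member runoffs
    weights     -- per-model weights (normalized internally to sum to 1)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(predictions, dtype=float) @ w

# Four monthly water-balance models' runoff predictions (invented, mm/month)
preds = np.array([[32.0, 35.0, 30.0, 33.0],
                  [58.0, 61.0, 55.0, 60.0]])
weights = [0.4, 0.3, 0.1, 0.2]   # hypothetical posterior model weights
combined = bma_mean(preds, weights)
```

Because the weighted mean discounts poorly performing members, the combined series typically tracks observations better than any single calibrated model, which is the behavior reported for the WRB.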
Passi, Deepak; Pal, Uma Shankar; Mohammad, Shadab; Singh, Rakesh Kumar; Mehrotra, Divya; Singh, Geeta; Kumar, Manoj; Chellappa, Arul A.L.; Gupta, Chandan
2013-01-01
Background: The aim of this study was to assess the feasibility of the Er:YAG laser for bone cutting during removal of impacted lower third molars and to compare its outcomes with those of the surgical bur. Materials & methods: The study comprised 40 subjects requiring removal of an impacted mandibular third molar, randomly assigned to two equal groups of 20, who had the impacted tooth removed using either the Er:YAG laser or a surgical bur, following standard extraction methodology. Clinical parameters including pain, bleeding, time taken for bone cutting, postoperative swelling, trismus, wound healing, and complications were compared between the groups. Observations & results: Pain, bleeding, and swelling were lower in the laser group than in the bur group, although the differences were not statistically significant, with the exception of postoperative swelling, which differed significantly between the two groups. Bone cutting with the laser took almost twice as long as with the bur, and trismus persisted longer in the laser group. Wound healing and complications were assessed clinically, with no significant difference between the groups. Conclusion: Based on the results of our study, bone cutting using lasers is feasible; the osteotomy is easily performed, and the technique is well suited to minimally invasive surgical procedures. The Er:YAG laser may be considered an alternative to the surgical bur, especially in anxious patients. PMID:25737885
Tse, Kwong Ming; Tan, Long Bin; Lee, Shu Jin; Lim, Siak Piang; Lee, Heow Pueh
2015-06-01
In spite of the anatomic proximity of the facial skeleton and cranium, there is a lack of information in the literature regarding the relationship between facial and brain injuries. This study aims to correlate brain injuries with facial injuries using the finite element method (FEM). Nine common impact scenarios of facial injuries are simulated with their individual stress wave propagation paths in the facial skeleton and the intracranial brain. Fractures of craniofacial bones and intracranial injuries are evaluated based on the tolerance limits of the biomechanical parameters. The general trend of maximum intracranial biomechanical parameters in nasal bone and zygomaticomaxillary impacts indicates that the severity of brain injury is highly associated with the proximity of the impact location to the brain. It is hypothesized that the midface is capable of absorbing considerable energy and protecting the brain from impact. The nasal cartilages dissipate the impact energy in the form of large-scale deformation and fracture, with the vomer-ethmoid diverting stress to the "crumpling zone" of the air-filled sphenoid and ethmoidal sinuses; in its most natural manner, the face protects the brain. This numerical study hopes to provide surgeons some insight into the brain injuries to be expected in various scenarios of facial trauma and to help in better diagnosis of unsuspected brain injury, thereby decreasing the morbidity and mortality associated with facial trauma. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rapid impact testing for quantitative assessment of large populations of bridges
NASA Astrophysics Data System (ADS)
Zhou, Yun; Prader, John; DeVitis, John; Deal, Adrienne; Zhang, Jian; Moon, Franklin; Aktan, A. Emin
2011-04-01
Although the widely acknowledged shortcomings of visual inspection have fueled significant advances in the areas of non-destructive evaluation and structural health monitoring (SHM) over the last several decades, the actual practice of bridge assessment has remained largely unchanged. The authors believe the lack of adoption, especially of SHM technologies, is related to the 'single structure' scenarios that drive most research. To overcome this, the authors have developed a concept for a rapid single-input, multiple-output (SIMO) impact testing device that will be capable of capturing modal parameters and estimating flexibility/deflection basins of common highway bridges during routine inspections. The device is composed of a trailer-mounted impact source (capable of delivering a 50 kip impact) and retractable sensor arms, and will be controlled by automated data acquisition, processing, and modal parameter estimation software. The research presented in this paper covers (a) the theoretical basis for SISO, SIMO and MIMO impact testing to estimate flexibility, (b) proof-of-concept numerical studies using a finite element model, and (c) a pilot implementation on an operating highway bridge. Results indicate that the proposed approach can estimate modal flexibility within a few percent of static flexibility; however, the estimated modal flexibility matrix is only reliable for the substructures associated with the various SIMO tests. To overcome this shortcoming, a modal 'stitching' approach for substructure integration to estimate the full eigenvector matrix is developed, and preliminary results of these methods are also presented.
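The modal flexibility the abstract refers to is conventionally assembled from mass-normalized mode shapes and natural frequencies as F = Σ_r φ_r φ_rᵀ / ω_r². A minimal sketch, checked against a two-degree-of-freedom spring-mass system where the full modal sum must reproduce the inverse stiffness matrix exactly (the 2-DOF system is an illustrative assumption):

```python
import numpy as np

def modal_flexibility(modes, omegas):
    """Assemble F = sum_r (phi_r phi_r^T) / omega_r^2 from mass-normalized
    mode shapes (columns of `modes`) and natural frequencies in rad/s."""
    n = modes.shape[0]
    F = np.zeros((n, n))
    for r in range(modes.shape[1]):
        phi = modes[:, r]
        F += np.outer(phi, phi) / omegas[r] ** 2
    return F

# Check on a 2-DOF system with unit masses: with all modes retained,
# modal flexibility equals the static flexibility K^{-1}.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
w2, phi = np.linalg.eigh(K)   # eigenvalues are omega^2; columns mass-normalized
F = modal_flexibility(phi, np.sqrt(w2))
```

In practice only a truncated set of identified modes is available, which is why the paper's estimate agrees with static flexibility only to within a few percent.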
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of the determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations.
Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
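The convolution representation evaluates the tissue curve as the arterial input function convolved with a biexponential impulse response, which is the structural form of the 2CXM closed-form solution. A minimal sketch (the amplitudes/rates a1, a2, k1, k2 are generic stand-ins for the combinations of plasma flow and permeability parameters, and the delta-like test AIF is an illustrative assumption):

```python
import numpy as np

def tissue_curve(t, aif, a1, a2, k1, k2):
    """Tissue concentration as the convolution of the AIF with a biexponential
    impulse response (the closed-form 2CXM structure; a1, a2, k1, k2 stand in
    for the physiological parameter combinations)."""
    irf = a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
    dt = t[1] - t[0]
    return np.convolve(aif, irf)[: t.size] * dt

# sanity check: a unit-area bolus AIF must return the impulse response itself
t = np.linspace(0.0, 5.0, 100)
aif = np.zeros_like(t)
aif[0] = 1.0 / (t[1] - t[0])
ct = tissue_curve(t, aif, a1=1.0, a2=0.5, k1=0.3, k2=0.05)
```

A function of this shape can be handed directly to a generic NLLS routine such as scipy.optimize.curve_fit, avoiding the per-evaluation ODE integration that makes the Runge-Kutta approach orders of magnitude slower.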
NASA Technical Reports Server (NTRS)
Norbury, John W.
1989-01-01
Previous analyses of the comparison of Weizsacker-Williams (WW) theory to experiment for nucleon emission via electromagnetic (EM) excitations in nucleus-nucleus collisions were not definitive because of different assumptions concerning the value of the minimum impact parameter. This situation is corrected by providing criteria that allow definitive statements to be made concerning agreement or disagreement between WW theory and experiment.
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess the hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the direct runoff filtered using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool (WHAT). The R² value and Nash-Sutcliffe coefficient were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect the hydrological and hydrogeological conditions in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system to determine the optimum BFImax and filter parameters of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, an optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with direct runoff filtered using the optimized BFImax and filter parameter resulted in an R² value of 0.66 and a Nash-Sutcliffe coefficient of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33%, and estimated NPS pollutant loadings increased by more than 20%.
This indicates that L-THIA direct runoff estimates can be off by 33%, and NPS pollutant loading estimates by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
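The Eckhardt digital filter used above is a one-parameter-pair recursive filter: b_t = ((1 - BFImax)·α·b_{t-1} + (1 - α)·BFImax·q_t) / (1 - α·BFImax), with baseflow capped at total streamflow. A minimal sketch, run with the optimized LEC values from the abstract (the synthetic hydrograph and the b[0] = q[0] initialisation are illustrative assumptions):

```python
import numpy as np

def eckhardt_filter(q, bfi_max=0.80, alpha=0.98):
    """Eckhardt recursive digital filter for baseflow separation; q is the
    streamflow series, bfi_max the maximum baseflow index, alpha the filter
    (recession) parameter. Returns the baseflow series."""
    q = np.asarray(q, dtype=float)
    b = np.empty_like(q)
    b[0] = q[0]  # simple initialisation; operational tools use a warm-up period
    for i in range(1, q.size):
        bt = ((1 - bfi_max) * alpha * b[i - 1]
              + (1 - alpha) * bfi_max * q[i]) / (1 - alpha * bfi_max)
        b[i] = min(bt, q[i])  # baseflow cannot exceed total flow
    return b

# synthetic hydrograph: slow recession interrupted by two storm peaks
q = np.array([10, 9, 8, 30, 22, 15, 11, 9, 25, 18, 12, 10, 9, 8, 8], dtype=float)
baseflow = eckhardt_filter(q, bfi_max=0.491, alpha=0.987)  # optimised LEC values
direct_runoff = q - baseflow
```

Changing BFImax from the 0.80 default to the optimised 0.491 shifts a large share of flow from baseflow to direct runoff, which is the mechanism behind the 33% change in calibrated L-THIA estimates.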
NASA Astrophysics Data System (ADS)
Knouz, Najat; Boudhar, Abdelghani; Bachaoui, El Mostafa
2016-04-01
Fresh water is a precondition for all life on Earth, playing a vital role in the survival of living beings and in social, economic, and technological development. Groundwater, like surface water, is increasingly threatened by agricultural and industrial pollution. In this respect, assessing groundwater vulnerability to pollution is a very valuable tool for protecting the resource and for managing its quality and use in a sustainable way. The main objective of this study is to evaluate the vulnerability to pollution of the groundwater in the study area, Beni Amir, located in Tadla, the first irrigated perimeter of Morocco, using the DRASTIC method (Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, and hydraulic Conductivity), and to assess the impact of each parameter on the DRASTIC vulnerability index through a sensitivity analysis. This study also highlights the role of geographic information systems (GIS) in assessing vulnerability. The vulnerability index is calculated as the sum of the products of the ratings and weights assigned to each DRASTIC parameter. The results revealed four vulnerability classes: 7% of the study area has high vulnerability, 31% is moderately vulnerable, 57% has low vulnerability, and 5% has very low vulnerability.
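The index computation the abstract describes is a straightforward weighted sum per grid cell. A minimal sketch using the standard DRASTIC weights (the ratings below are illustrative values for one hypothetical cell, not data from the Beni Amir study):

```python
# Standard DRASTIC weights; ratings are illustrative for a single grid cell.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 4}

# vulnerability index = sum over the seven parameters of rating * weight
drastic_index = sum(weights[k] * ratings[k] for k in weights)
```

In a GIS workflow this sum is evaluated cell by cell over rasterised parameter layers, and the resulting index map is then binned into the vulnerability classes reported above.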
Effects of sterilization treatments on the analysis of TOC in water samples.
Shi, Yiming; Xu, Lingfeng; Gong, Dongqin; Lu, Jun
2010-01-01
Decomposition experiments conducted with and without microbial processes are commonly used to study the effects of environmental microorganisms on the degradation of organic pollutants. However, the effects of biological pretreatment (sterilization) on organic matter often have a negative impact on such experiments. Based on the principle of water total organic carbon (TOC) analysis, the effects of physical sterilization treatments on the determination of TOC and other water quality parameters were investigated. The results revealed that two conventional physical sterilization treatments, autoclaving and 60Co gamma-radiation sterilization, led to the direct decomposition of some organic pollutants, resulting in remarkable errors in the analysis of TOC in water samples. Furthermore, the extent of the errors varied with the intensity and duration of the sterilization treatments. Accordingly, a novel sterilization method for water samples, 0.45 μm microfiltration coupled with ultraviolet radiation (MCUR), was developed in the present study. The results indicated that the MCUR method was capable of exerting a high bactericidal effect on the water sample while significantly decreasing the negative impact on the analysis of TOC and other water quality parameters. Before and after sterilization treatments, the relative errors of TOC determination could be kept below 3% for water samples with different categories and concentrations of organic pollutants by using MCUR.
Muir, W M; Howard, R D
2001-07-01
Any release of transgenic organisms into nature is a concern because the ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the fish Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability resulting in the least impact.
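The qualitative behaviour of such a recurrence model can be illustrated with a drastically simplified one-locus haploid selection recurrence, p' = p·wT / (p·wT + (1-p)·wW), in which the six fitness components are collapsed into a single net fitness per genotype (all numbers below are illustrative, not the paper's medaka estimates):

```python
def transgene_trajectory(p0, w_transgenic, w_wildtype, generations):
    """Minimal one-locus haploid selection recurrence, a simplification of the
    paper's six-component net-fitness model:
    p' = p*wT / (p*wT + (1-p)*wW)."""
    p = p0
    traj = [p]
    for _ in range(generations):
        p = p * w_transgenic / (p * w_transgenic + (1 - p) * w_wildtype)
        traj.append(p)
    return traj

# illustrative scenario: a net fitness advantage (e.g. from mating advantage
# outweighing a juvenile-viability cost) drives the transgene toward fixation
traj = transgene_trajectory(p0=0.01, w_transgenic=1.1, w_wildtype=1.0,
                            generations=100)
```

Even a modest 10% net advantage carries a rare transgene most of the way to fixation within 100 generations, which mirrors the paper's finding that positive effects on some components can overwhelm large viability costs.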
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be rigorously checked. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts on the resultant accuracy of reconstruction of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of the source surface, are examined.
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
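The second model's idea, replacing an expensive physics-based solver by a polynomial in the random inputs, can be sketched in one dimension as a fit to a handful of solver runs followed by cheap Monte Carlo evaluation (the quadratic "expensive model", sample sizes, and input range are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Stand-in for the physics-based solver mapping an uncertain input
# (e.g. log-permeability) to a risk index (e.g. contaminated volume).
def expensive_model(x):
    return 1.0 + 0.5 * x + 0.2 * x**2

x_train = np.linspace(-1.0, 1.0, 9)            # a few collocation-style runs
coeffs = np.polyfit(x_train, expensive_model(x_train), deg=3)

rng = np.random.default_rng(42)
x_scenarios = rng.uniform(-1.0, 1.0, 100_000)  # many spill scenarios
risk = np.polyval(coeffs, x_scenarios)         # virtually free to evaluate
```

Once the polynomial surrogate is built, percentiles of `risk` give the decision-maker the contamination extent at any specified confidence level without further solver runs.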
Integrating artificial and human intelligence into tablet production process.
Gams, Matjaž; Horvat, Matej; Ožek, Matej; Luštrek, Mitja; Gradišek, Anton
2014-12-01
We developed a new machine learning-based method to facilitate the manufacturing processes of pharmaceutical products, such as tablets, in accordance with the Process Analytical Technology (PAT) and Quality by Design (QbD) initiatives. Our approach combines the data available from prior production runs with machine learning algorithms that are assisted by a human operator with expert knowledge of the production process. The process parameters encompass those that relate to the attributes of the precursor raw materials and those that relate to the manufacturing process itself. During manufacturing, our method allows the production operator to inspect the impact of various settings of the process parameters within their proven acceptable range, with the purpose of choosing the most promising values in advance of the actual batch manufacture. The interaction between the human operator and the artificial intelligence system provides improved performance and quality. We successfully implemented the method on data provided by a pharmaceutical company for a particular product, a tablet, under development. We tested the accuracy of the method in comparison with some other machine learning approaches. The method is especially suitable for analyzing manufacturing processes characterized by a limited amount of data.
Reticulation of Aqueous Polyurethane Systems Controlled by DSC Method
Cakic, Suzana; Lacnjevac, Caslav; Rajkovic, Milos B.; Raskovic, Ljiljana; Stamenkovic, Jakov
2006-01-01
The DSC method was employed to monitor the kinetics of reticulation of aqueous polyurethane systems without a catalyst, with the commercial zirconium catalyst (CAT®XC-6212), and with the highly selective manganese catalyst, the complex Mn(III)-diacetylacetonemaleinate (MAM). Among the polyol components, acrylic emulsions were used for reticulation in this research, and water-emulsifiable aliphatic polyisocyanates based on hexamethylene diisocyanate with different NCO-group contents were employed as suitable reticulation agents. On the basis of the DSC analysis, the pseudo-kinetic parameters of the reticulation reaction of the aqueous systems were determined by applying the methods of Kissinger, Freeman-Carroll, and Crane-Ellerstein. The examination temperature ranged from 50°C to 450°C at a heating rate of 0.5°C/min. The reduction of the activation energy and the increase of the standard deviation indicate the catalytic action of the selective zirconium and manganese catalysts. The impact of the catalysts on the reduction of the activation energy is strongest with the manganese catalyst under all three aforesaid methods. The smallest deviations among the stated methods in determining the kinetic parameters were obtained with the manganese catalyst.
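Of the three kinetic analyses named above, the Kissinger method is the most compact: the slope of ln(β/Tp²) versus 1/Tp equals -Ea/R, with β the heating rate and Tp the DSC peak temperature. A minimal sketch verified on synthetic data generated from a known activation energy (the Ea, intercept, and peak temperatures are illustrative, not values from this study):

```python
import numpy as np

GAS_CONSTANT = 8.314  # J/(mol K)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Kissinger method: the slope of ln(beta/Tp^2) versus 1/Tp is -Ea/R."""
    beta = np.asarray(heating_rates, dtype=float)
    tp = np.asarray(peak_temps, dtype=float)
    slope, _intercept = np.polyfit(1.0 / tp, np.log(beta / tp**2), 1)
    return -slope * GAS_CONSTANT

# synthetic check: peak temperatures generated from a known Ea are recovered
ea_true, c = 80_000.0, 10.0                  # J/mol; arbitrary intercept
tp = np.array([420.0, 430.0, 440.0, 450.0])  # K
beta = tp**2 * np.exp(c - ea_true / (GAS_CONSTANT * tp))
ea_est = kissinger_activation_energy(beta, tp)
```

In practice the method needs DSC runs at several heating rates; the catalytic effect then shows up as a smaller fitted Ea for the catalysed systems.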
Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen
2018-06-01
In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, and expensive but precise method. In this study, a novel method, called "image profilometry," is introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters, such as image resolution and the filtering approach for eliminating long-wavelength surface undulations, on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution plus cutoff. Under these conditions, the best and worst correlation coefficients (R²) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicate that image profilometry predicts the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to stylus profilometry, particularly in online applications.
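Whichever instrument produces the height profile, the amplitude roughness parameters are computed the same way. A minimal sketch of two of the standard parameters, checked against the analytic values for a sinusoidal profile (the sinusoid is an illustrative test case; the paper measured ten parameters, not just these two):

```python
import numpy as np

def roughness_params(profile):
    """Two standard amplitude parameters about the mean line:
    Ra (arithmetic mean deviation) and Rq (root-mean-square deviation)."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()  # reference heights to the mean line
    return np.abs(z).mean(), np.sqrt((z**2).mean())

# analytic check on a sinusoid of amplitude A: Ra = 2A/pi, Rq = A/sqrt(2)
x = np.linspace(0.0, 2.0 * np.pi, 100_000)
ra, rq = roughness_params(np.sin(x))
```

In image profilometry the `profile` array would come from image intensities converted to heights after the long-wavelength filtering step, rather than from a stylus trace.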
Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze
2018-04-01
Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with prognostic value complementary to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom, and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and the associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for sphericity derived using FLAB, which when combined with volume showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa.
Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method, for which there was a small improvement in stratification when the parameters were combined.
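Sphericity is conventionally defined from the segmented volume V and surface area A as π^(1/3)·(6V)^(2/3)/A, which equals 1 for a perfect sphere and decreases for more irregular shapes. A minimal sketch with two analytic checks (the sphere and cube are illustrative shapes, not tumour data):

```python
import numpy as np

def sphericity(volume, surface_area):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A: exactly 1 for a sphere,
    smaller for more irregular shapes."""
    return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

r, a = 2.0, 2.0
sph_sphere = sphericity(4.0 / 3.0 * np.pi * r**3, 4.0 * np.pi * r**2)
sph_cube = sphericity(a**3, 6.0 * a**2)   # (pi/6)^(1/3), about 0.806
```

Because both V and A are taken from the segmentation, any delineation error propagates into sphericity through two quantities at once, which is consistent with the study's finding that the parameter depends on the segmentation method.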
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
NASA Astrophysics Data System (ADS)
Badziak, J.; Kucharik, M.; Liska, R.
2018-02-01
The generation of high-pressure shocks in the newly proposed collider, in which the projectile impacting a solid target is driven by the laser-induced cavity pressure acceleration (LICPA) mechanism, is investigated using two-dimensional hydrodynamic simulations. The dependence of the parameters of the shock generated in the target by the impact of a gold projectile on the impacted target material and the laser driver energy is examined. It is found that for both low-density (CH, Al) and high-density (Au, Cu) solid targets, shock pressures in the sub-Gbar range can be produced in the LICPA-driven collider with laser energies of only a few hundred joules, and the laser-to-shock energy conversion efficiency can reach 10-20%, an order of magnitude higher than the conversion efficiencies achieved so far with other laser-based methods.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is quantifiable confidence in the material characterization and model physics when simulating low velocity impact in structures of interest.
The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.
Bulut, Umit
2017-06-01
As a result of large increases in CO2 emissions over the last few decades, many papers in the energy economics literature have examined the relationship between renewable energy and CO2 emissions, because, as a clean energy source, renewable energy can reduce CO2 emissions and mitigate the environmental problems stemming from their increase. These papers, however, employ fixed-parameter estimation methods and ignore time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions. To fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO2 emissions in Turkey over the period 1970-2013 by employing both fixed-parameter and time-varying-parameter estimation methods. The estimations reveal that CO2 emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO2 emissions, this paper argues that renewable energy has not met those expectations, even though producing electricity from renewable sources generates fewer CO2 emissions. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.
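The contrast between fixed-parameter and time-varying-parameter estimation can be illustrated with the simplest time-varying estimator, a rolling-window OLS slope (the paper's actual estimator is not specified in the abstract; the synthetic "drifting elasticity" data below are purely illustrative):

```python
import numpy as np

def rolling_slope(x, y, window):
    """Time-varying coefficient via rolling-window OLS, a simple alternative
    to state-space estimators: re-estimate the slope of y on x over a
    sliding window of observations."""
    slopes = []
    for s in range(x.size - window + 1):
        slope, _intercept = np.polyfit(x[s:s + window], y[s:s + window], 1)
        slopes.append(slope)
    return np.array(slopes)

# synthetic data: the emission response to energy use drifts upward over time,
# which a single fixed-parameter regression would average away
rng = np.random.default_rng(7)
t = np.arange(100)
energy = 5.0 + rng.normal(0.0, 1.0, t.size)
beta_t = 0.5 + 0.01 * t                     # true time-varying coefficient
emissions = beta_t * energy
slopes = rolling_slope(energy, emissions, window=20)
```

A fixed-parameter fit over the full sample would report a single averaged slope, hiding exactly the kind of evolution in the energy-emissions relationship that the paper sets out to capture.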
Improvement of basal conditions knowledge in Antarctica using data assimilation methods
NASA Astrophysics Data System (ADS)
Mosbeux, C.; Gillet-Chaulet, F.; Gagliardini, O.
2017-12-01
The current global warming has direct consequences for ice-sheet mass loss. Unfortunately, as highlighted in the last IPCC report, current ice-sheet models face several difficulties in assessing the future evolution of ice-sheet dynamics over the next century. Projections are still plagued with high uncertainties, partially due to the poor representation of the physical processes involved, but also due to the poor initialisation of ice flow models. More specifically, simulations are very sensitive to initial parameters such as the basal friction between the ice sheet and the bedrock, and the bedrock topography, both of which are still poorly known because of a lack of direct observations or large measurement uncertainties. Improving the knowledge of these two parameters in Greenland and Antarctica is therefore a prerequisite for making reliable projections. Data assimilation methods have been developed to overcome this problem, such as the Bayesian approach of Pralong and Gudmundsson (2009) or the adjoint method tested by Goldberg and Heimbach (2013) and Perego et al. (2014). The present work is based on two different assimilation algorithms to better constrain both the basal drag and bedrock elevation parameters. The first algorithm is entirely based on the adjoint method, while the second couples an adjoint-based inversion of basal friction with an inversion of bedrock topography using a nudging method. Both algorithms have been implemented in the finite element ice sheet and ice flow model Elmer/Ice and tested in a twin experiment, showing a clear improvement in the knowledge of both parameters (Mosbeux et al., 2016). Here, the methods are applied to a real 3D case in East Antarctica with an ensemble approach. The application of both algorithms reduces the uncertainty on basal conditions, for instance by providing more detail in the basal geometry than the usual DEMs.
Moreover, as in the previous experiment, the reconstruction of both basal elevation and basal friction significantly decreases ice flux divergence anomalies when compared to classical methods in which only the friction is inverted. Finally, we conduct prognostic simulations to assess the impact of the different initialisations obtained with the ensemble method.
Optimization of the fiber laser parameters for local high-temperature impact on metal
NASA Astrophysics Data System (ADS)
Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.
2016-11-01
This paper presents the local laser heating of the surface layer of a metal sample. The aim is to create a molten pool of the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the base material. The laser power, exposure time, and spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of constrained optimization). The calculated parameter values (the optimal set of laser radiation power, exposure time, and spot radius) are used to conduct a series of physical experiments to obtain a molten pool of the required depth. The two-stage experiment consists of a local laser treatment of a metal (steel) plate, followed by examination of a microsection of the laser-irradiated region. The experimental results allow us to judge the adequacy of the calculations within the selected models.
Bassuoni, M.M.
2013-01-01
The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter performance results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution is developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and −5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters from the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio. PMID:25685485
Temporal variation and scaling of parameters for a monthly hydrologic model
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang
2018-03-01
The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models within a common framework. Two parameters of the monthly model, k and m, are allowed to vary between months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI and α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of k and m. Multiple linear regression is then used to relate the mean annual parameter ε to the means and coefficients of variation of k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The downscaled model has lower NSEs than the model whose time-variant k and m are calibrated through SCE-UA, but for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
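The NSE scores used above to compare calibration variants are Nash-Sutcliffe efficiencies; a minimal implementation (with made-up runoff numbers for illustration) is:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit; 0 means the
    simulation is no better than the mean of the observations."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Made-up monthly runoff observations and two candidate simulations.
obs = np.array([3.1, 4.5, 6.2, 5.0, 2.8, 1.9])
sim_good = obs + 0.1                      # small constant bias
sim_mean = np.full_like(obs, obs.mean())  # the "mean benchmark" simulation
```

Comparing a calibrated, a downscaled, and a constant-parameter model then reduces to comparing their `nse` values on the same observations.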
Sanborn, B.; Song, B.; Nishida, E.
2017-11-02
In order to understand the interfacial interaction of a bi-material pair during an impact loading event, the dynamic friction coefficient is one of the key parameters that must be characterized and quantified. In this study, a new experimental method to determine the dynamic friction coefficient between two metals was developed using a Kolsky tension bar and a custom-designed friction fixture. Polyvinylidene fluoride (PVDF) force sensors were used to measure the normal force applied to the friction tribo-pairs, and the friction force was measured with the conventional Kolsky tension bar method. To evaluate the technique, the dynamic friction coefficient between 4340 steel and 7075-T6 aluminum was investigated at an impact speed of approximately 8 m/s. Additionally, the dynamic friction coefficient of tribo-pairs with varied surface roughness was investigated. The data suggest that higher surface roughness leads to higher friction coefficients at the same speed of 8 m/s.
Population control methods in stochastic extinction and outbreak scenarios.
Segura, Juan; Hilker, Frank M; Franco, Daniel
2017-01-01
Adaptive limiter control (ALC) and adaptive threshold harvesting (ATH) are two related control methods that have been shown to stabilize fluctuating populations. Large variations in population abundance can threaten the constancy and persistence stability of ecological populations, which may impede the success and efficiency of managing natural resources. Here, we consider population models that include biological mechanisms characteristic of extinctions on the one hand and pest outbreaks on the other. These models include Allee effects and the impact of natural enemies (as is typical of forest defoliating insects). We study the impacts of noise and of different levels of the biological parameters in three extinction and two outbreak scenarios. Our results show that ALC and ATH affect extinction and outbreak risks only for sufficiently large control intensities. Moreover, there is a clear disparity between the two control methods: in the extinction scenarios, ALC can be effective and ATH can be counterproductive, whereas in the outbreak scenarios the situation is reversed, with ATH being effective and ALC being potentially counterproductive.
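A minimal numerical sketch of how ALC damps fluctuations, using a noisy Ricker map and one common ALC formulation (restock whenever the population falls below a fraction c of its previous size). The map, noise level, and parameter values are illustrative choices, not the paper's extinction or outbreak models:

```python
import numpy as np

def ricker(x, r=3.0):
    """Chaotic Ricker map (carrying capacity scaled to 1)."""
    return x * np.exp(r * (1.0 - x))

def simulate(steps, c=0.0, seed=1):
    """Noisy Ricker dynamics with ALC: restock whenever the population
    drops below a fraction c of its previous size (c=0 disables control)."""
    rng = np.random.default_rng(seed)
    x, traj = 0.5, []
    for _ in range(steps):
        x_new = ricker(x) * rng.lognormal(0.0, 0.1)
        if x_new < c * x:        # ALC intervention: lift the crash to the floor
            x_new = c * x
        traj.append(x_new)
        x = x_new
    return np.array(traj)

free = simulate(500, c=0.0)
controlled = simulate(500, c=0.6)
# With sufficiently strong control, the deepest crashes are lifted and
# the relative fluctuation size shrinks.
```

The "sufficiently large control intensity" finding corresponds to sweeping c: small c leaves the dynamics essentially untouched.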
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.I.; Tsai, J.J.; Wu, K.H.
2005-07-01
The impacts of aeration and agitation on the composting of synthetic food wastes made of dog food were studied in a laboratory-scale reactor. Two major peaks of CO2 evolution rate were observed. Each peak represented an independent stage of composting associated with the activity of thermophilic bacteria. CO2 evolution, known to correlate well with microbial activity and reactor temperature, was fitted successfully to a modified Gompertz equation incorporating three biokinetic parameters: the CO2 evolution potential, the specific CO2 evolution rate, and the lag phase time. No parameters describing the impact of operating variables are involved, so the model is valid only for the specified experimental conditions and may differ under others. The effects of operating parameters such as aeration and agitation were studied statistically with a multivariate regression technique. Contour plots were constructed from the regression equations to examine the dependence of the CO2 evolution potential on aeration and agitation. In the first stage, a maximum CO2 evolution potential was found when the aeration rate and the agitation parameter were set at 1.75 l/kg solids-min and 0.35, respectively. In the second stage, a maximum existed when the aeration rate and the agitation parameter were set at 1.8 l/kg solids-min and 0.5, respectively. The methods presented here can also be applied to the optimization of large-scale composting facilities that are operated differently and take a longer time.
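A modified Gompertz fit of the kind described above can be reproduced with a standard nonlinear least-squares routine. The sketch below uses the common Zwietering parameterisation and synthetic data, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz (Zwietering form): A is the CO2 evolution potential,
    mu the specific CO2 evolution rate, lam the lag phase time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 20, 60)                        # time axis, synthetic
rng = np.random.default_rng(2)
y = gompertz(t, 50.0, 8.0, 3.0) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(gompertz, t, y, p0=[40.0, 5.0, 2.0])
A, mu, lam = popt   # recovered biokinetic parameters
```

With a reasonable initial guess the three biokinetic parameters are recovered close to the values used to generate the data.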
NASA Technical Reports Server (NTRS)
Zirin, R. M.; Witmer, E. A.
1972-01-01
An approximate collision analysis, termed the collision-force method, was developed for studying impact-interaction of an engine rotor blade fragment with an initially circular containment ring. This collision analysis utilizes basic mass, material property, geometry, and pre-impact velocity information for the fragment, together with any one of three postulated patterns of blade deformation behavior: (1) the elastic straight blade model, (2) the elastic-plastic straight shortening blade model, and (3) the elastic-plastic curling blade model. The collision-induced forces are used to predict the resulting motions of both the blade fragment and the containment ring. Containment ring transient responses are predicted by a finite element computer code which accommodates the large deformation, elastic-plastic planar deformation behavior of simple structures such as beams and/or rings. The effects of varying the values of certain parameters in each blade-behavior model were studied. Comparisons of predictions with experimental data indicate that of the three postulated blade-behavior models, the elastic-plastic curling blade model appears to be the most plausible and satisfactory for predicting the impact-induced motions of a ductile engine rotor blade and a containment ring against which the blade impacts.
Jiang, Zhinong; Wang, Zijia; Zhang, Jinjie
2017-01-01
Internal combustion engines (ICEs) are widely used in many important fields. The valve train clearance of an ICE often exceeds the normal value due to wear or faulty adjustment. This work aims at diagnosing valve clearance faults based on vibration signals measured on the engine cylinder heads. The non-stationarity of the ICE operating condition makes it difficult to obtain a nominal baseline, which is always an awkward problem for fault diagnosis. This paper overcomes the problem by inspecting the timing of valve closing impacts, whose reference baseline can be obtained from design parameters rather than extracted under healthy conditions. To accurately detect the timing of valve closing impacts from vibration signals, we develop a new method to detect and extract the commencement of the impacts. The results of experiments conducted on a twelve-cylinder ICE test rig show that the approach is capable of extracting the commencement of valve closing impacts accurately, and that this single feature gives superior monitoring of valve clearance. With the help of this technique, a valve clearance fault becomes detectable even without comparison to the baseline, and the changing trend of the clearance can be tracked. PMID:29244722
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
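The flavor of conclusion (1) can be reproduced in a few lines with a toy linear reservoir dS/dt = P - kS, which has an exact solution against which a fixed-step explicit Euler scheme can be checked. The parameter values are arbitrary:

```python
import numpy as np

# Linear reservoir dS/dt = P - k*S with constant forcing: the exact
# solution is available, so Euler truncation error can be isolated.
k, P, S0, T = 0.9, 2.0, 10.0, 10.0

def exact(t):
    return P / k + (S0 - P / k) * np.exp(-k * t)

def euler(dt):
    """Fixed-step explicit Euler integration to time T."""
    S = S0
    for _ in range(int(T / dt)):
        S += dt * (P - k * S)
    return S

err_coarse = abs(euler(2.0) - exact(T))   # coarse step: oscillates; storage even goes negative
err_fine = abs(euler(0.01) - exact(T))    # fine step: small truncation error
# The coarse-step error depends on k, so it contaminates any sensitivity
# analysis or calibration that varies k -- the artifact the paper documents.
```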
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background: The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: The life expectancy of the Helsinki metropolitan area population and the change in life expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for the input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) the plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) the exposure-response coefficients for different mortality outcomes and (iv) the exposure estimates for different age groups. The monetary value of the years of life lost was predicted in order to compare the relative importance of the monetary-valuation uncertainties with that of the health-effect uncertainties. Results: The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental, and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. 
Conclusion: When estimating life expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results. PMID:17714598
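The rank-order correlation screen described in Methods can be sketched as follows. The toy output model, distributions, and numbers are invented for illustration, not the study's life-table model:

```python
import numpy as np
from scipy.stats import spearmanr

# Monte Carlo sensitivity screen: sample uncertain inputs, compute the
# output, then rank-correlate each input with the output. By construction
# the exposure-response coefficient dominates and lag is inert.
rng = np.random.default_rng(3)
n = 5000
beta = rng.normal(0.010, 0.003, n)      # hypothetical exposure-response coefficient
exposure = rng.normal(9.0, 1.0, n)      # hypothetical PM2.5 exposure level
lag = rng.uniform(0.0, 5.0, n)          # lag: no effect in this toy output
years_lost = beta * exposure * 1e4      # toy health-impact output

rho_beta, _ = spearmanr(beta, years_lost)   # dominant input -> strong rank correlation
rho_lag, _ = spearmanr(lag, years_lost)     # inert input -> correlation near zero
```

Ranking inputs by |rho| is what identifies the coefficient, discount rate, and plausibility as the drivers in the paper's analysis.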
WE-G-BRA-05: IROC Houston On-Site Audits and Parameters That Affect Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: To highlight the IROC Houston on-site dosimetry audit program and to investigate the impact of clinical conditions on the frequency of errors/recommendations noted by IROC Houston. Methods: The results of IROC Houston on-site audits from 2000 to present were abstracted and compared to clinical parameters; this included 409 institutions and 1020 linacs. In particular, we investigated the frequency of recommendations versus year and the impact of repeat visits on the number of recommendations. We also investigated the impact on the number of recommendations of several clinical parameters: the number and age of the linacs, the linac/TPS combination, and the scope of the QA program. Results: The number of recommendations per institution (3.1 on average) has declined between 2000 and the present, although the number of recommendations per machine (0.89) has not changed. Previous IROC Houston site visits did not result in fewer recommendations on a repeat visit, but IROC Houston tests have changed substantially during the last 15 years as radiotherapy technology has changed. There was no impact on the number of recommendations from the number of machines at the institution or the age of a given machine. The fewest recommendations were observed for Varian-Eclipse combinations (0.71 recs/machine), while Elekta-Pinnacle combinations yielded the most (1.62 recs/machine). Finally, in the TG-142 era (post-2010), those institutions that had a QA recommendation (n=77) had significantly more other recommendations (1.83 per institution) than those that had no QA recommendation (n=12, 1.33 per institution). Conclusion: Establishing and maintaining a successful radiotherapy program is challenging, and areas of improvement can routinely be identified. Clinical conditions such as the linac-TPS combination and the establishment of a good QA program impact the frequency of errors/deficiencies identified by IROC Houston during their on-site review process.
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing the rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of the coefficient of variation, skewness, and kurtosis, while heterogeneity tests revealed the regions to be homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method's relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
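The regional method's parameter step, sample L-moments followed by Hosking's GEV estimators, can be sketched and checked against a synthetic sample (scipy's genextreme shape parameter c follows the same sign convention as Hosking's k; a negative shape gives the heavy-tailed Type II):

```python
import numpy as np
from math import gamma, log
from scipy.stats import genextreme

def sample_lmoments(x):
    """First two sample L-moments and the L-skewness t3,
    via probability-weighted moments of the sorted sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2
    return l1, l2, t3

def gev_lmom(l1, l2, t3):
    """Hosking's L-moment estimators of GEV location, scale, shape."""
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2                  # shape, Hosking's sign convention
    a = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
    xi = l1 - a * (1.0 - gamma(1.0 + k)) / k
    return xi, a, k

# Synthetic heavy-tailed (Type II) sample: shape -0.15, location 20, scale 5.
x = genextreme.rvs(c=-0.15, loc=20.0, scale=5.0, size=20000,
                   random_state=np.random.default_rng(4))
xi, a, k = gev_lmom(*sample_lmoments(x))
```

The shape magnitude 0.15 here mirrors the value the global method prescribed a priori; the station depths, regional weighting, and time-interval term of the actual study are not reproduced.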
Low-Cost Detection of Thin Film Stress during Fabrication
NASA Technical Reports Server (NTRS)
Nabors, Sammy A.
2015-01-01
NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.
Advanced protein crystal growth programmatic sensitivity study
NASA Technical Reports Server (NTRS)
1992-01-01
The purpose of this study is to define the costs of various APCG (Advanced Protein Crystal Growth) program options and to determine the parameters which, if changed, impact the costs and goals of the programs, and to what extent. This was accomplished by developing and evaluating several alternate programmatic scenarios for the microgravity Advanced Protein Crystal Growth program, transitioning from the present shuttle activity to the man-tended Space Station and then to the permanently manned Space Station. These scenarios include selected variations in such sensitivity parameters as development and operational costs, schedules, technology issues, and crystal growth methods. This final report provides information that will aid in planning the Advanced Protein Crystal Growth Program.
On the Stark broadening of Cr VI spectral lines in astrophysical plasma
NASA Astrophysics Data System (ADS)
Dimitrijević, M. S.; Simić, Z.; Sahal-Bréchot, S.
2017-02-01
Stark broadening parameters for Cr VI lines have been calculated using the semiclassical perturbation method for conditions of interest for stellar plasma. As an example of the obtained results, Stark broadening parameters for electron- and proton-impact broadening are presented for the Cr VI 4s 2S-4p 2P° λ = 1430 Å and Cr VI 4p 2P°-5s 2S λ = 611.8 Å multiplets. The obtained results are used to demonstrate the importance of Stark broadening of Cr VI in DO white dwarf atmospheres. They will also enter the STARK-B database, which is included in the Virtual Atomic and Molecular Data Center (VAMDC).
On the theory of multi-pulse vibro-impact mechanisms
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Metrikin, V. S.; Nikiforova, I. V.; Ipatov, A. A.
2017-11-01
This paper presents a mathematical model of a new multi-striker eccentric shock-vibration mechanism with a crank-sliding bar vibration exciter and an arbitrary number of pistons. Analytical solutions for the parameters of the model are obtained to determine the regions of existence of stable periodic motions. Under the assumption of an absolutely inelastic collision of the piston, we derive equations that single out a bifurcational unattainable boundary in the parameter space, which has a countable number of arbitrarily complex stable periodic motions in its neighbourhood. We present results of numerical simulations, which illustrate the existence of periodic and stochastic motions. The methods proposed in this paper for investigating the dynamical characteristics of the new crank-type conrod mechanisms allow practitioners to identify regions in the parameter space for tuning these mechanisms into their most efficient periodic mode of operation and to effectively analyze the main changes in their operational regimes when the system parameters are changed.
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing of traditional Chinese medicine, to reduce the impact of raw material quality variation on the drug. The ethanol precipitation process of Danhong injection was taken as an application case of the method. A Box-Behnken design of experiments was conducted. Mathematical models relating the attributes of the concentrate, the process parameters, and the quality of the supernatants produced were established. An optimization model was then built to calculate the best process parameters from the attributes of the concentrate. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process parameter optimization can control the quality of the supernatants effectively. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of the propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and the results are plotted to show the impacts on engine mass and overall vehicle mass.
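A heavily simplified sketch of the search step: an evolutionary solver minimizing a stand-in mass surrogate over chamber pressure, area ratio, and O/F ratio. The objective, units, and bounds below are invented placeholders, not the paper's thermo-chemical or cost models:

```python
import numpy as np
from scipy.optimize import differential_evolution

def vehicle_mass(params):
    """Invented smooth surrogate with a known optimum, standing in for
    the real propulsion/weight model."""
    pc, ar, of = params   # chamber pressure (bar), area ratio, O/F ratio
    return 100.0 + ((pc - 150.0) / 50.0) ** 2 \
                 + ((ar - 60.0) / 20.0) ** 2 \
                 + ((of - 6.0) / 1.5) ** 2

bounds = [(50.0, 300.0), (20.0, 120.0), (4.0, 8.0)]
res = differential_evolution(vehicle_mass, bounds, seed=5, tol=1e-8)
pc, ar, of = res.x   # recovered optimum of the surrogate
```

In the paper's workflow, the inner objective would itself call the thermo-chemical code and the weight/cost regressions rather than a closed-form surrogate.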
Impact factor: Universalism and reliability of assessment.
Grzybowski, Andrzej; Patryn, Rafał
In 1955, Eugene Garfield (1925-2017) published a paper in Science in which, for the first time, he advocated the necessity of introducing parameters to assess the quality of scientific journals. Underlying this necessity was the observation of a trend whereby the whole area of influence in academic publishing had come to be dominated by a narrow group of large interdisciplinary research journals. For this reason, together with Irving H. Sher, he created the impact factor (IF), also called the Garfield impact factor, journal citation rate, journal influence, and journal impact factor. The concept of the IF belongs to a research discipline called bibliometrics, which uses mathematical and statistical methods to analyze scientific publications. The Science Citation Index, a record of scientific publications and the citations therein established by Garfield in 1963, contributed directly to the increased importance of this method. Since the 1960s, the register of scientific publications has expanded, and their evaluation by the IF has become a fundamental and universal measure of a journal's value. Contrary to the authors' intentions in creating the index, the IF is often used to assess the quality of individual contributions, and thereby authors' achievements, academic careers, and academic institutions' funding possibilities. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kerschbaum, M.; Hopmann, C.
2016-06-01
The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach to an energy based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution, and load reversal effects. The physically based modelling approach enables experimental determination of all parameters at the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation, and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach gives a good representation of the force-displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.
Impacts of relative permeability on CO2 phase behavior, phase distribution, and trapping mechanisms
NASA Astrophysics Data System (ADS)
Moodie, N.; McPherson, B. J. O. L.; Pan, F.
2015-12-01
A critical aspect of geologic carbon storage, a carbon-emissions reduction method under extensive review and testing, is effective multiphase CO2 flow and transport simulation. Relative permeability is a flow parameter particularly critical for accurate forecasting of multiphase behavior of CO2 in the subsurface. The relative permeability relationship assumed and especially the irreducible saturation of the gas phase greatly impacts predicted CO2 trapping mechanisms and long-term plume migration behavior. A primary goal of this study was to evaluate the impact of relative permeability on efficacy of regional-scale CO2 sequestration models. To accomplish this we built a 2-D vertical cross-section of the San Rafael Swell area of East-central Utah. This model simulated injection of CO2 into a brine aquifer for 30 years. The well was then shut-in and the CO2 plume behavior monitored for another 970 years. We evaluated five different relative permeability relationships to quantify their relative impacts on forecasted flow results of the model, with all other parameters maintained uniform and constant. Results of this analysis suggest that CO2 plume movement and behavior are significantly dependent on the specific relative permeability formulation assigned, including the assumed irreducible saturation values of CO2 and brine. More specifically, different relative permeability relationships translate to significant differences in CO2 plume behavior and corresponding trapping mechanisms.
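The specific relative permeability relationships compared in the study are not named here; as an illustrative sketch, a Brooks-Corey-type model (all parameter values hypothetical) shows how the irreducible saturations of brine and CO2 enter the curves that the abstract identifies as decisive for trapping:

```python
def brooks_corey_krw(Sw, Swr=0.30, Sgr=0.05, nw=4.0):
    """Wetting-phase (brine) relative permeability, Brooks-Corey form.
    Swr: irreducible brine saturation; Sgr: irreducible CO2 saturation.
    Parameter values are illustrative, not site-calibrated."""
    Se = (Sw - Swr) / (1.0 - Swr - Sgr)       # effective saturation
    Se = min(max(Se, 0.0), 1.0)
    return Se ** nw

def brooks_corey_krg(Sw, Swr=0.30, Sgr=0.05, ng=2.0, krg_max=0.4):
    """Non-wetting (CO2) relative permeability; the irreducible gas
    saturation Sgr controls residual (capillary) trapping."""
    Se = (Sw - Swr) / (1.0 - Swr - Sgr)
    Se = min(max(Se, 0.0), 1.0)
    return krg_max * (1.0 - Se) ** ng
```

Changing `Sgr` shifts the saturation at which the CO2 phase stops flowing, which is the mechanism behind the trapping differences the study reports.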
Balancing energy development and conservation: A method utilizing species distribution models
Jarnevich, C.S.; Laubhan, M.K.
2011-01-01
Alternative energy development is increasing, potentially leading to negative impacts on wildlife populations already stressed by other factors. Resource managers require a scientifically based methodology to balance energy development and species conservation, so we investigated modeling habitat suitability using Maximum Entropy to develop maps that could be used with other information to help site energy developments. We selected one species of concern, the Lesser Prairie-Chicken (LPCH; Tympanuchus pallidicinctus) found on the southern Great Plains of North America, as our case study. LPCH populations have been declining and are potentially further impacted by energy development. We used LPCH lek locations in the state of Kansas along with several environmental and anthropogenic parameters to develop models that predict the probability of lek occurrence across the landscape. The models all performed well as indicated by the high test area under the curve (AUC) scores (all >0.9). The inclusion of anthropogenic parameters in models resulted in slightly better performance based on AUC values, indicating that anthropogenic features may impact LPCH lek habitat suitability. Given the positive model results, this methodology may provide additional guidance in designing future survey protocols, as well as siting of energy development in areas of marginal or unsuitable habitat for species of concern. This technique could help to standardize and quantify the impacts various developments have upon at-risk species. © 2011 Springer Science+Business Media, LLC (outside the USA).
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
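The random pulse model described above (Poisson arrivals, zero-mean normal amplitudes, equiprobable locations) can be sketched in a few lines; the rate, intensity, and location values below are illustrative, not the SSME values:

```python
import random

def simulate_pulses(rate=2.0, sigma=1.0, t_end=100.0, locations=(1, 2, 3), seed=0):
    """Random pulse train: Poisson arrivals (mean rate per unit time),
    zero-mean normal amplitudes (sigma is the pulse-intensity measure),
    and a location drawn with equal probability at each arrival."""
    rng = random.Random(seed)
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(rate)      # exponential inter-arrival times
        if t > t_end:
            break
        pulses.append((t, rng.gauss(0.0, sigma), rng.choice(locations)))
    return pulses

pulses = simulate_pulses()
print(len(pulses))  # on the order of rate * t_end = 200
```

A Monte Carlo response analysis would feed each such pulse train through a structural model and accumulate stress/displacement statistics.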
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed.
In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This situation is due to the fact that the actual value taken by FP5 does not have much influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi could be known, even if no information was available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remains unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
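The first order effect described above can be estimated with the pick-freeze (Saltelli) Monte Carlo scheme. The sketch below applies it to a toy additive model with a known analytic answer, not to the runoff model itself:

```python
import random

def sobol_first_order(model, n_inputs, i, n=20000, seed=1):
    """First-order Sobol index S_i by the pick-freeze (Saltelli) method:
    S_i = (E[y_A * y_Ci] - E[y_A] * E[y_B]) / Var(y),
    where sample C_i equals sample B with column i taken from sample A.
    Inputs are assumed independent uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    yC = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
    mA, mB = sum(yA) / n, sum(yB) / n
    var = sum(y * y for y in yA) / n - mA * mA
    cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mA * mB
    return cov / var

# Additive test model y = x1 + 2*x2: analytic S1 = 0.2, S2 = 0.8
model = lambda x: x[0] + 2.0 * x[1]
print(round(sobol_first_order(model, 2, 0), 2))  # ≈ 0.2 (analytic value)
```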
NASA Astrophysics Data System (ADS)
Maneta, M. P.; Howitt, R.; Kimball, J. S.
2013-12-01
Agricultural activity can exacerbate or buffer the impact of climate variability, especially droughts, on the hydrologic and socioeconomic conditions of rural areas. Potential negative regional impacts of droughts include impoverishment of agricultural regions, deterioration or overuse of water resources, risk of monoculture, and regional dependence on external food markets. Policies that encourage adequate management practices in the face of adverse climatic events are critical to preserve rural livelihoods and to ensure a sustainable future for agriculture. Diagnosing and managing drought effects on agricultural production, on the social and natural environment, and on limited water resources, is highly complex and interdisciplinary. The challenges that decision-makers face to mitigate the impact of water shortage are social, agronomic, economic and environmental in nature and therefore must be approached from an integrated multidisciplinary point of view. Existing observation technologies, in conjunction with models and assimilation methods open the opportunity for novel interdisciplinary analysis tools to support policy and decision making. We present an integrated modeling and observation framework driven by satellite remote sensing and other ancillary information from regional monitoring networks to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, management decisions and socioeconomic policy. The core of this framework is a hydroeconomic model of agricultural production that assimilates remote sensing inputs to quantify the amount of land, water, fertilizer and labor farmers allocate for each crop they choose to grow on a seasonal basis in response to changing climatic conditions, including drought. A regional hydroclimatologic model provides biophysical constraints to an economic model of agricultural production based on a class of models referred to as positive mathematical programming (PMP). 
A recursive Bayesian update method is used to adjust the model parameters by assimilating information on crop acreage, production, and crop evapotranspiration estimated from high-spatial resolution satellite remote sensing. We are developing new land parameter records adapted for agricultural application by merging relatively fine scale, calibrated spectral reflectance time series with similar spectral information from coarser scale and more temporally continuous global satellite data records. These new products will be used to generate field scale estimates of LAI and FPAR, which will be used with regional surface meteorology and biophysical data to estimate crop production including C4 crop types. This integrated framework provides an operational means to monitor and forecast what crops will be grown and how farmers will allocate land, water and other agricultural resources under expected adverse conditions, and the resulting consequences for other water users. It will also permit evaluation of impacts of water policy and changes in food prices on rural community livelihoods. The Bayesian update framework constitutes an efficient method for the identification of the production function parameters and provides valuable information on the associated uncertainty of the forecasts.
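The recursive Bayesian update at the core of the assimilation step can be illustrated with the simplest conjugate case, a scalar Gaussian parameter updated by noisy observations (all numbers illustrative, not the hydroeconomic model's):

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """One recursive Bayesian (Gaussian-conjugate) update: the posterior
    precision is the sum of prior and observation precisions, and the
    posterior mean is the precision-weighted average."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Assimilate three seasonal "remote-sensing" observations of a
# hypothetical production-function parameter.
mean, var = 1.0, 1.0                 # vague prior
for y in (1.8, 2.1, 1.9):
    mean, var = bayes_update(mean, var, y, obs_var=0.25)
print(round(mean, 2), round(var, 3))  # 1.86 0.077
```

The shrinking posterior variance is the "associated uncertainty of the forecasts" the abstract mentions.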
NASA Astrophysics Data System (ADS)
Borfecchia, Flavio; Micheli, Carla; Belmonte, Alessandro; De Cecco, Luigi; Sannino, Gianmaria; Bracco, Giovanni; Mattiazzo, Giuliana; Vittoria Struglia, Maria
2016-04-01
Marine renewable energy extraction plays a key role both in the energy security of small islands and in the mitigation of climate change, but at the same time poses the important question of monitoring the effects of the interaction of such devices with the marine environment. In this work we present a new methodology, integrating satellite remote sensing techniques with in situ observations and biophysical parameter analysis, for the monitoring and mapping of Posidonia oceanica (PO) meadows in shallow coastal waters. This methodology has been applied to the coastal area offshore Pantelleria Island (Southern Mediterranean), where the first Italian Inertial Sea Wave Energy Converter (ISWEC) prototype has recently been installed. The prototype, developed by the Polytechnic of Turin, consists of a platform 8 meters wide, 15 meters long and 4.5 meters high, moored at about 800 meters from the shore at 31 m depth. It is characterized by high conversion efficiency, resulting from its adaptability to different wave conditions, and a limited environmental impact due to its innovative mooring method, which requires no fixed anchors to the seabed. The island of Pantelleria is characterized by high transparency of coastal waters and PO meadow ecosystems with still significant levels of biodiversity and specific adaptation to the accentuated hydrodynamics of these shores. Although ISWEC is a low-impact moored inertial system able to ensure a reliable connection to the electric grid with minimal impact on seagrass growing on the seabed, the prototype installation and operation involve an interaction with local PO and seagrass meadows and a possible decrease in water transparency. In this view, monitoring of the local PO ecosystem is mandatory in order to allow the detection of potential stress and damage due to ISWEC related activities and/or other factors.
However, monitoring and collection of accurate and repetitive information on the necessary parameters over large areas by means of traditional methods (e.g. diving and plant counting) can be difficult and expensive. To overcome these limits we present an integrated methodology for effective monitoring and mapping of PO meadows using satellite/airborne EO (Earth Observation) techniques calibrated by means of sea truth measurements and laboratory genetic analyses. During the last summer a sea truth campaign over the areas of interest was performed, and point measurements of several biophysical parameters (biomass, shoot density, cover) related to PO phenology were acquired by means of an original sampling method at stations distributed along a bathymetry gradient starting from the ISWEC location, at 31 m depth. Synchronous satellite multispectral data from Landsat 8 OLI and Sentinel 2 MSI (recently made available within the Copernicus EU program), covering the entire coastal area of interest, were acquired and preprocessed with the objective of testing their improved capabilities for mapping PO distribution and related biophysical parameters on the basis of previously developed operative methods and near-synchronous sea truth data. The processed point sample measurements were then exploited for multispectral data calibration, with the support of statistical and bio-optical modelling approaches, to obtain improved thematic maps of the local PO distribution.
NASA Astrophysics Data System (ADS)
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is adopted to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The spectra of the radiation noise of different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and offer a reference for assessing the impact of different components on the radiation noise of the bearing under different rotation speeds.
ERIC Educational Resources Information Center
Garber, Mel; Adams, Katherine R.
2017-01-01
Collective impact is a model for achieving tangible change and improvement in communities through a series of well-defined parameters of collaboration. This article provides a 10-year reflection on the University of Georgia Archway Partnership, a university-community collaboration, in the context of the parameters of collective impact. Emphasis is…
The Penn State Safety Floor: Part I--Design parameters associated with walking deflections.
Casalena, J A; Ovaert, T C; Cavanagh, P R; Streit, D A
1998-08-01
A new flooring system has been developed to reduce peak impact forces to the hips when humans fall. The new safety floor is designed to remain relatively rigid under normal walking conditions, but to deform elastically when impacted during a fall. Design objectives included minimizing peak force experienced by the femur during a fall-induced impact, while maintaining a maximum of 2 mm of floor deflection during walking. Finite Element Models (FEMs) were developed to capture the complex dynamics of impact response between two deformable bodies. Validation of the finite element models included analytical calculations of theoretical buckling column response, experimental quasi-static loading of full-scale flooring prototypes, and flooring response during walking trials. Finite Element Method results compared well with theoretical and experimental data. Both finite element and experimental data suggest that the proposed safety floor can effectively meet the design goal of 2 mm maximum deflection during walking, while effectively reducing impact forces during a fall.
Computational process to study wave propagation in a non-linear medium by quasi-linearization
NASA Astrophysics Data System (ADS)
Sharath Babu, K.; Venkata Brammam, J.; Baby Rani, CH
2018-03-01
When two objects with distinct velocities come into contact, an impact can occur. In an impact study, i.e., the study of the displacement of the objects after the impact, the impact force is a function of time t and behaves similarly to a compression force. The impact duration is very short, so impulses and consequently high stresses are generated. In this work we examine the wave propagation inside an object after a collision and characterize the object's non-linear behaviour in the one-dimensional case. Wave transmission is studied by means of the material's acoustic parameter value. The objective of this paper is to present a computational study of propagating pulses and harmonic waves in nonlinear media using quasi-linearization followed by a central difference scheme. The study focuses on longitudinal, one-dimensional wave propagation. In the finite difference scheme, the non-linear system is reduced to a linear system by applying the quasi-linearization method. The computed results show good agreement for the selected non-linear wave propagation cases.
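Quasi-linearization as described (Newton linearization of the nonlinear term, followed by a linear central-difference solve at each iteration) can be sketched on a standard test problem with a known exact solution. This is an illustration of the method, not the paper's wave solver:

```python
def solve_quasilinear(n=100, iters=8):
    """Quasi-linearization for the nonlinear BVP u'' = 1.5*u^2 on [0,1],
    u(0)=4, u(1)=1 (exact solution u = 4/(1+x)^2).  Each iteration
    linearizes the right-hand side about the previous iterate u_k,
        u'' = 1.5*u_k^2 + 3*u_k*(u - u_k),
    and solves the resulting linear tridiagonal system obtained from
    central differences with the Thomas algorithm."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    u = [4.0 - 3.0 * xi for xi in x]               # linear initial guess
    for _ in range(iters):
        a = [1.0] * (n - 1)                        # sub-diagonal
        b = [-2.0 - 3.0 * h * h * u[i] for i in range(1, n)]  # diagonal
        c = [1.0] * (n - 1)                        # super-diagonal
        d = [-1.5 * h * h * u[i] ** 2 for i in range(1, n)]   # RHS
        d[0] -= u[0]                               # boundary u(0) = 4
        d[-1] -= u[n]                              # boundary u(1) = 1
        for i in range(1, n - 1):                  # forward elimination
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        unew = u[:]
        unew[n - 1] = d[-1] / b[-1]                # back substitution
        for i in range(n - 3, -1, -1):
            unew[i + 1] = (d[i] - c[i] * unew[i + 2]) / b[i]
        u = unew
    return x, u
```

Each sweep solves only a linear system, which is the point of the technique; convergence is Newton-like (a handful of iterations).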
NASA Astrophysics Data System (ADS)
Dioguardi, Fabio; Mele, Daniela
2018-03-01
This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
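The exceedance-probability query described above can be illustrated by the empirical-quantile operation it implies; the sample values below are hypothetical, not PYFLOW_2.0 output:

```python
def value_at_exceedance(samples, p_exceed):
    """Return the impact-parameter value exceeded with probability
    p_exceed, estimated as an empirical quantile of Monte Carlo samples."""
    s = sorted(samples)
    k = int((1.0 - p_exceed) * (len(s) - 1))
    return s[k]

# Hypothetical dynamic-pressure samples in kPa
q = [1.0, 1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.1, 3.8, 5.0]
print(value_at_exceedance(q, 0.10))  # 3.8: exceeded by ~10% of samples
```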
NASA Astrophysics Data System (ADS)
Aydemir, Birsen; Kiziler, Ali Riza; Onaran, Ilhan; Alici, Bülent; Özkara, Hamdi; Akyolcu, Mehmet Can
2007-04-01
To investigate the impact of testosterone, zinc, calcium and magnesium concentrations in serum and seminal plasma on sperm parameters. There were significant decreases in sperm parameters and in serum and seminal plasma zinc levels in subfertile males. This indicates that zinc has an essential role in male infertility; determination of the zinc level during infertility investigation is recommended.
A holistic approach towards defined product attributes by Maillard-type food processing.
Davidek, Tomas; Illmann, Silke; Rytz, Andreas; Blank, Imre
2013-07-01
A fractional factorial experimental design was used to quantify the impact of process and recipe parameters on selected product attributes of extruded products (colour, viscosity, acrylamide, and the flavour marker 4-hydroxy-2,5-dimethyl-3(2H)-furanone, HDMF). The study has shown that recipe parameters (lysine, phosphate) can be used to modulate the HDMF level without changing the specific mechanical energy (SME) and consequently the texture of the product, while processing parameters (temperature, moisture) impact both HDMF and SME in parallel. Similarly, several parameters, including phosphate level, temperature and moisture, simultaneously impact both HDMF and acrylamide formation, while pH and addition of lysine showed different trends. Therefore, the latter two options can be used to mitigate acrylamide without a negative impact on flavour. Such a holistic approach has been shown as a powerful tool to optimize various product attributes upon food processing.
Nitrogen-rich salts of 5,5‧-bistetrazole-1,1‧-diolate: Syntheses, structures and properties
NASA Astrophysics Data System (ADS)
Yang, Ting; Zhang, Jian-Guo; Zhang, Zhi-Bin; Gozin, Michael
2018-03-01
A series of new nitrogen-rich energetic salts containing the 1H,1′H-[5,5′-bitetrazole]-1,1′-diol (BTO) anion and ethane-1,2-diaminium (1), 1-amino-1H-1,2,3-triazol-3-ium (2), 4-amino-4H-1,2,4-triazol-1-ium (3) and 4,5-diamino-4H-1,2,4-triazol-1-ium (4) cations were synthesized by direct salt formation or by a metathesis strategy. The structures of energetic salts 1-4 were comprehensively characterized by elemental analysis, mass spectrometry, IR and NMR spectroscopies and X-ray crystallography. DSC and TGA methods were used to study the thermal properties of these salts. Additionally, the non-isothermal kinetic parameters and thermodynamic parameters were calculated utilizing the Kissinger and Ozawa-Doyle methods. The enthalpies of formation for all target compounds in this study were calculated, and their sensitivity to mechanical impact and friction was tested according to BAM guidelines. We found that these new energetic salts exhibit good thermal stability, with typical decomposition temperatures above 230 °C, except for salt 2. All of the salts have highly positive enthalpies of formation (311.1-473.6 kJ mol-1) and are insensitive to impact and friction stimuli (>40 J, 120 N). With a high nitrogen content, high enthalpy of formation, good thermostability and very low sensitivity to impact, some of these new salts may have potential for application in the field of environmentally friendly insensitive energetic materials.
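The Kissinger method mentioned above extracts the apparent activation energy from the shift of the DSC peak temperature with heating rate, via the slope of ln(β/Tp²) against 1/Tp. A sketch on synthetic data (Ea and A values illustrative, not from the paper):

```python
import math

def kissinger_ea(betas, peak_temps):
    """Kissinger method: fit ln(beta/Tp^2) against 1/Tp by least squares;
    the slope is -Ea/R, giving the apparent activation energy Ea (J/mol)."""
    R = 8.314
    xs = [1.0 / T for T in peak_temps]
    ys = [math.log(b / T ** 2) for b, T in zip(betas, peak_temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -R * slope

# Synthetic peak data consistent with Ea = 150 kJ/mol, A = 1e15 1/s
# (the Kissinger relation gives beta = A*R*Tp^2/Ea * exp(-Ea/(R*Tp)))
Ea, A, R = 150e3, 1e15, 8.314
temps = [500.0, 510.0, 520.0, 530.0]
betas = [A * R * T ** 2 / Ea * math.exp(-Ea / (R * T)) for T in temps]
print(round(kissinger_ea(betas, temps)))  # 150000
```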
Wei, Fanan; Yang, Haitao; Liu, Lianqing; Li, Guangyong
2017-03-01
Dynamic mechanical behaviour of living cells has been described by viscoelasticity. However, quantification of the viscoelastic parameters of living cells is far from mature. In this paper, combining inverse finite element (FE) simulation with atomic force microscope characterization, we develop a new method to evaluate and acquire trustworthy viscoelastic indices of living cells. First, the influence of the experimental parameters on the stress relaxation process is assessed using FE simulation. As suggested by the simulations, cell height has negligible impact on the shape of the force-time curve, i.e. the characteristic relaxation time, and the effect originating from the substrate can be totally eliminated when a stiff substrate (Young's modulus larger than 3 GPa) is used. Then, to develop an effective optimization strategy for the inverse FE simulation, a parameter sensitivity evaluation is performed for Young's modulus, Poisson's ratio, and the characteristic relaxation time. With the experimental data obtained through a typical stress relaxation measurement, viscoelastic parameters are extracted through inverse FE simulation by comparing simulation results with experimental measurements. Finally, the reliability of the acquired mechanical parameters is verified with different loading experiments performed on the same cell.
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
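The recalibration idea, choosing coefficients that minimize the mass error over matched measured/putative pairs, reduces in the single-coefficient case to a closed-form least squares fit. A toy sketch (masses illustrative; the patented method fits multiple physically motivated coefficients, not just one):

```python
def recalibrate(measured, putative):
    """Single-coefficient mass recalibration: find c minimizing
    sum((c*m_meas - m_put)^2) over matched peak pairs (closed form),
    then return c and the corrected masses.  This is a one-parameter
    sketch of the least-squares idea, not the full method."""
    c = sum(m * p for m, p in zip(measured, putative)) / \
        sum(m * m for m in measured)
    return c, [c * m for m in measured]

measured = [500.05, 1000.10, 1500.15]   # 100 ppm systematic error
putative = [500.00, 1000.00, 1500.00]
c, corrected = recalibrate(measured, putative)
```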
Application of ICME Methods for the Development of Rapid Manufacturing Technologies
NASA Astrophysics Data System (ADS)
Maiwald-Immer, T.; Göhler, T.; Fischersworring-Bunk, A.; Körner, C.; Osmanlic, F.; Bauereiß, A.
Rapid manufacturing technologies have lately been gaining interest as alternative manufacturing methods. Due to the large parameter sets applicable in these manufacturing methods and their impact on achievable material properties and quality, support of the manufacturing process development by the use of simulation is highly attractive. This is especially true for aerospace applications, with their high quality demands and controlled scatter in the resulting material properties. The simulation techniques applicable to these manufacturing methods are manifold. The paper will focus on the melt pool simulation for an SLM (selective laser melting) process, which was originally developed for EBM (electron beam melting). It will be discussed in the overall context of a multi-scale simulation within a virtual process chain.
New modeling method for the dielectric relaxation of a DRAM cell capacitor
NASA Astrophysics Data System (ADS)
Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon
2018-02-01
This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed by an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify this new method. The simulation demonstrates that the storage node voltage drop resulting from DR and the reduction in the sensing voltage margin, which has a critical impact on DRAM read operation, can be accurately estimated using this new method.
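The Havriliak-Negami model referenced above has the standard closed form below; the parameter values are illustrative, not the paper's extracted DRAM values:

```python
def hn_permittivity(omega, eps_inf=3.0, delta_eps=20.0, tau=1e-6,
                    alpha=0.8, beta=0.6):
    """Havriliak-Negami complex permittivity:
        eps(w) = eps_inf + delta_eps / (1 + (j*w*tau)**alpha)**beta
    alpha, beta in (0, 1] broaden and skew the Debye relaxation
    (alpha = beta = 1 recovers Debye).  All values are illustrative."""
    return eps_inf + delta_eps / (1 + (1j * omega * tau) ** alpha) ** beta

print(hn_permittivity(0.0).real)  # 23.0: static limit eps_inf + delta_eps
```

Fitting these parameters to measured or simulated relaxation data is what determines the element values of the equivalent circuit.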
NASA Astrophysics Data System (ADS)
Li, Dachao; Xu, Qingmei; Liu, Yu; Wang, Ridong; Xu, Kexin; Yu, Haixia
2017-11-01
A high-accuracy microdialysis method that can provide reference values of the glucose concentration in interstitial fluid for the accurate evaluation of non-invasive and minimally invasive continuous glucose monitoring is reported in this study. The parameters of the microdialysis process were first optimized by testing and analyzing three main factors that impact microdialysis recovery: the perfusion rate, temperature, and glucose concentration in the area surrounding the microdialysis probe. The precision of the optimized microdialysis method was then determined in a simulation system that was designed and established in this study to simulate variations in continuous glucose concentration in the human body. Finally, the microdialysis method was tested for in vivo interstitial glucose concentration measurement.
The use of least squares methods in functional optimization of energy use prediction models
NASA Astrophysics Data System (ADS)
Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.
2012-06-01
The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE simulation results. It was found that the optimization produces coefficients for the proposed correlation that reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. On the other hand, the method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
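The multivariate least squares step can be sketched with the normal equations on a toy version of the correlation (two envelope parameters, synthetic training data; the actual correlation form and coefficients are not reproduced here):

```python
def lstsq_coeffs(X, y):
    """Least-squares coefficients via the normal equations
    (X^T X) c = X^T y, solved by Gaussian elimination with partial
    pivoting.  Each row of X is [1, param1, param2, ...] for one
    simulated building case; y holds the simulated energy use."""
    m = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(m)]
    for k in range(m):                       # forward elimination
        p = max(range(k, m), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, m):
            f = A[r][k] / A[k][k]
            for j in range(k, m):
                A[r][j] -= f * A[k][j]
            b[r] -= f * b[k]
    c = [0.0] * m
    for k in range(m - 1, -1, -1):           # back substitution
        c[k] = (b[k] - sum(A[k][j] * c[j] for j in range(k + 1, m))) / A[k][k]
    return c

# Hypothetical cases: [1, wall U-value, glazing fraction] -> kWh/m^2
X = [[1, 0.5, 0.2], [1, 0.8, 0.2], [1, 0.5, 0.5], [1, 1.2, 0.4], [1, 0.9, 0.6]]
y = [10 + 30 * u + 50 * g for _, u, g in X]  # exact synthetic relation
c = lstsq_coeffs(X, y)
print([round(v, 6) for v in c])  # ≈ [10.0, 30.0, 50.0]
```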
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Diana Holford; Locke II, Randall A.; Keating, Elizabeth
The National Risk Assessment Partnership (NRAP) has developed a suite of tools to assess and manage risk at CO2 sequestration sites (1). The NRAP tool suite includes the Aquifer Impact Model (AIM), based on reduced order models developed using site-specific data from two aquifers (alluvium and carbonate). The models accept aquifer parameters as a range of variable inputs so they may have broader applicability. Guidelines have been developed for determining the aquifer types for which the ROMs should be applicable. This paper considers the applicability of the aquifer models in AIM to predicting the impact of CO2 or brine leakage were it to occur at the Illinois Basin Decatur Project (IBDP). Based on the results of the sensitivity analysis, the hydraulic parameters and leakage source term magnitude are more sensitive than clay fraction or cation exchange capacity. Sand permeability was the only hydraulic parameter measured at the IBDP site. More information on the other hydraulic parameters, such as sand fraction and sand/clay correlation lengths, could reduce uncertainty in risk estimates. Some non-adjustable parameters, such as the initial pH and TDS and the pH no-impact threshold, are significantly different for the ROM than for the observations at the IBDP site. The reduced order model could be made more useful to a wider range of sites if the initial conditions and no-impact threshold values were adjustable parameters.
Quantifying NMR relaxation correlation and exchange in articular cartilage with time domain analysis
NASA Astrophysics Data System (ADS)
Mailhiot, Sarah E.; Zong, Fangrong; Maneval, James E.; June, Ronald K.; Galvosas, Petrik; Seymour, Joseph D.
2018-02-01
Measured nuclear magnetic resonance (NMR) transverse relaxation data in articular cartilage have been shown to be multi-exponential and correlated with the health of the tissue. The observed relaxation rates are dependent on experimental parameters such as solvent, data acquisition methods, data analysis methods, and alignment to the magnetic field. In this study, we show that diffusive exchange occurs in porcine articular cartilage and impacts the observed relaxation rates in T1-T2 correlation experiments. By using time domain analysis of T2-T2 exchange spectroscopy, the diffusive exchange time can be quantified by measurements that use a single mixing time. Measured characteristic times for exchange are commensurate with T1 in this material and so impact the observed T1 behavior. The approach used here allows for reliable quantification of NMR relaxation behavior in cartilage in the presence of diffusive fluid exchange between two environments.
Computer simulations and experimental study on crash box of automobile in low speed collision
NASA Astrophysics Data System (ADS)
Liu, Yanjie; Ding, Lin; Yan, Shengyuan; Yang, Yongsheng
2008-11-01
To address the behavior of energy-absorbing components in low-speed automobile collisions, a frontal crash test of a crash box at low speed was taken as an example, and the impact process of the crash box was simulated using HyperMesh and LS-DYNA. The influence of each modeling parameter was analyzed through analytical solutions and comparison with tests, which ensured the accuracy of the model. The combination of experiment and simulation identified the weakest part of the crash box structure with respect to crashworthiness, and methods for improving crash box crashworthiness were discussed. The analysis results obtained from numerical simulation of the impact process were then used to optimize the design of the crash box, helping to improve vehicle structure and minimize losses from collision accidents. The work also provides a useful method for further research on automobile collisions.
A Method of Effective Quarry Water Purifying Using Artificial Filtering Arrays
NASA Astrophysics Data System (ADS)
Tyulenev, M.; Garina, E.; Khoreshok, A.; Litvin, O.; Litvin, Y.; Maliukhina, E.
2017-01-01
The development of open pit mining in the large coal basins of Russia and other countries increases their negative impact on the environment. Along with the damage of land and air pollution by dust and combustion gases of blasting, coal pits have a significant negative impact on water resources. Polluted quarry water worsens the ecological situation over a much larger area than is covered by air pollution and land damage. This significantly worsens the conditions of people living in cities and towns located near the coal pits, and complicates the subsequent restoration of the environment, irreversibly destroying the natural environment. Therefore, the research of quarry wastewater purifying is becoming an important matter for scholars of technical colleges and universities in regions with developing open-pit mining. This paper describes the method of determining the basic parameters of the artificial filtering arrays formed on coal pits of Kuzbass (Western Siberia, Russia), and gives recommendations on its application.
The ability of flexible car bonnets to mitigate the consequences of frontal impact with pedestrians
NASA Astrophysics Data System (ADS)
Stanisławek, Sebastian; Niezgoda, Tadeusz
2018-01-01
The paper presents the results of numerical research on a vehicle representing a Toyota Yaris passenger sedan hitting a pedestrian. A flexible car body is suggested as an interesting way to increase safety. The authors present a simple low-cost bonnet buffer concept that may mitigate the effects of frontal impact. Computer simulation was the method chosen to solve the problem efficiently. The Finite Element Method (FEM) implemented in the LS-DYNA commercial code was used. The testing procedure was based on the Euro NCAP protocol. A flexible bonnet buffer shows its usefulness in preventing casualties in typical accidents. In the best scenario, the HIC15 parameter is only 380 when such a buffer is installed. In comparison, an accident involving a car without any protection produces an HIC15 of 970, which is very dangerous for pedestrians.
NASA Astrophysics Data System (ADS)
Mughal, Maqsood Ali
Clean and environmentally friendly technologies are focusing industry attention on obtaining long-term solutions to many large-scale problems such as energy demand, pollution, and environmental safety. Thin film solar cell (TFSC) technology has emerged as an impressive photovoltaic (PV) technology to create clean energy from fast production lines with capabilities to reduce material usage and the energy required to manufacture large-area panels, thereby lowering costs. Today, cost ($/kWh) and toxicity are the primary challenges for all PV technologies. In that respect, electrodeposited indium sulfide (In2S3) films are proposed as an alternative to hazardous cadmium sulfide (CdS) films, commonly used as buffer layers in solar cells. This dissertation focuses upon the optimization of electrodeposition parameters to synthesize In2S3 films of PV quality. The work described herein has the potential to reduce the hazardous impact of cadmium (Cd) upon the environment, while reducing the manufacturing cost of TFSCs through efficient utilization of materials. Optimization was performed through use of a statistical approach to study the effect of varying electrodeposition parameters upon the properties of the films. A robust design method referred to as the "Taguchi Method" helped in engineering the properties of the films, and improved the PV characteristics including optical bandgap, absorption coefficient, stoichiometry, morphology, crystalline structure, thickness, etc. Current density (also a function of deposition voltage) had the most significant impact upon the stoichiometry and morphology of In2S3 films, whereas deposition temperature and composition of the solution had the least significant impact. The dissertation discusses the film growth mechanism and provides understanding of the regions of low quality (for example, cracks) in films.
In2S3 films were systematically and quantitatively investigated by varying electrodeposition parameters including bath composition, current density, deposition time and temperature, stir rate, and electrode potential. These parameters individually and collectively exhibited significant correlation with the properties of the films. Digital imaging analysis (using fracture and buckling analysis software) of scanning electron microscope (SEM) images helped to quantify the cracks and study the defects in films. In addition, the effects of different annealing treatments (200 °C, 300 °C, and 400 °C in air) and coated-glass substrates (Mo, ITO, FTO) upon the properties of the In2S3 films were analyzed.
2008-01-01
INTRODUCTION: The guiding principles for respirator design are to protect the wearer from airborne contaminants and to reduce adverse human psychophysiological impacts, including the impacts of reduced field of view (FOV) on specific daily activities such as rifle firing. Performance decrements associated with reduced FOV are anticipated; data on performance with binoculars and rifle firing provide limited information on the performance of visual tasks.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level data are one valuable source of information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
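The core EnKF update used for parameter estimation can be sketched in one dimension. Everything below is an illustrative toy: the sigmoid "forward model", the parameter value, and the deviated prior are invented stand-ins for the unsaturated-flow solver and hydraulic parameters, and the loop is a simple iterative variant that repeatedly assimilates perturbed copies of one observation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model standing in for the flow solver: predicted soil water
# content as a made-up monotone function of a single hydraulic parameter.
def forward(theta):
    return 0.45 / (1.0 + np.exp(-theta))

theta_true = 0.8
obs_err = 0.005
y_obs = forward(theta_true) + rng.normal(0.0, obs_err)

# Prior ensemble with a deliberately deviated initial guess
ens = rng.normal(-0.5, 1.0, size=400)

for _ in range(10):
    pred = forward(ens)
    cov_tp = np.cov(ens, pred)[0, 1]        # parameter-prediction covariance
    gain = cov_tp / (np.var(pred) + obs_err**2)   # scalar Kalman gain
    perturbed = y_obs + rng.normal(0.0, obs_err, size=ens.size)
    ens = ens + gain * (perturbed - pred)   # EnKF analysis step

print(ens.mean(), ens.std())
```

The ensemble mean is pulled from the deviated prior toward the true parameter, and the ensemble spread collapses as the observation is assimilated, mirroring the convergence behavior the abstract describes.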
[Determination of Total Iron and Fe2+ in Basalt].
Liu, Jian-xun; Chen, Mei-rong; Jian, Zheng-guo; Wu, Gang; Wu, Zhi-shen
2015-08-01
Basalt is the raw material of basalt fiber. The content of FeO and Fe2O3 has a great impact on the properties of basalt fibers. ICP-OES and the dichromate method were used to determine total Fe and Fe(2+) in basalt. Suitable instrument parameters and analysis lines of Fe were chosen for ICP-OES. The relative standard deviation (RSD) of ICP-OES is 2.2%, and the recovery is in the range of 98%~101%. The method is simple, rapid, and highly accurate for the determination of total Fe and Fe(2+) in basalt. The RSDs of ICP-OES and the dichromate method are 0.42% and 1.4%, respectively.
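The figures of merit quoted above (RSD and recovery) are computed mechanically from replicate measurements. A minimal sketch, with invented replicate values for one sample:

```python
import numpy as np

# Illustrative replicate measurements of total Fe (wt%) for one basalt
# sample; the numbers are made up and only show how RSD and spike
# recovery are calculated, not the study's data.
replicates = np.array([9.82, 9.91, 10.05, 9.77, 9.95, 10.10])

mean = replicates.mean()
rsd = 100.0 * replicates.std(ddof=1) / mean   # relative standard deviation, %

# Spike recovery: (found - native) / added, in percent
native, added, found = mean, 5.00, mean + 4.96
recovery = 100.0 * (found - native) / added

print(round(rsd, 2), round(recovery, 1))
```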
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Hongyi; Li, Yang; Zeng, Danielle
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: sheet molding compound (SMC) short fiber composites and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction, and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters, and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
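The MCMC estimation step can be illustrated with a deliberately simplified stand-in for the transmission model: here, waiting times between successive infections are taken as Exponential(beta), and a random-walk Metropolis-Hastings sampler with a flat positive prior recovers beta. The model, parameter value, and data are invented for the sketch and are far simpler than the spatial farm-to-farm models fitted in the paper.

```python
import math
import random

random.seed(42)

# Toy data: inter-infection waiting times ~ Exponential(beta_true)
beta_true = 1.5
data = [random.expovariate(beta_true) for _ in range(300)]
n, s = len(data), sum(data)

def log_lik(beta):
    if beta <= 0.0:
        return -math.inf
    return n * math.log(beta) - beta * s   # Exponential log-likelihood

# Random-walk Metropolis-Hastings
beta, ll, samples = 0.5, log_lik(0.5), []
for i in range(5000):
    proposal = beta + random.gauss(0.0, 0.1)
    ll_prop = log_lik(proposal)
    if math.log(random.random()) < ll_prop - ll:   # acceptance test
        beta, ll = proposal, ll_prop
    if i >= 1000:            # discard burn-in
        samples.append(beta)

post_mean = sum(samples) / len(samples)
print(post_mean)
```

The posterior mean lands near the generating value; in the actual application, the same accept/reject machinery runs over a much higher-dimensional likelihood of farm infection times.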
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses a maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
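The Poisson maximum-likelihood idea behind the linear-quadratic fit can be sketched independently of the RAD-ADAPT/ADAPT software: colony counts are modeled as Poisson with mean N·PE·S(D), where S(D) = exp(-(αD + βD²)). The data, plating efficiencies, and parameter values below are synthetic, and a coarse grid search stands in for the package's optimizer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic clonogenic assay: colonies ~ Poisson(N * PE * S(D)),
# S(D) = exp(-(alpha*D + beta*D^2)). All values are invented.
alpha_true, beta_true, pe = 0.3, 0.03, 0.6
doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
plated = np.array([100, 200, 400, 2000, 10000, 40000])
mu = plated * pe * np.exp(-(alpha_true * doses + beta_true * doses**2))
counts = rng.poisson(mu)

def neg_log_lik(alpha, beta):
    m = plated * pe * np.exp(-(alpha * doses + beta * doses**2))
    return np.sum(m - counts * np.log(m))   # Poisson NLL up to a constant

# Grid search in place of a full optimizer
grid_a = np.linspace(0.1, 0.5, 201)
grid_b = np.linspace(0.0, 0.08, 161)
nll = np.array([[neg_log_lik(a, b) for b in grid_b] for a in grid_a])
ia, ib = np.unravel_index(np.argmin(nll), nll.shape)
alpha_hat, beta_hat = grid_a[ia], grid_b[ib]
print(alpha_hat, beta_hat)
```

Treating the counts as Poisson (rather than least-squares fitting log-survival) properly weights the high-dose points, where few colonies survive.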
Numerical simulation of electron beam welding with beam oscillations
NASA Astrophysics Data System (ADS)
Trushnikov, D. N.; Permyakov, G. L.
2017-02-01
This research examines the process of electron-beam welding in a keyhole mode with the use of beam oscillations. We study the impact of various beam oscillations and their parameters on the shape of the keyhole, the flow of heat and mass transfer processes, and weld parameters, in order to develop methodological recommendations. A numerical three-dimensional mathematical model of electron beam welding is presented. The model was developed on the basis of a heat conduction equation and a Navier-Stokes equation, taking into account phase transitions at the interface of the solid and liquid phases and thermocapillary convection (the Marangoni effect). The shape of the keyhole is determined from experimental data on the parameters of the secondary signal using the method of synchronous accumulation. Calculations of thermal and hydrodynamic processes were carried out on a computer cluster using the COMSOL Multiphysics simulation package.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
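The central composite design and quadratic response-surface fit can be sketched for two coded factors. This is a generic RSM illustration: the factor names, the face-centered design variant, and the "simulation" response function are assumptions for the demo, not the vehicle model from the study.

```python
import numpy as np
from itertools import product

# Face-centered central composite design in two coded factors
# (think fineness ratio and wing-area scale, purely illustrative):
# 4 factorial points, 4 axial points, 3 center replicates.
factorial = list(product([-1.0, 1.0], repeat=2))
axial = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)]
center = [(0.0, 0.0)] * 3
design = np.array(factorial + axial + center)

def response(x1, x2):   # invented stand-in for the vehicle weight model
    return 5.0 + 2.0 * x1 - 1.0 * x2 + 1.5 * x1**2 + 0.5 * x2**2 + 0.8 * x1 * x2

y = np.array([response(x1, x2) for x1, x2 in design])

# Full quadratic surface: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([
    np.ones(len(design)), design[:, 0], design[:, 1],
    design[:, 0]**2, design[:, 1]**2, design[:, 0] * design[:, 1],
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))
```

Once the surface coefficients are known, the constrained minimum-weight configuration can be found on the fitted polynomial instead of rerunning the expensive analysis at every candidate design point, which is the economy the RSM approach buys.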
Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W
2017-05-01
The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fish bone diagram and failure mode and effect analysis (FMEA) risk assessment methods. The high risk attributes identified by the FMEA analysis were further explored using resolution V fractional factorial design. To gain an understanding of the processing parameters, a resolution V fractional factorial study was conducted. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was conducted; the purpose of this IV study was to identify the critical process parameters (CPP) that impact the critical quality attributes and understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically develop models of the coating process parameters like process efficiency and the extent of curing during the coating process.
Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.
Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi
2016-07-01
Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. Three tracers were investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively.
For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Analysis of flash flood parameters and human impacts in the US from 2006 to 2012
NASA Astrophysics Data System (ADS)
Špitalar, Maruša; Gourley, Jonathan J.; Lutoff, Celine; Kirstetter, Pierre-Emmanuel; Brilly, Mitja; Carr, Nicholas
2014-11-01
Several different factors external to the natural hazard of flash flooding can contribute to the type and magnitude of the resulting damages. Human exposure, vulnerability, fatality, and injury rates can be minimized by identifying and then mitigating the causative factors for human impacts. A database of flash flooding was used for statistical analysis of human impacts across the U.S.: 21,549 flash flood events were analyzed over a 6-year period from October 2006 to 2012. Based on the information available in the database, physical parameters were introduced and then correlated to the reported human impacts. Probability density functions (PDFs) of the frequency of flash flood events, and PDFs of occurrences weighted by the number of injuries and fatalities, were used to describe the influence of each parameter. The factors that emerged as the most influential on human impacts are short flood durations, small catchment sizes in rural areas, vehicles, and nocturnal events with low visibility. Analyzing and correlating a diverse range of parameters to human impacts gives us important insights into what contributes to fatalities and injuries, and further raises questions about how to manage them.
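The comparison between the event-frequency PDF and the impact-weighted PDF can be sketched with a synthetic event table. The duration distribution and fatality rates below are invented (with short events made deadlier, to mirror the reported pattern), not values from the database.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic event table standing in for the flash-flood database:
# event duration (hours) and fatalities per event, both invented.
duration = rng.gamma(shape=2.0, scale=2.0, size=5000)
fatal_rate = np.where(duration < 2.0, 0.3, 0.05)   # short events deadlier
fatalities = rng.poisson(fatal_rate)

bins = np.linspace(0.0, 20.0, 21)
# PDF of all events vs. PDF weighted by the number of fatalities
pdf_events, _ = np.histogram(duration, bins=bins, density=True)
pdf_fatal, _ = np.histogram(duration, bins=bins, weights=fatalities,
                            density=True)
print(pdf_events[0], pdf_fatal[0])
```

Where the fatality-weighted PDF sits above the event PDF (here, the shortest-duration bin), that parameter range is over-represented among human impacts relative to its share of events, which is exactly how the influential factors were identified.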
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2015-12-01
Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that is compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.
Derivation of global vegetation biophysical parameters from EUMETSAT Polar System
NASA Astrophysics Data System (ADS)
García-Haro, Francisco Javier; Campos-Taberner, Manuel; Muñoz-Marí, Jordi; Laparra, Valero; Camacho, Fernando; Sánchez-Zapero, Jorge; Camps-Valls, Gustau
2018-05-01
This paper presents the algorithm developed in LSA-SAF (Satellite Application Facility for Land Surface Analysis) for the derivation of global vegetation parameters from the AVHRR (Advanced Very High Resolution Radiometer) sensor on board MetOp (Meteorological-Operational) satellites forming the EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Polar System (EPS). The suite of LSA-SAF EPS vegetation products includes the leaf area index (LAI), the fractional vegetation cover (FVC), and the fraction of absorbed photosynthetically active radiation (FAPAR). LAI, FAPAR, and FVC characterize the structure and the functioning of vegetation and are key parameters for a wide range of land-biosphere applications. The algorithm is based on a hybrid approach that blends the generalization capabilities offered by physical radiative transfer models with the accuracy and computational efficiency of machine learning methods. One major feature is the implementation of multi-output retrieval methods able to jointly and more consistently estimate all the biophysical parameters at the same time. We propose a multi-output Gaussian process regression (GPRmulti), which outperforms other considered methods over PROSAIL (coupling of PROSPECT and SAIL (Scattering by Arbitrary Inclined Leaves) radiative transfer models) EPS simulations. The global EPS products include uncertainty estimates taking into account the uncertainty captured by the retrieval method and input errors propagation. A sensitivity analysis is performed to assess several sources of uncertainties in retrievals and maximize the positive impact of modeling the noise in training simulations. The paper discusses initial validation studies and provides details about the characteristics and overall quality of the products, which can be of interest to assist the successful use of the data by a broad user's community. 
The consistent generation and distribution of the EPS vegetation products will constitute a valuable tool for monitoring dynamic Earth surface processes.
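The machine-learning retrieval idea (train a regressor on radiative-transfer simulations, then predict a biophysical parameter with an uncertainty estimate) can be sketched with a single-output Gaussian process in plain NumPy. The 1-D "reflectance" input, the target function, and the kernel settings are invented stand-ins for the PROSAIL simulations and the multi-output GPR used operationally.

```python
import numpy as np

rng = np.random.default_rng(5)

# Exact GP regression with an RBF kernel (toy 1-D stand-in for the
# reflectance -> parameter retrieval; all values are illustrative).
def rbf(a, b, length=0.3, var=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

x_train = rng.uniform(0.0, 1.0, 40)               # "simulated reflectances"
y_train = np.sin(4.0 * x_train) + rng.normal(0.0, 0.05, 40)   # "parameter"
x_test = np.linspace(0.0, 1.0, 9)

noise = 0.05**2
K = rbf(x_train, x_train) + noise * np.eye(40)
Ks = rbf(x_test, x_train)

mean = Ks @ np.linalg.solve(K, y_train)            # predictive mean
cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # per-pixel uncertainty
print(np.max(np.abs(mean - np.sin(4.0 * x_test))), std.max())
```

The predictive standard deviation is what gives each retrieved pixel an uncertainty estimate; the operational product extends this to joint (multi-output) prediction of LAI, FVC, and FAPAR.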
Shapiro effect as a possible cause of the low-frequency pulsar timing noise in globular clusters
NASA Astrophysics Data System (ADS)
Larchenkova, T. I.; Kopeikin, S. M.
2006-01-01
A prolonged timing of millisecond pulsars has revealed low-frequency uncorrelated (infrared) noise, presumably of astrophysical origin, in the pulse arrival time (PAT) residuals for some of them. Currently available pulsar timing methods allow the statistical parameters of this noise to be reliably measured by decomposing the PAT residual function into orthogonal Fourier harmonics. In most cases, pulsars in globular clusters show a low-frequency modulation of their rotational phase and spin rate. The relativistic time delay of the pulsar signal in the curved spacetime of randomly distributed and moving globular cluster stars (the Shapiro effect) is suggested as a possible cause of this modulation. Extremely important (from an astrophysical point of view) information about the structure of the globular cluster core, which is inaccessible to study by other observational methods, could be obtained by analyzing the spectral parameters of the low-frequency noise caused by the Shapiro effect and attributable to the random passages of stars near the line of sight to the pulsar. Given the smallness of the aberration corrections that arise from the nonstationarity of the gravitational field of the randomly distributed ensemble of stars under consideration, a formula is derived for the Shapiro effect for a pulsar in a globular cluster. The derived formula is used to calculate the autocorrelation function of the low-frequency pulsar noise, the slope of its power spectrum, and the behavior of the σz statistic that characterizes the spectral properties of this noise in the form of a time function. The Shapiro effect under discussion is shown to manifest itself for large impact parameters as a low-frequency noise of the pulsar spin rate with a spectral index of n = -1.8 that depends weakly on the specific model distribution of stars in the globular cluster. For small impact parameters, the spectral index of the noise is n = -1.5.
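The spectral-index measurement itself (recovering n from a low-frequency noise record) can be sketched numerically: synthesize a series whose power spectrum follows f^n with n = -1.8, then fit the periodogram slope in log-log space. The series length and unit sampling interval are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthesize noise with power spectrum P(f) ~ f^n, n = -1.8, by assigning
# |X(f)| ~ f^(n/2) with random phases and inverse-transforming.
n_index = -1.8
N = 2**16
freqs = np.fft.rfftfreq(N, d=1.0)[1:]           # drop the f = 0 bin
amp = freqs ** (n_index / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
spectrum = np.concatenate([[0.0], amp * np.exp(1j * phases)])
series = np.fft.irfft(spectrum, n=N)            # real "timing residual" series

# Recover the index as the log-log slope of the periodogram
power = np.abs(np.fft.rfft(series))[1:] ** 2
slope = np.polyfit(np.log10(freqs), np.log10(power), 1)[0]
print(round(slope, 2))
```

In practice the same slope fit is applied to the Fourier decomposition of the pulse-arrival-time residuals, which is how a measured index near -1.8 would be compared against the large-impact-parameter prediction.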
Experimental Verification of an Instrument to Test Flooring Materials
NASA Astrophysics Data System (ADS)
Philip, Rony; Löfgren, Hans, Dr
2018-02-01
The focus of this work is to validate a fluid model against different flooring materials, and to verify the measurements of an instrument that tests flooring materials and their force-attenuating capabilities, using mathematical models to describe the signature and coefficients of the floor. The main contribution of the present work is the development of a mathematical fluid model for floors. The aim of the thesis was to analyse and compare different floor materials and to study the linear dynamics of falling impacts on floors. The impact of the hammer during a fall is captured by an accelerometer, and the response is collected using a PicoScope. The collected data were analysed using a MATLAB least-squares method coded according to the fluid model. The findings of this thesis showed that the fluid model works for more elastic materials but not for rigid materials such as wood. The parameters that influence the model during a falling impact, such as velocity, mass, energy loss and other floor coefficients, were identified, and a standardized testing method was established.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
SU-E-T-627: Precision Modelling of the Leaf-Bank Rotation in Elekta’s Agility MLC: Is It Necessary?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vujicic, M; Belec, J; Heath, E
Purpose: To demonstrate the method used to determine the leaf bank rotation angle (LBROT) as a parameter for modeling the Elekta Agility multi-leaf collimator (MLC) for Monte Carlo simulations and to evaluate the clinical impact of LBROT. Methods: A detailed model of an Elekta Infinity linac including an Agility MLC was built using the EGSnrc/BEAMnrc Monte Carlo code. The Agility 160-leaf MLC is modelled using the MLCE component module which allows for leaf bank rotation using the parameter LBROT. A precise value of LBROT is obtained by comparing measured and simulated profiles of a specific field, which has leaves arranged in a repeated pattern such that one leaf is opened and the adjacent one is closed. Profile measurements from an Agility linac are taken with gafchromic film, and an ion chamber is used to set the absolute dose. The measurements are compared to Monte Carlo (MC) simulations and the LBROT is adjusted until a match is found. The clinical impact of LBROT is evaluated by observing how an MC dose calculation changes with LBROT. A clinical Stereotactic Body Radiation Treatment (SBRT) plan is calculated using BEAMnrc/DOSXYZnrc simulations with different input values for LBROT. Results: Using the method outlined above, the LBROT is determined to be 9±1 mrad. Differences as high as 4% are observed in a clinical SBRT plan between the extreme case (LBROT not modeled) and the nominal case. Conclusion: In small-field radiation therapy treatment planning, it is important to properly account for LBROT as an input parameter for MC dose calculations with the Agility MLC. More work is ongoing to elucidate the observed differences by determining the contributions from transmission dose, change in field size, and source occlusion, which are all dependent on LBROT. This work was supported by OCAIRO (Ontario Consortium of Adaptive Interventions in Radiation Oncology), funded by the Ontario Research Fund.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. 
The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses on experimental data. The exceeding probability distribution of the maximum slamming pressure peak and distribution parameters were analyzed, and the results show that the exceeding probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceeding probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions were comprehensively presented, and the parameter values of the Weibull distribution of wave-slamming pressure peaks were different due to different test models. The parameter values were found to decrease due to the increased stiffness of the elastic support. The damage criterion of the structure model caused by the wave impact was initially discussed, and the structure model was destroyed when the average slamming time was greater than a certain value during the duration of the wave impact. The conclusions of the experimental study were then described.
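The 36.79% figure quoted above is a generic property of the Weibull family: for any three-parameter Weibull distribution, the exceedance probability evaluated at one scale-parameter above the location is exactly exp(-1) ≈ 0.3679, regardless of the shape parameter. A minimal sketch with scipy (the parameter values below are hypothetical, not the study's fitted values):

```python
import numpy as np
from scipy.stats import weibull_min

# Three-parameter Weibull: shape c, location loc, scale s.
# Illustrative values chosen so that loc + s = 1.0, echoing the
# abstract's finding that D + L was approximately 1.0.
c, loc, s = 1.4, 0.1, 0.9

# Exceedance (survival) probability P(X > x) = exp(-((x - loc)/s)**c)
x = loc + s                       # peak equal to "location + scale"
p_exceed = weibull_min.sf(x, c, loc=loc, scale=s)
# equals exp(-1) ~ 0.3679 for any shape c, matching the 36.79% above
```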
Impact of dyeing industry effluent on germination and growth of pea (Pisum sativum).
Malaviya, Piyush; Hali, Rajesh; Sharma, Neeru
2012-11-01
Dye industry effluent was analyzed for physico-chemical characteristics and its impact on germination and growth behaviour of pea (Pisum sativum). The 100% effluent showed high pH (10.3) and TDS (1088 mg l(-1)). The germination parameters included percent germination, delay index, speed of germination, peak value and germination period, while growth parameters comprised root and shoot length, root and shoot weight, root-shoot ratio and number of stipules. The study showed the maximum values of positive germination parameters, viz. speed of germination (7.85), peak value (3.28) and germination index (123.87), and of all growth parameters at 20% effluent concentration, while the values of negative germination parameters, viz. delay index (-0.14) and percent inhibition (-8.34), were found to be minimum at 20% effluent concentration. The study demonstrated that at lower concentrations the dyeing industry effluent caused a positive impact on germination and growth of Pisum sativum.
Impact of Selected Parameters on the Fatigue Strength of Splices on Multiply Textile Conveyor Belts
NASA Astrophysics Data System (ADS)
Bajda, Mirosław; Błażej, Ryszard; Hardygóra, Monika
2016-10-01
Splices are the weakest points in the conveyor belt loop. The strength of these joints, and thus their design as well as the method and quality of splicing, determine the strength of the whole conveyor belt loop. A special zone in a splice exists, where the stresses in the adjacent plies or cables differ considerably from each other. This results in differences in the elongation of these elements and in additional shearing stresses in the rubber layer. The strength of the joints depends on several factors, among others on the parameters of the joined belt, on the connecting layer and the technology of joining, as well as on the materials used to make the joint. The strength of the joint constitutes a criterion for the selection of a belt suitable for the operating conditions, and therefore methods of testing such joints are of great importance. This paper presents the method of testing fatigue strength of splices made on multi-ply textile conveyor belts and the results of these studies.
The qualitative assessment of pneumatic actuators operation in terms of vibration criteria
NASA Astrophysics Data System (ADS)
Hetmanczyk, M. P.; Michalski, P.
2015-11-01
The work quality of pneumatic actuators can be assessed in terms of multiple criteria. In the case of complex systems with pneumatic actuators retained at end positions (with occurrence of piston impact on cylinder covers), vibration criteria constitute the most reliable indicators. The paper presents an assessment of the operating condition of a rodless pneumatic cylinder with regard to selected vibrational symptoms. On the basis of the performed analysis, the authors show meaningful premises that allow evaluation of the performance and tuning of end-position damping of the piston movement using the most common diagnostic tools (portable vibration analyzers). The presented method is useful for tuning parameters under industrial conditions.
Study on vibration characteristic of the marine beveloid gear RV reducer
NASA Astrophysics Data System (ADS)
Wen, Jianmin; Cui, Haiyue; Yang, Tong
2018-05-01
The paper focuses on the vibration characteristic of the marine beveloid gear RV reducer and provides the theoretical guidance for vibration reduction. The cycloid gears are replaced by the beveloid gears in the transmission system. Considering the impact of the backlash, time-varying meshing stiffness and transmission error, a three-dimensional lumped parameter dynamic model of the marine beveloid gear RV reducer is established. The dynamic differential equations are solved through the 4th-5th order Runge-Kutta numerical integration method. By comparing the change of the time-displacement curves and amplitude curves, the impact of the external and internal excitation on the system vibration characteristic is investigated.
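As a rough illustration of the kind of equations involved, the sketch below integrates a single-degree-of-freedom gear-mesh model with a backlash dead zone and time-varying mesh stiffness using scipy's adaptive 4th/5th-order Runge-Kutta solver. This is a toy reduction, not the paper's three-dimensional lumped-parameter model, and all numbers are dimensionless illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relative displacement x along the line of action; backlash half-width b;
# mesh stiffness k(t) = k0 + k1*cos(wm*t); static load F. All illustrative.
m, c, b = 1.0, 0.2, 0.1
k0, k1, wm = 1.0, 0.2, 3.0
F = 0.3

def backlash(x):
    # dead zone of width 2*b around x = 0 (no tooth contact inside it)
    return np.where(x > b, x - b, np.where(x < -b, x + b, 0.0))

def rhs(t, y):
    x, v = y
    k = k0 + k1 * np.cos(wm * t)          # time-varying mesh stiffness
    return [v, (F - c * v - k * backlash(x)) / m]

# 4th/5th-order Runge-Kutta integration, as in the abstract
sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], method="RK45",
                max_step=0.05)
```

Inspecting `sol.y[0]` over the final cycles gives the steady-state displacement response whose spectrum reflects the internal (stiffness, backlash) and external (load) excitations.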
Analysis of impact craters on Mercury's surface.
NASA Astrophysics Data System (ADS)
Martellato, E.; Cremonese, G.; Marzari, F.; Massironi, M.; Capria, M. T.
The formation of a crater is a complex process, which can be analyzed with numerical simulations and/or observational methods. This work reports a preliminary analysis of some craters on Mercury, based on the Mariner 10 images. The physical and dynamical properties of the projectile cannot be derived from knowledge of the crater alone, since the size of an impact crater depends on many parameters. We have calculated the diameter of the projectile using the scaling law of Schmidt and Housen (1987). The calculation is performed for different projectile compositions and impact velocities, assuming an anorthositic composition of the surface. The melt volume produced in the initial phases of crater formation is also calculated using the experimental law proposed by O'Keefe and Ahrens (1982), which gives the ratio between melt and projectile mass.
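A hedged sketch of how such an inversion can be set up: gravity-regime pi-group crater scaling relates a scaled crater diameter to a gravity-scaled size parameter, and the projectile radius is then found numerically. The coefficient and exponent below are generic textbook values for rock targets, not necessarily those used by the authors, and all impact conditions (densities, speed, crater size) are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative gravity-regime pi-scaling: pi_D = C_D * pi_2**(-beta),
# with textbook-style constants for competent rock (assumed).
C_D, beta = 1.6, 0.22
rho_t, rho_p = 2900.0, 3300.0   # target / projectile density, kg m^-3 (assumed)
g, v = 3.7, 20e3                # Mercury surface gravity, impact speed (assumed)
D = 10e3                        # observed transient crater diameter, m (assumed)

def residual(a):
    m = (4.0 / 3.0) * np.pi * rho_p * a**3        # projectile mass
    pi_D = D * (rho_t / m) ** (1.0 / 3.0)          # scaled crater diameter
    pi_2 = g * a / v**2                            # gravity-scaled size
    return pi_D - C_D * pi_2 ** (-beta)

a = brentq(residual, 1.0, D)   # projectile radius in metres
```

For these assumed numbers the root falls at a few hundred metres, the expected order of magnitude for a 10 km crater.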
On the impact of reducing global geophysical fluid model deformations in SLR data processing
NASA Astrophysics Data System (ADS)
Weigelt, Matthias; Thaller, Daniela
2016-04-01
Mass redistributions in the atmosphere, oceans and the continental hydrology cause elastic loading deformations of the Earth's crust and thus systematically influence Earth-bound observation systems such as VLBI, GNSS or SLR. Causing non-linear station variations, these loading deformations have a direct impact on the estimated station coordinates and an indirect impact on other parameters of global space-geodetic solutions, e.g. Earth orientation parameters, geocenter coordinates, satellite orbits or troposphere parameters. Generally, the impact can be mitigated by co-parameterisation or by reducing deformations derived from global geophysical fluid models. Here, we focus on the latter approach. A number of data sets modelling the (non-tidal) loading deformations are generated by various groups. They show regionally and locally significant differences and consequently the impact on the space-geodetic solutions heavily depends on the available network geometry. We present and discuss the differences between these models and choose SLR as the space-geodetic technique of interest in order to discuss the impact of atmospheric, oceanic and hydrological loading on the parameters of space-geodetic solutions when correcting for the global geophysical fluid models at the observation level. Special emphasis is given to a consistent usage of models for geometric and gravimetric corrections during the data processing. We quantify the impact of the different deformation models on the station coordinates and discuss the improvement in the Earth orientation parameters and the geocenter motion. We also show that a significant reduction in the RMS of the station coordinates can be achieved depending on the model of choice.
Induced Hypothermia Does Not Harm Hemodynamics after Polytrauma: A Porcine Model
Mommsen, Philipp; Pfeifer, Roman; Mohr, Juliane; Ruchholtz, Steffen; Flohé, Sascha; Fröhlich, Matthias; Keibl, Claudia; Seekamp, Andreas; Witte, Ingo
2015-01-01
Background. The deterioration of hemodynamics instantly endangers the patient's life after polytrauma. As accidental hypothermia frequently occurs in polytrauma, therapeutic hypothermia still plays an ambivalent role, as its impact on cardiopulmonary function is not yet fully understood. Methods. We have previously established a porcine polytrauma model including blunt chest trauma, penetrating abdominal trauma, and hemorrhagic shock. Therapeutic hypothermia (34°C) was induced for 3 hours. We documented cardiovascular parameters and basic respiratory parameters. Pigs were euthanized after 15.5 hours. Results. Our porcine polytrauma model displayed sufficient trauma impact. Resuscitation showed adequate restoration of hemodynamics. Induced hypothermia had neither harmful nor major positive effects on the animals' hemodynamics, although heart rate significantly decreased and mixed venous oxygen saturation significantly increased during therapeutic hypothermia. Mean arterial blood pressure, central venous pressure, pulmonary arterial pressure, and wedge pressure showed no significant differences between normothermic and hypothermic trauma pigs during hypothermia. Conclusions. Induced hypothermia after polytrauma is feasible. No major harmful effects on hemodynamics were observed. Therapeutic hypothermia revealed hints of a tissue-protective impact, but the chosen duration of therapeutic hypothermia was too short. Nevertheless, therapeutic hypothermia might be a useful tool for intensive care after polytrauma. Future studies should extend the duration of therapeutic hypothermia. PMID:26170533
Kinetic and kinematic analysis of stamping impacts during simulated rucking in rugby union.
Oudshoorn, Bodil Y; Driscoll, Heather F; Dunn, Marcus; James, David
2018-04-01
Laceration injuries account for up to 23% of injuries in rugby union. They are frequently caused by studded footwear as a result of a player stamping onto another player during the ruck. Little is known about the kinetics and kinematics of rugby stamping impacts; current test methods assessing the laceration injury risk of stud designs therefore lack informed test parameters. In this study, twelve participants stamped on an anthropomorphic test device in a one-on-one simulated ruck setting. The velocity and inclination angle of the foot prior to impact were determined from high-speed video footage. Total stamping force and individual stud force were measured using pressure sensors. Mean foot inbound velocity was 4.3 m·s⁻¹ (range 2.1-6.3 m·s⁻¹). Mean peak total force was 1246 N and mean peak stud force was 214 N. The total mean effective mass during stamping was 6.6 kg (range: 1.6-13.5 kg) and stud effective mass was 1.2 kg (range: 0.5-2.9 kg). These results provide representative test parameters for mechanical test devices designed to assess the laceration injury risk of studded footwear for rugby union.
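The effective-mass figures can be understood through a simple impulse-momentum estimate: the time integral of the impact force divided by the change in foot velocity. A toy sketch with a synthetic force pulse (the signal below is made up, not the study's measured data; only the peak force and inbound speed borrow the reported means):

```python
import numpy as np

# Impulse-momentum estimate: m_eff = (integral of F dt) / delta_v.
t = np.linspace(0.0, 0.02, 2001)              # 20 ms impact window (assumed)
F = 1246.0 * np.sin(np.pi * t / 0.02) ** 2    # synthetic pulse, peak 1246 N
v_in = 4.3                                    # inbound foot speed, m/s

# trapezoidal integration of F over the impact
impulse = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t))
m_eff = impulse / v_in   # assumes the foot is brought (nearly) to rest
```

This synthetic pulse yields an effective mass of roughly 2.9 kg, which sits inside the study's reported 1.6-13.5 kg range.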
Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.
2015-01-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
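The localized random sampling scheme described above can be sketched as a mask generator: pick a center pixel uniformly at random, then draw nearby pixels with probability decaying in their distance from that center. The Gaussian form of the decay and all parameter values here are assumptions for illustration, not the authors' exact protocol:

```python
import numpy as np

def localized_random_mask(h, w, n_centers, n_local, sigma, rng=None):
    """Binary sampling mask: random centers, plus nearby pixels drawn
    with probability decaying (Gaussian here) in distance from each center."""
    rng = np.random.default_rng(rng)
    mask = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_centers):
        cy, cx = int(rng.integers(h)), int(rng.integers(w))
        mask[cy, cx] = True
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        p = np.exp(-d2 / (2.0 * sigma ** 2)).ravel()
        p[cy * w + cx] = 0.0          # center already sampled
        p /= p.sum()
        idx = rng.choice(h * w, size=n_local, replace=False, p=p)
        mask.ravel()[idx] = True
    return mask

mask = localized_random_mask(32, 32, n_centers=4, n_local=20,
                             sigma=3.0, rng=0)
```

The resulting mask can replace a uniformly-random mask in any standard CS measurement pipeline for comparison.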
TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, H; Gordon, J; Chetty, I
2014-06-15
Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, Chvetsov model (C-model) and Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
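A toy discrete version of such a birth-and-death regression model, with a viable and a dying compartment, can illustrate how Td, Tr and the per-fraction survival fraction shape the tumor-volume curve. The structure below is a loose sketch in the spirit of the abstract, not the actual Z-model; Tr and SF borrow the reported means, while Td and the schedule are assumed:

```python
import numpy as np

# Illustrative parameters: Td assumed; Tr, SF from the reported means.
Td, Tr, SF = 60.0, 17.5, 0.43          # days, days, survival per fraction
lam, mu = np.log(2) / Td, np.log(2) / Tr

def treat(n_fractions, dt=1.0):
    """Daily fractions: a (1 - SF) share of viable cells N is sterilized
    and moved to the dying pool M; between fractions N regrows with rate
    lam and M clears with rate mu. Returns relative volume N + M."""
    N, M = 1.0, 0.0
    volume = []
    for _ in range(n_fractions):
        killed = (1.0 - SF) * N
        N, M = N - killed, M + killed
        N *= np.exp(lam * dt)          # regrowth of surviving cells
        M *= np.exp(-mu * dt)          # clearance of dying cells
        volume.append(N + M)
    return np.array(volume)

v = treat(30)   # relative tumor volume over a 30-fraction course
```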
Generalised Pareto distribution: impact of rounding on parameter estimation
NASA Astrophysics Data System (ADS)
Pasarić, Z.; Cindrić, K.
2018-05-01
Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing the real, dry spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx), combined with; (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least regarding the dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at the threshold is applied to rounded data, which is actually essential when dealing with a GP distribution, then classical methods applied in a standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add an appropriate uniform noise to the rounded data (so-called "jittering"). This, in a sense, reverses the process of rounding; and thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≤ 1), then none of the above recipes would work; and thus, specific methods for rounded data should be applied.
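The jittering recipe can be sketched directly: add uniform noise on (-Δx/2, Δx/2) to the rounded values, which reverses the rounding in distribution, then fit by maximum likelihood as usual. Everything below (increment, sample size, true parameters) is an assumed illustration, not the paper's setup:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
dx = 0.25                                    # rounding increment (assumed)
shape_true, scale_true = 0.2, 1.0            # "true" GP parameters (assumed)

raw = genpareto.rvs(shape_true, scale=scale_true, size=5000,
                    random_state=rng)
rounded = np.round(raw / dx) * dx            # data as they would be recorded

# "jittering": uniform noise on (-dx/2, dx/2) undoes the rounding
jittered = rounded + rng.uniform(-dx / 2, dx / 2, size=rounded.size)

# the jittered support starts at -dx/2, so fix the location there
shape_hat, loc_hat, scale_hat = genpareto.fit(jittered, floc=-dx / 2)
```

With σ/Δx = 4 here, the maximum-likelihood estimates from the jittered series land close to the true shape and scale, consistent with the paper's "moderate rounding" regime.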
Estimation of Physical Parameters of a Multilayered Multi-Scale Vegetated Surface
NASA Astrophysics Data System (ADS)
Hosni, I.; Bennaceur Farah, L.; Naceur, M. S.; Farah, I. R.
2016-06-01
Soil moisture is important for the growth of vegetation in that it conditions the development of plant populations. Additionally, its assessment is important in hydrology and agronomy, and it is a warning parameter for desertification. Furthermore, the soil moisture content affects exchanges with the atmosphere via the energy balance at the soil surface; it is significant due to its impact on soil evaporation and transpiration, and it therefore conditions the energy transfer between Earth and atmosphere. Many remote sensing methods have been tested for soil moisture; the first relied on the optical domain (short wavelengths), but due to atmospheric effects and the presence of clouds and vegetation cover, this approach fails in most cases. The presence of a vegetation canopy further complicates the retrieval of soil moisture because the canopy contains moisture of its own. This paper presents a synergistic methodology combining SAR and optical remote sensing data for the simulation of statistical soil parameters from C-band radar measurements. Vegetation coverage, which can be easily estimated from optical data, was incorporated into the backscattering model. The total backscattering was divided into the amount attributed to areas covered with vegetation and that attributed to areas of bare soil. Backscattering coefficients were simulated using the established backscattering model. A two-dimensional multiscale SPM model has been employed to investigate the problem of electromagnetic scattering from the underlying soil. The water cloud model (WCM) is used to account for the effect of vegetation water content on radar backscatter data, in order to eliminate the impact of the vegetation layer and isolate the contributions of vegetation scattering and absorption from the total backscattering coefficient.
Psychosocial job stress and immunity: a systematic review.
Nakata, Akinori
2012-01-01
The purpose of this review was to provide current knowledge about the possible association between psychosocial job stress and immune parameters in blood, saliva, and urine. Using bibliographic databases (PubMed, PsychINFO, Web of Science, Medline) and the snowball method, 56 studies were found. In general, exposure to psychosocial job stress (high job demands, low job control, high job strain, job dissatisfaction, high effort-reward imbalance, overcommitment, burnout, unemployment, organizational downsizing, economic recession) had a measurable impact on immune parameters (reduced NK cell activity, NK and T cell subsets, CD4+/CD8+ ratio, and increased inflammatory markers). The evidence supports that psychosocial job stresses are related to disrupted immune responses but further research is needed to demonstrate cause-effect relationships.
Anisotropic metamaterial waveguide driven by a cold and relativistic electron beam
NASA Astrophysics Data System (ADS)
Torabi, Mahmoud; Shokri, Babak
2018-03-01
We study the interaction of a cold, relativistic electron beam with a cylindrical waveguide loaded with an anisotropic, dispersive metamaterial layer. The general dispersion relation for the transverse magnetic (TM) mode is derived through the linear fluid model and a decomposition of Maxwell's equations. The effects of selected metamaterial parameters on the dispersion relation are presented. A qualitative discussion shows the possibility of widening the monomodal propagation band and of gaining more control over the dispersion behavior; these effects are considerable especially for epsilon-negative-near-zero metamaterials. Finally, the impacts of anisotropy and metamaterial layer thickness on the wave growth rate are considered for different metamaterials. The results demonstrate that both the wave growth rate and the saturation peak voltage can be controlled via the metamaterial parameters.
Influence of the freezing method on the changes that occur in grape samples after frozen storage.
Santesteban, Luis G; Miranda, Carlos; Royo, José B
2013-09-01
Sample freezing is frequently used in oenological laboratories as a compromise solution to increase the number of samples that can be analysed, despite the fact that some grape characteristics are known to change after frozen storage. However, freezing is usually performed using standard freezers, which freeze slowly. The aim of this work was to evaluate whether blast freezing would decrease the impact of standard freezing on grape composition. Grape quality parameters were assessed in fresh samples and in samples frozen using three different procedures: standard freezing and blast freezing using either a blast freezer or an ultra-freezer. The implications of frozen storage of grape samples reported in earlier research were observed for all three freezing methods evaluated. Although blast freezing improved repeatability for the most problematic parameters (tartaric acidity, TarA; total phenolics, TP), the improvement was not important from a practical point of view. However, TarA and TP were relatively repeatable among the three freezing procedures, which suggests that freezing affected these parameters regardless of the method used. According to our results, the salification potential of the must is probably involved in the changes observed for TarA, whereas for TP the precipitation of proanthocyanidins after association with cell wall material is hypothesized to cause the lack of repeatability between fresh and frozen grapes. Blast freezing would not imply a great improvement if implemented in oenological laboratories, at least for the parameters included in this study. © 2013 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Zeng, Baoping; Wang, Chao; Zhang, Yu; Gong, Yajun; Hu, Sanbao
2017-12-01
Joint clearances and friction characteristics significantly influence the vibration characteristics of a mechanism. With a clearance joint, the shaft and bearing collide while the mechanism works, producing a dynamic normal contact force and a tangential Coulomb friction force that can set the whole system vibrating; under these dynamic forces the mechanism passes from free movement into contact-impact under an impact force constraint, and the mechanism's topology changes as well. The constraint relationship between joints is established through a repeated, complex nonlinear dynamic process (idle stroke, contact-impact, elastic compression, rebound, impact relief, idle stroke movement, contact-impact). Analysis of the vibration characteristics of joint parts therefore remains a challenging open task. The dynamic equations of a mechanism with clearance form a strongly coupled, high-dimensional, time-varying set of nonlinear differential equations that is difficult to solve. Moreover, the chaotic motions arising in impact and vibration due to clearance are very sensitive to initial values, which makes high-precision simulation and prediction of the dynamic behavior even more difficult; in addition, subsequent wear inevitably causes the clearance parameters to fluctuate, which is a primary source of vibration in the mechanical system. In this study, a dynamic model of the mechanism that opens a deepwater robot's cabin door, including joint clearance, was established using the finite element method, and its vibration characteristics were analysed.
Moreover, its response model was built using the design-of-experiments (DOE) method, and a robust optimization was then performed on the joint clearance sizes and the friction coefficient range, so that the optimization results may serve as reference data for selecting bearings and controlling manufacturing process parameters for the opening mechanism. The optimization models accounted for several objectives, such as the x/y/z accelerations at various measuring points and the dynamic reaction forces of the mounting brackets, and for several constraints, including the manufacturing process; they were solved using the multi-objective genetic algorithm NSGA-II. The vibration characteristics of the optimized opening mechanism are superior to those of the original design. In addition, the numerical predictions agree well with the test results from the prototype.
NASA Astrophysics Data System (ADS)
Horton, Pascal; Weingartner, Rolf; Obled, Charles; Jaboyedoff, Michel
2017-04-01
Analogue methods (AMs) rely on the hypothesis that similar situations, in terms of atmospheric circulation, are likely to result in similar local or regional weather conditions. These methods consist of sampling a certain number of past situations, based on different synoptic-scale meteorological variables (predictors), in order to construct a probabilistic prediction for a local weather variable of interest (predictand). They are often used for daily precipitation prediction, whether for real-time forecasting, reconstruction of past weather conditions, or future climate impact studies. The relationship between predictors and predictands is defined by several parameters (predictor variable, spatial and temporal windows used for the comparison, analogy criteria, and number of analogues), which are often calibrated by means of a semi-automatic sequential procedure that has strong limitations. AMs may include several subsampling levels (e.g. first sorting a set of analogues in terms of circulation, then restricting to those with a similar moisture status). The parameter space of the AMs can be very complex, with substantial co-dependencies between the parameters. Thus, global optimization techniques are likely to be necessary for calibrating most AM variants, as they can optimize all parameters of all analogy levels simultaneously. Genetic algorithms (GAs) were found to be successful in finding optimal values of the AM parameters. They take parameter inter-dependencies into account and objectively select parameters that previously had to be chosen manually (such as the pressure levels and the temporal windows of the predictor variables), thus obviating the need to assess a large number of combinations. The performance scores of the optimized methods increased compared to reference methods, especially for days with high precipitation totals. The resulting parameters were found to be relevant and spatially coherent.
Moreover, they were obtained automatically and objectively, which reduces the effort invested in exploration attempts when adapting the method to a new region or a new predictand. In addition, the approach allowed for new degrees of freedom, such as a weighting between the pressure levels and non-overlapping spatial windows. Genetic algorithms were then used further to automatically select predictor variables and analogy criteria. This produced interesting outputs, providing new predictor-criterion combinations. However, some limitations of the approach were encountered, and expert input is likely to remain necessary. Nevertheless, letting GAs explore a dataset for the best predictor for a predictand of interest is certainly useful, particularly for a new predictand or a new region with different climatic characteristics.
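The global, joint calibration that GAs perform over all analogue-method parameters can be illustrated with a minimal real-coded GA. The operators below (truncation selection, blend crossover, Gaussian mutation) and the objective are assumptions for the sketch, not the authors' exact configuration:

```python
import random

def genetic_calibration(score, bounds, pop_size=30, generations=40, seed=1):
    """Minimal real-coded GA, sketching the joint calibration of
    analogue-method parameters (e.g. window sizes, number of analogues).
    `score` is the skill measure to maximize; `bounds` gives (lo, hi)
    per parameter. All operator choices are illustrative assumptions."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=score, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # blend crossover
            i = rng.randrange(len(child))                  # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children                             # elitist replacement
    return max(pop, key=score)
```

Because all parameters evolve together, co-dependencies between them are handled implicitly, which is the advantage the abstract highlights over sequential calibration.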
Impacts of climate change on surface water quality in relation to drinking water production.
Delpla, I; Jung, A-V; Baures, E; Clement, M; Thomas, O
2009-11-01
Beyond climate change impacts on water availability and hydrological risks, the consequences for water quality are only beginning to be studied. This review proposes a synthesis of the most recent interdisciplinary literature on the topic. After a short presentation of the main factors (warming and the consequences of extreme events) explaining climate change effects on water quality, the focus is on two main points. First, the impacts on the quality of water resources (rivers and lakes) through modified parameter values (physico-chemical parameters, micropollutants, and biological parameters) are considered. Then, the expected impacts on drinking water production and on the quality of supplied water are discussed. The main conclusion is that a degradation trend of drinking water quality in the context of climate change leads to an increase in at-risk situations with potential health impacts.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One application of modern technology is CNC machining, and one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to tune the machining parameters so as to minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model that minimizes processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
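As a rough illustration of how the decision variables enter the time objective, the machining time of a single straight turning pass follows the standard relation t = pi*D*L / (1000*v*f). The weighted scalarization and all constants below are illustrative assumptions, not the paper's model:

```python
import math

def turning_time_min(D_mm, L_mm, v_m_min, f_mm_rev):
    """Machining time (minutes) of one straight turning pass:
    t = pi*D*L / (1000*v*f), with workpiece diameter D (mm), cut
    length L (mm), cutting speed v (m/min), feed rate f (mm/rev)."""
    return math.pi * D_mm * L_mm / (1000.0 * v_m_min * f_mm_rev)

def weighted_objective(v, f, w_time=0.5, w_env=0.5,
                       D=50.0, L=100.0, eco_per_min=0.2):
    """Toy scalarization of the two objectives (time, environmental
    impact). eco_per_min converts machining time into eco-indicator
    points; weights and constants are illustrative assumptions."""
    t = turning_time_min(D, L, v, f)
    return w_time * t + w_env * eco_per_min * t
```

Raising cutting speed or feed rate shortens the pass, but in the full model both also drive energy use and tool wear, which is why a multi-objective treatment is needed.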
Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry
Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
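The MC side of the comparison can be sketched as follows: sample the uncertain parameters once, evaluate every candidate design on the same scenarios, and keep the design with the best expected value. The price model and NPV function below are illustrative assumptions, not the paper's reservoir model:

```python
import random

def mc_optimize(designs, npv, n_samples=2000, seed=7):
    """Monte Carlo development-optimization sketch: draw scenarios for
    the uncertain parameter (here a lognormal price factor), evaluate
    each candidate design's NPV over the same scenarios, and return the
    design with the highest expected NPV. `npv(design, price)` and the
    price model are illustrative assumptions."""
    rng = random.Random(seed)
    scenarios = [rng.lognormvariate(0.0, 0.3) for _ in range(n_samples)]
    def expected(d):
        return sum(npv(d, s) for s in scenarios) / n_samples
    return max(designs, key=expected)
```

Using common random numbers across designs (the same `scenarios` list) reduces the variance of the comparison, and the full sample also yields the distributional information that the abstract credits to the MC method.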
Microwave NDE of impact damaged fiberglass and elastomer layered composites
NASA Astrophysics Data System (ADS)
Greenawald, E. C.; Levenberry, L. J.; Qaddoumi, N.; McHardy, A.; Zoughi, R.; Poranski, C. F.
2000-05-01
Layered composites have been proposed as advanced materials for future use in large naval sonar domes. Unlike today's steel/rubber composite domes, such materials promise engineered acoustic properties and less costly resin-transfer fabrication methods. The development and deployment of these large and complex composite structures will result in challenging NDE requirements for both manufacturing quality assurance and in-service needs. Among the anticipated in-service requirements is the detection and characterization of the impact damage associated with striking a submerged object at sea. A one-sided inspection method is desired, preferably applicable in the underwater environment. In this paper, we present preliminary microwave NDE results from impact test coupons of a proposed thick FRP/elastomer/FRP "sandwich" composite. The coupons were scanned using a near-field microwave probe that responds to the composite's dielectric properties. The unprocessed scan data was displayed in an image format to reveal damaged areas. Results are compared with those from x-ray backscatter imaging and ultrasonic testing, and are verified by destructive analysis of the coupons. The difficulties posed by the application are discussed, as are the operating principles and advantages of the microwave methods. The importance of optimizing inspection parameters such as frequency and standoff distance is emphasized for future work.
Li, Jinshan
2010-02-15
The ZPE-corrected N-NO2 bond dissociation energies (BDEs) of a series of model N-nitro compounds and typical energetic N-nitro compounds have been calculated using density functional theory (DFT) methods. The computed results show that, with the 6-31G** basis set, the UB3LYP BDE is similar to the UB3PW91 value but smaller than the UB3P86 value, and that for both the UB3P86 and UB3PW91 methods the 6-31G** BDE is close to the 6-31++G** value. For the series of model N-nitro compounds, the NBO analysis at the UB3LYP/6-31G** level shows that the ordering of the BDEs is in line not only with that of the bond orders but also with that of the energy gap between the N-NO2 bond and antibond orbitals. For the typical energetic N-nitro compounds, the impact sensitivity is indeed strongly related to the BDE, and based on the BDEs calculated at different DFT levels this work establishes a good multivariate correlation of impact sensitivity with molecular parameters, which provides a method to address the sensitivity problem.
2010-01-01
Background: Histologic samples all funnel through the H&E microtomy and staining area, where manual processes intersect with semi-automated processes, creating a bottleneck. We compare alternative work processes in anatomic pathology, primarily in the H&E staining work cell. Methods: We established a baseline measure of the H&E process impact on personnel, information management, and sample flow from historical workload and production data and from direct observation. We compared this to performance after implementing initial Lean process modifications, including workstation reorganization, equipment relocation and workflow levelling, and the Ventana Symphony stainer, to assess the impact on productivity in the H&E staining work cell. Results: Average time from gross station to assembled case decreased by 2.9 hours (12%). Total process turnaround time (TAT), exclusive of processor schedule changes, decreased by 48 minutes/case (4%). Mean quarterly productivity increased 8.5% with the new methods. Process redesign reduced the number of manual steps from 219 to 182, a 17% reduction. Specimen travel distance was reduced from 773 ft/case to 395 ft/case (49%) overall, and from 92 to 53 ft/case (42%) in the H&E cell. Conclusions: Implementation of Lean methods in the H&E work cell of histology can result in improved productivity, improved through-put, and improved case availability parameters, including TAT. PMID:20181123
Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China.
Shi, Benyun; Tan, Qi; Zhou, Xiao-Nong; Liu, Jiming
2015-05-27
Geographic variations of an infectious disease characterize the spatial differentiation of disease incidence caused by various impact factors, such as environmental, demographic, and socioeconomic factors. Some factors may directly determine the force of infection of the disease (explicit factors), while many others may indirectly affect the number of disease incidences via certain unmeasurable processes (implicit factors). In this study, the impact of heterogeneous factors on geographic variations of Plasmodium vivax incidence is systematically investigated in Tengchong, Yunnan province, China. A space-time model that couples a P. vivax transmission model with a hidden time-dependent process is presented, taking both explicit and implicit factors into consideration. Specifically, the transmission model is built upon relevant demographic, environmental, and biophysical factors to describe local P. vivax infections, while the hidden time-dependent process is assessed through several socioeconomic factors to account for imported P. vivax cases. To quantitatively assess the impact of heterogeneous factors on geographic variations of P. vivax infections, a Markov chain Monte Carlo (MCMC) simulation method is developed to estimate the model parameters by fitting the space-time model to the reported spatial-temporal disease incidences. Since no ground-truth information is available, the performance of the MCMC method is first evaluated against a synthetic dataset; the results show that the model parameters can be well estimated using the proposed method. The model is then applied to investigate the geographic variations of P. vivax incidence among all 18 towns in Tengchong, Yunnan province, China. Based on these variations, the 18 towns can be further classified into five groups with similar socioeconomic causality for P. vivax incidence. Although this study focuses mainly on the transmission of P. vivax, the proposed space-time model is general and can readily be extended to investigate geographic variations of other diseases. Practically, such a computational model will offer new insights into active surveillance and strategic planning for disease surveillance and control.
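The parameter-estimation step can be illustrated with a bare-bones random-walk Metropolis sampler. The actual study fits a space-time incidence model; the target density, step size, and iteration count here are illustrative assumptions:

```python
import math, random

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=3):
    """Random-walk Metropolis sampler, a minimal stand-in for the MCMC
    fitting of the space-time model parameters to reported incidences.
    `log_post` is the log posterior density of a scalar parameter."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)       # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples
```

In the study's setting `log_post` would combine the transmission-model likelihood of the reported incidences with the priors; discarding an initial burn-in segment before summarizing the chain is standard practice.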
Dell'Atti, Lucio; Russo, Gian Rosario
2011-01-01
The organization of an ultrasound service can nowadays be improved by properly managing user requests, the speed and safety of the response, and the standardization of methods and skills. Each outpatient at our uro-andrologic ultrasound clinic (O.U. of Urology in Ferrara) received a questionnaire; we administered a total of 640 questionnaires, of which 532 were returned. Patients were asked to assess the service qualitatively on a four-level response scale: very satisfied, satisfied, dissatisfied, very dissatisfied. Indicators of user-perceived quality to be monitored were identified by establishing the correlation coefficient between the different analysis parameters and the overall rating of the sample. Some of these parameters were: the relationship with the practitioner, the availability of doctors, the doctors' ability to reassure, the completeness of information, and the hygiene conditions of the facilities. When these parameters vary, positively or negatively, the citizen's overall opinion also changes. Customer satisfaction is an important component of the quality of care; it is an indicator both of the effectiveness of a health intervention and of the ability of the health service organization to meet quality requirements. The objective of an ultrasound service should be to provide, within a reasonable timeframe, a high-quality service with qualified personnel, adequate tools, and adequate procedures.
An application of deep learning in the analysis of stellar spectra
NASA Astrophysics Data System (ADS)
Fabbro, S.; Venn, K. A.; O'Briain, T.; Bialek, S.; Kielty, C. L.; Jahandar, F.; Monty, S.
2018-04-01
Spectroscopic surveys require fast and efficient analysis methods to maximize their scientific impact. Here, we apply a deep neural network architecture to analyse both SDSS-III APOGEE DR13 and synthetic stellar spectra. When our convolutional neural network model (StarNet) is trained on APOGEE spectra, we show that the stellar parameters (temperature, gravity, and metallicity) are determined with similar precision and accuracy as the APOGEE pipeline. StarNet can also predict stellar parameters when trained on synthetic data, with excellent precision and accuracy for both APOGEE data and synthetic data, over a wide range of signal-to-noise ratios. In addition, the statistical uncertainties in the stellar parameter determinations are comparable to the differences between the APOGEE pipeline results and those determined independently from optical spectra. We compare StarNet to other data-driven methods; for example, StarNet and the Cannon 2 show similar behaviour when trained with the same data sets; however, StarNet performs poorly on small training sets like those used by the original Cannon. The influence of the spectral features on the stellar parameters is examined via partial derivatives of the StarNet model results with respect to the input spectra. While StarNet was developed using the APOGEE observed spectra and corresponding ASSET synthetic data, we suggest that this technique is applicable to other wavelength ranges and other spectral surveys.
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques are becoming more and more important for understanding the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content of the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does not, of course, reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact on station positions.
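A minimal stand-in for the ICLS idea is non-negative least squares solved by projected gradient descent. The paper solves a proper convex program, so the solver below is only a sketch of the non-negativity constraint at work, with a crude step-size rule as an assumption:

```python
def nnls_projected_gradient(A, b, n_iter=5000, lr=None):
    """Least squares min ||Ax - b||^2 subject to x >= 0, via projected
    gradient descent -- a simple stand-in for the ICLS estimation that
    keeps zenith wet delays non-negative. A is a list of rows; the
    step-size rule (1 / trace(A'A)) is a conservative assumption."""
    m, n = len(A), len(A[0])
    lr = lr or 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]  # project onto x >= 0
    return x
```

Where the unconstrained OLS solution would go negative (the "negative water vapor" case), the projection clamps that parameter to zero and re-distributes the fit over the remaining parameters.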
Estimation of actual evapotranspiration in the Nagqu river basin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zou, Mijun; Zhong, Lei; Ma, Yaoming; Hu, Yuanyuan; Feng, Lu
2018-05-01
As a critical component of the energy and water cycles, terrestrial actual evapotranspiration (ET) is influenced by many factors. This study was devoted to providing accurate and continuous estimates of actual ET for the Tibetan Plateau (TP) and to analyzing the effects of its impact factors. Summer observational data from the Coordinated Enhanced Observing Period (CEOP) Asia-Australia Monsoon Project on the Tibetan Plateau (CAMP/Tibet) for 2003 to 2004 were used to determine actual ET and investigate its relationship with energy, hydrological, and dynamical parameters. Multi-layer air temperature, relative humidity, net radiation flux, wind speed, precipitation, and soil moisture were used to estimate actual ET. The regression model results were validated with independent data retrieved using the combinatory method. The results suggest that significant correlations exist between actual ET and hydro-meteorological parameters in the surface layer of the Nagqu river basin, among which the most important factors are energy-related (net radiation flux and air temperature). They also suggest that the eventual effect of precipitation and of the two-layer wind speed difference on ET depends on which of their positive or negative feedback processes plays the more important role. The multivariate linear regression method provided reliable estimates of actual ET; accordingly, 6-parameter simplified schemes and 14-parameter regular schemes were established.
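The multivariate linear regression scheme can be sketched via the normal equations. The predictor matrix below would hold the hydro-meteorological series (with an intercept column); the hand-rolled solver is an assumption standing in for any standard least-squares routine:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting -- a sketch of
    the multivariate regression relating ET to predictors such as net
    radiation and air temperature. X: list of rows (include a column of
    ones for the intercept); y: list of observations."""
    n, p = len(X), len(X[0])
    # augmented normal-equation system, p rows by p+1 columns
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(n))] for i in range(p)]
    for col in range(p):                        # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, p):
            m = A[r][col] / A[col][col]
            for c in range(col, p + 1):
                A[r][c] -= m * A[col][c]
    b = [0.0] * p
    for i in range(p - 1, -1, -1):              # back substitution
        b[i] = (A[i][p] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b
```

The fitted coefficients directly expose the relative weight of each predictor, which is how the study ranks net radiation and air temperature as the dominant factors.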
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images: one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
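The linear-combination step described above reduces, per pixel, to mixing the gray-level reconstruction with the edge-enhanced one. The additive form and the single weight alpha below are assumptions about how the two images are blended, not the authors' exact parameterization:

```python
def combine_images(gray, edges, alpha):
    """Final-image sketch for the two-parameter IIR design: a pixel-wise
    linear combination of a smooth gray-level reconstruction and an
    edge-enhanced reconstruction. `alpha` weights the high-frequency
    content; the additive form is an illustrative assumption."""
    return [[g + alpha * e for g, e in zip(grow, erow)]
            for grow, erow in zip(gray, edges)]
```

Because the weight acts directly on visible high-frequency content, it gives the operator the kind of interpretable control over image properties that raw IIR parameters lack.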
Characterization and modeling of a highly-oriented thin film for composite forming
NASA Astrophysics Data System (ADS)
White, K. D.; Sherwood, J. A.
2018-05-01
Ultra High Molecular Weight Polyethylene (UHMWPE) materials exhibit high impact strength, excellent abrasion resistance and high chemical resistance, making them attractive for a number of impact applications for automotive, marine and medical industries. One format of this class of materials that is being considered for the thermoforming process is a highly-oriented extruded thin film. Parts are made using a two-step manufacturing process that involves first producing a set of preforms and then consolidating these preforms into a final shaped part. To assist in the design of the processing parameters, simulations of the preforming and compression molding steps can be completed using the finite element method. Such simulations require material input data as developed through a comprehensive characterization test program, e.g. shear, tensile and bending, over the range of potential processing temperatures. The current research investigates the challenges associated with the characterization of thin, highly-oriented UHMWPE films. Variations in grip type, sample size and testing rates are explored to achieve convergence of the characterization data. Material characterization results are then used in finite element simulations of the tension test to explore element formulations that work well with the mechanical behavior. Comparisons of the results from the material characterization tests to results of simulations of the same test are performed to validate the finite element method parameters and the credibility of the user-defined material model.
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems because the module form of a system is created according to either the function requirements or manufacturing considerations. To solve these problems, a new modular design method is proposed that considers not only the traditional function-related attributes but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then, reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is used to evaluate the differences among the modules of the product so that the optimal modules can be established. After that, a multi-objective optimization problem based on metrics for the preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to illustrate the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed; these experiments investigate the impacts of the parameters on the optimal modularity strategies and on the module structure. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.
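The Pareto-set approximation by a multi-objective GA rests on non-dominated sorting of the objective vectors. A minimal version (without crowding distance or the genetic operators) can be sketched as:

```python
def non_dominated_sort(points):
    """Partition objective vectors (to be minimized) into Pareto fronts,
    the core ranking step of multi-objective GAs such as NSGA-II.
    Returns a list of fronts, each a list of indices into `points`."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Here the two objectives would be preventive maintenance cost and the interval difference degree; the first front is the set of trade-off strategies offered to the designer.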
SAChES: Scalable Adaptive Chain-Ensemble Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah
We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES is a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains, and the chain ensemble can be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that resulted from the project, and discusses some additional results.
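The Differential Evolution Monte Carlo stage can be sketched as one sweep in which each chain proposes a jump along the difference of two other chains. The values of gamma and the jitter scale below are illustrative settings, not SAChES defaults:

```python
import math, random

def demc_sweep(chains, log_post, rng, gamma=0.6, eps=0.01):
    """One Differential Evolution Monte Carlo sweep (ter Braak-style):
    each chain proposes a jump along the difference of two other randomly
    chosen chains, then applies a Metropolis accept/reject. This sketches
    the first stage of SAChES; gamma and eps are illustrative settings."""
    n = len(chains)
    for i in range(n):
        a, b = rng.sample([j for j in range(n) if j != i], 2)
        prop = [x + gamma * (ya - yb) + rng.gauss(0.0, eps)
                for x, ya, yb in zip(chains[i], chains[a], chains[b])]
        if math.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    return chains
```

Because each proposal only needs the current states of two peer chains, the chains can run largely asynchronously, which is the loose coupling the report exploits at scale.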
Data for factor analysis of hydro-geochemical characteristics of groundwater resources in Iranshahr.
Biglari, Hamed; Saeidi, Mehdi; Karimyan, Kamaleddin; Narooie, Mohammad Reza; Sharafi, Hooshmand
2018-08-01
Detection of hydrogeological and hydro-geochemical changes affecting the quality of aquifer water is very important. The aim of this study was to perform a factor analysis of the hydro-geochemical characteristics of Iranshahr groundwater resources during the warm and cool seasons. In this study, 248 samples (with two repetitions) of groundwater resources were first collected by a cluster-random sampling method during 2017 in the villages of Iranshahr city. After transferring the samples to the laboratory, concentrations of 13 important chemical parameters in those samples were determined according to standard methods for water and wastewater. The results of this study indicated that 45.45% and 55.55% of the correlations between parameters showed a significant decrease and increase, respectively, with the transition from warm to cold seasons. According to the factor analysis, three factors (terrestrial hydro-geochemical processes, recharge of the resources by surface water and sewage, and human activities) were identified as influencing the chemical composition of these resources. The highest growth rate of 0.37 was observed between phosphate and nitrate ions, while the lowest trend of -0.33 was seen between fluoride ion and both calcium and chloride ions. Also, a significant increase in the correlation between magnesium ion and nitrate ion from warm to cold seasons indicates a strong seasonal effect on the relation between these two parameters.
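The factor-extraction step used in studies like this one can be sketched as an eigendecomposition of the correlation matrix of the chemical parameters, retaining factors with eigenvalue above 1 (the Kaiser criterion). The data below are synthetic stand-ins built from two latent drivers, not the Iranshahr measurements.

```python
import numpy as np

# Sketch of factor extraction from a correlation matrix of water-chemistry
# parameters. Two hypothetical latent drivers ("geochemical" and
# "sewage/human activity") generate four observed ions; the Kaiser
# criterion should recover two factors.

rng = np.random.default_rng(1)
n = 248                                     # sample count as in the study
geo = rng.normal(size=n)                    # latent geochemical driver
sewage = rng.normal(size=n)                 # latent sewage/human driver
data = np.column_stack([
    geo + 0.1 * rng.normal(size=n),         # e.g. calcium (hypothetical)
    geo + 0.1 * rng.normal(size=n),         # e.g. magnesium (hypothetical)
    sewage + 0.1 * rng.normal(size=n),      # e.g. nitrate (hypothetical)
    sewage + 0.1 * rng.normal(size=n),      # e.g. phosphate (hypothetical)
])

corr = np.corrcoef(data, rowvar=False)      # parameter correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]    # descending eigenvalues
n_factors = int(np.sum(eigvals > 1.0))      # Kaiser criterion
```

Seasonal comparison, as in the abstract, would repeat this on warm-season and cold-season subsets and compare the correlation matrices entry by entry.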
Pallagi, Edina; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó
2015-08-01
Regulatory-science-based pharmaceutical development and product manufacturing is highly recommended by the authorities nowadays. The aim of this study was to apply regulatory science in the early development of nano-pharmaceuticals. The authors applied the quality by design (QbD) concept in the early development phase of nano-systems, with meloxicam as the illustrative material. Meloxicam nanoparticles produced by a co-grinding method for nasal administration were studied according to the QbD policy, and a QbD-based risk assessment (RA) was performed. The steps were implemented according to the relevant regulatory guidelines (determination of the quality target product profile (QTPP), selection of critical quality attributes (CQAs) and critical process parameters (CPPs)), and special software (Lean QbD Software(®)) was used for the RA, which represents a novelty in this field. The RA was able to theoretically predict and identify the factors (e.g. sample composition, production method parameters, etc.) that have the highest impact on the desired meloxicam-product quality. The results of the practical research justified the theoretical prediction. This method can improve pharmaceutical nano-developments by achieving shorter development times, lower costs, savings in human resource effort, and more effective target orientation. It makes it possible to focus resources on the selected parameters and areas during practical product development. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of a shock wave adhesion test for composite bonds by laser pulsed and mechanical impacts
NASA Astrophysics Data System (ADS)
Ecault, Romain; Boustie, Michel; Touchard, Fabienne; Arrigoni, Michel; Berthe, Laurent; CNRS Collaboration
2013-06-01
Evaluating the bonding quality of composite materials is becoming one of the main challenges faced by aeronautic industries. This work aims at the development of a shock-wave technique that would enable quantification of the mechanical quality of a bond. Laser shock experiments were carried out. This technique enables high tensile stress generation within the thickness of a composite bond without any mechanical contact. The resulting damage was quantified using different methods such as confocal microscopy, ultrasound, and cross-section observation. The discrimination between a correct bond and a weak bond was possible thanks to these experiments. Nevertheless, laser sources are not well adapted for optimization of such a test since they often have fixed parameters. That is why mechanical impacts on bonded composites were also performed in this work. By changing the thickness of aluminum projectiles, the tensile stresses generated by the shock wave propagation were moved toward the composite/bond interface. The observations made prove that optimization of the technique is possible. The key parameters for the development of a bonding test using shock waves have been identified.
Shock-induced damage in rocks: Application to impact cratering
NASA Astrophysics Data System (ADS)
Ai, Huirong
Shock-induced damage beneath impact craters is studied in this work. Two representative terrestrial rocks, San Marcos granite and Bedford limestone, are chosen as test targets. Impacts into the rock targets with different combinations of projectile material, size, impact angle, and impact velocity are carried out at cm scale in the laboratory. Shock-induced damage and fracturing cause large-scale compressional wave velocity reduction in the recovered target beneath the impact crater. The shock-induced damage is measured by mapping the compressional wave velocity reduction in the recovered target. A cm-scale nondestructive tomography technique is developed for this purpose. This technique proves effective in mapping the damage in San Marcos granite, and the inverted velocity profile is in very good agreement with results obtained by dicing the target and inspecting cut sections directly. Both compressional velocity and attenuation are measured in three orthogonal directions on cubes prepared from one granite target impacted by a lead bullet at 1200 m/s. Anisotropy is observed in both results, but attenuation appears to be a more useful parameter than acoustic velocity in studying the orientation of cracks. Our experiments indicate that the shock-induced damage is a function of impact conditions, including projectile type and size, impact velocity, and target properties. Combined with other crater phenomena such as crater diameter, depth, ejecta, etc., shock-induced damage can be used as an important yet not well recognized constraint on impact history. The shock-induced damage is also calculated numerically and compared with the experiments for a few representative shots. The Johnson-Holmquist strength and failure model, initially developed for ceramics, is applied to geological materials. Strength is a complicated function of pressure, strain, strain rate, and damage.
The JH model, coupled with a crack softening model, is used to describe both the inelastic response of rocks in the compressive field near the impact source and the tensile failure in the far field. The model parameters are determined either from direct static measurements, or from indirect numerical adjustment. The agreement between the simulation and experiment is very encouraging.
Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Poulin, Annie; Leconte, Robert
2011-12-01
General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying the climate change impacts on hydrology. Other sources of uncertainty have been given less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology on a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources of uncertainty, such as the choice of a downscaling method and the GCM initial conditions, also have a comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat less than those related to GCMs and downscaling techniques. Uncertainty due to the hydrological model parameter selection has the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty. A failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider for a climate change impact study.
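The apportioning of ensemble spread among uncertainty sources can be sketched with a main-effect (first-order) variance decomposition over a full factorial ensemble, as in the study's design. The synthetic "change signal" below is constructed so that GCM choice dominates by assumption; the effect values are placeholders, not the study's results.

```python
import numpy as np

# Sketch of partitioning climate-change-impact ensemble variance among
# sources (GCM, downscaling method, hydrological model parameters) via
# main-effect variances on a full factorial ensemble. Effect sizes are
# hypothetical, chosen so GCMs dominate by construction.

rng = np.random.default_rng(2)
gcm_eff = np.array([-3.0, -1.0, 0.0, 1.5, 2.5, 4.0])   # 6 GCMs
ds_eff = np.array([-0.5, 0.0, 0.2, 0.8])               # 4 downscaling methods
par_eff = rng.normal(0, 0.1, 10)                       # 10 parameter sets

# additive full factorial ensemble of simulated change signals
sims = (gcm_eff[:, None, None] + ds_eff[None, :, None]
        + par_eff[None, None, :])

def main_effect_var(sims, axis):
    """Variance of the response averaged over all other factors."""
    other = tuple(a for a in range(sims.ndim) if a != axis)
    return sims.mean(axis=other).var()

v_gcm = main_effect_var(sims, 0)
v_ds = main_effect_var(sims, 1)
v_par = main_effect_var(sims, 2)
```

Averaging over the other factors isolates each source's first-order contribution; interaction terms, which the study's equal-weighting approach also subsumes, are ignored in this additive toy.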
2013-01-01
Objectives In contrast to other countries, surgery still represents the common invasive treatment for varicose veins in Germany. However, radiofrequency ablation, e.g. ClosureFast, is becoming more and more popular in other countries due to potentially better results and reduced side effects. This treatment option may cause lower follow-up costs and is a more convenient procedure for patients, which could justify an introduction into the statutory benefits catalogue. Therefore, we aim at calculating the budget impact of a general reimbursement of ClosureFast in Germany. Methods To assess the budget impact of including ClosureFast in the German statutory benefits catalogue, we developed a multi-cohort Markov model and compared the costs of a “World with ClosureFast” with a “World without ClosureFast” over a time horizon of five years. To address the uncertainty of input parameters, we conducted three different types of sensitivity analysis (one-way, scenario, probabilistic). Results In the Base Case scenario, the introduction of the ClosureFast system for the treatment of varicose veins saves about €19.1 million over a time horizon of five years in Germany. However, the results scatter in the sensitivity analyses due to limited evidence for some key input parameters. Conclusions Results of the budget impact analysis indicate that a general reimbursement of ClosureFast has the potential to be cost-saving in the German Statutory Health Insurance. PMID:23551943
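A budget-impact comparison of this kind can be sketched with a minimal cohort model: everyone is treated once up front, and each yearly cycle a fraction of the cohort has a recurrence and is re-treated. All probabilities and unit costs below are hypothetical illustrations, not the study's inputs, and the two-state structure is far simpler than the multi-cohort Markov model described above.

```python
# Minimal budget-impact sketch: compare total 5-year costs of a "world
# with" vs "world without" a new treatment. Recurrence probabilities and
# unit costs are made-up placeholders.

def total_cost(cohort, p_recur, cost_initial, cost_retreat, years=5):
    # everyone treated once up front; each year a fixed fraction has a
    # recurrence and is re-treated (re-treated patients rejoin the pool)
    total = cohort * cost_initial
    for _ in range(years):
        total += cohort * p_recur * cost_retreat
    return total

n = 100_000  # hypothetical annual treatment cohort
world_without = total_cost(n, p_recur=0.06, cost_initial=1800, cost_retreat=1800)
world_with = total_cost(n, p_recur=0.03, cost_initial=2000, cost_retreat=2000)
budget_impact = world_with - world_without   # negative means savings
```

The sketch illustrates the core trade-off the study quantifies: a pricier index procedure can still save money overall if it sufficiently reduces costly re-interventions.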
Electron-impact coherence parameters for 4¹P₁ excitation of zinc
NASA Astrophysics Data System (ADS)
Piwiński, Mariusz; Kłosowski, Łukasz; Chwirot, Stanisław; Fursa, Dmitry V.; Bray, Igor; Das, Tapasi; Srivastava, Rajesh
2018-04-01
We present electron-impact coherence parameters (EICP) for electron-impact excitation of the 4¹P₁ state of zinc atoms at collision energies of 40 eV and 60 eV. The experimental results are presented together with convergent close-coupling and relativistic distorted-wave approximation theoretical predictions. The results are compared and discussed alongside EICP data for collision energies of 80 eV and 100 eV.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
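The accuracy-versus-cost contrast the paper documents can be illustrated on a single linear-reservoir ODE, dS/dt = P - k*S, whose exact solution is known. Below, a fixed-step explicit Euler scheme (the kind the paper cautions against) is compared with an adaptive explicit Heun scheme (one of its recommended practical choices); forcing, rate constant, and tolerance are illustrative values, not the paper's setup.

```python
import math

# Toy contrast of fixed-step explicit Euler vs adaptive Heun on the
# linear reservoir dS/dt = P - k*S. All parameter values are illustrative.

P, k, S0, T = 2.0, 1.5, 10.0, 5.0
exact = P / k + (S0 - P / k) * math.exp(-k * T)   # analytic solution at T

def euler_fixed(dt):
    S, t = S0, 0.0
    while t < T - 1e-12:
        S += dt * (P - k * S)
        t += dt
    return S

def heun_adaptive(tol=1e-6):
    S, t, dt = S0, 0.0, 0.1
    while t < T - 1e-12:
        dt = min(dt, T - t)
        f0 = P - k * S
        pred = S + dt * f0                          # Euler predictor
        corr = S + 0.5 * dt * (f0 + (P - k * pred)) # Heun corrector
        err = abs(corr - pred)                      # local error estimate
        if err <= tol:                              # accept step
            S, t = corr, t + dt
        # grow/shrink the step from the error estimate (bounded factors)
        dt *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-15)) ** 0.5))
    return S

err_euler = abs(euler_fixed(0.5) - exact)
err_heun = abs(heun_adaptive() - exact)
```

Even on this smooth one-state problem the fixed-step Euler error is orders of magnitude larger than the adaptive Heun error; with the thresholded flux equations of real conceptual models, the gap the paper reports is larger still.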
Identifying Aquifer Heterogeneities using the Level Set Method
NASA Astrophysics Data System (ADS)
Lu, Z.; Vesselinov, V. V.; Lei, H.
2016-12-01
Material interfaces between hydrostratigraphic units (HSUs) with contrasting aquifer parameters (e.g., strata and facies with different hydraulic conductivity) have a great impact on flow and contaminant transport in the subsurface. However, identification of HSU shapes in the subsurface is challenging and typically relies on tomographic approaches in which a series of steady-state/transient head measurements at spatially distributed observation locations are analyzed using inverse models. In this study, we developed a mathematically rigorous approach for identifying material interfaces among an arbitrary number of HSUs using the level set method. The approach was first tested with several synthetic cases, where the true spatial distribution of HSUs was assumed to be known and the head measurements were taken from a flow simulation with the true parameter fields. These synthetic inversion examples demonstrate that the level set method is capable of characterizing the spatial distribution of the heterogeneity. We then applied the methodology to a large-scale problem in which the spatial distribution of pumping wells and observation well screens is consistent with the actual aquifer contamination (chromium) site at Los Alamos National Laboratory (LANL); in this way, we test the applicability of the methodology at an actual site, and we present preliminary results using the actual LANL site data. We also investigated the impact of the number of pumping/observation wells and the drawdown observation frequencies/intervals on the quality of the inversion results, examined the uncertainties associated with the estimated HSU shapes, and assessed the accuracy of the results under different hydraulic-conductivity contrasts between the HSUs.
Pribuisiene, Ruta; Uloza, Virgilijus; Kardisiene, Vilija
2011-12-01
To determine the impact of age, gender, and vocal training on the voice characteristics of children aged 6-13 years, voice acoustic and phonetogram parameters were determined for a group of 44 singing and 31 non-singing children. No impact of gender and/or age on phonetogram parameters, acoustic voice parameters, or maximum phonation time was detected. The voice ranges of all children represented a pre-pubertal soprano type, with a range of 22 semitones for non-singing and 26 semitones for singing individuals. The mean maximum voice intensity was 81 dB. Vocal training had a positive impact on voice intensity parameters in girls. The presented data on average voice characteristics may be applicable in clinical practice and provide relevant support for voice assessment.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
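A variance-based global sensitivity analysis of the kind described above can be sketched with a Saltelli-style pick-and-freeze estimator of first-order Sobol indices. The model below is a toy stand-in for a morphogenesis simulation (one dominant parameter, one weak parameter, one interaction), not the cellular Potts model.

```python
import numpy as np

# Sketch of first-order Sobol index estimation (Saltelli pick-and-freeze).
# The "model" is a hypothetical surrogate: x0 dominates, x1 is weak, and
# x0 interacts with x2.

def model(x):
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] + 0.8 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(3)
n, d = 20_000, 3
A = rng.uniform(size=(n, d))      # two independent sample matrices
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # freeze parameter i from B into A
    # first-order index S_i = V_i / V (Saltelli 2010 estimator)
    S1.append(np.mean(fB * (model(ABi) - fA)) / var)
```

The gap between the sum of first-order indices and 1 flags interaction effects, which is exactly the information the workflow uses to separate single-parameter impacts from parameter interactions.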
NASA Astrophysics Data System (ADS)
Lam, Carl
Due to technology proliferation, the environmental burden attributed to the production, use, and disposal of hazardous materials in electronics has become a worldwide concern. The major theme of this dissertation is to develop and apply hazardous materials assessment tools to systematically guide pollution prevention opportunities in the context of electronic product design, manufacturing, and end-of-life waste management. To this end, a comprehensive review is first provided describing hazard traits and current assessment methods for evaluating hazardous materials. As a case study at the manufacturing level, life cycle impact assessment (LCIA)-based and risk-based screening methods are used to quantify chemical and geographic environmental impacts in the U.S. printed wiring board (PWB) industry. Results from this industrial assessment clarify the priority waste streams and States where impact can be most effectively mitigated. With further knowledge of PWB manufacturing processes, select alternative chemical processes (e.g., spent copper etchant recovery) and material options (e.g., lead-free etch resist) are discussed. In addition, an investigation of technology transition effects for computers and televisions in the U.S. market is performed by linking dynamic materials flow and environmental assessment models. The analysis forecasts quantities of waste units generated and maps shifts in environmental impact potentials associated with metal composition changes due to product substitutions. This insight is important for understanding the timing and waste quantities expected and the emerging toxic elements that need to be addressed as a consequence of technology transition. At the product level, electronic utility meter devices are evaluated to eliminate hazardous materials within product components. Development and application of a component Toxic Potential Indicator (TPI) assessment methodology highlights priority components requiring material alternatives.
Alternative recommendations are provided, and substitute materials such as aluminum alloys for stainless steel and high-density polyethylene for polyvinyl chloride and acrylonitrile-based polymers show promise to meet toxicity reduction, cost, and material functionality requirements. Furthermore, the TPI method, a European Union-focused screening tool, is customized to reflect regulated U.S. toxicity parameters. Results show that, although it is possible to adopt U.S. parameters into the TPI method, harmonization of toxicity regulations and standards across nations and regions is necessary to eliminate inconsistencies during hazard screening of substances used globally. As a whole, the present work helps to assimilate material hazard assessment methods into the larger framework of design-for-environment strategies so that toxics use reduction can be achieved in the development and management of electronics and other consumer goods.
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. 
We concluded that the triple-ensemble probabilistic approach, which accounts for the uncertainties from multiple important sources, provides more comprehensive information for quantifying uncertainties in climate change impact assessments than conventional approaches that are deterministic or account for only one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
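Deriving the probabilistic summary statistics reported above (median change, probability of decreasing yield) from a triple ensemble can be sketched as follows; the ensemble of simulated yield changes is synthetic, with dimensions loosely matching the study's seven crop models and eight climate projections but otherwise made up.

```python
import numpy as np

# Sketch of summarizing a triple ensemble of simulated yield changes (%)
# into the probabilistic quantities reported in such assessments. The
# ensemble values and the 5 parameter sets per model are hypothetical.

rng = rng = np.random.default_rng(4)
# axes: 7 crop models x 5 parameter sets x 8 climate projections
yield_change = rng.normal(loc=-4.0, scale=12.0, size=(7, 5, 8))

members = yield_change.ravel()          # equally weighted ensemble members
median_change = np.median(members)      # e.g. the study's -4% at Jokioinen
p_decrease = np.mean(members < 0.0)     # probability of decreasing yield
```

An ANOVA decomposition along the three axes (model, parameters, climate), as in the paper, would then apportion the ensemble variance among the sources.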
NASA Astrophysics Data System (ADS)
Keeney, Brian A.; Stocke, John T.; Danforth, Charles W.; Shull, J. Michael; Pratt, Cameron T.; Froning, Cynthia S.; Green, James C.; Penton, Steven V.; Savage, Blair D.
2017-05-01
We present basic data and modeling for a survey of the cool, photoionized circumgalactic medium (CGM) of low-redshift galaxies using far-UV QSO absorption-line probes. This survey consists of “targeted” and “serendipitous” CGM subsamples, originally described in Stocke et al. (Paper I). The targeted subsample probes low-luminosity, late-type galaxies at z < 0.02 with small impact parameters (⟨ρ⟩ = 71 kpc), and the serendipitous subsample probes higher-luminosity galaxies at z ≲ 0.2 with larger impact parameters (⟨ρ⟩ = 222 kpc). Hubble Space Telescope and FUSE UV spectroscopy of the absorbers and basic data for the associated galaxies, derived from ground-based imaging and spectroscopy, are presented. We find broad agreement with the COS-Halos results, but our sample shows no evidence for changing ionization parameter or hydrogen density with distance from the CGM host galaxy, probably because the COS-Halos survey probes the CGM at smaller impact parameters. We find at least two passive galaxies with H I and metal-line absorption, confirming the intriguing COS-Halos result that galaxies sometimes have cool gas halos despite no on-going star formation. Using a new methodology for fitting H I absorption complexes, we confirm the CGM cool gas mass of Paper I, but this value is significantly smaller than that found by the COS-Halos survey. We trace much of this difference to the specific values of the low-z metagalactic ionization rate assumed. After accounting for this difference, a best value for the CGM cool gas mass is found by combining the results of both surveys to obtain log(M/M⊙) = 10.5 ± 0.3, or ~30% of the total baryon reservoir of an L ≥ L*, star-forming galaxy. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.
Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik
2015-10-01
The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
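The library-lookup dose calculation described above reduces to a product: organ dose = (CTDIvol-normalized organ dose coefficient) × CTDIvol, with CTDIvol either read from DICOM or reconstructed from the 100 mAs-normalized weighted CTDI, the tube current-time product, and the pitch. The coefficients and scan parameters below are hypothetical placeholders, not values from the published libraries.

```python
# Sketch of the two-library organ dose calculation. Coefficients (mGy per
# mGy of CTDIvol) and scan parameters are made-up illustrations.

organ_coeff = {"brain": 0.9, "eye lens": 1.1, "thyroid": 0.05}

def organ_doses(ctdi_vol, coeffs):
    """Organ dose = CTDIvol-normalized coefficient x CTDIvol (mGy)."""
    return {organ: c * ctdi_vol for organ, c in coeffs.items()}

# reconstruct CTDIvol when it is absent from the DICOM header:
# CTDIvol = nCTDIw(per 100 mAs) * mAs / 100 / pitch  (hypothetical values)
nctdiw_100mas, mas, pitch = 20.0, 150.0, 1.0
ctdi_vol = nctdiw_100mas * mas / 100.0 / pitch      # 30.0 mGy here
doses = organ_doses(ctdi_vol, organ_coeff)
```

Automating this lookup per DICOM series is what lets such a pipeline scale to thousands of examinations; the phantom-matching step (selecting the coefficient set for the patient's size and the scan range) is omitted from this sketch.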
Nanoparticle generation and interactions with surfaces in vacuum systems
NASA Astrophysics Data System (ADS)
Khopkar, Yashdeep
Extreme ultraviolet lithography (EUVL) is the most likely candidate as the next generation technology beyond immersion lithography to be used in high volume manufacturing in the semiconductor industry. One of the most problematic areas in the development process is the fabrication of mask blanks used in EUVL. As the masks are reflective, there is a chance that any surface aberrations in the form of bumps or pits could be printed on the silicon wafers. There is a strict tolerance to the number density of such defects on the mask that can be used in the final printing process. Bumps on the surface could be formed when particles land on the mask blank surface during the deposition of multiple bi-layers of molybdenum and silicon. To identify, and possibly mitigate the source of particles during mask fabrication, SEMATECH investigated particle generation in the VEECO Nexus deposition tool. They found several sources of particles inside the tool such as valves. To quantify the particle generation from vacuum components, a test bench suitable for evaluating particle generation in the sub-100 nm particle size range was needed. The Nanoparticle test bench at SUNY Polytechnic Institute was developed as a sub-set of the overall SEMATECH suite of metrology tools used to identify and quantify sources of particles inside process tools that utilize these components in the semiconductor industry. Vacuum valves were tested using the test bench to investigate the number, size and possible sources of particles inside the valves. Ideal parameters of valve operation were also investigated using a 300-mm slit valve with the end goal of finding optimized parameters for minimum particle generation. SEMATECH also pursued the development of theoretical models of particle transport replicating the expected conditions in an ion beam deposition chamber assuming that the particles were generated. 
In the case of the ion beam deposition tool used in the mask blank fabrication process, the ion beam in the tool could significantly accelerate particles. Assuming that these particles are transported to various surfaces inside the deposition tool, the next challenge is to enhance the adhesion of the particles on surfaces located in the non-critical areas inside the tool. However, for particles in the sub-100 nm size range, no suitable methods exist to compare the adhesion probability of particles upon impact across a wide range of impact velocities, surfaces, and particle types. Traditional methods, which rely on optical measurement of particle velocities in the micron-size regime, cannot be used for sub-100 nm particles, as the particles do not scatter sufficient light for the detectors to function. All the current methods rely on electrical measurements taken from particles impacting a surface. However, for sub-100 nm particles, the impact velocity varies in different regions of the same impaction spot, so electrical measurements are inadequate to quantify the exact adhesion characteristics at different impact velocities and to enable a comparison of multiple particle-surface systems. We therefore propose a new method based on scanning electron microscopy (SEM) imaging to study the adhesion of particles upon impact on surfaces. SEM imaging allows for single-particle detection across a single impaction spot and therefore enables the comparison of different regions with different impact velocities within a single impaction spot. The proposed method will provide a comprehensive correlation between the adhesion probability of sub-100 nm particles and a wide range of impact velocities and angles.
The location of each particle is compared with the impact velocity predicted by computational fluid dynamics methods to generate a comprehensive adhesion map for the impact of 70 nm particles on a polished surface across a large impact velocity range. The final adhesion probability map shows higher adhesion at oblique impact angles compared to normal-incidence impacts. Theoretical work and experiments with micron-sized particles have shown that the contact area between the particle and the surface decreases at lower incidence angles, which results in a decrease in the adhesion probability of the particle. The most likely cause of this result was the role of plastic deformation of particles and its effect on adhesion. Therefore, 70 nm sucrose particles were also impacted under similar impaction conditions to compare the role of plastic deformation in the adhesion characteristics of a particle. Sucrose particles have a modulus of elasticity approximately 10 times that of Polystyrene Latex (PSL) particles and were found to have almost no adhesion on the surface at the same impact velocities where the highest adhesion of PSL particles was measured. Besides the role of plastic deformation, the influence of other possible errors in this process was investigated but not found to be significant. (Abstract shortened by UMI.)