Sample records for physically based distributed-parameter

  1. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models (PBDHMs) discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from terrain properties directly, so that there would be no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adopting the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.

  2. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve the catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters. Unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm could be used for Liuxihe model parameter optimization effectively and could largely improve the model capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
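
    The two algorithmic improvements named in the abstract, a linearly decreasing inertia weight and arccosine-scheduled acceleration coefficients, can be sketched as below. This is a minimal Python illustration, not the authors' code: the exact schedules, the coefficient ranges, and the use of a sphere function in place of the flood-forecast error objective are all assumptions.

```python
import math
import random

def improved_pso(objective, dim, bounds, n_particles=20, max_iter=30,
                 w_max=0.9, w_min=0.4, c_init=2.5, c_final=0.5, seed=42):
    """PSO with a linearly decreasing inertia weight and arccosine-based
    acceleration coefficients (one assumed form of the strategies the
    abstract names; the paper's exact schedules may differ)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(max_iter):
        # Linearly decreasing inertia weight: exploration -> exploitation.
        w = w_max - (w_max - w_min) * t / max_iter
        # Arccosine schedule: s falls smoothly from 1 to 0 over the run.
        s = math.acos(2.0 * t / max_iter - 1.0) / math.pi
        c1 = c_final + (c_init - c_final) * s   # cognitive term shrinks
        c2 = c_init - (c_init - c_final) * s    # social term grows
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    With the particle number 20 and maximum evolution number 30 that the abstract reports as appropriate, the sketch reliably drives a simple test objective toward its minimum.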

  3. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
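
    For a single linear law, the "least violation" principle has a closed form: project the raw measurement vector onto the law's constraint surface. Below is a toy sketch under that assumption (one linear law, equal measurement weights); the paper's fuser classes and performance bounds are far more general.

```python
def fuse_by_law(measurements, law_coeffs):
    """Fuse noisy measurements of parameters tied by a linear physical law
    sum_i a_i * x_i = 0 by projecting the measurement vector onto the
    constraint plane: the least-squares 'least violation' estimate.
    A toy instance of the idea, not the authors' estimator."""
    a, m = law_coeffs, measurements
    # Violation of the law by the raw measurements.
    violation = sum(ai * mi for ai, mi in zip(a, m))
    norm2 = sum(ai * ai for ai in a)
    # Minimal (Euclidean-norm) correction that restores the law.
    return [mi - ai * violation / norm2 for ai, mi in zip(a, m)]
```

    For a mass-balance law x1 + x2 - x3 = 0 and measurements (2.1, 2.9, 5.3), the fused values (2.2, 3.0, 5.2) satisfy the law exactly.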

  4. A physically based catchment partitioning method for hydrological analysis

    NASA Astrophysics Data System (ADS)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.
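
    The core operation, following the steepest downslope (or, reversed, upslope) line from a point, is easy to state on a raster stand-in for the contour-based surface used in the paper. A minimal sketch, assuming a gridded elevation field rather than the authors' contour representation:

```python
def steepest_downslope_path(elev, start):
    """Trace the steepest downslope line across a gridded elevation field,
    stepping to the 8-connected neighbor with the largest drop per unit
    distance until a local minimum (pit or outlet) is reached."""
    rows, cols = len(elev), len(elev[0])
    path = [start]
    r, c = start
    while True:
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5
                    drop = (elev[r][c] - elev[nr][nc]) / dist
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:   # no lower neighbor: the line ends here
            return path
        r, c = best
        path.append(best)
```

    Repeating the search from every network node, once downslope and once upslope, and intersecting the resulting lines yields the irregular-polygon elements the abstract describes.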

  5. Lumped versus distributed thermoregulatory control: results from a three-dimensional dynamic model.

    PubMed

    Werner, J; Buse, M; Foegen, A

    1989-01-01

    In this study we use a three-dimensional model of the human thermal system with a spatial grid of 0.5-1.0 cm. The model is based on well-known physical heat-transfer equations, and all parameters of the passive system have definite physical values. According to the number of substantially different areas and organs, 54 spatially different values are attributed to each physical parameter. Compatibility of simulation and experiment was achieved solely on the basis of physical considerations and basic physiological data. The equations were solved using a modification of the alternating direction implicit method. On the basis of this complex, realistic description of the passive system, various lumped- and distributed-parameter control equations were tested for control of metabolic heat production, blood flow and sweat production. The simplest control equations delivering closed-loop control results compatible with experimental evidence were determined. It was concluded that it is essential to take into account the spatial distribution of heat production, blood flow and sweat production, and that, at least for control of shivering, distributed controller gains differing from the pattern of distribution of muscle tissue are required. For sweat production this is less obvious, so that for simulation of sweating control after a homogeneous heat load a lumped-parameter control may be justified. Based on these conclusions, three-dimensional temperature profiles for cold and heat load, and their dynamics under changing environmental conditions, were computed. In view of the exact simulation of the passive system and the compatibility with experimentally attainable variables, there is good evidence that the values extrapolated by the simulation are adequately determined. The model may be used both for further analysis of the real thermoregulatory mechanisms and for special applications in environmental and clinical health care.
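
    The "well-known physical heat-transfer equations" such passive-system models rest on are commonly stated as the Pennes bioheat equation; this is one standard formulation, not necessarily the paper's exact equations:

```latex
% Pennes bioheat equation (one standard form):
% conduction + blood perfusion + metabolic heat
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b w_b \,(T_a - T)
  + q_m
```

    Here T is tissue temperature, k thermal conductivity, w_b the local blood perfusion rate, T_a arterial blood temperature, and q_m metabolic heat production; organ-specific parameter sets like the model's 54 values assign a definite number to each coefficient.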

  6. Study of Parameters And Methods of LL-Ⅳ Distributed Hydrological Model in DMIP2

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, J.; Wang, X.; Yang, C.; Zhao, Y.; Zhou, H.

    2008-05-01

    The physics-based distributed hydrological model is considered an important development in the transition from traditional empirical hydrology to physical hydrology. The Hydrology Laboratory of the NOAA National Weather Service proposed the first and second phases of the Distributed Model Intercomparison Project (DMIP), an epoch-making effort. The LL distributed hydrological model has been developed through four generations since it was established in 1997 for the Fengman-I reservoir area (11,000 km2). The LL-I distributed hydrological model was born with the application of the flood control system at Fengman-I in China. LL-II was developed with DMIP-I support and is combined with GIS, RS, GPS and radar rainfall measurement. LL-III was established in the project Applications of the LL Distributed Model to Water Resources, supported by the 973 Program of the Ministry of Science and Technology of the People's Republic of China. LL-IV was developed to address China's water problems. For the Blue River and Baron Fork River basins of DMIP-II, the convection-diffusion equation of unsaturated and saturated seepage was derived from soil water dynamics and the continuity equation. The technical advantages of computing confluence with the convection-diffusion equation include a longer predictable period, savings in memory, fast computation, and clear physical concepts. The determination of hydrological model parameters is the key issue; the parameters include empirical coefficients and physical parameters. Empirical, inversion, and optimization methods can be used to determine the model parameters, each with advantages and disadvantages. This paper briefly introduces the LL-IV distributed hydrological model equations and particularly introduces the parameter determination methods and the simulation results for the Blue River and Baron Fork River basins in DMIP-II.
The soil moisture diffusion coefficient and the hydraulic conductivity are involved throughout the LL-IV runoff distribution and slope convergence model and are determined mainly by empirical formulae. Optimization methods are used to calculate two evaporation capacity parameters (coefficients for bare land and vegetated land), two interception parameters, and the wave velocities of overland flow, interflow and groundwater. The wave velocity and diffusion coefficient of river network confluence are determined by: 1. estimating roughness based mainly on digital information such as land use and soil texture; 2. establishing an empirical formula. Another method is convection-diffusion numerical inversion.
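
    The confluence computation rests on a convection-diffusion equation. A generic one-dimensional explicit step (upwind convection, central diffusion) illustrates the equation's form; it is not the LL-IV solver itself, and the discretization is an assumption.

```python
def step_convection_diffusion(u, v, D, dx, dt):
    """One explicit time step of du/dt + v du/dx = D d2u/dx2,
    using an upwind difference for convection (v > 0) and a central
    difference for diffusion; boundary cells are held fixed."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        conv = -v * (u[i] - u[i - 1]) / dx
        diff = D * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
        new[i] = u[i] + dt * (conv + diff)
    return new
```

    The explicit scheme above is only stable for dt <= dx/v and dt <= dx**2/(2*D); production routing codes use implicit or more elaborate treatments.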

  7. Double density dynamics: realizing a joint distribution of a physical system and a parameter system

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Moritsugu, Kei

    2015-11-01

    To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops under nonequilibrium environment-describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example representing a 1D harmonic oscillator is provided to validate the method being applied to the temperature parameters.

  8. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    NASA Astrophysics Data System (ADS)

    Shokri, Ali

    2017-04-01

    The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of the natural connections between surface water and groundwater, historically these processes have been studied separately. The current trend in distributed physically based hydrological model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) method are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about its ambiguity in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to reduce this ambiguity by establishing a method to find an equivalent of the SCS-CN solution in the DrainFlow model, a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed, and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable runoff prediction from SCS-CN and DrainFlow, the variance between the runoff predictions of the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters, including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Since the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives CN and Ia a physical explanation.
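
    The SCS-CN side of the comparison is compact enough to state outright. This is the standard SI-unit form, with the conventional default Ia = 0.2 S; the study instead treats Ia as a free parameter to be fitted.

```python
def scs_cn_runoff(P_mm, CN, Ia=None):
    """Direct runoff depth (mm) from the SCS-CN method.
    Potential retention S (mm): S = 25400/CN - 254.
    Runoff Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    S = 25400.0 / CN - 254.0
    if Ia is None:
        Ia = 0.2 * S          # conventional default initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)
```

    For CN = 100 the surface retains nothing and all rainfall becomes runoff; lowering CN or raising Ia withholds more of the storm.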

  9. Distributed Optical Fiber Sensors Based on Optical Frequency Domain Reflectometry: A review

    PubMed Central

    Wang, Chenhuan; Liu, Kun; Jiang, Junfeng; Yang, Di; Pan, Guanyi; Pu, Zelin; Liu, Tiegen

    2018-01-01

    Distributed optical fiber sensors (DOFS) offer unprecedented features, the most distinctive of which is the ability to monitor variations of physical and chemical parameters with spatial continuity along the fiber. Among these distributed sensing techniques, optical frequency domain reflectometry (OFDR) has been given tremendous attention because of its high spatial resolution and large dynamic range. In addition, DOFS based on OFDR have been used to sense many parameters. In this review, we survey the key technologies for improving the sensing range, spatial resolution and sensing performance of DOFS based on OFDR. We also introduce the sensing mechanisms and applications of DOFS based on OFDR, including strain, stress, vibration, temperature, 3D shape, flow, refractive index, magnetic field, radiation, gas and so on. PMID:29614024

  10. Distributed Optical Fiber Sensors Based on Optical Frequency Domain Reflectometry: A review.

    PubMed

    Ding, Zhenyang; Wang, Chenhuan; Liu, Kun; Jiang, Junfeng; Yang, Di; Pan, Guanyi; Pu, Zelin; Liu, Tiegen

    2018-04-03

    Distributed optical fiber sensors (DOFS) offer unprecedented features, the most distinctive of which is the ability to monitor variations of physical and chemical parameters with spatial continuity along the fiber. Among these distributed sensing techniques, optical frequency domain reflectometry (OFDR) has been given tremendous attention because of its high spatial resolution and large dynamic range. In addition, DOFS based on OFDR have been used to sense many parameters. In this review, we survey the key technologies for improving the sensing range, spatial resolution and sensing performance of DOFS based on OFDR. We also introduce the sensing mechanisms and applications of DOFS based on OFDR, including strain, stress, vibration, temperature, 3D shape, flow, refractive index, magnetic field, radiation, gas and so on.

  11. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  12. BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Han, Yunkun; Han, Zhanwen

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
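
    The model comparison above turns on the Bayesian evidence (marginal likelihood) of each model. A toy one-parameter version, using quadrature where a code like BayeSED uses sampling, shows the mechanics, including the Occam penalty a wider prior pays; all numbers below are illustrative, not from the paper.

```python
import math

def log_evidence(log_likelihood, prior_lo, prior_hi, n=1000):
    """Log marginal likelihood of a one-parameter model under a uniform
    prior on [prior_lo, prior_hi], by trapezoidal quadrature:
    Z = (1 / (hi - lo)) * integral of L(x) dx."""
    dx = (prior_hi - prior_lo) / n
    total = 0.0
    for i in range(n + 1):
        x = prior_lo + i * dx
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * math.exp(log_likelihood(x)) * dx
    return math.log(total / (prior_hi - prior_lo))
```

    Two models with the same Gaussian likelihood but uniform priors of width 2 and 20 differ in evidence by a Bayes factor of roughly 7 in favor of the narrower, better-matched prior.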

  13. BayeSED: A GENERAL APPROACH TO FITTING THE SPECTRAL ENERGY DISTRIBUTION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yunkun; Han, Zhanwen, E-mail: hanyk@ynao.ac.cn, E-mail: zhanwenhan@ynao.ac.cn

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual and Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.

  14. Integrating machine learning to achieve an automatic parameter prediction for practical continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua

    2018-02-01

    For supporting practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed by integrating a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, a SVR model is learned to precisely predict the time-along evolutions of the physical parameters of signals. Second, such predicted time-along evolutions are employed as feedback to control the QKD system for achieving the optimal performance and practical security. Finally, our proposed approach is exemplified by using the intensity evolution of laser light and a local oscillator pulse in the Gaussian modulated coherent state QKD system. Our experimental results have demonstrated three significant benefits of our SVR-based approach: (1) it can allow the QKD system to achieve optimal performance and practical security, (2) it does not require any additional resources and any real-time monitoring module to support automatic prediction of the time-along evolutions of the physical parameters of signals, and (3) it is applicable to any measurable physical parameter of signals in the practical QKD system.
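
    The feedback loop the abstract describes (predict the slow drift of a signal parameter, then correct it toward a setpoint) can be sketched with a linear-trend predictor standing in for the SVR model; the function, its gain, and the intensity values below are hypothetical illustrations, not the paper's system.

```python
def predict_and_compensate(history, target, gain=0.8):
    """Predict the next sample of a slowly drifting parameter by a
    least-squares line through recent samples (a crude stand-in for the
    SVR predictor) and return a proportional feedback correction that
    drives the parameter back toward `target`."""
    n = len(history)
    xbar = (n - 1) / 2.0
    ybar = sum(history) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(history))
    den = sum((i - xbar) ** 2 for i in range(n))
    slope = num / den
    predicted = ybar + slope * (n - xbar)   # extrapolate one step ahead
    correction = gain * (target - predicted)
    return predicted, correction
```

    Applying the correction before the drift materializes is the point: the controller acts on the predicted evolution rather than on a delayed real-time measurement.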

  15. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    NASA Astrophysics Data System (ADS)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
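
    The final step named above, temperature from refractive index via the Gladstone-Dale equation, reduces at constant pressure to a ratio of refractivities: n - 1 = K * rho and rho proportional to 1/T give T = T_ref (n_ref - 1)/(n - 1). The air reference values in this sketch are illustrative assumptions.

```python
def temperature_from_index(n, n_ref=1.000292, T_ref=293.15):
    """Gas temperature (K) from refractive index via Gladstone-Dale at
    constant pressure: T = T_ref * (n_ref - 1) / (n - 1).
    Defaults approximate air near 20 C (assumed, for illustration)."""
    return T_ref * (n_ref - 1.0) / (n - 1.0)
```

    Halving the refractivity n - 1 doubles the absolute temperature, which is why small index changes resolved by the moiré phase map translate into large temperature contrasts in the flame.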

  16. Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.

    2014-03-01

    Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
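
    A Gaussian process emulator of the kind referenced is trained on a handful of expensive model runs and then queried cheaply inside the statistical analysis. Below is a minimal one-dimensional, posterior-mean-only sketch (the paper's machinery also carries predictive uncertainty and many input dimensions), with sin(x) standing in for the expensive transport model.

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def gp_predict(xs, ys, x_star, ell=1.0, noise=1e-9):
    """Posterior mean of a zero-mean GP emulator at x_star,
    trained on (xs, ys) pairs from runs of an expensive model."""
    K = [[rbf(a, b, ell) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(alpha[i] * rbf(xs[i], x_star, ell) for i in range(len(xs)))
```

    Trained on five runs of the stand-in model, the emulator reproduces held-out values between the training points, which is what makes dense sweeps over six model parameters affordable.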

  17. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications

    PubMed Central

    2017-01-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution and spectral and sensing ranges. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration measurements and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within an acceptable frequency range for dynamic vibration and temperature detection. There are many applications, especially in geophysical and mining engineering, where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications. PMID:29104259

  18. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications.

    PubMed

    Miah, Khalid; Potter, David K

    2017-11-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution and spectral and sensing ranges. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration measurements and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within an acceptable frequency range for dynamic vibration and temperature detection. There are many applications, especially in geophysical and mining engineering, where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications.

  19. Multi-Parameter Scattering Sensor and Methods

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S. (Inventor); Fischer, David G. (Inventor)

    2016-01-01

    Methods, detectors and systems detect particles and/or measure particle properties. According to one embodiment, a detector for detecting particles comprises: a sensor for receiving radiation scattered by an ensemble of particles; and a processor for determining a physical parameter for the detector, or an optimal detection angle or a bound for an optimal detection angle, for measuring at least one moment or integrated moment of the ensemble of particles, the physical parameter, or detection angle, or detection angle bound being determined based on one or more of properties (a) and/or (b) and/or (c) and/or (d) or ranges for one or more of properties (a) and/or (b) and/or (c) and/or (d), wherein (a)-(d) are the following: (a) is a wavelength of light incident on the particles, (b) is a count median diameter or other characteristic size parameter of the particle size distribution, (c) is a standard deviation or other characteristic width parameter of the particle size distribution, and (d) is a refractive index of particles.

  20. Discrete Element Method Modeling of Bedload Transport: Towards a physics-based link between bed surface variability and particle entrainment statistics

    NASA Astrophysics Data System (ADS)

    Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.

    2017-12-01

    The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation in which there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For models with high temporal and spatial resolution this is strictly correct, but the formulation is typically applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, which is particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially validated experimentally by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating the local grain size distribution and particle entrainment and deposition rates to the long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Muller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007), such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.
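
    The Gaussian bed-height statistics above lend themselves to a short sketch. Assuming bed elevations fluctuate around a mean with a standard deviation set by the bed shear stress (the relationship reported from the simulations), the probability that a given elevation is exposed to the flow follows directly from the normal CDF. The function name and numbers below are illustrative, not taken from the study:

```python
import math

def prob_exposed(z, mean_bed, sigma_bed):
    """Probability that elevation z lies above the instantaneous bed surface,
    assuming Gaussian bed-height fluctuations (illustrative sketch)."""
    return 0.5 * (1.0 + math.erf((z - mean_bed) / (sigma_bed * math.sqrt(2.0))))

# A grain half a standard deviation above the mean bed level is exposed far
# more often than one buried two standard deviations below it:
p_high = prob_exposed(0.001, mean_bed=0.0, sigma_bed=0.002)
p_low = prob_exposed(-0.004, mean_bed=0.0, sigma_bed=0.002)
```

In a probabilistic Exner formulation this kind of exceedance probability is what replaces the sharp bed/bedload interface of the discrete formulation.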

  1. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    NASA Astrophysics Data System (ADS)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
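
    As a minimal illustration of the Bayesian inference step described above, the sketch below applies Bayes' rule to a single scalar measurement with Gaussian forward-model likelihoods for two hypothetical states of nature. The states, priors and distribution parameters are invented for illustration and are far simpler than the hierarchical mixtures used in the paper:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, priors, mus, sigmas):
    """Posterior over states of nature given one measurement x (Bayes' rule)."""
    weighted = [p * gaussian_pdf(x, m, s) for p, m, s in zip(priors, mus, sigmas)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Hypothetical 'good bond' vs 'poor bond' states, each with a distinct
# expected measurement distribution generated by a forward model:
post = posterior(x=1.8, priors=[0.5, 0.5], mus=[1.0, 2.0], sigmas=[0.4, 0.4])
```

A measurement near the second state's expected value pushes the posterior toward that state; the hierarchical version in the paper layers such conditional distributions across many variables.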

  2. Geophysical Parameter Estimation of Near Surface Materials Using Nuclear Magnetic Resonance

    NASA Astrophysics Data System (ADS)

    Keating, K.

    2017-12-01

    Proton nuclear magnetic resonance (NMR), a mature geophysical technology used in petroleum applications, has recently emerged as a promising tool for hydrogeophysicists. NMR measurements, which can be made in the laboratory, in boreholes, and with a surface-based instrument, are unique in that they are directly sensitive to water, via the initial signal magnitude, and thus provide a robust estimate of water content. In the petroleum industry, rock physics models have been established that relate NMR relaxation times to pore size distributions and permeability. These models are often applied directly in hydrogeophysical applications, despite differences in the materials in the two environments (e.g., unconsolidated versus consolidated, and mineral content). Furthermore, the rock physics models linking NMR relaxation times to pore size distributions do not account for the partially saturated systems that are important for understanding flow in the vadose zone. In our research, we are developing and refining quantitative rock physics models that relate NMR parameters to hydrogeological parameters. Here we highlight the limitations of directly applying established rock physics models to estimate hydrogeological parameters from NMR measurements, and show some of the successes we have had in model improvement. Using examples drawn from both laboratory and field measurements, we focus on the use of NMR in partially saturated systems to estimate water content, pore-size distributions, and the water retention curve. Despite the challenges in interpreting the measurements, valuable information about hydrogeological parameters can be obtained from NMR relaxation data, and we conclude by outlining pathways for improving the interpretation of NMR data for hydrogeophysical investigations.
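
    One example of the established petroleum rock-physics models mentioned above is the Schlumberger-Doll Research (SDR) equation, which relates permeability to porosity and the logarithmic-mean relaxation time, k ≈ C·φ⁴·T2ML². The sketch below uses an illustrative prefactor; in practice C must be calibrated to the formation, which is precisely the limitation the abstract highlights for unconsolidated, partially saturated materials:

```python
def sdr_permeability(porosity, t2ml_ms, c=4.0):
    """SDR estimate k = C * phi^4 * T2ML^2, with T2ML in milliseconds and k
    in millidarcies; C = 4.0 is an illustrative sandstone-style prefactor,
    not a universally valid constant."""
    return c * porosity ** 4 * t2ml_ms ** 2

# Longer relaxation times (larger pores) imply higher permeability:
k = sdr_permeability(porosity=0.25, t2ml_ms=100.0)
```

The same functional form fails when surface relaxivity or saturation differs from the calibration conditions, which motivates the refined models described in the abstract.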

  3. A "total parameter estimation" method in the verification of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, and model parameters are commonly estimated from discharge measurements at the catchment outlet. With advances in hydrological sciences and computer technology, physically based distributed hydrological models such as SWAT, MIKE SHE, and WEP have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and consumption processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate the physical processes within a catchment, they give a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is heavily concentrated in the rainy season from June to August; during other months, many of the perennial rivers within the basin dry up. Thus, runoff simulation alone does not make full use of a distributed hydrological model in arid and semi-arid regions. 
This paper proposes a "total parameter estimation" method to verify distributed hydrological models across multiple water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and that it provides a new way of thinking in the hydrological sciences.

  4. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined, and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size, together with attenuation mechanisms from source to site, constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
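
    For the pure Pareto case, the maximum-likelihood (Hill-type) estimate of the power-law exponent has a closed form, α̂ = n / Σ ln(xᵢ/x_min). A minimal sketch against a synthetic catalog (not the hazard data from the study) shows the estimator recovering the true exponent when the catalog is long enough:

```python
import math
import random

def pareto_mle_exponent(sizes, threshold):
    """Maximum-likelihood (Hill-type) estimate of the power-law exponent
    for event sizes at or above a measurement threshold."""
    tail = [s for s in sizes if s >= threshold]
    return len(tail) / sum(math.log(s / threshold) for s in tail)

# Synthetic catalog drawn from a pure Pareto distribution (inverse transform);
# 1 - u keeps the uniform draw strictly positive:
rng = random.Random(3)
alpha_true, threshold = 1.5, 1.0
catalog = [threshold * (1.0 - rng.random()) ** (-1.0 / alpha_true)
           for _ in range(20000)]
alpha_hat = pareto_mle_exponent(catalog, threshold)
```

The undersampling problem in the abstract appears when the catalog is short: repeating this with a few dozen events gives estimates that scatter widely around the true exponent.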

  5. Sound propagation and absorption in foam - A distributed parameter model.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
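
    The transmission-line relations used in such an analog follow the standard distributed-parameter form: with series impedance Z = R + jωL and shunt admittance Y = G + jωC per unit length, the characteristic impedance is Z₀ = √(Z/Y) and the propagation constant is γ = √(ZY). A minimal sketch with generic line constants (not the foam parameters from the paper):

```python
import cmath

def line_parameters(r, l, g, c, omega):
    """Characteristic impedance and propagation constant of a distributed-
    parameter transmission line (series R, L and shunt G, C per unit length)."""
    series_z = complex(r, omega * l)
    shunt_y = complex(g, omega * c)
    z0 = cmath.sqrt(series_z / shunt_y)
    gamma = cmath.sqrt(series_z * shunt_y)
    return z0, gamma

# Lossless example: Z0 is real (sqrt(L/C)) and gamma purely imaginary:
z0, gamma = line_parameters(r=0.0, l=1e-3, g=0.0, c=1e-9, omega=1e6)
```

In the foam analog, the measured acoustic impedance and propagation constant play the roles of Z₀ and γ, and inverting these relations recovers the physical properties of the foam.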

  6. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement systems. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy sciences for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and too convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making by water engineers through the soft computing framework. The goal of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.
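
    As a toy illustration of the fuzzy decision-making layer described above, the sketch below combines two hypothetical sensor readings with triangular membership functions and a Mamdani-style OR. The parameter names, membership breakpoints and the single rule are invented, and are far simpler than the paper's framework:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def contamination_risk(turbidity_ntu, chlorine_mg_l):
    """Toy rule: high turbidity OR low residual chlorine raises the risk
    score (breakpoints are illustrative, not calibrated values)."""
    high_turbidity = tri_membership(turbidity_ntu, 1.0, 5.0, 9.0)
    low_chlorine = tri_membership(chlorine_mg_l, -0.1, 0.0, 0.5)
    return max(high_turbidity, low_chlorine)  # Mamdani-style OR = max

risk = contamination_risk(turbidity_ntu=4.0, chlorine_mg_l=0.4)
```

A real system would aggregate many such rules over all five sensor nodes and defuzzify the result for the operator.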

  7. WATER QUALITY EARLY WARNING SYSTEMS FOR SOURCE WATER AND DISTRIBUTION SYSTEM MONITORING

    EPA Science Inventory

    A variety of probes for use in continuous monitoring of water quality exist. They range from single parameter chemical/physical probes to comprehensive screening systems based on whole organism responses. Originally developed for monitoring specific characteristics of water qua...

  8. Size Distributions of Solar Proton Events: Methodological and Physical Restrictions

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, L. I.; Yanke, V. G.

    2016-12-01

    Based on the new catalogue of solar proton events (SPEs) for the period of 1997 - 2009 (Solar Cycle 23) we revisit the long-studied problem of the event-size distributions in the context of those constructed for other solar-flare parameters. Recent results on the problem of size distributions of solar flares and proton events are briefly reviewed. Even a cursory acquaintance with this research field reveals a rather mixed and controversial picture. We concentrate on three main issues: i) SPE size distribution for {>} 10 MeV protons in Solar Cycle 23; ii) size distribution of {>} 1 GV proton events in 1942 - 2014; iii) variations of annual numbers for {>} 10 MeV proton events on long time scales (1955 - 2015). Different results are critically compared; most of the studies in this field are shown to suffer from vastly different input datasets as well as from insufficient knowledge of underlying physical processes in the SPEs under consideration. New studies in this field should be made on more distinct physical and methodological bases. It is important to note the evident similarity in size distributions of solar flares and superflares in Sun-like stars.

  9. Open star clusters and Galactic structure

    NASA Astrophysics Data System (ADS)

    Joshi, Yogesh C.

    2018-04-01

    In order to understand the Galactic structure, we perform a statistical analysis of the distribution of various cluster parameters based on the most nearly complete sample of Galactic open clusters available to date. The geometrical and physical characteristics of a large number of open clusters given in the MWSC catalogue are used to study the spatial distribution of clusters in the Galaxy and to determine the scale height, solar offset, local mass density and distribution of reddening material in the solar neighbourhood. We also explore the mass-radius and mass-age relations in Galactic open star clusters. We find that the estimated parameters of the Galactic disk are largely influenced by the choice of cluster sample.

  10. Flare parameters inferred from a 3D loop model data base

    NASA Astrophysics Data System (ADS)

    Cuambe, Valente A.; Costa, J. E. R.; Simões, P. J. A.

    2018-06-01

    We developed a data base of pre-calculated flare images and spectra exploring a set of parameters which describe the physical characteristics of coronal loops and accelerated electron distribution. Due to the large number of parameters involved in describing the geometry and the flaring atmosphere in the model used, we built a large data base of models (˜250 000) to facilitate the flare analysis. The geometry and characteristics of non-thermal electrons are defined on a discrete grid with spatial resolution greater than 4 arcsec. The data base was constructed based on general properties of known solar flares and convolved with instrumental resolution to replicate the observations from the Nobeyama radio polarimeter spectra and Nobeyama radioheliograph (NoRH) brightness maps. Observed spectra and brightness distribution maps are easily compared with the modelled spectra and images in the data base, indicating a possible range of solutions. The parameter search efficiency in this finite data base is discussed. 8 out of 10 parameters analysed for 1000 simulated flare searches were recovered with a relative error of less than 20 per cent on average. In addition, from the analysis of the observed correlation between NoRH flare sizes and intensities at 17 GHz, some statistical properties were derived. From these statistics, the energy spectral index was found to be δ ˜ 3, with non-thermal electron densities showing a peak distribution ⪅107 cm-3, and Bphotosphere ⪆ 2000 G. Some bias for larger loops with heights as great as ˜2.6 × 109 cm, and looptop events were noted. An excellent match of the spectrum and the brightness distribution at 17 and 34 GHz of the 2002 May 31 flare is presented as well.

  11. Pattern dependence in high-speed Q-modulated distributed feedback laser.

    PubMed

    Zhu, Hongli; Xia, Yimin; He, Jian-Jun

    2015-05-04

    We investigate the pattern dependence in a high-speed Q-modulated distributed feedback laser based on its complete physical structure and material properties. The structure parameters of the gain section as well as the modulation and phase sections are all taken into account in simulations based on an integrated traveling wave model. Using this model, we show that an example Q-modulated DFB laser can achieve an extinction ratio of 6.8 dB with a jitter of 4.7 ps and a peak intensity fluctuation of less than 15% for a 40 Gbps RZ modulation signal. The simulation method proves very useful for complex laser structure design and high-speed performance optimization, as well as for providing physical insight into the operation mechanism.

  12. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, including knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking or fish schooling. The newly developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
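
    The PSO update rule behind such a calibration loop is straightforward to sketch: each particle's velocity blends its previous velocity (inertia) with pulls toward its personal best and the swarm's global best. The minimal implementation below minimizes a generic test function in place of a SWAT model run; it is a schematic stand-in with fixed coefficients, not the authors' calibration model:

```python
import random

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser. In a calibration setting,
    `objective` would wrap a model run and return an error metric."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (personal best) + social pull (global best):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for a model-error metric:
best, best_val = pso_minimize(lambda p: sum(x * x for x in p), [(-5, 5)] * 3)
```

Variants such as the linearly decreasing inertia weight mentioned elsewhere in this collection replace the fixed `w` with a schedule over iterations.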

  13. On the development of a new methodology in sub-surface parameterisation on the calibration of groundwater models

    NASA Astrophysics Data System (ADS)

    Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.

    2017-10-01

    In groundwater modelling, robust parameterisation of sub-surface parameters is crucial for obtaining an agreeable model performance. The pilot-point method is an alternative in the parameterisation step for configuring the distribution of parameters in a model. However, the methodologies given in current studies are considered less practical for application to real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of hydraulic gradient over the catchment area is proposed to efficiently configure the pilot-point distribution in the calibration step of a groundwater model. A new pilot-point distribution technique, the Head Zonation-based (HZB) technique, which is based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) using the HZB technique were constructed for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers insights into the trade-off between restricting and maximising the number of pilot points, and a new methodology for selecting pilot-point properties and distribution in the development of a physically-based groundwater model.

  14. Temperature based Restricted Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping

    2016-01-01

    Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution from which RBMs originate. However, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can thus be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
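
    The temperature effect described above can be seen directly in a temperature-scaled logistic activation, σ(x/T): lowering T sharpens a hidden unit's response toward a hard threshold, while raising T flattens it toward chance level. A minimal sketch with illustrative values (not the paper's trained parameters):

```python
import math

def hidden_activation(x, temperature):
    """Temperature-scaled logistic: lower T makes the unit more selective,
    higher T flattens its response toward 0.5."""
    return 1.0 / (1.0 + math.exp(-x / temperature))

# Same input, very different selectivity depending on temperature:
sharp = hidden_activation(1.0, temperature=0.1)   # near-deterministic firing
soft = hidden_activation(1.0, temperature=10.0)   # close to chance level
```

Scaling the input by 1/T is equivalent to scaling the logistic's sharpness parameter, which is the adjustment the theoretical result refers to.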

  15. Importance of factors determining the effective lifetime of a mass, long-lasting, insecticidal net distribution: a sensitivity analysis

    PubMed Central

    2012-01-01

    Background Long-lasting insecticidal nets (LLINs) reduce malaria transmission by protecting individuals from infectious bites, and by reducing mosquito survival. In recent years, millions of LLINs have been distributed across sub-Saharan Africa (SSA). Over time, LLINs decay physically and chemically and are destroyed, making repeated interventions necessary to prevent a resurgence of malaria. Because its effects on transmission are important (more so than the effects of individual protection), estimates of the lifetime of mass distribution rounds should be based on the effective length of epidemiological protection. Methods Simulation models, parameterised using available field data, were used to analyse how the distribution's effective lifetime depends on the transmission setting and on LLIN characteristics. Factors considered were the pre-intervention transmission level, initial coverage, net attrition, and both physical and chemical decay. An ensemble of 14 stochastic individual-based model variants for malaria in humans was used, combined with a deterministic model for malaria in mosquitoes. Results The effective lifetime was most sensitive to the pre-intervention transmission level, with a lifetime of almost 10 years at an entomological inoculation rate of two infectious bites per adult per annum (ibpapa), but of little more than 2 years at 256 ibpapa. The LLIN attrition rate and the insecticide decay rate were the next most important parameters. The lifetime was surprisingly insensitive to physical decay parameters, but this could change as physical integrity gains importance with the emergence and spread of pyrethroid resistance. Conclusions The strong dependency of the effective lifetime on the pre-intervention transmission level indicated that the required distribution frequency may vary more with the local entomological situation than with LLIN quality or the characteristics of the distribution system. 
This highlights the need for malaria monitoring both before and during intervention programmes, particularly since there are likely to be strong variations between years and over short distances. The majority of SSA's population falls into exposure categories where the lifetime is relatively long, but because exposure estimates are highly uncertain, it is necessary to consider subsequent interventions before the end of the expected effective lifetime based on an imprecise transmission measure. PMID:22244509

  16. Importance of factors determining the effective lifetime of a mass, long-lasting, insecticidal net distribution: a sensitivity analysis.

    PubMed

    Briët, Olivier J T; Hardy, Diggory; Smith, Thomas A

    2012-01-13

    Long-lasting insecticidal nets (LLINs) reduce malaria transmission by protecting individuals from infectious bites, and by reducing mosquito survival. In recent years, millions of LLINs have been distributed across sub-Saharan Africa (SSA). Over time, LLINs decay physically and chemically and are destroyed, making repeated interventions necessary to prevent a resurgence of malaria. Because its effects on transmission are important (more so than the effects of individual protection), estimates of the lifetime of mass distribution rounds should be based on the effective length of epidemiological protection. Simulation models, parameterised using available field data, were used to analyse how the distribution's effective lifetime depends on the transmission setting and on LLIN characteristics. Factors considered were the pre-intervention transmission level, initial coverage, net attrition, and both physical and chemical decay. An ensemble of 14 stochastic individual-based model variants for malaria in humans was used, combined with a deterministic model for malaria in mosquitoes. The effective lifetime was most sensitive to the pre-intervention transmission level, with a lifetime of almost 10 years at an entomological inoculation rate of two infectious bites per adult per annum (ibpapa), but of little more than 2 years at 256 ibpapa. The LLIN attrition rate and the insecticide decay rate were the next most important parameters. The lifetime was surprisingly insensitive to physical decay parameters, but this could change as physical integrity gains importance with the emergence and spread of pyrethroid resistance. The strong dependency of the effective lifetime on the pre-intervention transmission level indicated that the required distribution frequency may vary more with the local entomological situation than with LLIN quality or the characteristics of the distribution system. 
This highlights the need for malaria monitoring both before and during intervention programmes, particularly since there are likely to be strong variations between years and over short distances. The majority of SSA's population falls into exposure categories where the lifetime is relatively long, but because exposure estimates are highly uncertain, it is necessary to consider subsequent interventions before the end of the expected effective lifetime based on an imprecise transmission measure.

  17. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, but it raises several challenges, including the effect of the model's spatial resolution on performance and accuracy. To investigate this resolution effect, the distributed hydrological model Liuxihe was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data, comprising the digital elevation model (DEM), soil type and land use type, are freely downloadable. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that arises when model parameters are derived from physical properties alone. Results show that the 200 m × 200 m resolution gives the best flood simulation and forecasting performance, and that model performance and accuracy degrade as the spatial resolution coarsens. At 1000 m × 1000 m resolution the results are the worst, and the river channel network delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed: the suggested threshold for modeling Liujiang River basin floods is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.

  18. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468-CORE requirements. This approach is based on extrapolation of degradation laws, grounded in the physics of failure and electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is growing complexity in calculating average lifetimes and failure rates (FITs) from ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be 10^6 hours when aged under typical conditions (Popt = 10 mW and T = 80°C). Ageing tests must therefore be performed on more than 100 components aged for 10000 hours, mixing different temperature and drive current conditions and leading to acceleration factors above 300-400. These conditions are costly and time-consuming, and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, distributed feedback single-mode laser diodes (DFB-LDs) used in 1550 nm telecommunication networks at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters were measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. 
Cumulative failure rates and lifetime distributions are computed using statistic calculations and equations of drift mechanisms versus time fitted from experimental measurements.
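The extrapolation step described above can be sketched as follows. This is a hypothetical illustration: a linear drift law, the synthetic ageing data, and the 20% failure criterion are all assumed here; the paper fits drift equations to its own measurements.

```python
# Hypothetical sketch: fit a drift law to measured parameter shifts from
# an ageing test, then solve for the time at which the drift reaches the
# maximum acceptable shift (the lifetime). Numbers are illustrative.

def fit_linear_drift(times, drifts):
    """Closed-form least squares for drift(t) = a + b * t."""
    n = len(times)
    mt, md = sum(times) / n, sum(drifts) / n
    b = (sum((t - mt) * (d - md) for t, d in zip(times, drifts))
         / sum((t - mt) ** 2 for t in times))
    return md - b * mt, b

def time_to_failure(a, b, max_shift):
    """Time at which the drifting parameter crosses its acceptance limit."""
    return (max_shift - a) / b

# synthetic 10000 h ageing data: % drift of a monitored parameter
times = [0, 2000, 4000, 6000, 8000, 10000]
drifts = [0.0, 0.21, 0.39, 0.62, 0.80, 1.01]
a, b = fit_linear_drift(times, drifts)
ttf = time_to_failure(a, b, max_shift=20.0)   # far beyond the test span
```

The point of the approach is visible in the numbers: a 10000 h test extrapolates to a lifetime on the order of 10^5-10^6 hours, which no direct test campaign could observe.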

  19. Peridynamic thermal diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail

This study presents the derivation of an ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time-stepping scheme is adopted for the numerical solution of various benchmark problems with known solutions. This paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.
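The explicit time-stepping idea can be sketched in one dimension. This is a bond-based simplification of the state-based formulation in the paper, with an illustrative constant microconductivity kappa that is not calibrated to a classical conductivity:

```python
# Minimal 1-D sketch of explicit time stepping for peridynamic heat
# conduction: each node exchanges heat with every neighbour inside its
# horizon delta, weighted by the inverse bond length. All parameters
# are illustrative.

def peridynamic_step(T, dx, delta, kappa, dt):
    m = int(round(delta / dx))                # nodes inside the horizon
    n = len(T)
    Tn = T[:]
    for i in range(n):
        acc = 0.0
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j != i:
                bond = abs(j - i) * dx        # bond length |x_j - x_i|
                acc += kappa * (T[j] - T[i]) / bond * dx
        Tn[i] = T[i] + dt * acc
    return Tn

# hot spot in the middle of an insulated bar
T = [0.0] * 21
T[10] = 100.0
for _ in range(200):
    T = peridynamic_step(T, dx=1.0, delta=3.0, kappa=0.5, dt=0.05)
```

Because the pairwise heat exchange is antisymmetric, the scheme conserves total heat, and the nonlocal interaction spreads the hot spot over the horizon rather than only to nearest neighbours.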

  20. Hydrometeorological Analysis of Flooding Events in San Antonio, TX

    NASA Astrophysics Data System (ADS)

    Chintalapudi, S.; Sharif, H.; Elhassan, A.

    2008-12-01

South Central Texas is particularly vulnerable to floods due to: proximity to a moist air source (the Gulf of Mexico); the Balcones Escarpment, which concentrates rainfall runoff; a tendency for synoptic-scale features to become cut off and stall over the area; and decaying tropical cyclones stalling over the area. San Antonio is the 7th largest city in the nation, lies in one of the most flash-flood-prone regions in North America, and has experienced a number of flooding events in the last decade (1998, 2002, 2004, and 2007). Research is being conducted to characterize the meteorological conditions that led to these events and to apply rainfall and watershed characteristics data to recreate the runoff events using a two-dimensional, physically based, distributed-parameter hydrologic model. The physically based, distributed-parameter Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model was used to simulate the watershed response to these storm events. Finally, observed discharges were compared to GSSHA-simulated discharges for these storm events. Analysis of some of these events will be presented.

  1. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
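The emulator idea can be sketched with a toy problem. Everything here is invented for illustration (the paper's application is a density functional theory model): a cheap interpolating "emulator", built from a small ensemble of model runs, stands in for the expensive simulator η(θ) inside a Metropolis sampler.

```python
import bisect
import math
import random

def eta(theta):
    """Stand-in for the expensive physics model (illustrative)."""
    return theta ** 2

design = [i / 10 for i in range(21)]       # ensemble design: 21 runs on [0, 2]
runs = [eta(t) for t in design]            # the only "expensive" evaluations

def emulator(theta):
    """Piecewise-linear response surface fitted to the ensemble."""
    j = min(max(bisect.bisect(design, theta), 1), len(design) - 1)
    t0, t1 = design[j - 1], design[j]
    w = (theta - t0) / (t1 - t0)
    return (1 - w) * runs[j - 1] + w * runs[j]

def log_post(theta, y, sigma):
    """Gaussian likelihood, flat prior on [0, 2]."""
    if not 0.0 <= theta <= 2.0:
        return -math.inf
    return -0.5 * ((y - emulator(theta)) / sigma) ** 2

random.seed(1)
y_obs, sigma = 1.44, 0.05                  # data generated at theta = 1.2
theta, samples = 0.5, []
for _ in range(5000):                      # Metropolis uses only the emulator
    prop = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(prop, y_obs, sigma) - log_post(theta, y_obs, sigma):
        theta = prop
    samples.append(theta)
post_mean = sum(samples[1000:]) / 4000.0   # discard burn-in
```

Note that the MCMC loop never calls `eta` directly: the thousands of posterior evaluations hit only the cheap response surface, which is the computational point of the approach.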

  2. The Impact of Uncertain Physical Parameters on HVAC Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai

HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of the DR output to the uncertainty in the distribution parameters. A significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.
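The regression-based ranking step can be sketched as follows. The surrogate response function, the parameter ranges, and the coefficients are all invented for illustration; the paper uses quasi-Monte Carlo sampling and Prony analysis on actual HVAC models.

```python
import random

# Toy sketch of sensitivity ranking: sample the uncertain physical
# parameters, evaluate a surrogate aggregate HVAC response, and rank the
# inputs by the magnitude of their standardized regression (correlation)
# coefficients. All numbers are illustrative.

random.seed(3)
n = 2000
floor = [random.gauss(150.0, 20.0) for _ in range(n)]   # floor area, m^2
insul = [random.uniform(0.7, 1.3) for _ in range(n)]    # relative R-value
mass = [random.gauss(1.0, 0.05) for _ in range(n)]      # relative solid mass

# surrogate aggregate demand-response output (kW shed), linear + noise
y = [0.05 * f - 8.0 * r + 0.2 * m + random.gauss(0.0, 0.5)
     for f, r, m in zip(floor, insul, mass)]

def std_coeff(x, y):
    """Correlation = standardized linear-regression coefficient."""
    mx, my = sum(x) / n, sum(y) / n
    sx = (sum((v - mx) ** 2 for v in x) / n) ** 0.5
    sy = (sum((v - my) ** 2 for v in y) / n) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

params = {"floor": floor, "insul": insul, "mass": mass}
ranking = sorted(params, key=lambda k: -abs(std_coeff(params[k], y)))
```

In this synthetic setup, the insulation term dominates the output variance and the solid-mass term is negligible, so the ranking recovers that ordering.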

  3. Asymptotic formulae for likelihood-based tests of new physics

    NASA Astrophysics Data System (ADS)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
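One widely used result of this paper is the Asimov approximation for the median discovery significance of a counting experiment with expected signal s on top of background b. The counts below are illustrative:

```python
import math

# Median discovery significance from the Asimov data set:
# Z_A = sqrt(2 * [(s + b) * ln(1 + s/b) - s]).

def asimov_significance(s, b):
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

z = asimov_significance(s=10.0, b=100.0)
z_naive = 10.0 / math.sqrt(100.0)        # the cruder s / sqrt(b) estimate
```

Z_A stays below s/sqrt(b), which overestimates the significance once s/b is not negligible.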

  4. Wigner distributions for an electron

    NASA Astrophysics Data System (ADS)

    Kumar, Narinder; Mondal, Chandan

    2018-06-01

We study the Wigner distributions for a physical electron, which reveal multidimensional images of the electron. The physical electron is considered as a composite system of a bare electron and a photon. The Wigner distributions for an unpolarized, longitudinally polarized and transversely polarized electron are presented in the transverse-momentum plane as well as in the impact-parameter plane. The spin-spin correlations between the bare electron and the physical electron are discussed. We also evaluate all the leading-twist generalized transverse momentum distributions (GTMDs) for the electron.

  5. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying uncertainty in hydrological modeling, following the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits based on multi-objective versus single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
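The KLD diagnostic described above can be sketched on discretized distributions. The histograms here are illustrative, not posteriors from the HyMOD-DVM model: a small divergence from the prior flags a parameter the data barely informed.

```python
import math

# Kullback-Leibler divergence between a posterior and its (uniform)
# prior over a discretized parameter range.

def kld(p, q):
    """Discrete KL divergence: sum of p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

bins = 10
prior = [1.0 / bins] * bins        # uniform prior
sharp = [0.01] * 9 + [0.91]        # posterior of a data-informed parameter
flat = [0.08] * 9 + [0.28]         # posterior of a weakly informed parameter
kld_sharp = kld(sharp, prior)
kld_flat = kld(flat, prior)
```

Ranking parameters by this divergence separates those the calibration data constrain from those that effectively retain their prior.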

  6. Optimization for high-dose-rate brachytherapy of cervical cancer with adaptive simulated annealing and gradient descent.

    PubMed

    Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H

    2014-01-01

    To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
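The gEUD quantity behind the gEUD-based objective functions is a power mean of the voxel doses. The dose values and the parameter a below are illustrative, not taken from the study's plans:

```python
# Generalized equivalent uniform dose: gEUD = ((1/N) * sum_i d_i^a)^(1/a).

def geud(doses, a):
    return (sum(d ** a for d in doses) / len(doses)) ** (1.0 / a)

doses = [50.0, 60.0, 70.0, 80.0]          # doses to equal-volume voxels
uniform_equiv = geud(doses, a=1.0)        # a = 1 reduces to the mean dose
serial_organ = geud(doses, a=10.0)        # large a emphasizes hot spots
```

The choice of a is what lets a single scalar objective behave like a mean dose for parallel organs or like a maximum dose for serial organs.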

  7. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can however be questioned, owing to the difficulty of linking model parameters to the aquifer's physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated with the LOG scale parameter (i.e. with the skewness of the distribution). 
The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL model fails to reproduce the ensemble of BTCs at late time, while the LOG model provides results consistent with the PLCO model, though without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally badly) for the empirical fitting of experimental BTC tails due to the effects of subsampling, this is not true for predictive purposes. A careful selection of the proper heavily tailed model and corresponding parameters is required to ensure physically based transport predictions.
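The contrast between the two fitted models can be sketched on synthetic, noise-free data (not the MADE-5 measurements): both tails are linear in log space, so ordinary least squares recovers their parameters, and the PL fit absorbs the cutoff into an inflated exponent.

```python
import numpy as np

# Fit the two tail models: a pure power law c(t) ~ t^-alpha (PL) and a
# power law with exponential cutoff c(t) ~ t^-alpha * exp(-lam*t) (PLCO).
# Synthetic PLCO data with alpha = 1 and lam = 0.02, for illustration.

t = np.linspace(10.0, 200.0, 60)
c = t ** -1.0 * np.exp(-0.02 * t)

# PLCO fit: log c = b0 - alpha * log t - lam * t  (linear in log space)
X = np.column_stack([np.ones_like(t), -np.log(t), -t])
b0, alpha_co, lam = np.linalg.lstsq(X, np.log(c), rcond=None)[0]

# PL fit (no cutoff): log c = b0 - alpha * log t
Xpl = np.column_stack([np.ones_like(t), -np.log(t)])
_, alpha_pl = np.linalg.lstsq(Xpl, np.log(c), rcond=None)[0]
```

The PLCO fit recovers αCO = 1 and the cutoff rate λ exactly, while the cutoff leaks into the PL exponent and pushes it into the 1.5-4 range typical of reported experimental values, mirroring the behaviour described above.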

  8. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography provides a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and the saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach. 
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
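The joint state-parameter EnKF update can be sketched with a scalar toy problem. The linear "model" h = 2 - logK and all numbers are purely illustrative, not ParFlow physics: the point is that the cross-covariance between the state and the augmented parameter carries the water-level observation into the parameter.

```python
import random

# Toy scalar EnKF: an augmented ensemble [stream level h, parameter logK]
# is updated with one perturbed water-level observation.

random.seed(0)
N, R = 50, 0.01 ** 2                      # ensemble size, obs-error variance
logK = [random.gauss(-4.0, 0.5) for _ in range(N)]     # prior parameter
h = [2.0 - k + random.gauss(0.0, 0.05) for k in logK]  # forecast states

h_obs = 2.0 - (-4.5)                      # synthetic truth: logK = -4.5

mh = sum(h) / N
mk = sum(logK) / N                        # prior parameter mean (~ -4.0)
var_h = sum((x - mh) ** 2 for x in h) / (N - 1)
cov_kh = sum((k - mk) * (x - mh) for k, x in zip(logK, h)) / (N - 1)

gain_h = var_h / (var_h + R)              # Kalman gain for the state
gain_k = cov_kh / (var_h + R)             # gain for the parameter
for i in range(N):
    innov = h_obs + random.gauss(0.0, 0.01) - h[i]     # perturbed obs
    h[i] += gain_h * innov
    logK[i] += gain_k * innov
mk_post = sum(logK) / N                   # pulled toward the true -4.5
```

A single informative observation moves the ensemble-mean parameter most of the way from the biased prior toward the truth, which is the mechanism behind the biased-ensemble result quoted above.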

  9. First-order exchange coefficient coupling for simulating surface water-groundwater interactions: Parameter sensitivity and consistency with a physics-based approach

    USGS Publications Warehouse

    Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.

    2009-01-01

Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a. the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.

  10. Spatial and temporal distribution of benthic macroinvertebrates in a Southeastern Brazilian river.

    PubMed

    Silveira, M P; Buss, D F; Nessimian, J L; Baptista, D F

    2006-05-01

Benthic macroinvertebrate assemblages are structured according to physical and chemical parameters that define microhabitats, including food supply, shelter from predators, and other biological parameters that influence reproductive success. The aim of this study is to investigate the spatial and temporal distribution of macroinvertebrate assemblages in the Macaé river basin, in Rio de Janeiro state, Southeastern Brazil. According to the "Habitat Assessment Field Data Sheet--High Gradient Streams" (Barbour et al., 1999), the five sampling sites are considered to be in reference condition. Despite the differences in hydrological parameters (mean width, depth and discharge) among sites, the physicochemical parameters and the general structure of functional feeding groups were similar, except for the least impacted area, which showed more shredders. According to a Detrended Correspondence Analysis based on substrates, there is a clear distinction between pool and riffle assemblages. In fact, the riffle litter substrate had higher taxon richness and abundance, but the pool litter substrate had the greatest number of exclusive taxa. A Cluster Analysis based on sampling site data showed that temporal variation was the main factor structuring macroinvertebrate assemblages in the studied habitats.

  11. Energetic investigation of the adsorption process of CH4, C2H6 and N2 on activated carbon: Numerical and statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb

    2014-01-01

The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E), which represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method based on an adequate algorithm was used. The Langmuir model was adopted as the local adsorption isotherm. This model is developed using the grand canonical ensemble, which allows defining the physico-chemical parameters involved in the adsorption process. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the behaviour of the gas molecules during the adsorption process and gives new physical interpretations at the microscopic level.

  12. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data provides robust confirmation of the presented statistical model.

  13. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system, then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  14. Volatility smile as relativistic effect

    NASA Astrophysics Data System (ADS)

    Kakushadze, Zura

    2017-06-01

We give an explicit formula for the probability distribution based on a relativistic extension of Brownian motion. The distribution (1) is properly normalized and (2) obeys the tower law (semigroup property), so we can construct martingales and self-financing hedging strategies and price claims (options). This model is a 1-constant-parameter extension of the Black-Scholes-Merton model. The new parameter is the analog of the speed of light in Special Relativity. However, in the financial context there is no "speed limit" and the new parameter has the meaning of a characteristic diffusion speed at which relativistic effects become important and lead to a much softer asymptotic behavior, i.e., fat tails, giving rise to volatility smiles. We argue that a nonlocal stochastic description of such (Lévy) processes is inadequate and discuss a local description from physics. The presentation is intended to be pedagogical.

  15. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

The main tools for estimating the risk and amount of erosion are different types of soil erosion models: on the one hand there are empirically based model concepts, on the other hand there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, and they do not account for many driving factors that are in the scope of scenario-related analyses. In addition, the physically based models contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of the driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and erosion resistance (also called "critical shear stress" or "critical stream power") are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on the grain size distribution). 
Consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only a few minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion process is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rates that deliver information on the spatial and temporal structure of soil and surface properties and processes.

  16. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
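The joint state-parameter idea behind the particle filter can be sketched as follows. The linear wear law and all numbers are illustrative, not the paper's pump model: each particle carries both a damage state d and an unknown wear-rate parameter w, so weighting and resampling against noisy damage observations estimates both jointly.

```python
import math
import random

random.seed(2)
N = 500
particles = [{"d": 0.0, "w": random.uniform(0.005, 0.05)} for _ in range(N)]
w_true, sigma = 0.02, 0.02                    # truth and obs noise

for step in range(1, 31):
    obs = w_true * step + random.gauss(0.0, sigma)   # noisy damage reading
    weights = []
    for p in particles:
        p["d"] += p["w"] + random.gauss(0.0, 0.002)  # propagate damage
        p["w"] = abs(p["w"] + random.gauss(0.0, 0.0005))  # parameter jitter
        weights.append(math.exp(-0.5 * ((obs - p["d"]) / sigma) ** 2))
    total = sum(weights)
    cumw, cum = [], 0.0
    for w in weights:
        cum += w / total
        cumw.append(cum)
    new, j = [], 0                            # systematic resampling
    for i in range(N):
        pos = (i + random.random()) / N
        while j < N - 1 and cumw[j] < pos:
            j += 1
        new.append(dict(particles[j]))
    particles = new

w_est = sum(p["w"] for p in particles) / N    # posterior wear-rate mean
```

Particles whose wear rate disagrees with the observations drift away from the measured damage and are resampled out, so the parameter estimate converges along with the state; predicting remaining useful life then amounts to propagating the surviving particles to a damage threshold.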

  17. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
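The Beta-binomial computation can be sketched as follows. The shape parameters below are illustrative, not the maximum-likelihood values from the CHMS data: with p ~ Beta(α, β), the probability of k active days out of n assessed is the Beta-binomial pmf, and "meeting the guidelines 7 out of 7 days" is the k = n = 7 term.

```python
import math

# Beta-binomial pmf: C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta),
# evaluated via log-gamma for numerical stability.

def beta_binomial_pmf(k, n, alpha, beta):
    log_num = (math.lgamma(k + alpha) + math.lgamma(n - k + beta)
               - math.lgamma(n + alpha + beta))
    log_den = math.lgamma(alpha) + math.lgamma(beta) - math.lgamma(alpha + beta)
    return math.comb(n, k) * math.exp(log_num - log_den)

p7 = beta_binomial_pmf(7, 7, alpha=2.0, beta=1.0)   # adherence on 7 of 7 days
total = sum(beta_binomial_pmf(k, 7, 2.0, 1.0) for k in range(8))
```

Being a proper discrete distribution over 0..n active days, the pmf sums to 1, which is exactly the advantage over the continuous Beta formulation noted above.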

  18. MODELING PHYSICAL HABITAT PARAMETERS

    EPA Science Inventory

    Salmonid populations can be affected by alterations in stream physical habitat. Fish productivity is determined by the stream's physical habitat structure (channel form, substrate distribution, riparian vegetation), water quality, flow regime and inputs from the watershed (sedim...

  19. An enhanced lumped element electrical model of a double barrier memristive device

    NASA Astrophysics Data System (ADS)

    Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz

    2017-05-01

    The massive parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems: nonlinear resistors with memory. They are fabricated in nanotechnology, and hence parameter spread during fabrication can hamper reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and convenient neither for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky-contact. The introduced model leads to very fast simulations using standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model.
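
    The abstract does not give the model equations. As a generic illustration of what a lumped (concentrated) memristive model looks like, the sketch below integrates a simple one-state-variable model with a linear resistance map. The state equation, the clipping "window", and all parameter values are assumptions for illustration, not the DBMD model from the paper:

```python
import numpy as np

def simulate_memristor(v, dt, w0=0.5, mu=2e4, r_on=100.0, r_off=16e3):
    """Forward-Euler integration of a generic lumped memristive model:
    internal state w in [0, 1]; resistance interpolates between r_on
    and r_off; the state drifts with the current (clipped 'window')."""
    w, currents = w0, []
    for vk in v:
        r = r_on * w + r_off * (1.0 - w)
        i = vk / r
        w = min(1.0, max(0.0, w + mu * i * dt))
        currents.append(i)
    return np.array(currents), w

t = np.linspace(0.0, 1.0, 1000)
v = np.sin(2 * np.pi * 5 * t)              # 5 Hz sinusoidal drive
i, w_final = simulate_memristor(v, dt=t[1] - t[0])
```

    A concentrated model of this form runs orders of magnitude faster than a kinetic Monte-Carlo simulation of the distributed device, which is the practical point the abstract makes.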

  20. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
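
    A minimal sketch of the random-pairing scenario mentioned above: both components are drawn independently from a Salpeter-like power-law IMF and the mass ratio is computed per pair. The IMF slope and mass limits are illustrative assumptions, not the study's choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_imf(n, alpha=2.35, m_min=0.1, m_max=50.0):
    """Draw n stellar masses from a Salpeter-like power law via inverse CDF."""
    u = rng.uniform(size=n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

# Random pairing: both components drawn independently from the IMF;
# the mass ratio q is defined relative to the more massive component.
m_a = sample_imf(100_000)
m_b = sample_imf(100_000)
q = np.minimum(m_a, m_b) / np.maximum(m_a, m_b)
```

    The other pairing scenarios (primary-constrained, split-core, etc.) differ only in how the second component is assigned given the first; comparing the resulting q distributions with observed field binaries is the discriminating test described in the abstract.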

  1. SU-E-T-109: An Investigation of Including Variable Relative Biological Effectiveness in Intensity Modulated Proton Therapy Planning Optimization for Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Zaghian, M; Lim, G

    2015-06-15

    Purpose: The current practice of considering the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is indeed a variable depending on the dose per fraction, the linear energy transfer (LET), tissue parameters, etc. In this study, we investigate the impact of variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. In order to calculate variable RBE, tissue specific parameters were obtained from the literature and dose averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and were utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared by dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets, compared with the fixed RBE weighted dose distributions, for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plans were optimized by fRBE-OPT. For organ-at-risk (OAR) sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization rather than conventional physical dose based optimization in IMPT planning may bring benefit in improved tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients. The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.
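
    The study's specific variable-RBE model is not given in the abstract. As background, an RBE value is commonly derived from the generic linear-quadratic equal-effect condition, sketched below. The tissue parameters are invented for illustration and are not the study's values:

```python
import numpy as np

def rbe_from_lq(d, alpha, beta, alpha_x, beta_x):
    """RBE at proton dose d per fraction from the generic linear-quadratic
    equal-effect condition alpha_x*Dx + beta_x*Dx**2 = alpha*d + beta*d**2,
    with RBE = Dx / d (not the study's specific parameter fit)."""
    effect = alpha * d + beta * d**2
    d_x = (np.sqrt(alpha_x**2 + 4.0 * beta_x * effect) - alpha_x) / (2.0 * beta_x)
    return d_x / d

# Illustrative (assumed) tissue parameters, at 2 Gy per fraction
rbe = rbe_from_lq(d=2.0, alpha=0.15, beta=0.035, alpha_x=0.10, beta_x=0.05)
```

    In a variable-RBE optimization, α would additionally depend on the dose-averaged LET at each voxel, which is why Monte Carlo LET calculations enter the workflow.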

  2. Statistical physics studies of multilayer adsorption isotherm in food materials and pore size distribution

    NASA Astrophysics Data System (ADS)

    Aouaini, F.; Knani, S.; Ben Yahia, M.; Ben Lamine, A.

    2015-08-01

    Water sorption isotherms of foodstuffs are very important in different areas of food science and engineering, such as the design, modeling, and optimization of many processes. The equilibrium moisture content is an important parameter in models used to predict changes in the moisture content of a product during storage. A multilayer model with two energy levels was formulated on the basis of statistical physics and theoretical considerations. Thanks to the grand canonical ensemble of statistical physics, some physicochemical parameters related to the adsorption process were introduced into the analytical model expression. Data tabulated in the literature on water adsorption at different temperatures on chickpea seeds, lentil seeds, potato, and green peppers were described with the models most widely applied in food science, and the study was extended to the newly proposed model. Among the studied models, the proposed model gives the best description of the data over the whole range of relative humidity. Using our model, we were able to determine the thermodynamic functions. The measurement of desorption isotherms, in particular of a gas over a porous solid, gives access to the pore size distribution (PSD).
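
    For context, the classical BET multilayer isotherm is one of the models most widely applied in food science that studies like this compare against. It can be evaluated as follows; the monolayer capacity and energy constant below are illustrative values, not fits from the paper:

```python
import numpy as np

def bet_isotherm(aw, m0, c):
    """Classical BET multilayer isotherm: equilibrium moisture content as a
    function of water activity aw, monolayer capacity m0 and energy constant c."""
    return m0 * c * aw / ((1.0 - aw) * (1.0 + (c - 1.0) * aw))

aw = np.linspace(0.05, 0.90, 50)        # water activity (relative humidity)
m = bet_isotherm(aw, m0=0.06, c=15.0)   # illustrative parameter values
```

    Statistical-physics derivations of this family of models make parameters like m0 and c interpretable in terms of adsorption energies and site densities, which is the advantage the abstract emphasizes.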

  3. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
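
    The eigenvector character of the solution can be illustrated on a toy balance matrix. The matrix below is invented and merely column-stochastic, standing in for the paper's physical and monetary balance equations:

```python
import numpy as np

# Toy balance matrix: entry A[i, j] is the share of flow from node j
# reaching node i; columns sum to 1 (a stochastic flow matrix).
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.4],
              [0.2, 0.2, 0.5]])

# The stationary price/value vector is the eigenvector associated with
# the eigenvalue closest to 1.
vals, vecs = np.linalg.eig(A)
k = int(np.argmin(np.abs(vals - 1.0)))
p = np.real(vecs[:, k])
p = p / p.sum()     # normalize; Perron-Frobenius guarantees a single sign
```

    In the paper's setting the matrix entries are stochastic input parameters, so the eigenvector (and hence prices and labor values) inherits a distribution across Monte Carlo draws.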

  4. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
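
    A heavily simplified, one-parameter sketch of the EPPES idea: sample an ensemble from a proposal distribution, score each member against verifying data, and feed the likelihood weights back into the proposal. The Gaussian proposal, the toy loss, and all constants are assumptions, not the operational algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def eppes_step(mu, sigma, n_ens, loss):
    """One EPPES-like iteration: draw an ensemble of parameter values from
    the proposal N(mu, sigma**2), score each member with a loss (standing in
    for a likelihood against verifying observations), and refresh the
    proposal from the likelihood-weighted ensemble moments."""
    theta = rng.normal(mu, sigma, size=n_ens)
    w = np.exp(-loss(theta))
    w = w / w.sum()
    mu_new = float(np.sum(w * theta))
    sigma_new = float(np.sqrt(np.sum(w * (theta - mu_new) ** 2)))
    return mu_new, max(sigma_new, 1e-3)

# Toy forecast model with true parameter 2.5; squared forecast error as loss
loss = lambda th: (th - 2.5) ** 2
mu, sigma = 0.0, 2.0
for _ in range(30):
    mu, sigma = eppes_step(mu, sigma, 64, loss)
```

    The cost-effectiveness claim in the abstract comes from the fact that the ensemble forecasts are run anyway; the parameter update only reuses their verification scores.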

  5. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
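
    A minimal sketch of the Monte Carlo random-sampling calibration strategy mentioned above, applied to a stand-in linear-reservoir model rather than PRMS. The toy model, the Nash-Sutcliffe objective, and the parameter range are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_model(precip, k):
    """Stand-in for a hydrologic model: a single linear reservoir."""
    q, storage = [], 0.0
    for p in precip:
        storage += p
        out = k * storage
        storage -= out
        q.append(out)
    return np.array(q)

# Synthetic 'observed' flow generated with a known parameter, then
# Monte Carlo random sampling over the parameter range to recover it
precip = rng.gamma(2.0, 2.0, size=200)
obs = toy_model(precip, k=0.3)
candidates = rng.uniform(0.01, 0.99, size=500)
scores = [nse(toy_model(precip, k), obs) for k in candidates]
best_k = float(candidates[int(np.argmax(scores))])
```

    A modular framework of the kind described lets the sampler, the objective function, and the model runner be swapped independently, which is what makes it easy to substitute shuffled complex evolution or the downhill simplex for the random sampler.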

  6. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence of low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE impacted areas. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.

  7. MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems

    DTIC Science & Technology

    2007-05-03

    Parameter estimation for the 3-parameter log-logistic distribution (LLD3). Applications discussed include physical security, air traffic control, traffic monitoring, video surveillance, and industrial automation.

  8. Integration of a Physically based Distributed Hydrological Model with a Model of Carbon and Nitrogen Cycling: A Case Study at the Luquillo Critical Zone Observatory, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Bastola, S.; Dialynas, Y. G.; Bras, R. L.; Arnone, E.; Noto, L. V.

    2015-12-01

    The dynamics of carbon and nitrogen cycles, increasingly influenced by human activities, are the key to the functioning of ecosystems. These cycles are influenced by the composition of the substrate, the availability of nitrogen, the population of microorganisms, and by environmental factors. Therefore, land management and use, climate change, and nitrogen deposition patterns influence the dynamics of these macronutrients at the landscape scale. In this work a physically based distributed hydrological model, the tRIBS model, is coupled with a process-based multi-compartment model of the biogeochemical cycle to simulate the dynamics of carbon and nitrogen (CN) in the Mameyes River basin, Puerto Rico. The model includes a wide range of processes that influence the movement, production, and alteration of nutrients in the landscape, and factors that affect CN cycling. The tRIBS integrates geomorphological and climatic factors that influence the cycling of CN in soil. Implementing the decomposition module in tRIBS makes the model a powerful complement to a biogeochemical observation system and a forecast tool able to analyze the influences of future changes on ecosystem services. The soil hydrologic parameters of the model were obtained using ranges of published parameters and observed streamflow data at the outlet. The parameters of the decomposition module are based on previously published data from studies conducted in the Luquillo CZO (budgets of soil organic matter and CN ratios for each of the dominant vegetation types across the landscape). Hydrological fluxes, wet deposition of nitrogen, litter fall, and its corresponding CN ratio drive the decomposition model. The simulation results demonstrate a strong influence of soil moisture dynamics on the spatiotemporal distribution of nutrients at the landscape level. The carbon in the litter pool and the nitrate and ammonia pools respond quickly to soil moisture content.
Moreover, the CN ratios of the plant litter have significant influence in the dynamics of CN cycling.

  9. Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics

    PubMed Central

    Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier

    2013-01-01

    Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
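
    A bare-bones nested-sampling loop for a one-dimensional toy problem (uniform prior, Gaussian likelihood), showing how live points and a shrinking prior volume accumulate the evidence. All constants are illustrative, and the run is simply truncated rather than terminated by a convergence criterion:

```python
import numpy as np

rng = np.random.default_rng(7)

def log_like(theta):
    """Toy Gaussian log-likelihood, mean 0, sigma 1."""
    return -0.5 * theta**2 - 0.5 * np.log(2.0 * np.pi)

# Maintain n_live points from the prior U(-5, 5); repeatedly replace the
# worst point by a prior draw with strictly higher likelihood, shrinking
# the prior volume X and accumulating the evidence Z.
n_live, n_iter = 100, 600
live = rng.uniform(-5.0, 5.0, size=n_live)
live_ll = log_like(live)
log_z, log_x = -np.inf, 0.0
for i in range(n_iter):
    worst = int(np.argmin(live_ll))
    log_x_new = -(i + 1) / n_live                       # deterministic shrinkage
    log_w = np.log(np.exp(log_x) - np.exp(log_x_new)) + live_ll[worst]
    log_z = np.logaddexp(log_z, log_w)
    while True:                                         # constrained prior draw
        cand = rng.uniform(-5.0, 5.0)
        if log_like(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, log_like(cand)
            break
    log_x = log_x_new
# The analytic evidence is 1/10 here, since the prior density is 1/10
# on [-5, 5] and the Gaussian integrates to 1.
```

    The discarded points, weighted by their evidence contributions, also provide posterior samples for free, which is how the method yields the posterior parameter and predictive distributions mentioned above.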

  10. Root growth, water uptake, and sap flow of winter wheat in response to different soil water conditions

    NASA Astrophysics Data System (ADS)

    Cai, Gaochao; Vanderborght, Jan; Langensiepen, Matthias; Schnepf, Andrea; Hüging, Hubert; Vereecken, Harry

    2018-04-01

    How much water can be taken up by roots and how this depends on the root and water distributions in the root zone are important questions that need to be answered to describe water fluxes in the soil-plant-atmosphere system. Physically based root water uptake (RWU) models that relate RWU to transpiration, root density, and water potential distributions have been developed but used or tested far less. This study aims at evaluating the simulated RWU of winter wheat using the empirical Feddes-Jarvis (FJ) model and the physically based Couvreur (C) model for different soil water conditions and soil textures compared to sap flow measurements. Soil water content (SWC), water potential, and root development were monitored noninvasively at six soil depths in two rhizotron facilities that were constructed in two soil textures: stony vs. silty, with each of three water treatments: sheltered, rainfed, and irrigated. Soil and root parameters of the two models were derived from inverse modeling and simulated RWU was compared with sap flow measurements for validation. The different soil types and water treatments resulted in different crop biomass, root densities, and root distributions with depth. The two models simulated the lowest RWU in the sheltered plot of the stony soil where RWU was also lower than the potential RWU. In the silty soil, simulated RWU was equal to the potential uptake for all treatments. The variation of simulated RWU among the different plots agreed well with measured sap flow but the C model predicted the ratios of the transpiration fluxes in the two soil types slightly better than the FJ model. The root hydraulic parameters of the C model could be constrained by the field data but not the water stress parameters of the FJ model. This was attributed to differences in root densities between the different soils and treatments which are accounted for by the C model, whereas the FJ model only considers normalized root densities. 
The impact of differences in root density on RWU could be accounted for directly by the physically based RWU model but not by empirical models that use normalized root density functions.

  11. Generalized Finsler geometric continuum physics with applications in fracture and phase transformations

    NASA Astrophysics Data System (ADS)

    Clayton, J. D.

    2017-02-01

    A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.

  12. Evaluation of eutrophication of Ostravice river depending on the chemical and physical parameters

    NASA Astrophysics Data System (ADS)

    Hlavac, A.; Melcakova, I.; Novakova, J.; Svehlakova, H.; Slavikova, L.; Klimsa, L.; Bartkova, M.

    2017-10-01

    The main objective of this study was to evaluate which selected environmental parameters in rivers affect the concentration of chlorophyll a and the distribution of macrozoobenthos. The data were collected on selected profiles of the Ostravice mountain river in the Moravian-Silesian Region. The examined chemical and physical parameters include dissolved oxygen (DO), flow rate, oxidation-reduction potential (ORP), conductivity, temperature, pH, total nitrogen and phosphorus concentration.

  13. An internet graph model based on trade-off optimization

    NASA Astrophysics Data System (ADS)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.

  14. Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization

    NASA Astrophysics Data System (ADS)

    Eroglu, Sertac

    2014-10-01

    The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, was termed the statistical mechanical Menzerath-Altmann model. The derived model allows the model parameters to be interpreted in terms of physical concepts. We also propose that many organizations presenting Menzerath-Altmann law behavior, whether linguistic or not, can be methodically examined by the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.

  15. mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris

    2018-02-01

    mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.

  16. Wave-height hazard analysis in Eastern Coast of Spain - Bayesian approach using generalized Pareto distribution

    NASA Astrophysics Data System (ADS)

    Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.

    2005-03-01

    Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where recent years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of other derived parameters, like occurrence probabilities and return periods. Predictives are also available. Computations are carried out using the program BGPE v2.0.
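
    The GPD machinery for excesses over a threshold can be sketched as follows. This uses a method-of-moments point fit on synthetic data rather than the paper's Bayesian estimation, and the shape and scale values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def gpd_sample(n, shape, scale):
    """Draw GPD-distributed excesses via the inverse CDF."""
    u = rng.uniform(size=n)
    return scale / shape * ((1.0 - u) ** (-shape) - 1.0)

def gpd_fit_moments(x):
    """Method-of-moments point estimates of GPD shape and scale
    (valid for shape < 1/2; a Bayesian fit would replace this step)."""
    m, v = x.mean(), x.var()
    shape = 0.5 * (1.0 - m * m / v)
    scale = m * (1.0 - shape)
    return shape, scale

excesses = gpd_sample(5000, shape=0.1, scale=1.5)   # illustrative values
c_hat, s_hat = gpd_fit_moments(excesses)

# N-event return level: the excess exceeded on average once per N events
N = 100
return_level = s_hat / c_hat * (N**c_hat - 1.0)
```

    In the Bayesian treatment described above, posterior draws of the shape and scale would be pushed through the same return-level formula, giving a full posterior distribution for the return level rather than a point estimate.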

  17. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
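
    A toy version of the propagator-likelihood idea for a two-state Markov chain: score parameters by how well one step of the model dynamics preserves the empirical steady-state distribution. The chain, the rates, and holding one rate at its known value are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def transition_matrix(a, b):
    """Column-stochastic one-step matrix of a two-state chain with
    switching probabilities a (state 0 -> 1) and b (state 1 -> 0)."""
    return np.array([[1.0 - a, b],
                     [a, 1.0 - b]])

# Independent snapshots of the steady state of the true chain
a_true, b_true = 0.3, 0.1
pi = np.array([b_true, a_true]) / (a_true + b_true)
samples = rng.choice(2, size=5000, p=pi)
p_emp = np.bincount(samples, minlength=2) / samples.size

def prop_loglik(a, b):
    """Propagator likelihood: how well one step of the model dynamics
    preserves the empirical distribution (cross-entropy form)."""
    q = transition_matrix(a, b) @ p_emp
    return float(np.sum(p_emp * np.log(q)))

# Grid search over a, with b held at its known value (only the ratio of
# the two rates is identifiable from steady-state snapshots of this chain)
grid = np.linspace(0.05, 0.95, 91)
a_hat = float(max(grid, key=lambda a: prop_loglik(a, b_true)))
```

    The maximum sits where propagating the empirical distribution one step leaves it unchanged, i.e. where the empirical distribution is stationary for the fitted dynamics, which is the minimum-relative-entropy property described in the abstract.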

  18. Research on the control strategy of distributed energy resources inverter based on improved virtual synchronous generator.

    PubMed

    Gao, Changwei; Liu, Xiaoming; Chen, Hai

    2017-08-22

    This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during the transition process. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. The adjustable parameter of the compensation section can be modified to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG in the transient process. To verify the effectiveness of the proposed control strategy for distributed energy resources inverters, a simulation model was set up on the MATLAB/SIMULINK platform and a physical experiment platform was established. Simulation and experiment results demonstrate the effectiveness of the proposed IVSG control strategy.

  19. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
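    The regionalization step — choosing, among equally acceptable Pareto sets, the one closest to a neighboring basin in a similarity-weighted sense — can be sketched as follows. All parameter values and weights are invented; the actual closeness measure used in the study may differ.

```python
import numpy as np

# Hypothetical regionalization step: among equally acceptable Pareto parameter
# sets, pick the one closest (in a similarity-weighted sense) to the parameters
# adopted in a neighboring basin. All numbers are invented for illustration.
pareto_sets = np.array([[1.2, 0.8, 3.0],
                        [0.9, 1.1, 2.5],
                        [1.5, 0.7, 4.0]])
neighbor = np.array([1.0, 1.0, 2.6])   # parameter set of a nearby basin
weights = np.array([0.7, 0.5, 0.2])    # high weight = parameter assumed similar across basins

def closeness(candidate, reference, w):
    """Similarity-weighted Euclidean distance between two parameter sets."""
    return float(np.sqrt(np.sum(w * (candidate - reference) ** 2)))

scores = [closeness(p, neighbor, weights) for p in pareto_sets]
chosen = pareto_sets[int(np.argmin(scores))]   # the set kept after regionalization
```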

  20. 4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Feza; Arikan, Orhan

    2016-07-01

    Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-posed problem. Model-based 3D electron density estimations provide physically feasible distributions. However, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is considered as an optimization problem where the optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is also investigated. Results show that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere.
    Since ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
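    A minimal sketch of the time-domain idea, assuming for illustration that the per-epoch estimate at a single voxel can be treated as a noisy observation of a random walk (this state model and all variances are invented, not the paper's): a scalar Kalman filter then reduces the error of the independent per-epoch estimates.

```python
import numpy as np

# Toy sketch: the density at one voxel is modeled as a random walk and the
# noisy per-epoch 3D-CIT outputs are smoothed with a scalar Kalman filter.
rng = np.random.default_rng(1)
truth = 10.0 + np.cumsum(rng.normal(0.0, 0.05, size=200))  # slowly varying density
noisy = truth + rng.normal(0.0, 1.0, size=200)             # independent per-epoch estimates

def kalman_smooth(z, q=0.05 ** 2, r=1.0 ** 2):
    x, p, out = z[0], r, []
    for zk in z:
        p = p + q              # predict: random-walk process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (zk - x)   # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

smoothed = kalman_smooth(noisy)
rmse_raw = float(np.sqrt(np.mean((noisy - truth) ** 2)))
rmse_kal = float(np.sqrt(np.mean((smoothed - truth) ** 2)))  # noticeably smaller
```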

  1. Do morphometric parameters and geological conditions determine chemistry of glacier surface ice? Spatial distribution of contaminants present in the surface ice of Spitsbergen glaciers (European Arctic).

    PubMed

    Lehmann, Sara; Gajek, Grzegorz; Chmiel, Stanisław; Polkowska, Żaneta

    2016-12-01

    The chemistry of glaciers is strongly determined by long-distance transport of chemical substances and their wet and dry deposition on the glacier surface. This paper concerns the spatial distribution of metals, ions, and dissolved organic carbon, as well as the differentiation of physicochemical parameters (pH, electrical conductivity), determined in surface ice samples collected from four Arctic glaciers during the summer season of 2012. The studied glaciers represent three different morphological types: ground-based (Blomlibreen and Scottbreen), tidewater which evolved to ground-based (Renardbreen), and a typical tidewater glacier (Recherchebreen). All of the glaciers function as glacial systems and hence are subject to the same physical processes (melting, freezing) and to ice flow resulting from the combined effects of gravity and topography. On this basis, the article discusses the correlation between morphometric parameters, changes in mass balance, geological characteristics of the glaciers and the spatial distribution of analytes on the ice surface. A strong correlation (r = 0.63) is recorded between the aspect of glaciers and values of pH and ions, whereas dissolved organic carbon (DOC) depends on the minimum elevation of glaciers (r = 0.55) and most probably also on the development of the accumulation area. The obtained results suggest that although certain morphometric parameters largely determine the spatial distribution of analytes, the geology of the glacier bed also strongly affects the chemistry of the surface ice of glaciers in a phase of strong recession.

  2. Constraining the Physical Properties of Meteor Stream Particles by Light Curve Shapes Using the Virtual Meteor Observatory

    NASA Technical Reports Server (NTRS)

    Koschny, D.; Gritsevich, M.; Barentsen, G.

    2011-01-01

    Different authors have produced models for the physical properties of meteoroids based on the shape of a meteor's light curve, typically from short observing campaigns. Here we analyze the height profiles and light curves of approx. 200 double-station meteors from the Leonids and Perseids using data from the Virtual Meteor Observatory, to demonstrate that with this web-based meteor database it is possible to analyze very large datasets from different authors in a consistent way. We compute the average begin heights, heights of maximum luminosity, and end heights for Perseids and Leonids. We also compute the skew of the light curve, usually called the F-parameter. The results compare well with other authors' data. We display the average light curve in a novel way to assess the light curve shape in addition to using the F-parameter. While the Perseids show a peaked light curve, the average Leonid light curve has a flatter peak. This indicates that the particle distribution of Leonid meteors can be described by a Gaussian distribution, whereas the Perseids can be described by a power law. The skew for Leonids is smaller than for Perseids, indicating that the Leonids are more fragile than the Perseids.
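    The F-parameter mentioned above measures the skew of a light curve; in its simplest form it is the fractional position of the peak between the begin and end points, with F = 0.5 for a symmetric curve (the commonly used F evaluated at fractional magnitude levels is a refinement of this). The snippet below computes it for a synthetic light curve.

```python
import numpy as np

# Synthetic meteor light curve with its peak before the temporal midpoint.
t = np.linspace(0.0, 1.0, 501)
brightness = np.exp(-((t - 0.35) / 0.15) ** 2)

def f_parameter(t, b):
    """Fractional position of peak brightness between begin and end points."""
    i_peak = int(np.argmax(b))
    return float((t[i_peak] - t[0]) / (t[-1] - t[0]))

F = f_parameter(t, brightness)  # < 0.5 indicates an early-skewed light curve
```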

  3. Determination of Eros Physical Parameters for Near Earth Asteroid Rendezvous Orbit Phase Navigation

    NASA Technical Reports Server (NTRS)

    Miller, J. K.; Antreasian, P. J.; Georgini, J.; Owen, W. M.; Williams, B. G.; Yeomans, D. K.

    1995-01-01

    Navigation of the orbit phase of the Near Earth Asteroid Rendezvous (NEAR) mission will require determination of certain physical parameters describing the size, shape, gravity field, attitude and inertial properties of Eros. Prior to launch, little was known about Eros except for its orbit, which could be determined with high precision from ground-based telescope observations. Radar bounce and light curve data provided a rough estimate of Eros's shape and a fairly good estimate of the pole, prime meridian and spin rate. However, the determination of the NEAR spacecraft orbit requires a high-precision model of Eros's physical parameters, and the ground-based data provide only marginal a priori information. Eros is the principal source of perturbations of the spacecraft's trajectory and the principal source of data for determining the orbit. The initial orbit determination strategy is therefore concerned with developing a precise model of Eros. The original plan for Eros orbital operations was to execute a series of rendezvous burns beginning on December 20, 1998 and insert into a close Eros orbit in January 1999. As a result of an unplanned termination of the rendezvous burn on December 20, 1998, the NEAR spacecraft continued on its high-velocity approach trajectory and passed within 3900 km of Eros on December 23, 1998. The planned rendezvous burn was delayed until January 3, 1999, which resulted in the spacecraft being placed on a trajectory that slowly returns to Eros, with a subsequent delay of close Eros orbital operations until February 2001. The flyby of Eros provided a brief glimpse and allowed for a crude estimate of the pole, prime meridian and mass of Eros. More importantly for navigation, orbit determination software was executed in the landmark tracking mode to determine the spacecraft orbit, and a preliminary shape and landmark database has been obtained. 
    The flyby also provided an opportunity to test orbit determination operational procedures that will be used in February of 2001. The initial attitude and spin rate of Eros, as well as estimates of reference landmark locations, are obtained from images of the asteroid. These initial estimates are used as a priori values for a more precise refinement of these parameters by the orbit determination software, which combines optical measurements with Doppler tracking data to obtain solutions for the required parameters. As the spacecraft is maneuvered closer to the asteroid, estimates of spacecraft state, asteroid attitude, solar pressure, landmark locations and Eros physical parameters including mass, moments of inertia and gravity harmonics are determined with increasing precision. The determination of the elements of the inertia tensor of the asteroid is critical to spacecraft orbit determination and prediction of the asteroid attitude. The moments of inertia about the principal axes are also of scientific interest since they provide some insight into the internal mass distribution. Determination of the principal-axis moments of inertia will depend on observing free precession in the asteroid's attitude dynamics. Gravity harmonics are in themselves of interest to science. When compared with the asteroid shape, some insight may be obtained into Eros's internal structure. The location of the center of mass derived from the first-degree harmonic coefficients gives a direct indication of overall mass distribution. The second-degree harmonic coefficients relate to the radial distribution of mass. Higher-degree harmonics may be compared with surface features to gain additional insight into mass distribution. In this paper, estimates of Eros physical parameters obtained from the December 23, 1998 flyby will be presented. This new knowledge will be applied to simplification of Eros orbital operations in February of 2001. 
The resulting revision to the orbit determination strategy will also be discussed.

  4. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations underlying many fields of investigation, such as solid-state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high-performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the computation time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system-on-programmable-chip managed by a soft-core processor, which controls the fast data/control communication between our system and a host PC. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
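    As a point of reference for the benchmark described above, a plain NumPy KdV solver on a 200-cell periodic grid can be written in a few lines; this is a sketch of the test problem, not the FPGA/CNN implementation. Grid spacing, time step and the soliton initial condition are chosen here for illustration; note that the centered stencils conserve the total mass exactly.

```python
import numpy as np

# KdV equation u_t = -6*u*u_x - u_xxx on a 200-cell periodic grid,
# central differences in space, classical RK4 in time (illustrative values).
N, dx, dt = 200, 0.2, 0.002
x = np.arange(N) * dx

def rhs(u):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx ** 3)
    return -6.0 * u * ux - uxxx

c = 1.0                                        # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 10.0)) ** 2

mass0 = u.sum() * dx
for _ in range(1000):                          # integrate to t = 2
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
mass = u.sum() * dx    # conserved exactly by the centered stencils
```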

  5. The value of oxygen-isotope data and multiple discharge records in calibrating a fully-distributed, physically-based rainfall-runoff model (CRUM3) to improve predictive capability

    NASA Astrophysics Data System (ADS)

    Neill, Aaron; Reaney, Sim

    2015-04-01

    Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. 
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
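    The generalised likelihood uncertainty estimation (GLUE) step described above — sampling parameter sets from a prior and retaining the top behavioural ones under a likelihood measure — can be sketched with a toy single-parameter rainfall-runoff model (a linear store, not CRUM3; all values are invented):

```python
import numpy as np

# Toy GLUE-style calibration: a single linear store
# q[t+1] = (1-k)*q[t] + k*r[t], with storage coefficient k sampled from a
# uniform prior and ranked by Nash-Sutcliffe efficiency (NSE).
rng = np.random.default_rng(3)
rain = rng.gamma(0.5, 2.0, size=365)           # synthetic daily rainfall

def simulate(k, r):
    q, out = 0.0, []
    for rt in r:
        q = (1.0 - k) * q + k * rt
        out.append(q)
    return np.array(out)

obs = simulate(0.15, rain) + rng.normal(0.0, 0.02, size=365)  # "observed" flow

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

candidates = rng.uniform(0.01, 0.9, size=2000)                # prior samples
scores = np.array([nse(simulate(k, rain), obs) for k in candidates])
behavioural = candidates[np.argsort(scores)[-100:]]           # top 100 sets
```

    The spread of the behavioural set around the true storage coefficient is a simple picture of the equifinality the abstract discusses.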

  6. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.

  7. iSEDfit: Bayesian spectral energy distribution modeling of galaxies

    NASA Astrophysics Data System (ADS)

    Moustakas, John

    2017-08-01

    iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone. After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.

  8. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
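    The simple rejection form of ABC described above can be written in a few lines. The toy problem below (inferring the scale of a Rayleigh-distributed mutual inclination from one summary statistic) is invented for illustration and is far simpler than the sequential algorithms and forward model used in the study.

```python
import numpy as np

# ABC rejection in its simplest form: draw from the prior, run the forward
# model, accept if the synthetic summary statistic is close to the observed one.
rng = np.random.default_rng(7)
sigma_true = 2.0                                        # degrees (invented)
observed = float(rng.rayleigh(sigma_true, size=500).mean())

accepted = []
for _ in range(20000):
    sigma = rng.uniform(0.1, 10.0)                      # sample from the prior
    synthetic = rng.rayleigh(sigma, size=500).mean()    # forward model
    if abs(synthetic - observed) < 0.05:                # tolerance on summary
        accepted.append(sigma)

posterior_mean = float(np.mean(accepted))  # approximates sigma_true
```

    The accepted samples approximate the posterior; tightening the tolerance sharpens the approximation at the cost of a lower acceptance rate, which is what the sequential ABC algorithms mentioned in the abstract mitigate.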

  9. Using a physically-based transit time distribution function to estimate the hydraulic parameters and hydraulic transit times of an unconfined aquifer from tritium measurements

    NASA Astrophysics Data System (ADS)

    Farlin, Julien; Maloszewski, Piotr; Schneider, Wilfried; Gallé, Tom

    2014-05-01

    Groundwater transit time is of interest in environmental studies pertaining to the transport of pollutants from their source to the aquifer outlet (spring or pumping well) or to an observation well. Different models have been proposed to describe the distribution of transit times within groundwater catchments, the most common being the dispersion model and the exponential-piston-flow model (EPM), both proposed by Maloszewski and Zuber (1982), and the (two- or three-parameter) gamma model (Amin and Campana, 1996; Kirchner et al., 1999). Choosing which function applies best is a recurrent and controversial problem in hydrogeology. The objective of this study is to revisit the applicability of the EPM for unconfined aquifers, and to introduce an alternative model based explicitly on groundwater hydraulics. The alternative model is based on the transit time of water from any point at the groundwater table to the aquifer outlet, and is used to inversely calculate the hydraulic parameters of a fractured unconfined sandstone aquifer from tritium measurements made in a series of contact springs. This model is compared to the EPM, which is usually adopted to describe the transit time distribution of confined and unconfined aquifers alike. Both models are tested against observations, and it is shown that the EPM fails the test for some of the springs, and generally seems to overestimate the older water component. Amin, I. E., and M. E. Campana (1996), A general lumped parameter model for the interpretation of tracer data and transit time calculation in hydrologic systems, Journal of Hydrology, 179, 1-21, doi: 10.1016/0022-1694(95)02880-3. Kirchner, J. W., X. H. Feng, and C. Neal (1999), Fractal stream chemistry and its implications for contaminant transport in catchments, Nature, 403, 524-527, doi: 10.1038/35000537. Maloszewski, P., and A. 
Zuber (1982), Determining the turnover time of groundwater systems with the aid of environmental tracers, Journal of Hydrology, 57, 207-231, doi: 10.1016/0022-1694(82)90147-0.
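    For reference, the EPM transit-time distribution tested in the study has the closed form g(t) = (eta/T) exp(-eta*t/T + eta - 1) for t >= T(1 - 1/eta) and zero earlier (Maloszewski and Zuber, 1982), where T is the mean transit time and eta the ratio of total to exponential volume. A quick numerical check of its normalization and mean, with T and eta invented for the check:

```python
import numpy as np

# EPM transit-time distribution (Maloszewski and Zuber, 1982):
#   g(t) = (eta/T) * exp(-eta*t/T + eta - 1)  for t >= T*(1 - 1/eta), else 0.
def epm_pdf(t, T=20.0, eta=1.5):
    t = np.asarray(t, dtype=float)
    g = (eta / T) * np.exp(-eta * t / T + eta - 1.0)
    return np.where(t >= T * (1.0 - 1.0 / eta), g, 0.0)

t = np.linspace(0.0, 400.0, 40001)
step = t[1] - t[0]
g = epm_pdf(t)
norm = float(g.sum() * step)        # integrates to 1
mean = float((t * g).sum() * step)  # equals the mean transit time T
```

    Convolving g(t) with the tritium input history (including radioactive decay) would give the predicted spring concentrations that the study compares against observations.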

  10. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 2: Retrieval method and applications (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.

    1990-01-01

    A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T and SSM/T-2, as well as geostationary infrared measurements and information from conventional sources (e.g., radiosondes) or numerical forecast model fields.

  11. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for factor loading parameters and structural parameters of SEM. In the first approach, the prior distributions were considered as a fixed distribution function with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov Chain Monte Carlo sampling in the form of Gibbs sampling was applied for sampling from the posterior distribution. The results revealed that all coefficients of structural and measurement model parameters are statistically significant with experts' opinion-based priors, whereas two coefficients are not statistically significant when fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to the traditional SEM.

  12. A Bayesian network for modelling blood glucose concentration and exercise in type 1 diabetes.

    PubMed

    Ewings, Sean M; Sahu, Sujit K; Valletta, John J; Byrne, Christopher D; Chipperfield, Andrew J

    2015-06-01

    This article presents a new statistical approach to analysing the effects of everyday physical activity on blood glucose concentration in people with type 1 diabetes. A physiologically based model of blood glucose dynamics is developed to cope with frequently sampled data on food, insulin and habitual physical activity; the model is then converted to a Bayesian network to account for measurement error and variability in the physiological processes. A simulation study is conducted to determine the feasibility of using Markov chain Monte Carlo methods for simultaneous estimation of all model parameters and prediction of blood glucose concentration. Although there are problems with parameter identification in a minority of cases, most parameters can be estimated without bias. Predictive performance is unaffected by parameter misspecification and is insensitive to misleading prior distributions. This article highlights important practical and theoretical issues not previously addressed in the quest for an artificial pancreas as treatment for type 1 diabetes. The proposed methods represent a new paradigm for analysis of deterministic mathematical models of blood glucose concentration.

  13. Multi-Agent Architecture with Support to Quality of Service and Quality of Control

    NASA Astrophysics Data System (ADS)

    Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique

    Multi-Agent Systems (MAS) are one of the most suitable frameworks for the implementation of intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity inherent in cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly used to evaluate the efficiency of the communications and of the control loop. Agents can use these quality measures to take a wide range of decisions, such as selecting a suitable placement on a control node or changing the workload to save energy. This article describes the architecture of a multi-agent system that provides support for QoS and QoC parameters to optimize the system. The architecture uses a publish-subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the publish-subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.

  14. The Design and Analysis of Helium Turbine Expander Impeller with a Given All-Over-Controlled Vortex Distribution

    NASA Astrophysics Data System (ADS)

    Liu, Xiaodong; Fu, Bao; Zhuang, Ming

    2014-03-01

    To make the large-scale helium cryogenic system of the fusion device EAST (Experimental Advanced Superconducting Tokamak) run stably, the helium turbine expander, as its core component, must meet the required refrigeration capacity. However, previous designs were based on one-dimensional flow to determine the average fluid parameters and geometric parameters of impeller cross-sections, and therefore could not describe the real physical processes in the internal flow of the turbine expander. Therefore, based on the inverse problem of the streamline curvature method in the context of quasi-three-dimensional flows, the all-over-controlled vortex concept was adopted to design the impeller under the specified condition. The wrap angle of the impeller blade and the whole flow distribution on the meridian plane were obtained, and the performance of the designed impeller was analyzed. A new design method is thus proposed for the inverse problem of the helium turbine expander impeller.

  15. Collective motion in prolate γ-rigid nuclei within minimal length concept via a quantum perturbation method

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2018-05-01

    Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are studied as functions of the free parameters. Introducing the minimal length concept within a QPM makes the model very flexible and a powerful approach for describing nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to obtain a physical minimum of the latter, in contrast with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.

  16. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
    Program summary
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, as used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experiment definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References:
    [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
    [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
    [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
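The random-walk Metropolis algorithm underlying such MCMC parameter-space sampling can be illustrated with a minimal sketch. The two-parameter Gaussian log-likelihood below is a toy stand-in for a real oscillation fit, not the MonteCUBES C API:

```python
import random, math

def metropolis(log_like, theta0, step, n_samples):
    """Minimal random-walk Metropolis sampler: Gaussian proposals,
    accepted with probability min(1, exp(delta log-likelihood))."""
    theta, ll = list(theta0), log_like(theta0)
    chain = []
    for _ in range(n_samples):
        prop = [t + random.gauss(0.0, step) for t in theta]
        ll_prop = log_like(prop)
        if math.log(random.random()) < ll_prop - ll:  # accept
            theta, ll = prop, ll_prop
        chain.append(list(theta))                      # else keep current
    return chain

# Toy 2-parameter likelihood with a mode at (0.5, 2.5) -- illustrative only
def log_like(theta):
    return -0.5 * ((theta[0] - 0.5) ** 2 / 0.01
                   + (theta[1] - 2.5) ** 2 / 0.04)

random.seed(1)
chain = metropolis(log_like, [0.0, 0.0], 0.1, 20000)
mean0 = sum(t[0] for t in chain) / len(chain)
```

The chain's sample mean approximates the posterior mean without ever enumerating a grid, which is why the cost does not grow exponentially with dimension.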

  17. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
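The EPPES loop — draw ensemble parameters from a proposal, weight members by a likelihood against verifying observations, feed the weights back into the proposal — can be sketched with a toy scalar model. The linear model, noise levels, and Gaussian-moment update below are illustrative assumptions, not the published EPPES formulation:

```python
import random, math

def eppes_toy(true_a=2.0, n_gen=30, ens_size=50, seed=0):
    """Toy EPPES-style loop: each ensemble member forecasts with its own
    parameter draw; likelihood weights against a verifying observation
    update the Gaussian proposal's mean and spread."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 2.0                      # initial proposal
    for _ in range(n_gen):
        draws = [rng.gauss(mu, sigma) for _ in range(ens_size)]
        x = rng.uniform(0.5, 1.5)             # "forcing" for this cycle
        obs = true_a * x + rng.gauss(0.0, 0.1)  # verifying observation
        # likelihood of each member's forecast a*x against obs
        w = [math.exp(-0.5 * ((a * x - obs) / 0.1) ** 2) for a in draws]
        s = sum(w) or 1e-300
        w = [wi / s for wi in w]
        mu = sum(wi * a for wi, a in zip(w, draws))   # weighted mean
        var = sum(wi * (a - mu) ** 2 for wi, a in zip(w, draws))
        sigma = max(math.sqrt(var), 0.05)     # floor to avoid collapse
    return mu

a_hat = eppes_toy()
```

Over successive cycles the proposal mean drifts toward the parameter value that best explains the observations, mimicking on-line estimation with no extra model runs beyond the ensemble itself.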

  18. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
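The simplest member of this family of algorithms — mono-energetic particles with isotropic scattering in a slab — can be sketched as an analog Monte Carlo random walk. The cross-sections and slab thickness below are invented for illustration; no screened-Rutherford angular physics or energy loss is included:

```python
import random, math

def slab_transmission(sigma_t=1.0, sigma_s=0.7, thickness=2.0,
                      n_particles=20000, seed=42):
    """Mono-energetic analog MC in a 1-D slab: sample free paths from
    the total cross-section, then scatter isotropically or absorb;
    count particles leaking out the far face."""
    rng = random.Random(seed)
    p_scatter = sigma_s / sigma_t
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                  # start at face, moving inward
        while True:
            x += -math.log(rng.random()) / sigma_t * mu   # free flight
            if x >= thickness:
                transmitted += 1          # leaked out the far face
                break
            if x < 0.0:
                break                     # backscattered out
            if rng.random() > p_scatter:
                break                     # absorbed
            mu = rng.uniform(-1.0, 1.0)   # isotropic scattering cosine
    return transmitted / n_particles

T = slab_transmission()
```

Domain replication, as in ecode, would simply run this loop independently on each processor with different seeds and average the tallies at the end.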

  19. Micro-Physical characterisation of Convective & Stratiform Rainfall at Tropics

    NASA Astrophysics Data System (ADS)

    Sreekanth, T. S.

    Sreekanth T. S., Suby Symon, G. Mohan Kumar, and V. Sasi Kumar (Centre for Earth Science Studies, Akkulam, Thiruvananthapuram). Micro-physical parameters of rainfall, such as raindrop size and fall speed distributions, mass-weighted mean diameter, total number of raindrops, normalisation parameters for rain intensity, and maximum and minimum drop diameters in different rain intensity ranges, were analysed for both stratiform and convective rain events. Convective-stratiform classification was done by the method of Testud et al. (2001), with the electrical behaviour of clouds from an atmospheric electric field mill used as additional information; radar bright-band and non-bright-band analysis was done for confirmation of stratiform and convective rain, respectively. Events which cannot be assigned to either type are termed 'mixed precipitation' and identified separately. For the three years 2011, 2012 and 2013, rain events of both convective and stratiform origin were identified in three seasons, viz. pre-monsoon (March-May), monsoon (June-September) and post-monsoon (October-December), and micro-physical characterisation was done for each rain event from ground-based and radar observations. Statistical analyses revealed that the standard deviation of raindrop size at higher rain rates is higher than at lower rain rates. Normalised drop size distributions were plotted for selected events of both forms. Interrelations between various precipitation parameters were analysed across the three seasons.
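The mass-weighted mean diameter used in such characterisations is Dm = ∫N(D)D⁴dD / ∫N(D)D³dD, computed from a binned drop size distribution. The exponential DSD below is a hypothetical example with a known analytic answer (Dm = 4/Λ), handy as a sanity check:

```python
import math

def dsd_parameters(diam_mm, n_per_m3_mm, d_bin_mm):
    """Mass-weighted mean diameter Dm (mm) and total drop concentration
    (m^-3) from a binned drop size distribution N(D) in m^-3 mm^-1."""
    m3 = sum(n * d ** 3 * db
             for n, d, db in zip(n_per_m3_mm, diam_mm, d_bin_mm))
    m4 = sum(n * d ** 4 * db
             for n, d, db in zip(n_per_m3_mm, diam_mm, d_bin_mm))
    n_total = sum(n * db for n, db in zip(n_per_m3_mm, d_bin_mm))
    return m4 / m3, n_total

# Hypothetical exponential DSD N(D) = N0 exp(-lam * D): Dm = 4/lam = 2.0 mm
N0, lam = 8000.0, 2.0
diam = [0.05 + 0.1 * i for i in range(80)]        # bin centres, 0.05-7.95 mm
nd = [N0 * math.exp(-lam * d) for d in diam]
dm, ntot = dsd_parameters(diam, nd, [0.1] * len(diam))
```

The same moment sums give the normalised intercept parameter of Testud et al. (2001) once rain water content is formed from the third moment.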

  20. Distribution-Connected PV's Response to Voltage Sags at Transmission-Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry; Ding, Fei

    The ever-increasing amount of residential- and commercial-scale distribution-connected PV generation being installed and operated on the U.S. electric power system necessitates the use of higher-fidelity representative distribution system models for transmission stability studies, in order to ensure the continued safe and reliable operation of the grid. This paper describes a distribution model-based analysis that determines the amount of distribution-connected PV that trips off-line for a given voltage sag seen at the distribution circuit's substation. Such sags are what could potentially be experienced over a wide area of an interconnection during a transmission-level line fault. The results of this analysis show that the voltage diversity of the distribution system does cause different amounts of PV generation to be lost for differing severities of voltage sag. The variation of the response is most directly a function of the loading of the distribution system. At low load levels the inversion of the circuit's voltage profile results in considerable differences in the aggregated response of distribution-connected PV. Less variation is seen in the response to specific PV deployment scenarios, unless pushed to extremes, and in the total amount of PV penetration attained. A simplified version of the combined CMPLDW and PVD1 models is compared to the results from the model-based analysis. Furthermore, the parameters of the simplified model are tuned to better match the determined response. The resulting tuning parameters do not match the expected physical model of the distribution system and PV systems, and thus may indicate that another modeling approach would be warranted.

  1. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package

    PubMed Central

    Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.

    2015-01-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. 
Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009

  2. Energy reconstruction in the long-baseline neutrino experiment.

    PubMed

    Mosel, U; Lalakulich, O; Gallmeister, K

    2014-04-18

    The Long-Baseline Neutrino Experiment aims at measuring fundamental physical parameters to high precision and exploring physics beyond the standard model. Nuclear targets introduce complications towards that aim. We investigate the uncertainties in the energy reconstruction, based on quasielastic scattering relations, due to nuclear effects. The reconstructed event distributions as a function of energy tend to be smeared out and shifted by several hundred MeV in their oscillatory structure if standard event selection is used. We show that a more restrictive experimental event selection offers the possibility to reach the accuracy needed for a determination of the mass ordering and the CP-violating phase. Quasielastic-based energy reconstruction could thus be a viable alternative to the calorimetric reconstruction also at higher energies.

  3. Using a GIS to link digital spatial data and the precipitation-runoff modeling system, Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Battaglin, William A.; Kuhn, Gerhard; Parker, Randolph S.

    1993-01-01

    The U.S. Geological Survey Precipitation-Runoff Modeling System, a modular, distributed-parameter, watershed-modeling system, is being applied to 20 smaller watersheds within the Gunnison River basin. The model is used to derive a daily water balance for subareas in a watershed, ultimately producing simulated streamflows that can be input into routing and accounting models used to assess downstream water availability under current conditions, and to assess the sensitivity of water resources in the basin to alterations in climate. A geographic information system (GIS) is used to automate a method for extracting physically based hydrologic response unit (HRU) distributed parameter values from digital data sources, and for the placement of those estimates into GIS spatial datalayers. The HRU parameters extracted are: area, mean elevation, average land-surface slope, predominant aspect, predominant land-cover type, predominant soil type, average total soil water-holding capacity, and average water-holding capacity of the root zone.
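The GIS extraction step amounts to zonal statistics: for each HRU, aggregating co-registered grid layers into a mean or a predominant class. A minimal sketch with plain Python containers (the layer names and land-cover classes are illustrative, not the actual Gunnison data):

```python
from collections import Counter, defaultdict

def hru_parameters(hru_ids, elevation, land_cover):
    """Zonal statistics over co-registered, flattened grids: mean
    elevation and predominant land-cover class per HRU."""
    elev = defaultdict(list)
    cover = defaultdict(Counter)
    for h, z, lc in zip(hru_ids, elevation, land_cover):
        elev[h].append(z)
        cover[h][lc] += 1
    return {h: {"mean_elev": sum(v) / len(v),
                "dominant_cover": cover[h].most_common(1)[0][0]}
            for h, v in elev.items()}

# Five grid cells belonging to two HRUs (hypothetical values)
params = hru_parameters(
    [1, 1, 2, 2, 2],
    [2500.0, 2700.0, 1800.0, 1900.0, 2000.0],
    ["forest", "forest", "grass", "grass", "bare"])
```

Slope, aspect, soil type, and water-holding capacity follow the same pattern: a mean for continuous layers, a mode for categorical ones.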

  4. Dynamics of a distributed drill string system: Characteristic parameters and stability maps

    NASA Astrophysics Data System (ADS)

    Aarsnes, Ulf Jakob F.; van de Wouw, Nathan

    2018-03-01

    This paper involves the dynamic (stability) analysis of distributed drill-string systems. A minimal set of parameters characterizing the linearized, axial-torsional dynamics of a distributed drill string coupled through the bit-rock interaction is derived. This is found to correspond to five parameters for a simple drill string and eight parameters for a two-sectioned drill-string (e.g., corresponding to the pipe and collar sections of a drilling system). These dynamic characterizations are used to plot the inverse gain margin of the system, parametrized in the non-dimensional parameters, effectively creating a stability map covering the full range of realistic physical parameters. This analysis reveals a complex spectrum of dynamics not evident in stability analysis with lumped models, thus indicating the importance of analysis using distributed models. Moreover, it reveals trends concerning stability properties depending on key system parameters useful in the context of system and control design aiming at the mitigation of vibrations.

  5. Physical and Hydrological Meaning of the Spectral Information from Hydrodynamic Signals at Karst Springs

    NASA Astrophysics Data System (ADS)

    Dufoyer, A.; Lecoq, N.; Massei, N.; Marechal, J. C.

    2017-12-01

    Physics-based modeling of karst systems remains almost impossible without enough accurate information about the inner physical characteristics. Usually, the only available hydrodynamic information is the flow rate at the karst outlet. Numerous works in the past decades have used and proven the usefulness of time-series analysis and spectral techniques applied to spring flow, precipitation or even physico-chemical parameters for interpreting karst hydrological functioning. However, identifying or interpreting the physical features of karst systems that control the statistical or spectral characteristics of spring flow variations is still challenging, not to say sometimes controversial. The main objective of this work is to determine how the statistical and spectral characteristics of the hydrodynamic signal at karst springs can be related to inner physical and hydraulic properties. In order to address this issue, we undertake an empirical approach based on the use of both distributed and physics-based models, and on synthetic system responses. The first step of the research is to conduct a sensitivity analysis of time-series/spectral methods to karst hydraulic and physical properties. For this purpose, forward modeling of flow through several simple, constrained, synthetic cases in response to precipitation is undertaken. It allows us to quantify how the statistical and spectral characteristics of flow at the outlet are sensitive to changes (i) in conduit geometries, and (ii) in hydraulic parameters of the system (matrix/conduit exchange rate, matrix hydraulic conductivity and storativity). The flow differential equations are resolved by MARTHE, a computer code developed by the BRGM that allows modeling of karst conduits. From signal processing on simulated spring responses, we hope to determine whether specific frequencies are always modified, using Fourier series and multi-resolution analysis.
We also hope to quantify which parameters are the most variable using auto-correlation analysis: first results seem to show higher variations due to conduit conductivity than those due to the matrix/conduit exchange rate. Future steps will use another computer code, based on a double-continuum approach allowing turbulent conduit flow, to model a natural system.
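A core operation in such a sensitivity analysis is the autocorrelation function of the simulated spring discharge, whose decay rate reflects the system's memory. A minimal version, applied to a hypothetical exponential recession series rather than a MARTHE output, could be:

```python
import math

def autocorrelation(x, max_lag):
    """Sample autocorrelation function of a discharge time series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    acf = []
    for k in range(max_lag + 1):
        c = sum((x[i] - mean) * (x[i + k] - mean)
                for i in range(n - k)) / n        # autocovariance at lag k
        acf.append(c / var)
    return acf

# Synthetic "spring" response: exponential recession after a recharge pulse
q = [math.exp(-0.05 * t) for t in range(200)]
acf = autocorrelation(q, 50)
```

Slower recessions (longer memory, e.g. stronger matrix storage) keep the ACF high out to longer lags, which is exactly the kind of signature the sensitivity study compares across conduit and matrix parameter sets.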

  6. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
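The sequential likelihood ratio test at the heart of such a detector is Wald's SPRT: accumulate a log-likelihood ratio event by event and stop at the first threshold crossing. The Gaussian-mean hypotheses and error rates below are generic toy choices, not the patented radionuclide processing chain:

```python
import math, random

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT between H0: mean=mu0 and H1: mean=mu1 for Gaussian
    data; returns the decision and the number of samples consumed."""
    a = math.log(beta / (1 - alpha))       # lower (accept H0) threshold
    b = math.log((1 - beta) / alpha)       # upper (accept H1) threshold
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # per-sample log-likelihood ratio log f1(x)/f0(x)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return "undecided", len(samples)

random.seed(3)
data = [random.gauss(1.0, 0.5) for _ in range(200)]   # truth: mean 1.0
decision, n_used = sprt(data, mu0=0.0, mu1=1.0, sigma=0.5)
```

The appeal for low-count measurements is that the test stops as soon as the evidence suffices, so clear sources are identified from very few photon events.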

  7. Mathematical modelling of flow distribution in the human cardiovascular system

    NASA Technical Reports Server (NTRS)

    Sud, V. K.; Srinivasan, R. S.; Charles, J. B.; Bungo, M. W.

    1992-01-01

    The paper presents a detailed model of the entire human cardiovascular system which aims to study the changes in flow distribution caused by external stimuli, changes in internal parameters, or other factors. The arterial-venous network is represented by 325 interconnected elastic segments. The mathematical description of each segment is based on equations of hydrodynamics and those of stress/strain relationships in elastic materials. Appropriate input functions provide for the pumping of blood by the heart through the system. The analysis employs the finite-element technique which can accommodate any prescribed boundary conditions. Values of model parameters are from available data on physical and rheological properties of blood and blood vessels. As a representative example, simulation results on changes in flow distribution with changes in the elastic properties of blood vessels are discussed. They indicate that the errors in the calculated overall flow rates are not significant even in the extreme case of arteries and veins behaving as rigid tubes.

  8. Inversion of scattered radiance horizon profiles for gaseous concentrations and aerosol parameters

    NASA Technical Reports Server (NTRS)

    Malchow, H. L.; Whitney, C. K.

    1977-01-01

    Techniques have been developed and used to invert limb scan measurements for vertical profiles of atmospheric state parameters. The parameters which can be found are concentrations of Rayleigh scatterers, ozone, NO2, and aerosols, and aerosol physical properties including a Junge size-distribution parameter and the real and imaginary parts of the index of refraction.

  9. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision and control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation and control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space.
It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.

  10. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been informed by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments. However, the predictive capabilities for new coals and processes are limited. This work presents a Discrete Element Method based computational framework to predict the particle size distribution resulting from the breakage of coal particles, characterized by the coal's physical properties. The effect of certain operating parameters on the breakage behavior of coal particles is also examined.

  11. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    NASA Astrophysics Data System (ADS)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
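The binary-tree load-balancing idea — recursively cutting the ordered cell list where the accumulated work reaches half the total — can be sketched as follows. The per-cell costs and the power-of-two part count are simplifying assumptions, not the TOPKAPI-IMMS implementation:

```python
def balanced_split(cell_costs, n_parts):
    """Recursive binary split of an ordered list of per-cell costs into
    n_parts contiguous blocks of roughly equal total work
    (n_parts must be a power of two, as in a binary-tree decomposition)."""
    if n_parts == 1:
        return [cell_costs]
    total, running, cut = sum(cell_costs), 0.0, len(cell_costs)
    for i, c in enumerate(cell_costs):
        running += c
        if running >= total / 2:   # cut where half the work is reached
            cut = i + 1
            break
    return (balanced_split(cell_costs[:cut], n_parts // 2) +
            balanced_split(cell_costs[cut:], n_parts // 2))

# 16 equal-cost cells onto 4 MPI ranks -> 4 blocks of equal work
parts = balanced_split([1.0] * 16, 4)
sizes = [sum(p) for p in parts]
```

Each leaf of the tree is then assigned to one MPI rank, so all ranks carry comparable computational load regardless of how irregular the watershed is.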

  12. Electrical conductivity modeling and experimental study of densely packed SWCNT networks.

    PubMed

    Jack, D A; Yeh, C-S; Liang, Z; Li, S; Park, J G; Fielding, J C

    2010-05-14

    Single-walled carbon nanotube (SWCNT) networks have become a subject of interest due to their ability to support structural, thermal and electrical loadings, but to date their application has been hindered due, in large part, to the inability to model macroscopic responses in an industrial product with any reasonable confidence. This paper seeks to address the relationship between macroscale electrical conductivity and the nanostructure of a dense network composed of SWCNTs and presents a uniquely formulated physics-based computational model for electrical conductivity predictions. The proposed model incorporates physics-based stochastic parameters for the individual nanotubes to construct the nanostructure such as: an experimentally obtained orientation distribution function, experimentally derived length and diameter distributions, and assumed distributions of chirality and registry of individual CNTs. Case studies are presented to investigate the relationship between macroscale conductivity and nanostructured variations in the bulk stochastic length, diameter and orientation distributions. Simulation results correspond nicely with those available in the literature for case studies of conductivity versus length and conductivity versus diameter. In addition, predictions for the increasing anisotropy of the bulk conductivity as a function of the tube orientation distribution are in reasonable agreement with our experimental results. Examples are presented to demonstrate the importance of incorporating various stochastic characteristics in bulk conductivity predictions. Finally, a design consideration for industrial applications is discussed based on localized network power emission considerations and may lend insight to the design engineer to better predict network failure under high current loading applications.

  13. Network-based Modeling of Mesoscale Catchments - The Hydrology Perspective of Glowa-danube

    NASA Astrophysics Data System (ADS)

    Ludwig, R.; Escher-Vetter, H.; Hennicker, R.; Mauser, W.; Niemeyer, S.; Reichstein, M.; Tenhunen, J.

    Within the GLOWA initiative of the German Ministry for Research and Education (BMBF), the project GLOWA-Danube is funded to establish a transdisciplinary network-based decision support tool for water related issues in the Upper Danube watershed. It aims to develop and validate integration techniques, integrated models and integrated monitoring procedures and to implement them in the network-based Decision Support System DANUBIA. An accurate description of processes involved in energy, water and matter fluxes and turnovers requires an intense collaboration and exchange of water related expertise of different scientific disciplines. DANUBIA is conceived as a distributed expert network and is developed on the basis of re-usable, refineable, and documented sub-models. In order to synthesize a common understanding between the project partners, a standardized notation of parameters and functions and a platform-independent structure of computational methods and interfaces has been established using the Unified Modeling Language UML. DANUBIA is object-oriented, spatially distributed and raster-based at its core. It applies the concept of "proxels" (Process Pixel) as its basic object, which has different dimensions depending on the viewing scale and connects to its environment through fluxes. The presented study excerpts the hydrological viewpoint of GLOWA-Danube, its approach of model coupling and network-based communication (using the Remote Method Invocation RMI), the object-oriented technology to simulate physical processes and interactions at the land surface, and the methodology to treat the issue of spatial and temporal scaling in large, heterogeneous catchments.
The mechanisms applied to communicate data and model parameters across the typical discipline borders will be demonstrated from the perspective of a land-surface object, which comprises the capabilities of interdependent expert models for snowmelt, soil water movement, runoff formation, plant growth and radiation balance in a distributed JAVA-based modeling environment. The coupling to the adjacent physical objects of atmosphere, groundwater and river network will also be addressed.

  14. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-04-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
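A common form of the Fisher information for binned, Poisson-distributed event counts is F_ij = Σ_b (∂μ_b/∂θ_i)(∂μ_b/∂θ_j)/μ_b, with the parameter covariance approximated by F⁻¹. A generic finite-difference sketch under that assumption, using a made-up two-parameter rate model rather than the COMPAS population synthesis:

```python
def fisher_matrix(model, theta, n_bins, eps=1e-5):
    """Fisher matrix for Poisson-binned expectations mu_b(theta):
    F_ij = sum_b dmu_b/dtheta_i * dmu_b/dtheta_j / mu_b, with
    central finite-difference derivatives."""
    def grad(i):
        tp = list(theta); tp[i] += eps
        tm = list(theta); tm[i] -= eps
        up, um = model(tp), model(tm)
        return [(a - b) / (2 * eps) for a, b in zip(up, um)]
    mu = model(theta)
    grads = [grad(i) for i in range(len(theta))]
    return [[sum(gi[b] * gj[b] / mu[b] for b in range(n_bins))
             for gj in grads] for gi in grads]

# Hypothetical model: expected merger counts in two chirp-mass bins,
# controlled by a rate amplitude and an asymmetry "slope" parameter.
def model(theta):
    rate, slope = theta
    return [rate * (1.0 + slope), rate * (1.0 - slope)]

F = fisher_matrix(model, [100.0, 0.2], n_bins=2)
```

The diagonal of F⁻¹ then gives the forecast variances quoted as fractional accuracies, and scaling the expected counts by the number of observations shows how constraints tighten as detections accumulate.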

  15. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-07-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ~1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  16. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
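
    The Sobol' decomposition used above can be sketched with a pick-freeze Monte Carlo estimator on a deliberately simple stand-in model, where a "bias" factor dominates a "random error" factor. The model, factor ranges and coefficients are invented; this is not the Utah Energy Balance model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_snow_error(x):
    """Stand-in for a snow model's SWE error: a forcing bias (x[:, 0])
    dominating a random-error term (x[:, 1]). Purely illustrative."""
    return 4.0 * x[:, 0] + 1.0 * x[:, 1]

def sobol_first_order(f, d, n=100_000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol' indices
    for inputs uniform on [0, 1]^d."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = []
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]  # freeze factor i at its A-sample values
        S.append((np.mean(fA * f(C)) - np.mean(fA) * np.mean(fB)) / var)
    return np.array(S)

S = sobol_first_order(toy_snow_error, d=2)
print(S)  # analytic values are 16/17 ~ 0.94 and 1/17 ~ 0.06
```

    For this additive model the indices are known analytically, which makes it a convenient check before applying the same estimator to coexisting forcing errors in a real model.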

  17. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology—consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling—has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
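
    The posterior-becomes-prior chaining can be illustrated with a minimal grid-based Bayesian update. The response functions, observations and noise levels below are invented; no CFD is run here.

```python
import numpy as np

# Grid-based sketch of the hierarchical idea: the posterior over a shared
# parameter obtained at one unit-problem level becomes the prior at the
# next level.
theta = np.linspace(0.0, 2.0, 401)              # calibration parameter grid
prior = np.full_like(theta, 1.0 / theta.size)   # flat prior at level 1

def bayes_update(prior, predicted, observed, noise):
    """One Bayesian update with a Gaussian measurement likelihood."""
    like = np.exp(-0.5 * ((predicted(theta) - observed) / noise) ** 2)
    post = prior * like
    return post / post.sum()

# Level 1 (a simple unit problem), then level 2 reuses the posterior.
post1 = bayes_update(prior, lambda t: 1.5 * t, observed=1.2, noise=0.3)
post2 = bayes_update(post1, lambda t: t**2, observed=0.7, noise=0.2)

print(theta[np.argmax(post2)])  # MAP estimate after both levels
```

    Each tier narrows the distribution, so information gathered on cheap bench-top problems carries forward to the more complex simulations.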

  18. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
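
    The three criteria can be contrasted on a toy version of the Verhulst-Pearl example: build the Fisher information matrix for candidate sampling-time triples and score each triple under each criterion. Parameter values, noise level and the candidate grid are invented, and SE-optimality is approximated here by the parameter-normalized trace of the inverse FIM.

```python
import numpy as np
from itertools import combinations

# Toy comparison of D-, E- and SE-style optimal designs for choosing three
# sampling times of the logistic model x' = r x (1 - x/K).
K, r, x0, sigma = 17.5, 0.7, 0.1, 0.1

def logistic(t, K_, r_):
    return K_ * x0 * np.exp(r_ * t) / (K_ + x0 * (np.exp(r_ * t) - 1.0))

def fim(times, eps=1e-6):
    """Fisher information matrix from finite-difference sensitivities."""
    t = np.asarray(times, float)
    dK = (logistic(t, K + eps, r) - logistic(t, K - eps, r)) / (2 * eps)
    dr = (logistic(t, K, r + eps) - logistic(t, K, r - eps)) / (2 * eps)
    J = np.column_stack([dK, dr])
    return J.T @ J / sigma**2

W = np.diag([1.0 / K**2, 1.0 / r**2])  # normalization for the SE-style score
criteria = {
    "D": lambda F: np.linalg.det(F),                   # maximize determinant
    "E": lambda F: np.linalg.eigvalsh(F)[0],           # maximize min eigenvalue
    "SE": lambda F: -np.trace(W @ np.linalg.pinv(F)),  # minimize normalized variances
}

candidates = np.linspace(0.5, 15.0, 30)
designs = {}
for name, score in criteria.items():
    designs[name] = max(combinations(candidates, 3),
                        key=lambda ts: score(fim(ts)))
    print(name, np.round(designs[name], 2))
```

    Different criteria generally pick different time triples, which is exactly the comparison the paper formalizes over distributions of sampling times.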

  19. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Abstract Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197

  20. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated by FLUKA, GEANT4, MCNP, and PHITS, in order to investigate their applicability in proton radiotherapy. The first studied case was the integrated depth dose curves (IDDCs) from a 100 and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. The comparison to the measurement indicates that the inter-code differences might be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code for pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.

  1. Robust/optimal temperature profile control of a high-speed aerospace vehicle using neural networks.

    PubMed

    Yadav, Vivek; Padhi, Radhakant; Balakrishnan, S N

    2007-07-01

    An approximate dynamic programming (ADP)-based suboptimal neurocontroller to obtain desired temperature for a high-speed aerospace vehicle is synthesized in this paper. A 1-D distributed parameter model of a fin is developed from basic thermal physics principles. "Snapshot" solutions of the dynamics are generated with a simple dynamic inversion-based feedback controller. Empirical basis functions are designed using the "proper orthogonal decomposition" (POD) technique and the snapshot solutions. A low-order nonlinear lumped parameter system to characterize the infinite dimensional system is obtained by carrying out a Galerkin projection. An ADP-based neurocontroller with a dual heuristic programming (DHP) formulation is obtained with a single-network-adaptive-critic (SNAC) controller for this approximate nonlinear model. Actual control in the original domain is calculated with the same POD basis functions through a reverse mapping. Further contribution of this paper includes development of an online robust neurocontroller to account for unmodeled dynamics and parametric uncertainties inherent in such a complex dynamic system. A neural network (NN) weight update rule that guarantees boundedness of the weights and relaxes the need for persistence of excitation (PE) condition is presented. Simulation studies show that in a fairly extensive but compact domain, any desired temperature profile can be achieved starting from any initial temperature profile. Therefore, the ADP and NN-based controllers appear to have the potential to become controller synthesis tools for nonlinear distributed parameter systems.
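
    The POD-plus-Galerkin reduction step can be sketched with an SVD of a snapshot matrix. The fin dynamics are replaced by an invented family of spreading Gaussian temperature profiles purely for illustration.

```python
import numpy as np

# Sketch of POD: snapshots of a 1-D field are stacked as columns and the
# SVD yields an empirical basis ordered by captured "energy".
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([
    np.exp(-((x - 0.5) ** 2) / (0.01 + 0.05 * t))
    for t in np.linspace(0.1, 2.0, 40)
])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :k]  # empirical POD basis functions
print(k, "modes capture 99.9% of the snapshot energy")

# Galerkin-style reduction: project a snapshot onto the basis and back.
a = basis.T @ snapshots[:, -1]  # modal coefficients (reduced state)
recon = basis @ a
print("reconstruction error:", np.max(np.abs(recon - snapshots[:, -1])))
```

    Projecting the governing equations onto the few retained modes is what turns the infinite-dimensional system into the low-order lumped model that the neurocontroller is trained on; the same basis maps the reduced control back to the original domain.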

  2. Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.

    2010-11-01

    Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two aerosol components: one composed only of spherical and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from the observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. 
    The algorithm was shown to provide reasonable results, consistent with the available independent information about the observed aerosol event.

  3. The physicochemical characterization and in vitro/in vivo evaluation of natural surfactants-based emulsions as vehicles for diclofenac diethylamine.

    PubMed

    Vucinić-Milanković, Nada; Savić, Snezana; Vuleta, Gordana; Vucinić, Slavica

    2007-03-01

    Two sugar-based emulsifiers, cetearyl alcohol & cetearyl glycoside and sorbitan stearate & sucrose cocoate, known as potential promoters of lamellar liquid crystal/gel phases, were investigated in order to formulate an optimal vehicle for the amphiphilic drug diclofenac diethylamine (DDA). Physicochemical characterization and a study of the vehicles' physical stability were performed. Then, the in vitro DDA liberation profile, dependent on the mode of drug incorporation into the system, and the in vivo, short-term effects of chosen samples on skin parameters were examined. Droplet size distribution and rheological behavior indicated satisfactory physical stability of both types of vehicles. Unexpectedly, the manner of DDA incorporation into the system had no significant influence on DDA release. The in vivo study pointed to the emulsions' favorable potential for skin hydration and barrier improvement, particularly in the cetearyl glycoside-based vehicle.

  4. Nonequilibrium approach regarding metals from a linearised kappa distribution

    NASA Astrophysics Data System (ADS)

    Domenech-Garret, J. L.

    2017-10-01

    The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal on which an external electric field is applied. We show that such a kappa index captures departures from equilibrium depending on the physical magnitudes. Finally, we study the role of temperature and electric field on such a parameter, which characterises the electron population of a metal out of equilibrium.
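
    The role of the kappa parameter can be illustrated with the standard kappa form (not the paper's expanded Fermi-Dirac variant): the distribution tends to a Maxwellian as kappa grows, and small kappa produces the enhanced high-energy tail.

```python
import numpy as np

# Illustrative, unnormalized distributions in units of the thermal speed.
def kappa_dist(v, kappa, theta=1.0):
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

def maxwellian(v, theta=1.0):
    return np.exp(-(v / theta) ** 2)

# Tail enhancement at v = 6 thermal speeds, for decreasing non-equilibrium.
for k in (2, 10, 100):
    ratio = kappa_dist(6.0, k) / maxwellian(6.0)
    print(f"kappa={k:3d}: f_kappa(6)/f_Maxwell(6) = {ratio:.3g}")
```

    The tail ratio falls toward 1 as kappa increases, which is the sense in which a field- and temperature-dependent kappa index encodes how far the electron population sits from equilibrium.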

  5. Spectroscopic observations of the extended corona during the SOHO whole sun month

    NASA Technical Reports Server (NTRS)

    Strachan, L.; Raymond, J. C.; Panasyuk, A. V.; Fineschi, S.; Gardner, L. D.; Antonucci, E.; Giordano, S.; Romoli, M.; Noci, G.; Kohl, J. L.

    1997-01-01

    The spatial distribution of plasma parameters in the extended corona, derived from the ultraviolet coronagraph spectrometer (UVCS) onboard the Solar and Heliospheric Observatory (SOHO), was investigated. The observations were carried out during the SOHO Whole Sun Month campaign. Daily coronal scans in the H I Lyman-alpha and O VI 1032 Å and 1037 Å lines were used. Maps of outflow velocities of O(5+), based on Doppler dimming of the O VI lines, are discussed. The velocity distribution widths of O(5+) are shown to be a clear signature of coronal holes, while the velocity distributions for H(0) show a much smaller effect. Possible physical explanations for some of the observed features are discussed.

  6. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. 
    The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
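
    The idea that error-model priors play the role of multi-objective weights can be sketched with a random-walk Metropolis sampler on a single parameter fit jointly to two synthetic "objectives" (stand-ins for streamflow and LAI). The model, data and noise levels are invented; neither HYMOD nor BGM is run here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations for two objectives sharing one true parameter a = 2.
x = np.linspace(0.0, 1.0, 50)
flow_obs = 2.0 * x + rng.normal(0.0, 0.1, x.size)         # objective 1
lai_obs = np.sin(2.0 * x) + rng.normal(0.0, 0.3, x.size)  # objective 2

def log_post(a, s_flow=0.1, s_lai=0.3):
    # The assumed error std devs weight the objectives (flat prior on a).
    ll_flow = -0.5 * np.sum(((flow_obs - a * x) / s_flow) ** 2)
    ll_lai = -0.5 * np.sum(((lai_obs - np.sin(a * x)) / s_lai) ** 2)
    return ll_flow + ll_lai

# Random-walk Metropolis sampler.
a, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = a + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        a, lp = prop, lp_prop
    chain.append(a)

print(np.mean(chain[1000:]))  # posterior mean, near the true a = 2
```

    Tightening or loosening `s_flow` relative to `s_lai` shifts the posterior toward one objective, which is the Bayesian analogue of moving along a Pareto front.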

  7. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakistan

    NASA Astrophysics Data System (ADS)

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to successfully run the model, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data, (ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area based on SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the time period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE.
The model improvement after the calibration procedure is finally evaluated based on the previously chosen evaluation criteria for the time period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid and human controlled system and the potential of calibrating those parameters using satellite derived ET data.
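
    The calibrate-against-satellite-ET loop reduces to minimizing an RMSE objective with a global search. The sketch below uses a simple random perturbation search as a stand-in for the dynamically dimensioned search; the "model" and the pseudo-observed SEBAL ET series are invented, and SWAT itself is not run.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pseudo-observed monthly ET with an annual cycle (invented numbers).
months = np.arange(24)
sebal_et = 60.0 + 25.0 * np.sin(2.0 * np.pi * months / 12.0)

def model_et(p):
    """Two-parameter stand-in model: baseline ET plus seasonal amplitude."""
    base, amp = p
    return base + amp * np.sin(2.0 * np.pi * months / 12.0)

def rmse(p):
    return np.sqrt(np.mean((model_et(p) - sebal_et) ** 2))

best = np.array([40.0, 10.0])  # deliberately poor starting parameters
for _ in range(2000):
    cand = best + rng.normal(0.0, 5.0, size=2)  # perturb, keep improvements
    if rmse(cand) < rmse(best):
        best = cand

print(best, rmse(best))
```

    The search recovers the parameters generating the pseudo-observations; with a real model, each `rmse` evaluation would be a full SWAT run against the SEBAL ET maps.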

  8. Time variability of viscosity parameter in differentially rotating discs

    NASA Astrophysics Data System (ADS)

    Rajesh, S. R.; Singh, Nishant K.

    2014-07-01

    We propose a mechanism to produce fluctuations in the viscosity parameter (α) in differentially rotating discs. We carried out a nonlinear analysis of a general accretion flow, where any perturbation on the background α was treated as a passive/slave variable in the sense of dynamical system theory. We demonstrate a complete physical picture of the growth, saturation and final degradation of the perturbation as a result of the nonlinear nature of the coupled system of equations. The strong dependence of this fluctuation on the radial location in the accretion disc and on the base angular momentum distribution is demonstrated. The growth of fluctuations is shown to have a time scale comparable to the radial drift time, and hence its physical significance is discussed. The fluctuation is found to be a power law in time in the growing phase, and we briefly discuss its statistical significance.

  9. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Izacard, Olivier

    2016-08-01

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
    As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  10. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier, E-mail: izacard@llnl.gov

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
    As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  11. On the signatures of companion formation in the spectral energy distributions of Sz54 and Sz59—the young stars with protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Zakhozhay, O. V.

    2017-07-01

    We study the spectral energy distributions of two young systems, Sz54 and Sz59, that belong to the Chamaeleon II star-forming region. The results of the modeling indicate that the protoplanetary disks of these systems contain gaps in the dust component. These gaps could be the result of the formation of a planetary or brown dwarf companion, because the companion would accumulate disk material while moving along its orbit. In the present work we have determined the physical characteristics of the disks. We also discuss possible companion characteristics, based on the geometrical parameters of the gaps.

  12. Integration of quantum key distribution and private classical communication through continuous variable

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum communication and classical communication.
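
    The sign/value split of a quadrature can be sketched classically. Everything below is an idealized illustration of the encoding idea only: channel noise, modulation variance choices and all quantum-optical details are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each quadrature carries a classical bit in its sign and a Gaussian key
# value in its magnitude (an interpretation of the scheme, noiseless case).
n = 10_000
bits = rng.integers(0, 2, n)             # encrypted classical bits
gauss = np.abs(rng.normal(0.0, 1.0, n))  # Gaussian magnitudes for the QKD part

quadrature = np.where(bits == 1, gauss, -gauss)  # sign = bit, value = key

# Receiver side: the sign recovers the bit, the magnitude recovers the value.
bits_rx = (quadrature > 0).astype(int)
gauss_rx = np.abs(quadrature)
print((bits_rx == bits).mean(), np.allclose(gauss_rx, gauss))
```

    In the noiseless case both streams are recovered exactly; the paper's simulations address how channel loss and excess noise limit the achievable distance.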

  13. Investigation of pore size and energy distributions by statistical physics formalism applied to agriculture products

    NASA Astrophysics Data System (ADS)

    Aouaini, Fatma; Knani, Salah; Yahia, Manel Ben; Bahloul, Neila; Ben Lamine, Abdelmottaleb; Kechaou, Nabil

    2015-12-01

    In this paper, we present a new investigation that allows determining the pore size distribution (PSD) in a porous medium. The PSD is obtained from the desorption isotherms of four varieties of olive leaves by means of a statistical physics formalism and Kelvin's law. The results are compared with those obtained by scanning electron microscopy. The effect of temperature on the distribution function of pores has been studied, and the influence of each parameter on the PSD is interpreted. A similar function, the adsorption energy distribution (AED), is deduced from the PSD.
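
    The Kelvin's-law step can be sketched directly: it links the relative pressure p/p0 at which a pore empties to its meniscus radius, which is how a desorption isotherm maps onto a pore size distribution. The constants below are standard values for water near 25 C; the isotherm points are invented for illustration.

```python
import numpy as np

GAMMA = 0.072   # surface tension of water, N/m
V_M = 1.8e-5    # molar volume of water, m^3/mol
R_GAS = 8.314   # gas constant, J/(mol K)
T = 298.15      # temperature, K

def kelvin_radius(activity):
    """Pore (meniscus) radius in metres for a given relative pressure p/p0."""
    return -2.0 * GAMMA * V_M / (R_GAS * T * np.log(activity))

activity = np.array([0.3, 0.5, 0.7, 0.9, 0.95])
for a_w, r_m in zip(activity, kelvin_radius(activity)):
    print(f"p/p0 = {a_w:.2f} -> r = {r_m * 1e9:.2f} nm")
```

    Differentiating the desorbed water content with respect to the radius obtained this way yields the PSD; raising T in the formula shows the temperature effect studied in the paper.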

  14. Dynamic Wireless Network Based on Open Physical Layer

    DTIC Science & Technology

    2011-02-18

    would yield the error-exponent optimal solutions. We solved this problem, and the detailed works are reported in [?]. It turns out that when Renyi ...is, during the communication session. A natural set of metrics of interest is the family of Renyi divergences. With a parameter α that can be ...tuned, the Renyi entropy of a given distribution corresponds to the Shannon entropy at α = 1, and to the probability of detection error at α = ∞. This gives a

  15. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate the stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, N∫_{-∞}^{∞} φ(χ,τ)Ψ(τ)dτ, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time, and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high of 0.998, indicating that humans behave in psychological space as fermions behave in momentum space.
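    The nonlinear estimation of Fermi-Dirac parameters mentioned above can be illustrated on synthetic data (not the dissertation's observations; the two parameters μ and kT are recovered by least squares):

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_dirac(E, mu, kT):
    """Fermi-Dirac occupancy as a function of an 'energy' coordinate E."""
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

rng = np.random.default_rng(1)
E = np.linspace(-5, 5, 200)
# Synthetic noisy observations generated with mu = 1.0, kT = 0.5.
y = fermi_dirac(E, 1.0, 0.5) + rng.normal(0, 0.01, E.size)

# Nonlinear regression recovers the generating parameters.
(mu_hat, kT_hat), _ = curve_fit(fermi_dirac, E, y, p0=(0.0, 1.0))
```
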

  16. Time-frequency analysis of acoustic scattering from elastic objects

    NASA Astrophysics Data System (ADS)

    Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.

    1990-06-01

    A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
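    A discrete Wigner-Ville distribution of the kind this analysis starts from can be sketched as follows (the common textbook pseudo-WVD with an analytic signal, not the authors' modified algorithm or vein-diagram reduction):

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution W[k, n] of a real signal x.
    The analytic signal suppresses negative-frequency cross-terms.  With the
    integer-lag kernel x[n+m] x*[n-m], a tone at normalized frequency f0
    peaks at frequency bin k ~ 2*f0*N."""
    z = hilbert(x)
    N = len(z)
    W = np.empty((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)                 # lags limited by the edges
        m = np.arange(-m_max, m_max + 1)
        r = np.zeros(N, dtype=complex)
        r[m % N] = z[n + m] * np.conj(z[n - m])   # instantaneous autocorrelation
        W[:, n] = np.fft.fft(r).real
    return W

t = np.arange(128)
x = np.cos(2 * np.pi * 0.125 * t)  # tone at f0 = 0.125 cycles/sample
W = wigner_ville(x)
# Mid-signal, the energy concentrates near bin 2*f0*N = 32.
```
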

  17. Characterisation of the physico-mechanical parameters of MSW.

    PubMed

    Stoltz, Guillaume; Gourc, Jean-Pierre; Oxarango, Laurent

    2010-01-01

    Following the basics of soil mechanics, the physico-mechanical behaviour of municipal solid waste (MSW) can be defined through constitutive relationships which are expressed with respect to three physical parameters: the dry density, the porosity and the gravimetric liquid content. In order to take into account the complexity of MSW (grain size distribution and heterogeneity larger than for conventional soils), a special oedometer was designed to carry out laboratory experiments. This apparatus allowed a coupled measurement of physical parameters for MSW settlement under stress. The studied material was a typical sample of fresh MSW from a French landfill. The relevant physical parameters were measured using a gas pycnometer. Moreover, the compressibility of MSW was studied with respect to the initial gravimetric liquid content. Proposed methods to assess the set of three physical parameters allow a relevant understanding of the physico-mechanical behaviour of MSW under compression, specifically, the evolution of the limit liquid content. The present method can be extended to any type of MSW. 2010 Elsevier Ltd. All rights reserved.

  18. JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC) data availability, version 1-94

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea-surface height, surface-wind vector, sea-surface temperature, atmospheric liquid water, and integrated water vapor. The JPL PO.DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and is the United States distribution site for Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  19. Influence of mesh structure on 2D full shallow water equations and SCS Curve Number simulation of rainfall/runoff events

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier

    2012-07-01

    Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using the shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the numerical approach relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results in a magnitude as large as physical factors such as friction. Furthermore, results proved to be less sensitive to roughness spatial distribution than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
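    For reference, the SCS Curve Number relation evaluated in the coupling is the standard one (SI form with the conventional initial abstraction; the numbers below are an illustrative check, not values from the paper):

```python
def scs_runoff(P, CN):
    """SCS Curve Number direct runoff Q (mm) for storm rainfall P (mm),
    with the conventional initial abstraction Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

# Example: 100 mm of rain on a CN = 80 catchment yields ~50.5 mm of runoff.
q = scs_runoff(100.0, 80.0)
```
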

  20. Spatial variation of statistical properties of extreme water levels along the eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Pindsoo, Katri; Soomere, Tarmo; Rocha, Eugénio

    2016-04-01

    Most existing projections of future extreme water levels rely on the classic generalised extreme value distributions. The choice of a particular distribution is often made based on the absolute value of the shape parameter of the Generalised Extreme Value (GEV) distribution. If this parameter is small, the Gumbel distribution is most appropriate, while in the opposite case the Weibull or Fréchet distribution could be used. We demonstrate that the alongshore variation in the statistical properties of numerically simulated high water levels along the eastern coast of the Baltic Sea is so large that the use of a single distribution for projections of extreme water levels is highly questionable. The analysis is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute. The output of the Rossby Centre Ocean model is sampled with a resolution of 6 h and the output of the circulation model NEMO with a resolution of 1 h. As the maxima of water levels of subsequent years may be correlated in the Baltic Sea, we also employ maxima for stormy seasons. We provide a detailed analysis of the spatial variation of the parameters of the family of extreme value distributions along an approximately 600 km long coastal section from the north-western shore of Latvia in the Baltic Proper to the eastern Gulf of Finland. The parameters are evaluated using the maximum likelihood method and the method of moments. The analysis also covers the entire Gulf of Riga. The core parameter of this family of distributions, the shape parameter of the GEV distribution, exhibits extensive variation in the study area. Its values, evaluated using the Hydrognomon software and the maximum likelihood method, vary from about -0.1 near the north-western coast of Latvia in the Baltic Proper up to about 0.05 in the eastern Gulf of Finland. This parameter is very close to zero near Tallinn in the western Gulf of Finland. Thus, it is natural that the Gumbel distribution gives adequate projections of extreme water levels for the vicinity of Tallinn. More importantly, this feature indicates that the use of a single distribution for the projections of extreme water levels and their return periods for the entire Baltic Sea coast is inappropriate. The physical reason is the interplay of the complex shape of the large subbasins of the sea (such as the Gulf of Riga and the Gulf of Finland) and the highly anisotropic wind regime. The impact of this anisotropy on the statistics of water level is amplified by the overall anisotropy of the distributions of the frequency of occurrence of high and low water levels. The most important conjecture is that the long-term behaviour of water level extremes in different coastal sections of the Baltic Sea may be fundamentally different.
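    A GEV fit of the kind described can be sketched with `scipy.stats.genextreme` (note SciPy's shape `c` equals minus the shape parameter in the convention above; the data here are synthetic annual maxima, not the Baltic Sea simulations):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Synthetic "annual maxima" drawn from a Gumbel distribution (GEV shape = 0).
maxima = genextreme.rvs(0.0, loc=1.0, scale=0.3, size=3000, random_state=rng)

# Maximum likelihood fit of the three GEV parameters.
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)

# A fitted shape close to zero would justify a Gumbel model for this site;
# the 100-year return level follows from the fitted quantile function.
level_100 = genextreme.ppf(1.0 - 1.0 / 100.0, c_hat, loc_hat, scale_hat)
```
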

  1. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    NASA Astrophysics Data System (ADS)

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, there are a large number of parameters, topographic and boundary conditions that need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; (III) to identify possible model structural deficiencies or uncertainties in boundary conditions.
    Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain regions, since it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on the Particle Swarm Optimization theory has been developed, available as an R package on https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal dynamics of soil moisture content (SMC) and ET with RMSEs of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show that the most sensitive parameters are soil and canopy structural properties. However, the ranking is affected by the choice of the target function and local topographic conditions. In particular, local slope/aspect influences results in stations located over hillslopes, but with marked seasonal differences. Results for locations in the valley floor are strongly controlled by the choice of the bottom water flux boundary condition. The poorer model performance in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control on vapor pressure deficit for this particular type of vegetation. The results of this sensitivity analysis could be extended to other physically based distributed models, and also provide valuable insights for optimizing new experimental designs.
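    A minimal particle swarm optimizer of the kind underlying such a calibration tool can be sketched as follows (generic PSO with standard inertia and acceleration constants, minimizing a stand-in objective; not the actual geotopOptim2 or GEOtop calibration code):

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box [lo, hi]^d with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in for an RMSE objective: a quadratic with its minimum at (1, 2).
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                     lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```

    In the real application `f` would run the hydrological model with candidate soil and vegetation parameters and return the misfit (e.g. RMSE) against observed SMC or ET.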

  2. Use of Climatic Information In Regional Water Resources Assessment

    NASA Astrophysics Data System (ADS)

    Claps, P.

    Relations between climatic parameters and hydrological variables at the basin scale are investigated, with the aim of evaluating in a parsimonious way physical parameters useful both for a climatic classification of an area and for supporting statistical models of water resources assessment. With reference to the first point, literature methods for the distributed evaluation of parameters such as temperature, global and net solar radiation, and precipitation have been considered at the annual scale, from the viewpoint of the robust evaluation of parameters based on a few basic physical variables of simple determination. Elevation, latitude and the average annual number of sunny days have proven to be the essential parameters for the evaluation of climatic indices related to the soil water deficit and to the radiative balance. The latter term was evaluated at the monthly scale and validated (in the 'global' term) with measured data, in this case referred to the water balance at the annual scale. The Budyko, Thornthwaite and Emberger climatic indices were evaluated over the 10,000 km2 territory of the Basilicata region (southern Italy) on a 1.1 km grid. They were compared in terms of spatial variability and sensitivity to the variation of the basic variables in humid and semi-arid areas. The use of the climatic index data with respect to statistical parameters of the runoff series at some gauging stations of the region demonstrated the possibility of supporting regionalisation of the annual runoff using climatic information, with a clear distinction of the variability of the coefficient of variation in terms of the humidity-aridity of the basin.
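    For context, the classical Budyko curve relates the mean annual evaporative fraction E/P to the aridity index PET/P; a quick numerical illustration (textbook form of the curve, not necessarily the exact formulation used in the study):

```python
import math

def budyko_evaporative_fraction(phi):
    """Budyko curve: long-term E/P as a function of the aridity index
    phi = PET/P (potential evapotranspiration over precipitation)."""
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# A humid basin (phi < 1) evaporates a smaller fraction of its precipitation
# than a semi-arid one (phi > 1); E/P approaches 1 as phi grows.
fractions = [budyko_evaporative_fraction(phi) for phi in (0.5, 1.0, 2.0, 5.0)]
```
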

  3. Chemical and isotopic methods for quantifying ground-water recharge in a regional, semiarid environment

    USGS Publications Warehouse

    Wood, Warren W.; Sanford, Ward E.

    1995-01-01

    The High Plains aquifer underlying the semiarid Southern High Plains of Texas and New Mexico, USA, was used to illustrate solute and isotopic methods for evaluating recharge fluxes, runoff, and the spatial and temporal distribution of recharge. The chloride mass-balance method can provide, under certain conditions, a time-integrated technique for evaluating recharge flux to regional aquifers that is independent of physical parameters. Applying this method to the High Plains aquifer of the Southern High Plains suggests that recharge flux is approximately 2% of precipitation, or approximately 11 ± 2 mm/y, consistent with previous estimates based on a variety of physically based measurements. The method is useful because long-term average precipitation and chloride concentrations in rain and ground water have less uncertainty and are generally less expensive to acquire than the physically based parameters commonly used in analyzing recharge. The spatial and temporal distribution of recharge was evaluated by use of δ2H, δ18O, and tritium concentrations in both ground water and the unsaturated zone. Analyses suggest that nearly half of the recharge to the Southern High Plains occurs as piston flow through playa basin floors that occupy approximately 6% of the area, and that macropore recharge may be important in the remaining recharge. Tritium and chloride concentrations in the unsaturated zone were used in a new equation developed to quantify runoff. Using this equation and data from a representative basin, runoff was found to be 24 ± 3 mm/y, which is in close agreement with values obtained from water-balance measurements on experimental watersheds in the area. Such geochemical estimates are possible because tritium is used to calculate a recharge flux that is independent of precipitation and runoff, whereas recharge flux based on chloride concentration in the unsaturated zone is dependent upon the amount of runoff. The difference between these two estimates yields the amount of runoff to the basin.
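    The chloride mass-balance estimate reduces to a one-line computation (the concentrations below are illustrative round numbers chosen to reproduce a ~2% recharge fraction, not the measured Southern High Plains values):

```python
def chloride_recharge(precip_mm, cl_precip, cl_groundwater):
    """Chloride mass balance: recharge R = P * Cl_p / Cl_gw (mm/y), assuming
    all chloride deposited by precipitation reaches the water table.
    Concentrations in consistent units (e.g. mg/L)."""
    return precip_mm * cl_precip / cl_groundwater

# E.g. 500 mm/y of rain at 0.5 mg/L chloride over groundwater at 22.7 mg/L
# gives a recharge flux of ~11 mm/y, about 2% of precipitation.
R = chloride_recharge(500.0, 0.5, 22.7)
```
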

  4. Interactions of solutes and streambed sediment: 2. A dynamic analysis of coupled hydrologic and chemical processes that determine solute transport

    USGS Publications Warehouse

    Bencala, Kenneth E.

    1984-01-01

    Solute transport in streams is determined by the interaction of physical and chemical processes. Data from an injection experiment for chloride and several cations indicate significant influence of solutestreambed processes on transport in a mountain stream. These data are interpreted in terms of transient storage processes for all tracers and sorption processes for the cations. Process parameter values are estimated with simulations based on coupled quasi-two-dimensional transport and first-order mass transfer sorption. Comparative simulations demonstrate the relative roles of the physical and chemical processes in determining solute transport. During the first 24 hours of the experiment, chloride concentrations were attenuated relative to expected plateau levels. Additional attenuation occurred for the sorbing cation strontium. The simulations account for these storage processes. Parameter values determined by calibration compare favorably with estimates from other studies in mountain streams. Without further calibration, the transport of potassium and lithium is adequately simulated using parameters determined in the chloride-strontium simulation and with measured cation distribution coefficients.

  5. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    DOE PAGES

    Bernstein, Diana N.; Neelin, J. David

    2016-04-28

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  7. Monte Carlo calculations of positron emitter yields in proton radiotherapy.

    PubMed

    Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F

    2012-03-21

    Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β(+)-decay of radionuclides resulting from nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β(+)-activity with the predicted distribution based on the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions, and so phenomenological models are typically used based on parameters derived from experimental data. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models. Hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performances of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity, at therapeutic energies, to the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β(+)-emitters produced in tissue-like media depends on the physics model and cross-section data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aiming at supporting improvements of MC modelling for clinical application of PET monitoring. © 2012 Institute of Physics and Engineering in Medicine

  8. Evolution simulation of lightning discharge based on a magnetohydrodynamics method

    NASA Astrophysics Data System (ADS)

    Fusheng, WANG; Xiangteng, MA; Han, CHEN; Yao, ZHANG

    2018-07-01

    In order to solve the load problem for aircraft lightning strikes, lightning channel evolution is simulated under the key physical parameters of the aircraft lightning current component C. A numerical model of the discharge channel is established, based on magnetohydrodynamics (MHD) and implemented in the FLUENT software. With the aid of user-defined functions and a user-defined scalar, the Lorentz force, Joule heating and material parameters of an air thermal plasma are added. A three-dimensional lightning arc channel is simulated and the arc evolution in space is obtained. The results show that the temperature distribution of the lightning channel is symmetrical and that the hottest region occurs at the center of the lightning channel. The distributions of potential and current density are obtained, showing that the difference in electric potential or energy between two points tends to make the arc channel develop downwards. The arc channel expands on the anode surface due to stagnation of the thermal plasma, and it impinges on the copper plate when it comes into contact with the anode plate.

  9. A charge-based model of Junction Barrier Schottky rectifiers

    NASA Astrophysics Data System (ADS)

    Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali

    2018-06-01

    A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem which has a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though other fully 2-D models already available provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.

  10. Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis

    NASA Astrophysics Data System (ADS)

    Springer, Everett P.; Cundy, Terrance W.

    1987-02-01

    Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
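    The Green-Ampt infiltration-capacity relation whose parameters the study estimates from texture and from field data has the standard form below (the parameter values are illustrative, not the study's field- or texture-based estimates):

```python
def green_ampt_capacity(F, Ks, psi, dtheta):
    """Green-Ampt infiltration capacity f (mm/h) given cumulative
    infiltration F (mm), saturated hydraulic conductivity Ks (mm/h),
    wetting-front suction head psi (mm) and moisture deficit dtheta (-)."""
    return Ks * (1.0 + psi * dtheta / F)

# Capacity decays toward Ks as the wetted depth (cumulative F) grows.
Ks, psi, dtheta = 10.0, 110.0, 0.3
caps = [green_ampt_capacity(F, Ks, psi, dtheta) for F in (5.0, 20.0, 100.0)]
```

    Because f depends on Ks, psi and dtheta jointly, biased texture-based estimates of any one of them propagate directly into the simulated overland flow, which is the comparison the study performs.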

  11. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Cisewski, Jessi; de Souza, Rafael S.; Trindade, Arlindo M. M.; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
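    The core ABC idea the sampler builds on (draw parameters from the prior, forward-simulate mock data, keep parameters whose mock data lie close to the observations) can be sketched in a few lines; this is a bare rejection sampler with a toy Gaussian simulator, not cosmoabc's Population Monte Carlo variant:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(2.0, 1.0, size=100)   # "observed catalog", true theta = 2
obs_stat = obs.mean()                  # summary statistic of the observations

def simulate(theta, n=100):
    """Forward simulator: mock catalog given the parameter theta."""
    return rng.normal(theta, 1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                       # draw from the prior
    if abs(simulate(theta).mean() - obs_stat) < 0.1:     # distance < epsilon
        accepted.append(theta)

# The accepted draws approximate the posterior; no likelihood was evaluated.
posterior_mean = float(np.mean(accepted))
```

    Shrinking epsilon sharpens the approximation at the cost of the acceptance rate, which is the trade-off the Population Monte Carlo scheme manages adaptively.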

  12. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    NASA Astrophysics Data System (ADS)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
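    The two-parameter stretched exponential can be fitted to an empirical complementary CDF with ordinary least squares (synthetic Weibull data here, not the datasets analyzed in the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_ccdf(x, x0, c):
    """Complementary CDF of the stretched exponential:
    P(X > x) = exp(-(x/x0)^c), with characteristic scale x0 and exponent c."""
    return np.exp(-(x / x0) ** c)

rng = np.random.default_rng(4)
# Stretched-exponential (Weibull) sample with c = 0.7 and scale x0 = 1.5.
sample = 1.5 * rng.weibull(0.7, size=5000)

x = np.sort(sample)
ccdf = 1.0 - np.arange(1, x.size + 1) / (x.size + 1)  # empirical P(X > x)
(x0_hat, c_hat), _ = curve_fit(stretched_ccdf, x, ccdf, p0=(1.0, 1.0))
# c_hat < 1 gives the "fat tail", while x0_hat sets the characteristic scale.
```
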

  13. SED Modeling of 20 Massive Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In this paper, we present spectral energy distribution (SED) modeling of twenty massive young stellar objects (MYSOs) and subsequently estimate different physical and structural/geometrical parameters for each of the twenty central YSO outflow candidates, along with their associated circumstellar disks and infalling envelopes. The SEDs for each of the MYSOs have been reconstructed using 2MASS, MSX, IRAS, IRAC & MIPS, SCUBA, WISE, SPIRE and IRAM data with the help of an SED fitting tool that uses a grid of 2D radiative transfer models. Using the detailed analysis of the SEDs and the subsequent estimation of physical and geometrical parameters for the central YSO sources along with their circumstellar disks and envelopes, the cumulative distributions of the stellar, disk and envelope parameters can be analyzed. This leads to a better understanding of massive star formation processes in the respective star forming regions of different molecular clouds.

  14. Construction and physical parameters of a multiscan whole-body scanner (in Czech)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silar, J.; Smidova, M.; Vacek, J.

    The construction of a commercial whole-body scanner is described which permits scanning, in the form of a photographic picture, of the distribution in the human body of the activity of gamma emitters with energies up to 1.3 MeV at relatively short intervals. The results are presented of the measurement of the physical parameters affecting the scanning capabilities of a Model No. 602 Multiscan, produced by Cyclotron Corporation. The resulting radiometric parameters are listed. The results of the measurements show that the device can be used for whole-body scanning of the distribution of the activity of gamma emitters applied in routine procedures, such as 100 μCi of 85Sr, with a position resolution of 25 to 50 mm in a tissue layer at a height of up to 100 mm above the Multiscan table. (INIS)

  15. Interdisciplinary Applications of Autonomous Observation Systems

    DTIC Science & Technology

    2008-01-01

    …analytical capabilities for describing the distributions and activities of marine microbes in relation to their physical, chemical and optical environment… …a sensitive indicator of physiology (light acclimation status) and also a key parameter in models of primary productivity. We are now continuing… (Approved for public release; distribution unlimited.)

  16. A comparison of two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund

    NASA Astrophysics Data System (ADS)

    Luks, B.; Osuch, M.; Romanowicz, R. J.

    2012-04-01

    We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalent and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). These variables are used for physically based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped-parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed that follows the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of uncertainty of both model outputs. In the time series approach, the applied techniques also provide estimates of the modelling errors and the uncertainty of the model parameters. In the first, physically based approach the applied UEB model is deterministic. It assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take into account the model and observation errors, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique, which also provides estimates of the modelling errors and the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by National Science Centre of Poland (grant no. 
7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Ed.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/). 64 pp.
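    For orientation, a lumped snow-water-equivalent balance can be sketched in a few lines. This is a deliberately simplified degree-day sketch, not the UEB energy-balance model or the Stochastic Transfer Function model discussed above; the threshold temperature, degree-day factor and forcing series are all illustrative:

```python
def simulate_swe(temps_c, precip_mm, t_thresh=0.0, ddf=3.0):
    """Minimal lumped snowpack water balance (NOT the UEB model):
    precipitation accumulates as snow at or below t_thresh [degC], and
    melt is estimated with a degree-day factor ddf [mm/(degC*day)]."""
    swe = 0.0
    series = []
    for t, p in zip(temps_c, precip_mm):
        if t <= t_thresh:
            swe += p                      # snowfall accumulates as SWE
        melt = max(0.0, ddf * (t - t_thresh))
        swe = max(0.0, swe - melt)        # melt cannot exceed storage
        series.append(swe)
    return series

# A cold, snowy stretch followed by a warm spell (illustrative forcing):
swe = simulate_swe(
    temps_c=[-5, -3, -1, 2, 4, 6, 8],
    precip_mm=[10, 10, 5, 0, 0, 0, 0],
)
```

An energy-balance model like the UEB replaces the single degree-day factor with explicit radiative, sensible, latent and advective heat-exchange terms, which is exactly what makes its output sensitive to the input and structural errors that the GLUE analysis above quantifies.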

  17. Using Approximate Bayesian Computation to Probe Multiple Transiting Planet Systems

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.

    2015-08-01

    The large number of multiple transiting planet systems (MTPSs) uncovered with Kepler suggests a population of well-aligned planetary systems. Previously, the distribution of transit duration ratios in MTPSs has been used to place constraints on the distributions of mutual orbital inclinations and orbital eccentricities in these systems. However, degeneracies with the underlying number of planets in these systems pose added challenges and make explicit likelihood functions intractable. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC proposes from a prior on the population parameters to produce synthetic datasets via a physically motivated model. Samples are accepted or rejected based on how closely they reproduce the actual observed dataset to some tolerance. The accepted samples then form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We will demonstrate the utility of ABC in exoplanet populations by presenting new constraints on the mutual inclination and eccentricity distributions in the Kepler MTPSs. We will also introduce Simple-ABC, a new open-source Python package designed for ease of use and rapid specification of general models, suitable for use in a wide variety of applications in both exoplanet science and astrophysics as a whole.
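    The simplest form of ABC described above can be sketched directly. Everything in this sketch (the Gaussian toy model, the summary statistic, the prior and the tolerance) is an illustrative assumption, not the exoplanet analysis of the abstract:

```python
import random

random.seed(1)

def simulator(mu, n=200):
    """Forward model: n Gaussian 'observations' with unknown mean mu."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

def summary(data):
    """Summary statistic: the sample mean."""
    return sum(data) / len(data)

# 'Observed' catalog generated with a true mean of 3.0.
observed = simulator(3.0)
obs_summary = summary(observed)

def abc_rejection(prior_draw, tol, n_accept):
    """Simplest-form ABC: draw from the prior, simulate, and keep the
    draw if the synthetic summary lies within tol of the observed one."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()
        if abs(summary(simulator(theta)) - obs_summary) < tol:
            accepted.append(theta)
    return accepted

posterior = abc_rejection(lambda: random.uniform(0.0, 6.0), tol=0.2, n_accept=300)
post_mean = sum(posterior) / len(posterior)
```

The accepted draws concentrate around the true mean; shrinking the tolerance tightens the approximation at the price of a lower acceptance rate, which is the trade-off the Population Monte Carlo variant mentioned in record 11 is designed to manage.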

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very CPU-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
As the main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
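    The super-thermal tails that distinguish a Kappa distribution from a Maxwellian can be illustrated numerically. The sketch below uses unnormalized 1-D shapes normalized on a velocity grid; the grid and kappa values are illustrative:

```python
import math

def maxwellian(v, vth=1.0):
    """Unnormalized 1-D Maxwellian shape exp(-(v/vth)^2)."""
    return math.exp(-((v / vth) ** 2))

def kappa_dist(v, kappa, vth=1.0):
    """Unnormalized 1-D kappa (generalized Lorentzian) shape; it tends
    to the Maxwellian as kappa -> infinity and has power-law tails
    (super-thermal particles) for finite kappa."""
    return (1.0 + v * v / (kappa * vth * vth)) ** (-(kappa + 1.0))

def normalize(f, vs, dv):
    """Normalize a sampled shape to unit area on the velocity grid."""
    total = sum(f(v) for v in vs) * dv
    return [f(v) / total for v in vs]

dv = 0.01
vs = [(i - 1000) * dv for i in range(2001)]     # v/vth in [-10, 10]
f_maxw = normalize(maxwellian, vs, dv)
f_kap3 = normalize(lambda v: kappa_dist(v, kappa=3.0), vs, dv)
f_kap200 = normalize(lambda v: kappa_dist(v, kappa=200.0), vs, dv)

# Index 1400 corresponds to v = 4 vth: the kappa = 3 distribution
# carries orders of magnitude more super-thermal probability there.
tail_ratio = f_kap3[1400] / f_maxw[1400]
```

This enhanced tail is precisely why quantities sensitive to fast particles, such as the secondary electron emission discussed above, cannot be reproduced with a pure MDF.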

  19. Probabilistic accident consequence uncertainty analysis: Dispersion and deposition uncertainty assessment, appendices A and B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, F.T.; Young, M.L.; Miller, L.A.

    Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.

  20. Traveltime-based descriptions of transport and mixing in heterogeneous domains

    NASA Astrophysics Data System (ADS)

    Luo, Jian; Cirpka, Olaf A.

    2008-09-01

    Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. 
Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and traveltime distributions to fit conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
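    The central idea, a flux-averaged BTC formed as a weighted average over non-interacting streamtubes, can be sketched as follows. The Gaussian per-streamtube BTC below is a simplifying stand-in for the advective-dispersive and mass-transfer solutions used in the paper, and the traveltimes, weights and spread are illustrative:

```python
import math

def streamtube_btc(t, tau, sigma):
    """Simplified per-streamtube BTC: advective arrival at traveltime
    tau, with local mixing represented by a Gaussian spread sigma (a
    stand-in for longitudinal dispersion / kinetic mass transfer)."""
    return math.exp(-((t - tau) ** 2) / (2.0 * sigma ** 2)) / (
        sigma * math.sqrt(2.0 * math.pi)
    )

def ensemble_btc(t, taus, weights, sigma):
    """Flux-averaged BTC across non-interacting streamtubes: the
    weighted average of individual BTCs, weights from the traveltime
    distribution (here discretized to two flow paths)."""
    return sum(w * streamtube_btc(t, tau, sigma) for tau, w in zip(taus, weights))

# Bimodal traveltime distribution, e.g. a fast and a slow flow path:
taus = [5.0, 12.0]
weights = [0.4, 0.6]
dt = 0.1
times = [i * dt for i in range(250)]
btc = [ensemble_btc(t, taus, weights, sigma=0.8) for t in times]
mass = sum(btc) * dt                      # zeroth moment (unit injected mass)
mean_arrival = sum(t * c for t, c in zip(times, btc)) * dt / mass
```

The multimodal shape produced by such a traveltime distribution is exactly the situation in which the paper's nonparametric estimation outperforms fitting a single parametric traveltime model.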

  1. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based second-order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP; Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
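    The physical requirement quoted above, peak slip at the nucleation point tapering smoothly to zero at the patch border, can be illustrated with a toy slip function. This is not the DSM model's exact skewed-Gaussian parameterization; the patch geometry, shear modulus and peak slip below are illustrative:

```python
import math

def elliptical_slip(x, y, a, b, x0, y0, peak_slip):
    """Toy slip function on an elliptical rupture patch (a sketch, not
    the DSM model's exact skewed-Gaussian form): slip equals peak_slip
    at the nucleation point (x0, y0) and tapers smoothly to zero at the
    patch border. a, b are the patch half-axes in the rupture plane."""
    r2 = (x / a) ** 2 + (y / b) ** 2
    if r2 >= 1.0:
        return 0.0                               # outside the rupture patch
    taper = 1.0 - r2                             # vanishes at the border
    bump = math.exp(-(((x - x0) / a) ** 2 + ((y - y0) / b) ** 2))
    norm = 1.0 - (x0 / a) ** 2 - (y0 / b) ** 2   # taper at the nucleation point
    return peak_slip * taper * bump / norm

# Illustrative patch: 3 km x 1.5 km half-axes, off-center nucleation.
a, b, x0, y0, peak = 3000.0, 1500.0, 500.0, 0.0, 2.0   # m, and m of slip
mu = 3.0e10                                            # shear modulus, Pa
dx = dy = 50.0

# Seismic moment M0 = mu * integral of slip over the patch (midpoint rule).
m0 = mu * sum(
    elliptical_slip(px, py, a, b, x0, y0, peak) * dx * dy
    for px in (-a + dx * (i + 0.5) for i in range(int(2 * a / dx)))
    for py in (-b + dy * (j + 0.5) for j in range(int(2 * b / dy)))
)
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)   # moment magnitude (M0 in N*m)
```

Integrating the slip to a seismic moment is what ties the distributed source back to the single scalar moment of the equivalent point-source model.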

  2. Tsallis entropy and complexity theory in the understanding of physics of precursory accelerating seismicity.

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Chatzopoulos, George

    2014-05-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law of: a) the cumulative number of earthquakes, b) the cumulative Benioff strain, and c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
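    The q-exponential distribution mentioned above is built on the Tsallis q-exponential function, which can be sketched directly (the test values below are illustrative):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q) x]_+^(1/(1-q)).
    It reduces to the ordinary exponential as q -> 1 and yields
    asymptotic power-law tails for q > 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0                    # Tsallis cutoff (occurs for q < 1)
    return base ** (1.0 / (1.0 - q))

# For q = 1.5 the tail decays as a power law x^(-2), far more slowly
# than exp(-x), which is the signature of a hierarchical fault system.
ratio_tail = q_exp(-50.0, 1.5) / math.exp(-50.0)
near_exp = q_exp(-1.0, 1.0 + 1e-6)    # q -> 1 recovers exp(-1)
```

The entropic index q thus interpolates between Boltzmann-Gibbs statistics (q = 1) and the long-range-correlated regime that underlies the critical exponent m(q) of the abstract.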

  3. A physically-based Mie–Grüneisen equation of state to determine hot spot temperature distributions

    DOE PAGES

    Kittell, David Erik; Yarrington, Cole Davis

    2016-07-14

    Here, a physically-based form of the Mie–Grüneisen equation of state (EOS) is derived for calculating 1-D planar shock temperatures, as well as hot spot temperature distributions from heterogeneous impact simulations. This form utilises a multi-term Einstein oscillator model for specific heat, and is completely algebraic in terms of temperature, volume, an integrating factor, and the cold curve energy. Moreover, any empirical relation for the reference pressure and energy may be substituted into the equations via the use of a generalised reference function. The complete EOS is then applied to calculations of the Hugoniot temperature and simulation of hydrodynamic pore collapse using data for the secondary explosive hexanitrostilbene (HNS). From these results, it is shown that the choice of EOS is even more significant for determining hot spot temperature distributions than planar shock states. The complete EOS is also compared to an alternative derivation assuming that specific heat is a function of temperature alone, i.e. cv(T). Temperature discrepancies on the order of 100–600 K were observed at the shock pressures required to initiate HNS (near 10 GPa). Overall, the results of this work will improve confidence in temperature predictions. By adopting this EOS, future work may be able to assign physical meaning to other thermally sensitive constitutive model parameters necessary to predict the shock initiation and detonation of heterogeneous explosives.
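    The multi-term Einstein oscillator specific heat used in such an EOS has a standard closed form that can be sketched as follows. The Einstein temperatures and weights below are illustrative, not the HNS fit of the paper:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def einstein_cv(T, thetas, weights):
    """Multi-term Einstein oscillator specific heat (per mole):
    cv(T) = 3R * sum_i w_i * x_i^2 e^{x_i} / (e^{x_i} - 1)^2,
    with x_i = theta_i / T. thetas are Einstein temperatures [K] and
    the weights are assumed to sum to 1."""
    total = 0.0
    for theta, w in zip(thetas, weights):
        x = theta / T
        if x > 50.0:                 # oscillator frozen out; avoids overflow
            continue
        ex = math.exp(x)
        total += w * x * x * ex / (ex - 1.0) ** 2
    return 3.0 * R_GAS * total

# Illustrative two-term fit: one soft and one stiff oscillator branch.
thetas, weights = [300.0, 1200.0], [0.7, 0.3]
cv_cold = einstein_cv(20.0, thetas, weights)    # nearly frozen out
cv_hot = einstein_cv(5000.0, thetas, weights)   # approaches Dulong-Petit 3R
```

Because cv varies strongly with temperature in the shock regime, the choice between this model and a constant or purely temperature-dependent cv(T) is what drives the 100–600 K hot spot discrepancies reported above.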

  4. Simulation of spatial and temporal properties of aftershocks by means of the fiber bundle model

    NASA Astrophysics Data System (ADS)

    Monterrubio-Velasco, Marisol; Zúñiga, F. R.; Márquez-Ramírez, Victor Hugo; Figueroa-Soto, Angel

    2017-11-01

    The rupture process of any heterogeneous material constitutes a complex physical problem. Earthquake aftershocks show temporal and spatial behaviors which are a consequence of the heterogeneous stress distribution and multiple rupturing following the main shock. This process is difficult to model deterministically due to the number of parameters and physical conditions, which are largely unknown. In order to shed light on the minimum requirements for the generation of aftershock clusters, in this study we simulate the main features of such a complex process by means of a fiber bundle (FB) type model. The FB model has been widely used to analyze the fracture process in heterogeneous materials. It is a simple but powerful tool that allows modeling the main characteristics of a medium such as the brittle shallow crust of the earth. In this work, we incorporate spatial properties, such as the Coulomb stress change pattern, which help simulate observed characteristics of aftershock sequences. In particular, we introduce a parameter (P) that controls the probability of the spatial distribution of initial loads. We also use a "conservation" parameter (π), which accounts for the load dissipation of the system, and demonstrate its influence on the simulated spatio-temporal patterns. Based on numerical results, we find that P has to be in the range 0.06 < P < 0.30, whilst π needs to be limited to a very narrow range (0.60 < π < 0.66) in order to reproduce aftershock patterns that resemble those of observed sequences. This means that the system requires a small difference in the spatial distribution of initial stress, and a very particular fraction of load transfer, in order to generate realistic aftershocks.
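    A minimal equal-load-sharing fiber bundle with a load-conservation fraction can be sketched as follows. This toy omits the spatial Coulomb-stress ingredient and the authors' parameter P; the thresholds, driving step and conservation value are illustrative:

```python
import random

def fiber_bundle_avalanches(n_fibers=2000, pi_cons=0.63, seed=42):
    """Toy equal-load-sharing fiber bundle (a sketch, not the authors'
    full model): fibers break when their load exceeds a random
    threshold; a fraction pi_cons of a broken fiber's load is
    redistributed equally among survivors, the rest is dissipated."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_fibers)]
    loads = [0.0] * n_fibers
    alive = set(range(n_fibers))
    avalanches = []
    while alive:
        # External driving: raise every surviving fiber's load slightly.
        for i in alive:
            loads[i] += 0.001
        size = 0
        broken = {i for i in alive if loads[i] >= thresholds[i]}
        while broken:
            size += len(broken)
            transfer = sum(loads[i] for i in broken) * pi_cons
            alive -= broken
            if not alive:
                break
            share = transfer / len(alive)   # equal load sharing
            for i in alive:
                loads[i] += share
            broken = {i for i in alive if loads[i] >= thresholds[i]}
        if size:
            avalanches.append(size)         # one avalanche per driving step
    return avalanches

avalanches = fiber_bundle_avalanches()
```

Raising the conservation fraction toward 1 transfers more load per failure and pushes the system toward large cascading avalanches, which is why the abstract finds realistic aftershock patterns only in a narrow window of π.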

  5. Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp; Yamaguchi, Hajime; Kizaki, Hisao

    2015-07-15

    Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control probability (TCP) and normal tissue complication probability (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of the relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using predicted and planned dose distributions under a 2%/2 mm tolerance, and the physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different compared with those of PGI (P<.03–.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different compared with those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. The radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI is proposed to integrate radiobiological effects into the PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of the predicted dose distribution.
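    The physical gamma analysis underlying the PGI can be sketched on 1-D dose profiles. The 2%/2 mm global criterion follows the abstract; the profiles and the brute-force search over evaluated points are illustrative:

```python
import math

def gamma_index_1d(ref_dose, eval_dose, spacing_mm, dta_mm=2.0, dd_frac=0.02):
    """Physical gamma index on 1-D dose profiles (2%/2 mm, global
    normalization): for each reference point, minimize over evaluated
    points the combined dose-difference / distance-to-agreement metric.
    A point passes when gamma <= 1."""
    d_max = max(ref_dose)                     # global dose normalization
    gammas = []
    for i, dr in enumerate(ref_dose):
        best = float("inf")
        for j, de in enumerate(eval_dose):
            dist = (i - j) * spacing_mm
            dose_diff = de - dr
            g2 = (dist / dta_mm) ** 2 + (dose_diff / (dd_frac * d_max)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# A peaked profile and the same profile shifted by one 1-mm pixel:
ref = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0, 0.0]
ev = [0.0, 0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
gammas = gamma_index_1d(ref, ev, spacing_mm=1.0)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A 1 mm spatial shift yields gamma = 0.5 everywhere under a 2 mm DTA, so the whole profile passes; the RGI of the abstract then reweights such voxels by their TCP/NTCP impact rather than treating all failures equally.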

  6. A frequency quantum interpretation of the surface renewal model of mass transfer

    PubMed Central

    Mondal, Chanchal

    2017-01-01

    The surface of a turbulent liquid is visualized as consisting of a large number of chaotic eddies or liquid elements. Assuming that surface elements of a particular age have renewal frequencies that are integral multiples of a fundamental frequency quantum, and further assuming that the renewal frequency distribution is of the Boltzmann type, performing a population balance for these elements leads to the Danckwerts surface age distribution. The basic quantum is what has been traditionally called the rate of surface renewal. The Higbie surface age distribution follows if the renewal frequency distribution of such elements is assumed to be continuous. Four age distributions, which reflect different start-up conditions of the absorption process, are then used to analyse transient physical gas absorption into a large volume of liquid, assuming negligible gas-side mass-transfer resistance. The first two are different versions of the Danckwerts model, the third one is based on the uniform and Higbie distributions, while the fourth one is a mixed distribution. For the four cases, theoretical expressions are derived for the rates of gas absorption and dissolved-gas transfer to the bulk liquid. Under transient conditions, these two rates are not equal and have an inverse relationship. However, with the progress of absorption towards steady state, they approach one another. Assuming steady-state conditions, the conventional one-parameter Danckwerts age distribution is generalized to a two-parameter age distribution. Like the two-parameter logarithmic normal distribution, this distribution can also capture the bell-shaped nature of the distribution of the ages of surface elements observed experimentally in air–sea gas and heat exchange. 
Estimates of the liquid-side mass-transfer coefficient made using these two distributions for the absorption of hydrogen and oxygen in water are very close to one another and are comparable to experimental values reported in the literature. PMID:28791137
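    The Danckwerts age distribution and the resulting steady-state mass-transfer coefficient can be sketched and cross-checked numerically (the diffusivity and renewal rate below are illustrative):

```python
import math

def danckwerts_age_pdf(t, s):
    """Danckwerts surface-age density: surface elements renewed at a
    constant rate s give the exponential age density E(t) = s exp(-s t).
    The rate s is the 'fundamental frequency quantum' of the abstract."""
    return s * math.exp(-s * t)

def mass_transfer_coeff(D, s):
    """Steady-state liquid-side coefficient from surface renewal
    theory: k_L = sqrt(D * s)."""
    return math.sqrt(D * s)

# Cross-check: averaging the penetration-theory coefficient
# sqrt(D / (pi t)) over the Danckwerts age density recovers sqrt(D s).
D, s = 2.0e-9, 0.1          # diffusivity [m^2/s], renewal rate [1/s]
dt = 1e-3
kL_numeric = sum(
    math.sqrt(D / (math.pi * (dt * (i + 0.5))))
    * danckwerts_age_pdf(dt * (i + 0.5), s) * dt
    for i in range(200_000)     # integrate ages up to 200 s (~ 20/s)
)
kL_analytic = mass_transfer_coeff(D, s)
```

The two-parameter generalization proposed in the abstract replaces this monotonically decaying exponential with a bell-shaped age density while keeping the same averaging construction.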

  7. Resolving structural influences on water-retention properties of alluvial deposits

    USGS Publications Warehouse

    Winfield, K.A.; Nimmo, J.R.; Izbicki, J.A.; Martin, P.M.

    2006-01-01

    With the goal of improving property-transfer model (PTM) predictions of unsaturated hydraulic properties, we investigated the influence of sedimentary structure, defined as particle arrangement during deposition, on laboratory-measured water retention (water content vs. matric potential, θ(ψ)) of 10 undisturbed core samples from alluvial deposits in the western Mojave Desert, California. The samples were classified as having fluvial or debris-flow structure based on observed stratification and measured spread of particle-size distribution. The θ(ψ) data were fit with the Rossi-Nimmo junction model, representing water retention with three parameters: the maximum water content (θmax), the ψ-scaling parameter (ψo), and the shape parameter (λ). We examined trends between these hydraulic parameters and bulk physical properties, both textural (geometric mean, Mg, and geometric standard deviation, σg, of particle diameter) and structural (bulk density, ρb; the fraction of unfilled pore space at natural saturation, Ae; and the porosity-based randomness index, φs, defined as the excess of total porosity over 0.3). Structural parameters φs and Ae were greater for fluvial samples, indicating greater structural pore space and a possibly broader pore-size distribution associated with a more systematic arrangement of particles. Multiple linear regression analysis and Mallow's Cp statistic identified combinations of textural and structural parameters for the most useful predictive models: for θmax, including Ae, φs, and σg, and for both ψo and λ, including only textural parameters, although use of Ae can somewhat improve ψo predictions. Textural properties can explain most of the sample-to-sample variation in θ(ψ) independent of deposit type, but inclusion of the simple structural indicators Ae and φs can improve PTM predictions, especially for the wettest part of the θ(ψ) curve. © Soil Science Society of America.

  8. Statistical classifiers on multifractal parameters for optical diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Kumar, Rajeev; Krishnamoorthy, Vigneshram; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-06-01

    An augmented set of multifractal parameters with physical interpretations has been proposed to quantify the varying distribution and shape of the multifractal spectrum. A statistical classifier with an accuracy of 84.17% validates the adequacy of multi-feature multifractal detrended fluctuation analysis (MFDFA) characterization of elastic scattering spectroscopy for the optical diagnosis of cancer.

  9. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
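    The line-segment spread idea can be sketched in 1-D: a Gaussian PSF convolved with a segment of length equal to (spot velocity × exposure time) gives an erf-based energy profile. The function name and the numerical values below are illustrative, not the paper's full model:

```python
import math

def smeared_spot_profile(x, length, sigma):
    """1-D energy profile of a star spot smeared along a track of the
    given length (uniform motion during the exposure): a Gaussian PSF
    of width sigma convolved with a line segment [0, length], which
    integrates to a difference of erf terms, normalized to unit energy."""
    s2 = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf(x / s2) - math.erf((x - length) / s2)) / length

# Smear length = spot velocity * exposure time (illustrative values).
length, sigma, dx = 5.0, 1.0, 0.01
xs = [-10.0 + dx * i for i in range(2500)]
profile = [smeared_spot_profile(x, length, sigma) for x in xs]

total = sum(profile) * dx                 # energy is conserved by the smear
centroid = sum(x * p for x, p in zip(xs, profile)) * dx / total
peak = max(profile)                       # much lower than the static peak
```

The smear conserves total energy and keeps the noise-free centroid at the track midpoint; accuracy is lost mainly through the reduced peak signal and the wider footprint, which is why the paper optimizes exposure time against spot velocity.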

  10. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    PubMed

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.

  11. Total Water-Vapor Distribution in the Summer Cloudless Atmosphere over the South of Western Siberia

    NASA Astrophysics Data System (ADS)

    Troshkin, D. N.; Bezuglova, N. N.; Kabanov, M. V.; Pavlov, V. E.; Sokolov, K. I.; Sukovatov, K. Yu.

    2017-12-01

    The spatial distribution of the total water vapor in different climatic zones of the south of Western Siberia in the summers of 2008-2011 is studied on the basis of Envisat data. The correlation analysis of the water-vapor time series from the Envisat data W and radiosonde observations w for the territory of Omsk aerological station shows that the absolute values of W and w are linearly correlated with a coefficient of 0.77 (significance level p < 0.05). The distribution functions of the total water vapor are calculated from the Envisat measurements for a cloudless sky over three zones with different physical properties of the underlying surface, in particular, steppes to the south of the Vasyugan Swamp and forests to the northeast of the Swamp. The distribution functions are bimodal; each mode follows the lognormal law. The parameters of these functions are given.

  12. Barkhausen noise in FeCoB amorphous alloys (abstract)

    NASA Astrophysics Data System (ADS)

    Durin, G.; Bertotti, G.

    1996-04-01

    In recent years, the Barkhausen effect has been put forward as a promising tool to investigate and verify ideas about the self-organization of complex physical systems displaying power-law distributions and 1/f noise. When measured at low magnetization rates, the Barkhausen signal displays 1/f^α-type spectra (with α = 1.5-2) and power-law distributions of the duration and size of the Barkhausen jumps. These experimental data are quite well described by the model of Alessandro et al., which is based on a stochastic description of the domain wall dynamics over a pinning field with Brownian properties. Yet this model always predicts a 1/f^2 spectrum, and, at the moment, it is not clear whether it can take into account possible effects of self-organization of the magnetization process. In order to improve the power of the model and clarify this problem, we have performed a thorough investigation of the noise spectra and the amplitude distributions of a wide set of FeCoB amorphous alloys. The stationary amplitude distribution of the signal is very well fitted by the gamma distribution P(ν)=ν^(c-1) exp(-ν)/Γ(c), where ν is proportional to the domain wall velocity and c is a dimensionless parameter. As predicted in Ref., this parameter is found to have a parabolic dependence on the magnetization rate. In particular, the linear coefficient is related to the amplitude of the fluctuations of the pinning field, a parameter which can be measured directly from the power spectra. In all measured cases, the power spectra show α exponents less than 2, and are thus poorly fitted by the model. Moreover, the absolute value of the high-frequency spectral density is not consistent with the c parameter determined from the amplitude distribution data. This discrepancy requires introducing effects not taken into account in the model, such as the propagation of the jumps along the domain wall. Including such propagation greatly improves the fit to the data and indicates propagation effects on the scale of a few millimeters. These results are analyzed in terms of new descriptions of the statistical properties of the pinning field based on fractional Brownian processes.
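
    The stationary amplitude distribution quoted above, P(ν) = ν^(c-1) exp(-ν)/Γ(c), is a gamma distribution with shape parameter c and unit scale, so c can be recovered from a measured velocity signal by maximum likelihood; the value c = 2.5 below is an arbitrary test value, not a measured one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
c_true = 2.5                          # dimensionless shape parameter (test value)
v = rng.gamma(c_true, 1.0, 50000)     # nu: scaled domain-wall velocity samples

# Fit P(v) = v**(c-1) * exp(-v) / Gamma(c): a gamma law with loc and scale fixed
c_hat, loc, scale = stats.gamma.fit(v, floc=0.0, fscale=1.0)
```

    The parabolic dependence of c on the magnetization rate reported in the abstract would then come from repeating this fit at each drive rate.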

  13. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. 
The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.
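
    The basic step from Level 2 (individual, non-uniformly sampled measurements) to Level 3 (gridded averages) can be sketched as below; the 5-degree grid, the toy "cloud parameter" and the sample counts are assumptions for illustration, not the HIRS2/MSU processing:

```python
import numpy as np

rng = np.random.default_rng(3)
# Level 2: individual retrievals at scattered lon/lat (non-uniform sampling)
lon = rng.uniform(-180, 180, 20000)
lat = rng.uniform(-60, 60, 20000)
value = np.cos(np.radians(lat)) + rng.normal(0, 0.05, lat.size)  # toy parameter

# Level 3: average onto a 5-degree grid, keeping the sample count per cell
lon_edges = np.arange(-180, 181, 5)
lat_edges = np.arange(-60, 61, 5)
sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges], weights=value)
counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
with np.errstate(divide="ignore", invalid="ignore"):
    level3 = sums / counts        # NaN where a cell received no samples
```

    The `counts` array makes the sampling problem visible: cells with few Level 2 measurements carry much larger uncertainty than the gridded value alone suggests.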

  14. VOSA: SED building and analysis of thousands of stars in the framework of Gaia

    NASA Astrophysics Data System (ADS)

    Rodrigo, C.; Solano, E.; Bayo, A.

    2014-07-01

    VOSA (http://svo2.cab.inta-csic.es/theory/vosa/) is a web-based tool designed to combine private photometric measurements with data available in VO services distributed worldwide to build the observational spectral energy distributions (SEDs) of hundreds of objects. VOSA also accesses various collections of models to simulate the equivalent theoretical SEDs, allows the user to decide the range of physical parameters to explore, performs the SED comparison, provides the best-fitting models to the user following two different approaches (chi-square and Bayesian fitting), and, for stellar sources, compares these parameters with isochrones and evolutionary tracks to estimate masses and ages. In particular, VOSA offers the advantage of deriving physical parameters using all the available photometric information instead of a restricted subset of colors. VOSA was first released in 2008 and its functionalities are described in Bayo et al. (2008). At the time of writing there are more than 300 active users of VOSA, who have published more than 60 refereed papers. In the framework of the GENIUS (https://gaia.am.ub.es/Twiki/bin/view/GENIUS) project we are upgrading VOSA to, on the one hand, provide seamless access to Gaia data and, on the other hand, handle thousands of objects at a time. In this poster, the main functionalities to be implemented in the Gaia context will be described. The poster can be found at: http://svo.cab.inta-csic.es/files/svo//Public/SVOPapers/posters/vosa-poster3.pdf.
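
    The chi-square branch of such SED fitting reduces to scanning a model grid and fitting a scale (dilution) factor analytically for each model; the blackbody grid and the three-band setup below are a minimal stand-in for VOSA's model collections, not its actual pipeline:

```python
import numpy as np

def bb_flux(wavelength_um, teff):
    # Planck function up to a constant; wavelength in micrometres
    x = 14387.77 / (wavelength_um * teff)     # hc/(lambda*k*T) in um*K
    return 1.0 / (wavelength_um ** 5 * np.expm1(x))

bands = np.array([0.55, 1.25, 2.2])           # V, J, K effective wavelengths (um)
grid_teff = np.arange(3000, 10001, 50)

# "observed" SED: a 5800 K model with 2% noise and arbitrary normalization
rng = np.random.default_rng(4)
obs = bb_flux(bands, 5800.0) * (1 + rng.normal(0, 0.02, 3))
err = 0.02 * obs

# chi-square over the grid, solving for the best scale factor per model
chi2 = []
for t in grid_teff:
    m = bb_flux(bands, t)
    scale = np.sum(obs * m / err ** 2) / np.sum(m ** 2 / err ** 2)
    chi2.append(float(np.sum(((obs - scale * m) / err) ** 2)))
best_teff = grid_teff[int(np.argmin(chi2))]
```

    Using all available bands at once, rather than a single color, is what tightens the constraint on the physical parameter.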

  15. Detection of cancerous cervical cells using physical adhesion of fluorescent silica particles and centripetal force

    PubMed Central

    Gaikwad, Ravi M.; Dokukin, Maxim E.; Iyer, K. Swaminathan; Woodworth, Craig D.; Volkov, Dmytro O.; Sokolov, Igor

    2012-01-01

    Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish based on physical interaction between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of the concept that physical interactions can be used to accurately discriminate normal and cancer cells. PMID:21305062
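
    The underlying balance is that a bead detaches where the required centripetal force m*w^2*r exceeds its adhesion to the cell surface, so detachment begins farther from the rotation axis at lower speeds. All numbers below are invented order-of-magnitude placeholders, and the point-mass balance deliberately ignores fluid drag and the bead-cell contact mechanics:

```python
import math

def detachment_speed(f_adhesion, mass, radius):
    """Angular speed (rad/s) at which the centripetal requirement
    m * w**2 * r exceeds the bead-cell adhesion force (idealized)."""
    return math.sqrt(f_adhesion / (mass * radius))

# illustrative values (assumed): nN-scale adhesion, micron-scale silica bead
f_adh = 5e-9                   # adhesion force, N
rho, r_bead = 2000.0, 2.5e-6   # silica density (kg/m^3), bead radius (m)
mass = rho * (4.0 / 3.0) * math.pi * r_bead ** 3
w_crit_inner = detachment_speed(f_adh, mass, 0.01)   # 1 cm from the axis
w_crit_outer = detachment_speed(f_adh, mass, 0.04)   # 4 cm from the axis
```

    The radial gradient of the critical speed is what turns a single rotation rate into a spatial readout of relative adhesion across the dish.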

  16. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distributions and their evolution parameters with a very limited number of observables (three photometric bands).
When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
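
    The pBIL idea, a likelihood built on binned summary statistics of simulated versus observed data and explored by MCMC, can be illustrated with a one-parameter toy model. The normal forward model, Poisson bin likelihood and random-walk step size below are illustrative choices, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(5)
bins = np.linspace(-2.0, 4.0, 25)

def simulate_counts(mu, n=5000):
    # forward model: mock observable (e.g. log sizes) -> binned summary
    return np.histogram(rng.normal(mu, 1.0, n), bins=bins)[0]

observed = simulate_counts(1.0)        # "observed" summary, true mu = 1

def log_like(mu):
    # Poisson likelihood of the observed bin counts under a fresh simulation
    lam = simulate_counts(mu) + 0.5    # small offset avoids log(0)
    return float(np.sum(observed * np.log(lam) - lam))

# random-walk Metropolis over the single parameter mu
chain = [0.0]
ll = log_like(chain[-1])
for _ in range(2000):
    prop = chain[-1] + rng.normal(0.0, 0.1)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        chain.append(prop)
        ll = ll_prop
    else:
        chain.append(chain[-1])
posterior = np.array(chain[500:])      # discard burn-in
```

    Because the likelihood itself is simulated, the chain is noisy; the adaptive MCMC of the paper addresses exactly this kind of stochastic objective.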

  17. A Vertical Census of Precipitation Characteristics using Ground-based Dual-polarimetric Radar Data

    NASA Astrophysics Data System (ADS)

    Wolff, D. B.; Petersen, W. A.; Marks, D. A.; Pippitt, J. L.; Tokay, A.; Gatlin, P. N.

    2017-12-01

    Characterization of the vertical structure/variability of precipitation and the resultant microphysics is critical in providing physical validation of space-based precipitation retrievals. In support of NASA's Global Precipitation Measurement (GPM) mission Ground Validation (GV) program, NASA has invested in a state-of-the-art dual-polarimetric radar known as NPOL. NPOL is routinely deployed on the Delmarva Peninsula in support of NASA's GPM Precipitation Research Facility (PRF). NPOL has also served as the backbone of several GPM field campaigns in Oklahoma, Iowa, South Carolina and most recently in the Olympic Mountains in Washington state. When precipitation is present, NPOL obtains very high-resolution vertical profiles of radar observations (e.g. reflectivity (ZH) and differential reflectivity (ZDR)), from which important particle size distribution parameters are retrieved such as the mass-weighted mean diameter (Dm) and the intercept parameter (Nw). These data are then averaged horizontally to match the nadir resolution of the Dual-frequency Precipitation Radar (DPR; 5 km) on board the GPM satellite. The GPM DPR, Combined, and radiometer algorithms (such as GPROF) rely on functional relationships built from assumed parametric relationships and/or retrieved parameter profiles and spatial distributions of particle size (PSD), water content, and hydrometeor phase within a given sample volume. Thus, the NPOL-retrieved profiles provide an excellent tool for characterization of the vertical profile structure and variability during GPM overpasses. In this study, we will use many such overpass comparisons to quantify an estimate of the true sub-IFOV variability as a function of hydrometeor and rain type (convective or stratiform). This presentation will discuss the development of a relational database to help provide a census of the vertical structure of precipitation via analysis and correlation of reflectivity, differential reflectivity, mass-weighted mean drop diameter and the normalized intercept parameter of the gamma drop size distribution.
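
    From a binned drop size distribution N(D), the mass-weighted mean diameter is the ratio of the fourth to the third moment, Dm = M4/M3, and one common convention ties the normalized intercept Nw to M3 and Dm. The gamma-DSD parameters below are arbitrary test values, and the unit conventions are an assumption:

```python
import numpy as np

# gamma DSD N(D) = N0 * D**mu * exp(-lam*D) on diameter bins (mm), per m^3
d = np.arange(0.1, 8.0, 0.1)            # drop diameters (mm)
dd = 0.1                                 # bin width (mm)
mu_shape, lam, n0 = 2.0, 2.0, 8000.0     # arbitrary test parameters
nd = n0 * d ** mu_shape * np.exp(-lam * d)

m3 = np.sum(nd * d ** 3 * dd)            # third moment (proportional to water content)
m4 = np.sum(nd * d ** 4 * dd)            # fourth moment
dm = m4 / m3                             # mass-weighted mean diameter (mm)
nw = (4.0 ** 4 / 6.0) * m3 / dm ** 4     # normalized intercept (assumed convention)
```

    For a gamma DSD the analytic value is Dm = (4 + mu)/lam, i.e. 3 mm for the test parameters, which the discrete moments reproduce.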

  18. Techniques for determining partial size distribution of particulate matter: Laser diffraction versus electrical sensing zone

    USDA-ARS?s Scientific Manuscript database

    The study of health impacts, emission estimation of particulate matter (PM), and development of new control technologies require knowledge of PM characteristics. Among these PM characteristics, the particle size distribution (PSD) is perhaps the most important physical parameter governing particle b...

  19. Techniques for measuring particle size distribution of particulate matter emitted from animal feeding operations

    USDA-ARS?s Scientific Manuscript database

    Particle size distribution (PSD) is perhaps the most important physical parameter governing the airborne particle behavior. Various methods and techniques are available for conducting PSD analyses. Unfortunately, there is no single agreed upon method to determine the PSDs of particulate matter (PM) ...

  20. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced through a numerical solution using a statistical model. This error correcting process models numerical errors that accumulate forward in time and spatial variation of numerical errors between the dome, interior, and margin of a glacier.
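
    The process level of such a model can be sketched with an explicit finite-difference solver for the 1D SIA thickness equation dH/dt = d/dx(Gamma * H^(n+2) * |dH/dx|^(n-1) * dH/dx) on a flat bed. The grid, time step, flow constant and initial dome below are illustrative, and this toy scheme is not the paper's finite difference method:

```python
import numpy as np

# explicit scheme for dH/dt = d/dx( Gamma * H**(n+2) * |dH/dx|**(n-1) * dH/dx )
N_GLEN = 3                  # Glen flow-law exponent
GAMMA = 1e-16               # flow constant, arbitrary units for this sketch
DX, DT = 1000.0, 1.0        # grid spacing (m) and time step (yr)

x = np.arange(0.0, 50001.0, DX)
h = np.maximum(0.0, 3000.0 * (1.0 - (x / 40000.0) ** 2))   # initial half-dome

def step(h):
    h_mid = 0.5 * (h[1:] + h[:-1])         # thickness at cell interfaces
    dhdx = np.diff(h) / DX
    d = GAMMA * h_mid ** (N_GLEN + 2) * np.abs(dhdx) ** (N_GLEN - 1)
    flux = d * dhdx                        # diffusive ice flux at interfaces
    dh = np.zeros_like(h)
    dh[1:-1] = np.diff(flux) / DX          # zero flux through the boundaries
    return h + DT * dh

h0_total = float(h.sum())
for _ in range(100):
    h = step(h)
```

    In the hierarchical setting, the mismatch between such a numerical solution and the exact Bueler-type solutions is what the statistical error-correction layer models.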

  1. Regional Assessment of Storm-triggered Shallow Landslide Risks using the SLIDE (SLope-Infiltration-Distributed Equilibrium) Model

    NASA Astrophysics Data System (ADS)

    Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.

    2011-12-01

    The key to advancing the predictability of rainfall-triggered landslides is to use physically based slope-stability models that simulate the dynamical response of the subsurface moisture to the spatiotemporal variability of rainfall in complex terrains. An early warning system applying such physical models has been developed to predict rainfall-induced shallow landslides over Java Island in Indonesia and Honduras. The prototyped early warning system integrates three major components: (1) a susceptibility mapping or hotspot identification component based on a land surface geospatial database (topographical information, maps of soil properties, a local landslide inventory, etc.); (2) a satellite-based precipitation monitoring system (http://trmm.gsfc.nasa.gov) and a precipitation forecasting model (i.e. Weather Research Forecast); and (3) a physically-based, rainfall-induced landslide prediction model SLIDE (SLope-Infiltration-Distributed Equilibrium). The system utilizes the modified physical model to calculate a Factor of Safety (FS) that accounts for the contribution of rainfall infiltration and partial saturation to the shear strength of the soil in topographically complex terrains. The system's prediction performance has been evaluated using a local landslide inventory. In Java Island, Indonesia, evaluation of SLIDE modeling results against local news reports shows that the system successfully predicted landslides in correspondence to the time of occurrence of the real landslide events. The SLIDE model was further tested in Honduras, where Hurricane Mitch triggered widespread landslides in 1998. Results show that, within the approximately 1,200-square-kilometer study areas, the hit rates reached as high as 78% and 75%, while the error indices were 35% and 49%.
    Despite positive model performance, the SLIDE model is limited in the early warning system by several assumptions, including the use of general parameter calibration rather than in situ tests and the neglect of geologic information. Advantages and limitations of this model will be discussed with respect to future applications of landslide assessment and prediction over large scales. In conclusion, the integration of spatially distributed remote sensing precipitation products, in-situ datasets and physical models in this prototype system enables us to further develop a regional early warning tool in the future for forecasting storm-induced landslides.
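
    The factor of safety in such slope-stability schemes is typically a variant of the infinite-slope form FS = (c' + (sigma - u) tan(phi)) / tau, with the pore pressure u raised by rainfall infiltration. The generic form and all soil parameters below are assumptions for illustration, not the SLIDE equations:

```python
import math

def factor_of_safety(slope_deg, soil_depth, wetness,
                     cohesion=5000.0, phi_deg=30.0,
                     gamma_s=18000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety (generic textbook form).

    slope_deg: slope angle; soil_depth: vertical soil depth (m);
    wetness: saturated fraction of the soil column (0..1).
    cohesion (Pa), friction angle and unit weights (N/m^3) are assumed.
    """
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    sigma = gamma_s * soil_depth * math.cos(theta) ** 2         # normal stress
    u = gamma_w * wetness * soil_depth * math.cos(theta) ** 2   # pore pressure
    tau = gamma_s * soil_depth * math.sin(theta) * math.cos(theta)
    return (cohesion + (sigma - u) * math.tan(phi)) / tau

fs_dry = factor_of_safety(35.0, 2.0, 0.0)   # dry column: stable (FS > 1)
fs_wet = factor_of_safety(35.0, 2.0, 1.0)   # saturated column: FS drops below 1
```

    Driving `wetness` with modeled infiltration from satellite rainfall is, in spirit, how such systems turn precipitation into a time-varying FS map.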

  2. Predicting nonstationary flood frequencies: Evidence supports an updated stationarity thesis in the United States

    NASA Astrophysics Data System (ADS)

    Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.

    2017-07-01

    Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
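
    Fitting an LPIII distribution amounts to fitting a Pearson Type III distribution to the base-10 logarithms of the annual peaks. The sketch below uses `scipy.stats.pearson3` maximum likelihood on synthetic data, not the paper's MATLAB program or its Bayesian inference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# synthetic annual peak discharges; a lognormal record is LPIII with zero log-skew
q = rng.lognormal(mean=6.0, sigma=0.5, size=80)

log_q = np.log10(q)
skew, loc, scale = stats.pearson3.fit(log_q)   # LPIII parameters of the logs

# 100-year flood estimate (annual exceedance probability 0.01)
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
```

    The split-sample test of the paper amounts to estimating these parameters on the first half of a record and scoring quantiles like `q100` against the second half.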

  3. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE PAGES

    Izacard, Olivier

    2016-08-02

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET.
    As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
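
    The kappa distribution mentioned above is the simplest of the three representations: in one dimension it tends to the Maxwellian as kappa grows large while adding the super-thermal tails the abstract discusses. The normalization below follows one common 1D convention and is an assumption, not the paper's exact form:

```python
import numpy as np
from scipy.special import gammaln

def maxwellian_1d(v, theta=1.0):
    # 1D Maxwellian with thermal speed theta
    return np.exp(-(v / theta) ** 2) / (np.sqrt(np.pi) * theta)

def kappa_1d(v, kappa, theta=1.0):
    # 1D kappa distribution; tends to the Maxwellian as kappa -> infinity
    norm = np.exp(gammaln(kappa + 1.0) - gammaln(kappa + 0.5))
    norm /= np.sqrt(np.pi * kappa) * theta
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

v = np.linspace(-8.0, 8.0, 4001)
f_m = maxwellian_1d(v)
f_k = kappa_1d(v, kappa=3.0)
```

    At a few thermal speeds the kappa tail exceeds the Maxwellian by many orders of magnitude, which is why quantities like secondary electron emission are so sensitive to the assumed distribution.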

  4. Max Planck and the birth of the quantum hypothesis

    NASA Astrophysics Data System (ADS)

    Nauenberg, Michael

    2016-09-01

    Based on the functional dependence of entropy on energy, and on Wien's distribution for black-body radiation, Max Planck obtained a formula for this radiation by an interpolation relation that fitted the experimental measurements of thermal radiation at the Physikalisch-Technische Reichsanstalt (PTR) in Berlin in the late 19th century. Surprisingly, his purely phenomenological result turned out to be not just an approximation, as would have been expected, but an exact relation. To obtain a physical interpretation for his formula, Planck then turned to Boltzmann's 1877 paper on the statistical interpretation of entropy, which led him to introduce the fundamental concept of energy discreteness into physics. A novel aspect of our account, missed in previous historical studies of Planck's discovery, is to show that Planck could have found his phenomenological formula partially derived in Boltzmann's paper in terms of a variational parameter. But the dependence of this parameter on temperature is not contained in that paper, and it was first derived by Planck.
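
    The relation between Planck's formula and the Wien distribution he started from is easy to check numerically: the two agree where hc/(lambda*k*T) >> 1 and diverge at long wavelengths, the regime where the PTR measurements exposed Wien's law. The temperature and wavelengths below are illustrative choices:

```python
import numpy as np

H = 6.62607015e-34      # Planck constant (J s)
C = 2.99792458e8        # speed of light (m/s)
K = 1.380649e-23        # Boltzmann constant (J/K)

def planck(lam, t):
    x = H * C / (lam * K * t)
    return (2 * H * C ** 2 / lam ** 5) / np.expm1(x)

def wien(lam, t):
    x = H * C / (lam * K * t)
    return (2 * H * C ** 2 / lam ** 5) * np.exp(-x)

t = 1500.0              # a typical blackbody temperature (illustrative)
lam_short = 1e-6        # 1 um: hc/(lambda k T) >> 1, Wien regime
lam_long = 50e-6        # 50 um: long-wavelength regime where Wien fails
ratio_short = wien(lam_short, t) / planck(lam_short, t)
ratio_long = wien(lam_long, t) / planck(lam_long, t)
```

    Analytically the ratio is 1 - exp(-hc/(lambda k T)): essentially 1 at short wavelengths, but far below 1 in the long-wavelength regime.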

  5. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
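
    The storage-distribution step of the TRG idea (classify cells by topographic index within a grid box, then assign each class a storage capacity scaled by a single parameter) can be sketched as follows. The synthetic index values, the linear capacity form and the parameter `b` are illustrative assumptions, not the TRG equations:

```python
import numpy as np

rng = np.random.default_rng(7)
# topographic index values ln(a / tan(beta)) for cells in one large-scale grid
# box (synthetic here; in practice derived from high-resolution topography
# such as HydroSHEDS)
ti = rng.gamma(3.0, 2.0, 10000)

n_classes = 20
edges = np.linspace(ti.min(), ti.max(), n_classes + 1)
class_frac = np.histogram(ti, bins=edges)[0] / ti.size   # area fraction per class
class_mid = 0.5 * (edges[:-1] + edges[1:])

# storage capacity grows with topographic index across its range, scaled by a
# single calibration parameter b (assumed linear form, for illustration only)
b = 0.5
capacity = b * (class_mid - ti.min())                    # storage per class (mm)
mean_capacity = float(np.sum(class_frac * capacity))     # grid-box average
```

    Because the capacity range scales with the spread of the topographic index, each grid box gets its own storage-distribution curve from topography with only one calibrated parameter.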

  6. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2010-09-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinoza, I; Peschke, P; Karger, C

    Purpose: In radiotherapy, it is important to predict the response of tumour to irradiation prior to the treatment. Mathematical modelling of tumour control probability (TCP) based on the dose distribution, medical imaging and other biological information may help to improve this prediction and to optimize the treatment plan. The aim of this work is to develop an image based 3D multiscale radiobiological model, which describes the growth and the response to radiotherapy of hypoxic tumors. Methods: The computer model is based on voxels, containing tumour, normal (including capillary) and dead cells. Killing of tumour cells due to irradiation is calculated by the Linear Quadratic Model (extended for hypoxia), and the proliferation and resorption of cells are modelled by exponential laws. The initial shape of the tumours is taken from CT images and the initial vascular and cell density information from PET and/or MR images. Including the fractionation regime and the physical dose distribution of the radiation treatment, the model simulates the spatial-temporal evolution of the tumor. Additionally, the dose distribution may be biologically optimized. Results: The model describes the appearance of hypoxia during tumour growth and the reoxygenation processes during radiotherapy. Among other parameters, the TCP is calculated for different dose distributions. The results are in accordance with published results. Conclusion: The simulation model may contribute to the understanding of the influence of biological parameters on tumor response during treatment, and specifically on TCP. It may be used to implement dose-painting approaches. Experimental and clinical validation is needed. This study is supported by a grant from the Ministry of Education of Chile, Programa Mece Educacion Superior (2)
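
    The cell-killing step of such voxel models is typically the Linear Quadratic survival fraction combined with Poisson statistics for tumour control. Treating hypoxia as a simple dose-modifying factor, as below, is a crude stand-in for the model's extended LQ formulation, and all parameter values are assumed for illustration:

```python
import numpy as np

def survival_fraction(dose_per_fx, n_fx, alpha=0.3, beta=0.03, omf=1.0):
    """LQ cell survival after n_fx fractions of dose_per_fx (Gy).

    omf: oxygen modification factor (1 for well-oxygenated cells, < 1 for
    hypoxic, more radioresistant cells). alpha, beta are assumed values.
    """
    d = dose_per_fx * omf
    return np.exp(-n_fx * (alpha * d + beta * d ** 2))

def tcp(n_clonogens, dose_per_fx, n_fx, **kw):
    # Poisson TCP: probability that no clonogenic cell survives treatment
    return np.exp(-n_clonogens * survival_fraction(dose_per_fx, n_fx, **kw))

tcp_oxic = tcp(1e7, 2.0, 30)                # 30 x 2 Gy, well-oxygenated voxel
tcp_hypoxic = tcp(1e7, 2.0, 30, omf=0.5)    # same schedule, hypoxic voxel
```

    The large TCP gap between the two voxels under the same schedule is what motivates biologically optimized (dose-painting) distributions.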

  8. Exploring the role of movement in determining the global distribution of marine biomass using a coupled hydrodynamic - Size-based ecosystem model

    NASA Astrophysics Data System (ADS)

    Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.

    2015-11-01

    Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼2.84 × 10⁹ tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters, and further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented the viable range of large predators is greatly increased, and the stunted biomass spectra characterizing large ocean regions in simulations without movement are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.

  9. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance and Hurst coefficient of the model space to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.

  10. Physical and geometrical parameters of VCBS XIII: HIP 105947

    NASA Astrophysics Data System (ADS)

    Gumaan Masda, Suhail; Al-Wardat, Mashhoor Ahmed; Pathan, Jiyaulla Khan Moula Khan

    2018-06-01

    The best physical and geometrical parameters of the main-sequence close visual binary system (CVBS) HIP 105947 are presented. These parameters have been determined using Al-Wardat’s complex method for analyzing CVBSs, which constructs a synthetic spectral energy distribution (SED) for the entire binary system from individual SEDs of each component star. The model atmospheres are in turn built using the Kurucz (ATLAS9) line-blanketed plane-parallel models. At the same time, the orbital parameters for the system are calculated using Tokovinin’s dynamical method for constructing the best orbits of an interferometric binary system. Moreover, the mass-sum of the components, as well as the Δθ and Δρ residuals for the system, are presented. The combination of Al-Wardat’s and Tokovinin’s methods yields the best estimates of the physical and geometrical parameters. The positions of the components in the system on the evolutionary tracks and isochrones are plotted, and the formation and evolution of the system are discussed.

  11. Sensitivity of the normalized difference vegetation index to subpixel canopy cover, soil albedo, and pixel scale

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.

    1990-01-01

    An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
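NDVI itself is a simple band ratio, and the subpixel behaviour described above can be illustrated with a linear two-component mixture of canopy and soil reflectances. The reflectance values below are illustrative assumptions, not the stochastic-geometric model of the record:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red)

def mixed_pixel_ndvi(f_canopy, canopy=(0.05, 0.45), soil=(0.25, 0.30)):
    """NDVI of a pixel treated as a linear mixture of canopy and soil.

    canopy and soil are (red, nir) reflectance pairs; the numbers are
    illustrative. This reproduces the qualitative behaviour in the
    record: NDVI rises with subpixel canopy fraction and falls as the
    soil background brightens.
    """
    red = f_canopy * canopy[0] + (1 - f_canopy) * soil[0]
    nir = f_canopy * canopy[1] + (1 - f_canopy) * soil[1]
    return ndvi(nir, red)
```

For example, `mixed_pixel_ndvi(0.9)` is markedly higher than `mixed_pixel_ndvi(0.1)`, and a brighter soil pair lowers the index at fixed canopy cover.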

  12. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2017-12-01

    Urbanization has been the world development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but quantifying this effect still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and informs parameter optimization. Based on the above analysis, the model is set up in the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them when the LUC types change. The model parameters in the Songmushan watershed are regionalized to the other Pearl River Delta watersheds based on their LUC types.
A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood response changes due to urbanization, and the results show that urbanization has a large impact on watershed flood responses. The peak flow increased severalfold after urbanization, which is much higher than previously reported.
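Records 1 and 12 both calibrate Liuxihe model parameters with a PSO variant that uses a linearly decreasing inertia weight. A generic sketch of that strategy on a toy objective is given below; the swarm size, bounds, coefficient values and test function are illustrative assumptions (the arccosine acceleration-coefficient strategy of record 1 is not reproduced here, fixed coefficients are used instead):

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f over box bounds with particle swarm optimization.

    The inertia weight decreases linearly from w_max to w_min over
    the iterations, as in the linearly-decreasing-inertia strategy
    described in the records (all numeric settings are illustrative).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / (n_iter - 1)  # linear decrease
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep particles inside the parameter bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy use: fit two parameters of a quadratic "model error" surface.
best, err = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                         bounds=[(-5, 5), (-5, 5)])
```

In a real calibration, `f` would wrap a full model run and return an objective such as the error between simulated and observed hydrographs.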

  13. Electron Impact Multiple Ionization Cross Sections for Solar Physics

    NASA Astrophysics Data System (ADS)

    Hahn, M.; Savin, D. W.; Mueller, A.

    2017-12-01

    We have compiled a set of electron-impact multiple ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature, as in solar flares or in nanoflare coronal heating. EIMI is also likely to be significant when the electron energy distribution is non-thermal, such as if the electrons follow a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross section data. Based on this comparison, we have interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for thousands of EIMI cross sections. We also highlight some outstanding issues that remain to be resolved.

  14. Spectral BRDF-based determination of proper measurement geometries to characterize color shift of special effect coatings.

    PubMed

    Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa

    2013-02-01

    A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.

  15. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Understanding scaling is therefore a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations to discover relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects.
The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved the students' ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
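The Green-Ampt model used at point scale in the record above yields cumulative infiltration from an implicit equation. A minimal sketch with illustrative soil values (K, suction head and moisture deficit below are assumptions, not the study's parameters):

```python
import math

def green_ampt_cumulative(t_hr, K=1.0, psi=11.0, dtheta=0.3, tol=1e-8):
    """Cumulative infiltration F(t) [cm] under ponded conditions.

    Solves the implicit Green-Ampt equation
        F = K*t + psi*dtheta * ln(1 + F / (psi*dtheta))
    by fixed-point iteration (a contraction, so it converges).
    K [cm/h], psi [cm], dtheta [-] are illustrative soil values.
    """
    if t_hr <= 0:
        return 0.0
    s = psi * dtheta
    F = K * t_hr  # initial guess: gravity-only infiltration
    while True:
        F_new = K * t_hr + s * math.log(1 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

def green_ampt_rate(t_hr, K=1.0, psi=11.0, dtheta=0.3):
    """Infiltration capacity f(t) = K * (1 + psi*dtheta / F(t))."""
    F = green_ampt_cumulative(t_hr, K, psi, dtheta)
    return K * (1 + psi * dtheta / F)
```

The rate decays from very high early values toward the saturated conductivity K as the wetting front deepens, the behaviour a grid-cell storage model must approximate in aggregate.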

  16. A glacier runoff extension to the Precipitation Runoff Modeling System

    Treesearch

    A. E. Van Beusekom; R. J. Viger

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...

  17. The distribution of density in supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Hopkins, Philip F.

    2017-11-01

    We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ˜ M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
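The density variance-Mach number relation mentioned in the record is commonly written σ_s² = ln(1 + b²M²) for s = ln(ρ/ρ₀). A sketch of that baseline lognormal description, from which the shock-based model measures intermittent deviations (b = 0.4 is an illustrative forcing parameter, not a value from the paper):

```python
import math

def sigma_s_squared(mach, b=0.4):
    """Variance of s = ln(rho/rho0) from the standard density
    variance - Mach number relation, sigma_s^2 = ln(1 + b^2 M^2).
    b ~ 0.4 is a typical mixed-forcing value (illustrative)."""
    return math.log(1 + (b * mach) ** 2)

def lognormal_density_pdf(s, mach, b=0.4):
    """Baseline lognormal PDF of s; mass conservation fixes the
    mean at s0 = -sigma^2/2. The record's shock-based model
    describes deviations (intermittency) from this baseline."""
    var = sigma_s_squared(mach, b)
    s0 = -var / 2
    return math.exp(-(s - s0) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```

The variance grows with Mach number, so highly supersonic turbulence develops a much broader spread of log-densities.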

  18. Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data

    NASA Astrophysics Data System (ADS)

    Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan

    2010-05-01

    There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration, and are often tied to the epoch for which they were calibrated. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models largely independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations often rely on site-specific experiments, and in other cases parameters cannot be measured directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some components of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for parameters, and that results in distributions of parameters, which can be used to infer uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data-gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies, and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated numbers of fires and vegetation moisture dynamics.

  19. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
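The ensemble Kalman filter update mentioned at the end of the record can be sketched for a directly observed scalar state. This is a textbook perturbed-observation EnKF step with illustrative numbers, not the presentation's implementation:

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, seed=0):
    """One ensemble Kalman filter analysis step for a scalar state
    observed directly (H = identity). Each member is nudged toward
    a perturbed observation, with the Kalman gain estimated from
    the ensemble spread."""
    rng = random.Random(seed)
    var_f = statistics.variance(ensemble)   # forecast (prior) variance
    gain = var_f / (var_f + obs_var)        # Kalman gain
    return [x + gain * (obs + rng.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

# Prior ensemble far from the observation; the analysis pulls it in
# and shrinks its spread.
_rng = random.Random(1)
prior = [_rng.gauss(0.0, 1.0) for _ in range(200)]
posterior = enkf_update(prior, obs=3.0, obs_var=0.5)
```

With prior variance near 1 and observation variance 0.5, the gain is about 2/3, so the posterior mean sits roughly two-thirds of the way from the prior mean to the observation.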

  20. Monitoring and modeling as a continuing learning process: the use of hydrological models in a general probabilistic framework.

    NASA Astrophysics Data System (ADS)

    Baroni, G.; Gräff, T.; Reinstorf, F.; Oswald, S. E.

    2012-04-01

    Nowadays, uncertainty and sensitivity analysis are considered basic tools for the assessment of hydrological models and the evaluation of the most important sources of uncertainty. In this context, several methods have been developed and applied in different hydrological conditions over the last decades. However, in most cases, studies have investigated mainly the influence of parameter uncertainty on the simulated outputs, and few approaches have also tried to consider other sources of uncertainty, i.e. input and model structure. Moreover, several constraints arise when spatially distributed parameters are involved. To overcome these limitations, a general probabilistic framework based on Monte Carlo simulations and the Sobol method has been proposed. In this study, the general probabilistic framework was applied at field scale using a 1D physically based hydrological model (SWAP). Furthermore, the framework was extended to catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are applied at two different experimental sites in Germany: a relatively flat cropped field close to Potsdam (Brandenburg) and a small mountainous catchment with agricultural land use (Schaefertal, Harz Mountains). For both cases, input and parameters are considered the major sources of uncertainty. Evaluation of the models was based on soil moisture measured at plot scale at different depths and, for the catchment site, also on daily discharge values. The study shows how the framework can take into account all the various sources of uncertainty, i.e. input data, parameters (either in scalar or spatially distributed form) and model structures. The framework can be used in a loop in order to optimize further monitoring activities used to improve the performance of the model. In the particular applications, the results show how the sources of uncertainty are specific to each process considered.
The influence of the input data, as well as the presence of compensating errors, becomes clear from the different processes simulated.
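The Sobol method underlying the framework above attributes output variance to individual inputs. A minimal pick-freeze Monte Carlo estimator of a first-order Sobol index on a toy additive model is sketched below; this is a textbook estimator on assumed uniform inputs, not the SWAP/SHETRAN workflow itself:

```python
import random

def sobol_first_order(model, dim, idx, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol
    index S_idx = Var(E[Y | X_idx]) / Var(Y), with all inputs
    assumed uniform on [0, 1]."""
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(dim)]
        b = [rng.random() for _ in range(dim)]
        b[idx] = a[idx]                 # freeze the input of interest
        ya.append(model(a))
        yab.append(model(b))
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum(y1 * y2 for y1, y2 in zip(ya, yab)) / n - mean ** 2
    return cov / var

# Toy model: the first input dominates the output variance.
toy = lambda x: x[0] + 0.1 * x[1]
s0 = sobol_first_order(toy, dim=2, idx=0)
s1 = sobol_first_order(toy, dim=2, idx=1)
```

For this toy model the analytic indices are about 0.99 and 0.01, so the estimator clearly separates the dominant input from the negligible one.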

  1. Sensitivity analysis with the regional climate model COSMO-CLM over the CORDEX-MENA domain

    NASA Astrophysics Data System (ADS)

    Bucchignani, E.; Cattaneo, L.; Panitz, H.-J.; Mercogliano, P.

    2016-02-01

    The results of a sensitivity study based on ERA-Interim driven COSMO-CLM simulations over the Middle East-North Africa (CORDEX-MENA) domain are presented. All simulations were performed at 0.44° spatial resolution. The purpose of this study was to ascertain model performance with respect to changes in physical and tuning parameters, which are mainly related to surface, convection, radiation and cloud parameterizations. Evaluation was performed for the whole CORDEX-MENA region and six sub-regions, comparing a set of 26 COSMO-CLM runs against a combination of available ground observations, satellite products and reanalysis data to assess temperature, precipitation, cloud cover and mean sea level pressure. The model proved to be very sensitive to changes in physical parameters. The optimized configuration allows COSMO-CLM to improve the simulation of the main climate features of this area. Its main characteristics are a new parameterization of albedo, based on Moderate Resolution Imaging Spectroradiometer data, and a new parameterization of aerosol, based on NASA-GISS AOD distributions. When this configuration is applied, Mean Absolute Error values for the considered variables are as follows: about 1.2 °C for temperature, about 15 mm/month for precipitation, about 9 % for total cloud cover, and about 0.6 hPa for mean sea level pressure.

  2. Physics textbooks from the viewpoint of network structures

    NASA Astrophysics Data System (ADS)

    Králiková, Petra; Teleki, Aba

    2017-01-01

    We can observe self-organized networks all around us. These networks are, in general, scale-invariant networks described by the Bianconi-Barabasi model. Self-organized networks (networks formed naturally when feedback acts on the system) show a certain universality. In simplified models, these networks have a scale-invariant distribution (Pareto distribution of type I) whose parameter α takes values between 2 and 5. Textbooks are extremely important in the learning process, and for this reason we studied a physics textbook at the level of sentences and physics terms (a bipartite network). The nodes represent physics terms, sentences, pictures and tables, connected by links (by physics terms and transitional words and phrases). We suppose that the learning process is more robust and proceeds faster and more easily if the physics textbook has a structure similar to the structures of self-organized networks.
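The Pareto type-I distribution named in the record has a closed-form inverse CDF, so it can be sampled directly. A minimal inverse-transform sketch (the shape value used in the check is illustrative, chosen within the 2-5 range the record cites):

```python
import random

def pareto_sample(alpha, x_min=1.0, n=1, seed=None):
    """Draw n samples from a Pareto type-I distribution with shape
    alpha and scale x_min by inverse-transform sampling:
    F(x) = 1 - (x_min / x)^alpha  =>  x = x_min * (1 - U)^(-1/alpha).
    """
    rng = random.Random(seed)
    return [x_min * (1 - rng.random()) ** (-1 / alpha) for _ in range(n)]
```

For alpha > 1 the mean is alpha * x_min / (alpha - 1), so a large sample at alpha = 3 should average close to 1.5.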

  3. Social judgment theory based model on opinion formation, polarization and evolution

    NASA Astrophysics Data System (ADS)

    Chau, H. F.; Wong, C. Y.; Chow, F. K.; Fung, Chi-Hang Fred

    2014-12-01

    The dynamical origin of opinion polarization in the real world is an interesting topic that physical scientists may help to understand. To properly model the dynamics, the theory must be fully compatible with findings by social psychologists on microscopic opinion change. Here we introduce a generic model of opinion formation with homogeneous agents based on the well-known social judgment theory in social psychology by extending a similar model proposed by Jager and Amblard. The agents’ opinions will eventually cluster around extreme and/or moderate opinions forming three phases in a two-dimensional parameter space that describes the microscopic opinion response of the agents. The dynamics of this model can be qualitatively understood by mean-field analysis. More importantly, first-order phase transition in opinion distribution is observed by evolving the system under a slow change in the system parameters, showing that punctuated equilibria in public opinion can occur even in a fully connected social network.
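The three opinion responses from social judgment theory (assimilation, non-commitment, contrast) can be sketched as a pairwise update rule in the style of the Jager-Amblard model the record extends. All numeric parameters below (latitudes, step size, population, opinion bounds) are illustrative assumptions:

```python
import random

def sjt_update(x_i, x_j, accept=0.2, reject=0.8, mu=0.1):
    """One pairwise opinion update in a Jager-Amblard-style social
    judgment model. Opinions closer than the latitude of acceptance
    attract; those beyond the latitude of rejection repel; in
    between, nothing moves. Opinions are clipped to [-1, 1]."""
    d = x_j - x_i
    if abs(d) < accept:
        x_new = x_i + mu * d        # assimilation
    elif abs(d) > reject:
        x_new = x_i - mu * d        # contrast
    else:
        x_new = x_i                 # non-commitment
    return max(-1.0, min(1.0, x_new))

def simulate(n=100, steps=20000, seed=0, **kw):
    """Random pairwise interactions on a fully connected population."""
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            x[i] = sjt_update(x[i], x[j], **kw)
    return x
```

Depending on the two latitudes, runs of this kind end up clustered around moderate and/or extreme opinions, the phases the record describes.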

  4. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher-dimensional spaces efficiently. PMID:26543580

  5. Data catalog for JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC)

    NASA Technical Reports Server (NTRS)

    Digby, Susan

    1995-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) archive at the Jet Propulsion Laboratory contains satellite data sets and ancillary in-situ data for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Geophysical parameters available from the archive include sea-surface height, surface-wind vector, surface-wind speed, surface-wind stress vector, sea-surface temperature, atmospheric liquid water, integrated water vapor, phytoplankton pigment concentration, heat flux, and in-situ data. PO.DAAC is an element of the Earth Observing System Data and Information System and is the United States distribution site for TOPEX/POSEIDON data and metadata.

  6. Generation Mechanism of Nonlinear Rayleigh Surface Waves for Randomly Distributed Surface Micro-Cracks.

    PubMed

    Ding, Xiangyan; Li, Feilong; Zhao, Youxuan; Xu, Yongmei; Hu, Ning; Cao, Peng; Deng, Mingxi

    2018-04-23

    This paper investigates the propagation of Rayleigh surface waves in structures with randomly distributed surface micro-cracks using numerical simulations. The results revealed a significant ultrasonic nonlinear effect caused by the surface micro-cracks, which is mainly represented by a second harmonic with even more distinct third/quadruple harmonics. Based on statistical analysis from the numerous results of random micro-crack models, it is clearly found that the acoustic nonlinear parameter increases linearly with micro-crack density, the proportion of surface cracks, the size of micro-crack zone, and the excitation frequency. This study theoretically reveals that nonlinear Rayleigh surface waves are feasible for use in quantitatively identifying the physical characteristics of surface micro-cracks in structures.
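The acoustic nonlinear parameter discussed in the record is typically extracted from the first and second harmonic amplitudes of a received waveform, often as the relative quantity β' = A₂/A₁². A minimal sketch using a single-bin DFT on a synthetic distorted wave (the 5 % second-harmonic content below is an illustrative test signal, not the paper's data):

```python
import cmath
import math

def harmonic_amplitude(signal, k):
    """Amplitude of the k-th harmonic of a waveform sampled over an
    integer number of fundamental periods (single-bin DFT)."""
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
            for i, x in enumerate(signal))
    return 2 * abs(s) / n

def relative_nonlinear_parameter(signal):
    """beta' = A2 / A1**2, the relative acoustic nonlinearity
    parameter commonly extracted in second-harmonic measurements."""
    a1 = harmonic_amplitude(signal, 1)
    a2 = harmonic_amplitude(signal, 2)
    return a2 / a1 ** 2

# Synthetic distorted wave: unit fundamental plus a weak second harmonic,
# sampled over exactly one fundamental period.
n = 1024
wave = [math.sin(2 * math.pi * i / n) + 0.05 * math.sin(4 * math.pi * i / n)
        for i in range(n)]
```

For this synthetic wave the extracted β' equals the injected second-harmonic fraction, 0.05; in the record's simulations this quantity grows linearly with micro-crack density.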

  7. Generation Mechanism of Nonlinear Rayleigh Surface Waves for Randomly Distributed Surface Micro-Cracks

    PubMed Central

    Ding, Xiangyan; Li, Feilong; Xu, Yongmei; Cao, Peng; Deng, Mingxi

    2018-01-01

    This paper investigates the propagation of Rayleigh surface waves in structures with randomly distributed surface micro-cracks using numerical simulations. The results revealed a significant ultrasonic nonlinear effect caused by the surface micro-cracks, which is mainly represented by a second harmonic with even more distinct third/quadruple harmonics. Based on statistical analysis from the numerous results of random micro-crack models, it is clearly found that the acoustic nonlinear parameter increases linearly with micro-crack density, the proportion of surface cracks, the size of micro-crack zone, and the excitation frequency. This study theoretically reveals that nonlinear Rayleigh surface waves are feasible for use in quantitatively identifying the physical characteristics of surface micro-cracks in structures. PMID:29690580

  8. Progress in Application of Generalized Wigner Distribution to Growth and Other Problems

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto; Gonzalez, Diego Luis

    We recap the use of the (single-parameter) Generalized Wigner Distribution (GWD) to analyze capture-zone distributions associated with submonolayer epitaxial growth. We discuss recent applications to physical systems, as well as key simulations. We pay particular attention to how this method compares with other methods to assess the critical nucleus size characterizing growth. The following talk discusses a particular case when special insight is needed to reconcile the various methods. We discuss improvements that can be achieved by going to a 2-parameter fragmentation approach. At a much larger scale we have applied this approach to various distributions in socio-political phenomena (areas of secondary administrative units [e.g., counties] and distributions of subway stations). Work at UMD supported by NSF CHE 13-05892.

  9. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study, whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99 % quantiles of some transfer functions.

  10. Toward a closer integration of magnetospheric research: Magnetospheric currents inferred from ground-based magnetic data

    NASA Astrophysics Data System (ADS)

    Akasofu, S.-I.; Kamide, Y.

    1998-07-01

    A new approach is needed to advance magnetospheric physics in the future and to achieve a much closer integration than in the past among satellite-based researchers, ground-based researchers, and theorists/modelers. Specifically, we must find efficient ways to combine two-dimensional ground-based data and single-point satellite-based data to infer three-dimensional aspects of magnetospheric disturbances. For this particular integration purpose, we propose a new project. It is designed to determine the currents on the magnetospheric equatorial plane from the ionospheric current distribution, which has become available by inverting ground-based magnetic data from an extensive, systematic network of observations, combined with ground-based radar measurements of ionospheric parameters and satellite observations of auroras, electric fields, and currents. The inversion method is based on the KRM/AMIE algorithms. In the first part of the paper, we extensively review the reliability and accuracy of the KRM and AMIE algorithms and conclude that the ionospheric quantities thus obtained are accurate enough for the next step. In the second part, the ionospheric current distribution thus obtained is projected onto the equatorial plane. This process requires close cooperation with modelers in determining an accurate configuration of the magnetospheric field lines. If we succeed in this projection, we should be able to study the changing distribution of the currents in a vast region of the magnetospheric equatorial plane for extended periods with a time resolution of about 5 min. This process requires a model of the magnetosphere for the different phases of the magnetospheric substorm. Satellite-based observations are needed to calibrate the projection results. Agreements and disagreements thus obtained will be crucial for theoretical studies of magnetospheric plasma convection and dynamics, particularly in studying substorms. Nothing is easy in these procedures.
However, unless we can overcome the associated difficulties, we may not be able to make distinct progress. We believe that the proposed project is one way to draw the three groups closer together in advancing magnetospheric physics in the future. It is important to note that the proposed project has become possible because ground-based space physics has made a major advance during the last decade.

  11. Experimental study and simulation of space charge stimulated discharge

    NASA Astrophysics Data System (ADS)

    Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.

    2002-11-01

    The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.

  12. PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Xie Zongxia; Hu Qinghua

The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density to describe the granules. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is a dominating parameter in classifying the granules, and two families of granules are derived: small granules with diameters smaller than 1.44″, and large granules with diameters larger than 1.44″. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and Doppler velocity in the upward direction of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average of continuum intensity increases with their diameters, while the averages of Doppler velocity, transverse, absolute longitudinal, and unsigned magnetic flux density decrease with their diameters.
However, the mean properties of large granules are stable; (4) the intensity distributions of all granules and of small granules do not follow a Gaussian distribution, while that of large granules approximately follows a normal distribution with a peak at 1.04 I₀.
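The diameter-dominated two-family split described above can be illustrated with a minimal one-dimensional 2-means clustering. This is a sketch only: the synthetic diameters, the seed, and the fixed k=2 are assumptions for illustration, not the paper's X-means/one-rule pipeline.

```python
import random

def two_means_1d(values, iters=50):
    """Minimal 1-D k-means with k=2: alternately assign each value to the
    nearer centre and recompute the centres as group means."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted((c1, c2))

random.seed(0)
# Synthetic diameters: a "small-granule" and a "large-granule" population.
diams = [random.gauss(1.0, 0.15) for _ in range(500)] + \
        [random.gauss(2.0, 0.25) for _ in range(500)]
centres = two_means_1d(diams)
boundary = sum(centres) / 2   # decision threshold between the two families
print(centres, boundary)
```

With well-separated populations the recovered boundary sits between the two cluster centres, playing the role of the 1.44″ threshold found in the paper.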

  13. TEA CO2 Laser Simulator: A software tool to predict the output pulse characteristics of the TEA CO2 laser

    NASA Astrophysics Data System (ADS)

    Abdul Ghani, B.

    2005-09-01

    "TEA CO2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of the gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summary: Title of program: TEA_CO2. Catalogue identifier: ADVW. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVW. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: P.IV DELL PC. Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division. Operating system: MS-Windows 9x, 2000, XP. Programming language: Delphi 6.0. No. of lines in distributed program, including test data, etc.: 47 315. No. of bytes in distributed program, including test data, etc.: 7 681 109. Distribution format: tar.gz. Classification: 15 Laser Physics. Nature of the physical problem: "TEA CO2 Laser Simulator" predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO2-N2-He gas mixture. Method of solution: A six-temperature model for the dynamic emission of the TEA CO2 laser has been adapted in order to predict the parameters of the laser output pulses. The laser electrical pumping was simulated using two approaches: an empirical function (Eq. (8)) and a differential equation (Eq. (9)). Typical running time: The program's running time depends mainly on the integration interval and step; for a 4 μs period and a 0.001 μs integration step (the default values used in the program), the running time is about 4 seconds.
Restrictions on the complexity: Using a very small integration step may cause the program to stop due to the huge number of calculation points and a paging file size of the MS-Windows virtual memory that is too small. In such a case, it is recommended to enlarge the paging file to an appropriate size, or to use a larger integration step.

  14. DASTCOM5: A Portable and Current Database of Asteroid and Comet Orbit Solutions

    NASA Astrophysics Data System (ADS)

    Giorgini, Jon D.; Chamberlin, Alan B.

    2014-11-01

    A portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, with the software to access it, is available for download (ftp://ssd.jpl.nasa.gov/pub/xfr/dastcom5.zip; unzip -ao dastcom5.zip). DASTCOM5 contains the latest heliocentric IAU76/J2000 ecliptic osculating orbital elements for all known asteroids and comets as determined by a least-squares best-fit to ground-based optical, spacecraft, and radar astrometric measurements. Other physical, dynamical, and covariance parameters are included when known. A total of 142 parameters per object are supported within DASTCOM5. This information is suitable for initializing high-precision numerical integrations, assessing orbit geometry, computing trajectory uncertainties and visual magnitudes, and summarizing physical characteristics of the body. The DASTCOM5 distribution is updated as often as hourly to include newly discovered objects or orbit solution updates. It includes an ASCII index of objects that supports look-ups based on name, current or past designation, SPK ID, MPC packed-designations, or record number. DASTCOM5 is the database used by the NASA/JPL Horizons ephemeris system. It is a subset exported from a larger MySQL-based relational Small-Body Database ("SBDB") maintained at JPL. The DASTCOM5 distribution is intended for programmers comfortable with UNIX/LINUX/MacOSX command-line usage who need to develop stand-alone applications. The goal of the implementation is to provide small, fast, portable, and flexible programmatic access to JPL comet and asteroid orbit solutions. The supplied software library, examples, and application programs have been verified under gfortran, Lahey, Intel, and Sun 32/64-bit Linux/UNIX FORTRAN compilers. A command-line tool ("dxlook") is provided to enable database access from shell or script environments.

  15. Evaluation of water quality and stability in the drinking water distribution network in the Azogues city, Ecuador.

    PubMed

    García-Ávila, Fernando; Ramos-Fernández, Lía; Pauta, Damián; Quezada, Diego

    2018-06-01

    This study presents physical-chemical parameters with the objective of evaluating and analyzing the drinking water quality in the city of Azogues by applying the water quality index (WQI), and of investigating water stability in the distribution network using corrosion indexes. Thirty samples were collected monthly for six months throughout the drinking water distribution network; turbidity, temperature, electric conductivity, pH, total dissolved solids, total hardness, calcium, magnesium, alkalinity, chlorides, nitrates, sulfates and phosphates were determined; the physical-chemical parameters were measured using standard methods. The processed data revealed that the average values of the LSI, RSI and PSI were 0.5 (±0.34), 6.76 (±0.6) and 6.50 (±0.99), respectively. The WQI calculation indicated that 100% of the samples are considered excellent quality water. The Langelier, Ryznar and Puckorius indexes showed that the drinking water in Azogues is corrosive. The quality of the drinking water according to the WQI is in the good and excellent categories.
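The stability indexes reported here follow simple defining formulas: LSI = pH - pHs, RSI = 2·pHs - pH, and PSI = 2·pHs - pH_eq. A minimal sketch, in which the calcium-carbonate saturation pH `phs` and equilibrium pH `ph_eq` are taken as given inputs (computing them from hardness, alkalinity and temperature needs empirical constants not reproduced here) and the numeric values are purely illustrative, not the Azogues measurements:

```python
def corrosion_indices(ph, phs, ph_eq=None):
    """Langelier (LSI), Ryznar (RSI) and Puckorius (PSI) stability indices
    from measured pH, saturation pH (phs) and equilibrium pH (ph_eq)."""
    lsi = ph - phs           # LSI > 0: scale-forming; LSI < 0: corrosive tendency
    rsi = 2 * phs - ph       # RSI above ~6.5-7 indicates a corrosive tendency
    psi = 2 * phs - ph_eq if ph_eq is not None else None
    return lsi, rsi, psi

# Illustrative values only:
lsi, rsi, psi = corrosion_indices(ph=7.6, phs=7.1, ph_eq=7.3)
print(lsi, rsi, psi)
```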

  16. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimation is more accurate; the precision of the mixed-distribution reliability model is thereby greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
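A mixed Weibull model is simply a weighted sum of component Weibull densities. The sketch below evaluates such a mixture and checks numerically that it integrates to one; the two components (an infant-mortality-like mode and a wear-out mode) and their weights are illustrative assumptions, not fitted engine data:

```python
import math

def weibull_pdf(t, shape, scale):
    # Two-parameter Weibull probability density, t > 0.
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def mixed_weibull_pdf(t, comps):
    """Finite Weibull mixture: sum of weight * Weibull(shape, scale).
    Weights must sum to 1; each component models one failure mode."""
    return sum(w * weibull_pdf(t, k, lam) for (w, k, lam) in comps)

comps = [(0.3, 0.8, 200.0),   # early-failure mode (shape < 1)
         (0.7, 3.0, 1500.0)]  # wear-out mode (shape > 1)

# Crude midpoint-rule check that the mixture density integrates to ~1.
dt = 1.0
total = sum(mixed_weibull_pdf(dt * i + dt / 2, comps) * dt for i in range(10000))
print(round(total, 3))
```

Fitting the weights and shape/scale pairs to failure data (e.g. by expectation-maximization) is the step the paper's correlation coefficient optimization addresses; it is not reproduced here.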

  17. Towards a General Theory of Extremes for Observables of Chaotic Dynamical Systems.

    PubMed

    Lucarini, Valerio; Faranda, Davide; Wouters, Jeroen; Kuna, Tobias

    2014-01-01

    In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
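The generalised Pareto distribution (GPD) behaviour described above can be illustrated numerically: the sketch below draws GPD samples by inverting the CDF and recovers the shape and scale by the standard method of moments. The negative shape value echoes the paper's observation that physical observables yield negative shape parameters, but all numbers here are illustrative, not taken from the paper:

```python
import random, statistics

def gpd_sample(n, shape, scale, rng):
    """Draw from the generalised Pareto distribution via inverse CDF:
    x = scale/shape * ((1-u)**(-shape) - 1), for shape != 0."""
    return [scale / shape * ((1 - rng.random()) ** (-shape) - 1) for _ in range(n)]

def gpd_moment_estimates(xs):
    """Method-of-moments estimates (valid for shape < 1/2), using
    mean = scale/(1-shape) and var = mean**2 / (1-2*shape)."""
    m = statistics.fmean(xs)
    v = statistics.variance(xs)
    shape = 0.5 * (1 - m * m / v)
    scale = m * (1 - shape)
    return shape, scale

rng = random.Random(42)
xs = gpd_sample(200_000, shape=-0.1, scale=1.0, rng=rng)
est_shape, est_scale = gpd_moment_estimates(xs)
print(est_shape, est_scale)
```

For shape < 0 the support is bounded at scale/|shape|, consistent with extremes on an attractor of finite dimension.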

  18. Towards a General Theory of Extremes for Observables of Chaotic Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Faranda, Davide; Wouters, Jeroen; Kuna, Tobias

    2014-02-01

    In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.

  19. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
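The OLMC loop described above - draw dispersed physical parameters from probability distributions, propagate each case, and histogram the resulting lifetimes - can be sketched as follows. The `lifetime_model` is a toy stand-in for the long-term orbit propagator, and its functional form, constants, and the distributions used are assumptions for illustration only:

```python
import random, statistics

def lifetime_model(altitude_km, solar_flux):
    """Toy stand-in for an orbit propagator: decay is faster at lower
    insertion altitude and higher solar flux (denser thermosphere).
    Illustrative only; returns a lifetime in notional years."""
    return max(0.0, (altitude_km - 300.0) * 0.05 / (solar_flux / 150.0))

def monte_carlo_lifetimes(n, rng):
    """Monte Carlo engine: sample dispersed inputs, propagate each case."""
    out = []
    for _ in range(n):
        alt = rng.gauss(500.0, 10.0)     # insertion altitude [km], assumed dispersion
        flux = rng.uniform(70.0, 230.0)  # solar flux proxy [sfu], assumed range
        out.append(lifetime_model(alt, flux))
    return out

rng = random.Random(1)
lifetimes = monte_carlo_lifetimes(5000, rng)
req = 5.0  # illustrative minimum-lifetime requirement [years]
p_meet = sum(lt >= req for lt in lifetimes) / len(lifetimes)
print(statistics.median(lifetimes), p_meet)
```

Histogramming `lifetimes` and reading off `p_meet` mirrors how the document proposes to check lifetime requirements; in this toy setup the solar-flux spread dominates the lifetime variation, echoing the paper's conclusion.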

  20. Using large hydrological datasets to create a robust, physically based, spatially distributed model for Great Britain

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley

    2014-05-01

    The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. 
SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.
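The daily-to-hourly disaggregation step can be sketched as mass-conserving scaling of each daily total by the relative shape of an observed hourly record. The profile and totals below are invented, and this does not reproduce SHETRAN's or the UKCP09 dataset's actual procedure:

```python
def disaggregate_daily(daily_total, hourly_profile):
    """Split a daily rainfall total across 24 hours in proportion to an
    observed hourly profile, preserving the daily sum exactly.  Falls back
    to a uniform split on days where the profile is all zero."""
    s = sum(hourly_profile)
    if s == 0:
        return [daily_total / 24.0] * 24
    return [daily_total * h / s for h in hourly_profile]

# Invented hourly shape with a morning and an evening burst:
profile = [0, 0, 0, 0, 0, 1, 3, 6, 4, 2, 1, 0, 0, 0, 0, 0, 0, 2, 4, 2, 0, 0, 0, 0]
hourly = disaggregate_daily(12.0, profile)
print(round(sum(hourly), 6), max(hourly))
```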

  1. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988)JSTPBS0022-471510.1007/BF01016429].
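One common form of the kappa velocity distribution is proportional to (1 + v²/(κθ²))^(-(κ+1)), which tends to the Maxwellian exp(-v²/θ²) as κ → ∞. The sketch below normalises both numerically (rather than quoting closed-form normalisation constants) and compares them at v = θ; the parameter values are illustrative, not the paper's:

```python
import math

def kappa_unnorm(v, kappa, theta):
    """Unnormalised kappa (Lorentzian-like) velocity distribution."""
    return (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def kappa_vs_maxwellian(kappa, theta=1.0, vmax=50.0, n=100_000):
    """Ratio of the numerically normalised kappa density to the numerically
    normalised Maxwellian, evaluated at v = theta."""
    dv = 2 * vmax / n
    vs = [-vmax + dv * (i + 0.5) for i in range(n)]
    z_kappa = sum(kappa_unnorm(v, kappa, theta) for v in vs) * dv
    z_maxw = sum(math.exp(-v * v / (theta * theta)) for v in vs) * dv
    f_kappa = kappa_unnorm(theta, kappa, theta) / z_kappa
    f_maxw = math.exp(-1.0) / z_maxw
    return f_kappa / f_maxw

r3 = kappa_vs_maxwellian(3.0)      # strongly suprathermal: heavy tails
r200 = kappa_vs_maxwellian(200.0)  # large kappa: essentially Maxwellian
print(r3, r200)
```

The heavy power-law tail at small κ is what makes kappa distributions useful empirical fits for space plasma measurements.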

  2. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  3. Distributed sensor architecture for intelligent control that supports quality of control and quality of service.

    PubMed

    Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés

    2015-02-25

    This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support to measure QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems.
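The general Event-Based Control idea behind FSACtrl can be sketched with a send-on-delta controller, which computes a new control action only when the measured error has changed by more than a threshold since the last event. This illustrates EBC in general; the paper's Event Based Quality Integral Cycle, which additionally folds in QoS and QoC measurements, is not reproduced here, and all numbers are invented:

```python
def event_based_run(setpoint, readings, delta, kp):
    """Send-on-delta EBC sketch: trigger a new proportional action only when
    the error has moved by more than `delta`; otherwise hold the last one."""
    events, actions = 0, []
    last_err = None
    for y in readings:
        err = setpoint - y
        if last_err is None or abs(err - last_err) > delta:
            events += 1
            last_err = err
            actions.append(kp * err)     # compute and "send" a new action
        else:
            actions.append(actions[-1])  # no event: hold previous action
    return events, actions

readings = [0.0, 0.1, 0.15, 0.9, 0.95, 1.0, 1.0, 1.0]
events, actions = event_based_run(setpoint=1.0, readings=readings, delta=0.2, kp=2.0)
print(events, actions)
```

Only two events fire for eight samples, which is the point of EBC: control traffic (and hence QoS load on the middleware) scales with signal activity rather than with the sampling rate.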

  4. Distributed Sensor Architecture for Intelligent Control that Supports Quality of Control and Quality of Service

    PubMed Central

    Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés

    2015-01-01

    This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support to measure QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems. PMID:25723145

  5. Spatio-temporal analysis of aftershock sequences in terms of Non Extensive Statistical Physics.

    NASA Astrophysics Data System (ADS)

    Chochlaki, Kalliopi; Vallianatos, Filippos

    2017-04-01

    Earth's seismicity is considered an extremely complicated process in which long-range interactions and fracturing exist (Vallianatos et al., 2016). For this reason, in order to analyze it, we use an innovative methodological approach introduced by Tsallis (Tsallis, 1988; 2009), named Non Extensive Statistical Physics. This approach introduces a generalization of Boltzmann-Gibbs statistical mechanics and is based on the definition of the Tsallis entropy Sq, whose maximization leads to the so-called q-exponential function, the probability distribution function that maximizes Sq. In the present work, we utilize the concepts of Non Extensive Statistical Physics to analyze the spatiotemporal properties of several aftershock sequences. Marekova (Marekova, 2014) suggested that the probability densities of the inter-event distances between successive aftershocks follow a beta distribution. Using the same data set, we analyze the inter-event distance distribution of several aftershock sequences in different geographic regions by calculating the non-extensive parameters that determine the behavior of the system and by fitting the q-exponential function, which expresses the degree of non-extensivity of the investigated system. Furthermore, the inter-event time distribution of the aftershocks as well as the frequency-magnitude distribution has been analyzed. The results support the applicability of Non Extensive Statistical Physics ideas in aftershock sequences, where a strong correlation exists along with memory effects. References: C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487, doi:10.1007/BF01016429. C. Tsallis, Introduction to nonextensive statistical mechanics: Approaching a complex world, 2009, doi:10.1007/978-0-387-85359-8. E. Marekova, Analysis of the spatial distribution between successive earthquakes in aftershocks series, Annals of Geophysics, 57, 5, doi:10.4401/ag-6556, 2014. F. Vallianatos, G. Papadakis, G. Michas, Generalized statistical mechanics approaches to earthquakes and tectonics, Proc. R. Soc. A, 472, 20160497, 2016.
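The q-exponential function used in these fits has a simple closed form: exp_q(x) = (1 + (1-q)x)^(1/(1-q)) for q ≠ 1, reducing to the ordinary exponential as q → 1 and cut off at zero where its base becomes negative. A minimal sketch of the function itself (fitting it to inter-event data is not shown):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential.  For q > 1 and x < 0 it decays as a power law,
    which is how the degree of non-extensivity shows up in aftershock fits."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # Tsallis cutoff: the function is defined as 0 where the base is negative.
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# q -> 1 recovers exp(x); q = 1.5 gives a slower, power-law decay.
print(q_exponential(-2.0, 1.0), q_exponential(-2.0, 1.5))
```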

  6. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
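To make the Rosenbrock test case concrete, here is the simplest member of the sampler family such a package bundles: plain Monte Carlo sampling of a bounded parameter space, keeping the best objective value. This sketch deliberately does not use SPOTPY's actual API; it only illustrates the kind of search the package automates:

```python
import random

def rosenbrock(x, y):
    # Classic banana-shaped test function; global minimum of 0 at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def random_search(obj, bounds, n, rng):
    """Uniform random sampling of the parameter space, tracking the best
    parameter set found - the baseline against which smarter samplers
    (SCE-UA, DREAM, etc.) are usually compared."""
    best_p, best_f = None, float("inf")
    for _ in range(n):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = obj(*p)
        if f < best_f:
            best_p, best_f = p, f
    return best_p, best_f

rng = random.Random(7)
best_p, best_f = random_search(rosenbrock, [(-2, 2), (-2, 2)], 100_000, rng)
print(best_p, best_f)
```

Even 100 000 blind samples only get near the curved valley floor; this inefficiency is exactly why a library offering multiple algorithms and objective functions is useful.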

  7. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  8. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from a single workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies show that the implemented SPOTPY methods can be applied to any model with only a minimal amount of code, while offering the full power of parameter optimization. They further show the benefit of having a single package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved satisfactorily with every algorithm or every objective function. PMID:26680783

  9. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach in comparison to LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
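
The LHS baseline used for comparison can be sketched in a few lines of NumPy (an illustrative implementation, not the study's code): each parameter's range is split into n equal-probability strata, exactly one sample is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions: exactly one sample per equal-probability stratum."""
    # One uniform point in each of the n strata, per dimension...
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # ...then shuffle the strata independently in each dimension.
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(0)
samples = latin_hypercube(1000, 3, rng)  # e.g. 1000 sets of 3 model parameters
print(samples.shape)
```

Scaling each unit-interval column to a parameter's physical range then yields the candidate parameter sets that GLUE evaluates for behavioral status.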

  10. Physics-based Control-oriented Modeling of the Current Profile Evolution in NSTX-Upgrade

    NASA Astrophysics Data System (ADS)

    Ilhan, Zeki; Barton, Justin; Shi, Wenyu; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan

    2013-10-01

    The operational goals for the NSTX-Upgrade device include non-inductive sustainment of high-β plasmas, realization of high-performance equilibrium scenarios with neutral beam heating, and achievement of longer pulse durations. Active feedback control of the current profile is proposed to enable these goals. Motivated by the coupled, nonlinear, multivariable, distributed-parameter plasma dynamics, the first step towards feedback control design is the development of a physics-based, control-oriented model for the current profile evolution in response to non-inductive current drives and heating systems. For this purpose, the nonlinear magnetic-diffusion equation is coupled with empirical models for the electron density, electron temperature, and non-inductive current drives (neutral beams). The resulting first-principles-driven, control-oriented model is tailored to NSTX-U based on PTRANSP predictions. The main objectives and possible challenges associated with the use of the developed model for control design are discussed. This work was supported by PPPL.

  11. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporating strong ground motion and other empirical macroseismic data to improve the Shake Map. 5. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the available inventory of the human-built environment (Loss Mapping). Both Level 0 (similar to the PAGER system of USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1).
    For given basic source parameters the intensity distributions can be computed using: a) regional intensity attenuation relationships, b) intensity correlations with attenuation-relationship-based PGV, PGA and spectral amplitudes and, c) intensity correlations with a synthetic Fourier amplitude spectrum. In Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented and compared with the observed losses. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to the relevant agencies for the planning and coordination of post-earthquake emergency response. The same software can also be used for scenario-earthquake loss estimation and related Monte-Carlo-type simulations.

  12. Increasing precision of turbidity-based suspended sediment concentration and load estimates.

    PubMed

    Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E

    2010-01-01

    Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing the opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC estimates, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically at 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance, and hydrologic variables that explained the variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
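
The core statistical point, that adding a hydrologic covariate to a turbidity-only regression reduces unexplained variance, can be sketched with ordinary least squares on synthetic data (the variables, coefficients and noise levels below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic record: both turbidity and hydrologic stage carry information about SSC.
turbidity = rng.uniform(10, 500, n)
stage = rng.uniform(0.5, 3.0, n)
ssc = 0.8 * turbidity + 40.0 * stage + rng.normal(0, 15, n)

def fit_rss(X, y):
    """Ordinary least squares with intercept; returns the residual sum of squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid)

rss_uni = fit_rss(turbidity[:, None], ssc)                       # turbidity only
rss_multi = fit_rss(np.column_stack([turbidity, stage]), ssc)    # turbidity + stage
print(rss_multi < rss_uni)  # the extra covariate reduces unexplained variance
```

On training data the residual sum of squares can only decrease when a column is added; the study's stronger claim is that covariates measurable remotely and continuously capture most of what the sampled physical properties explain.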

  13. Origins and properties of kappa distributions in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, George

    2016-07-01

    Classical particle systems reside at thermal equilibrium with their velocity distribution function stabilized into a Maxwell distribution. On the contrary, collisionless and correlated particle systems, such as space and astrophysical plasmas, are characterized by non-Maxwellian behavior, typically described by the so-called kappa distributions. Empirical kappa distributions have become increasingly widespread across space and plasma physics. However, a breakthrough in the field came with the connection of kappa distributions to the solid statistical framework of Tsallis non-extensive statistical mechanics. Understanding the statistical origin of kappa distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i) the physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of kappa distributions; (iii) the phase-space kappa distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for kappa distributions; and (v) the new quantization constant, h* ≈ 10^{-22} J s.
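
The heavier suprathermal tail that distinguishes kappa distributions from the Maxwellian can be seen from the standard textbook form f(v) ∝ [1 + v²/(κθ²)]^−(κ+1), which recovers the Maxwellian exp(−v²/θ²) as κ → ∞ (an illustrative sketch, not material from the talk):

```python
import numpy as np

def kappa_pdf_shape(v, theta, kappa):
    """Unnormalized 1-D kappa velocity distribution; Maxwellian limit as kappa -> inf."""
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

theta = 1.0
v_tail = 10.0  # a strongly suprathermal speed, in units of theta

tail_kappa = kappa_pdf_shape(v_tail, theta, kappa=3.0)  # power-law tail
tail_maxw = np.exp(-(v_tail / theta) ** 2)              # Gaussian tail

print(tail_kappa > tail_maxw)  # the kappa distribution has the heavier tail
```

At v = 10θ the kappa-3 tail is a power law ~v⁻⁸, many orders of magnitude above the Gaussian e^(−100), which is why low kappa indices signal strongly non-equilibrium plasmas.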

  14. Effects of Raindrop Shape Parameter on the Simulation of Plum Rains

    NASA Astrophysics Data System (ADS)

    Mei, H.; Zhou, L.; Li, X.; Huang, X.; Guo, W.

    2017-12-01

    The raindrop shape parameter of the particle size distribution is generally set as a constant in double-moment bulk microphysics schemes (DBMS) that use a Gamma distribution function, even though observations suggest large differences in time and space. Based on the Milbrandt two-moment (MY) DBMS, four cases during the Plum Rains season are simulated, coupled with four empirical relationships between the shape parameter (μr) and the slope parameter of the raindrop distribution that have been derived from observations of raindrop distributions. Analysis of the model results suggests that μr has some influence on rainfall. Introducing diagnostic formulas for μr can reduce systematic biases in 24 h accumulated rainfall and shows some ability to correct local characteristics of the rainfall distribution. In addition, the tendency to improve strong rainfall could be sensitive to μr. With the diagnosis of μr improved through the empirical diagnostic formulas, μr generally increases in the middle and lower troposphere and decreases with stronger rainfall. It is concluded that the decline in raindrop water content and the increase in the raindrop mass-weighted mean terminal velocity, both directly related to μr, are the direct causes of the variations in precipitation. On the other hand, environmental conditions, including relative humidity and dynamical parameters, are the key indirect causes, closely linked to the changes in cloud particles and rainfall distributions. Furthermore, the difference in the degree of improvement between weak and heavy rainfall mainly arises from the distinct responses of the respective variable fields. The variation of cloud-particle characteristics in the warm clouds of heavy rainfall differs greatly in magnitude from that of weak rainfall, though they share the same trend. Under weak-rainfall conditions, the response of physical characteristics to μr shows consistent trends and some linear features.
    However, in heavy precipitation with vigorous cloud systems, relative humidity and the dynamical parameters undergo strong and vertically deep adjustments. In this case, the microphysical processes and environmental conditions interact in complex ways, and no clear laws could be drawn.
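
The Gamma drop-size distribution underlying the scheme, N(D) = N₀ D^μ exp(−λD), can be sketched numerically; the values of μ and λ below are illustrative assumptions, and the mass-weighted mean diameter is checked against the known closed form Dm = (4 + μ)/λ:

```python
import numpy as np

def gamma_dsd(D, n0, mu, lam):
    """Gamma raindrop size distribution N(D) = N0 * D**mu * exp(-lam * D)."""
    return n0 * D**mu * np.exp(-lam * D)

def integrate(y, x):
    """Trapezoidal rule (avoids version-specific NumPy helper names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

D = np.linspace(1e-4, 12.0, 40000)   # drop diameter (mm)
mu, lam = 2.0, 2.5                   # illustrative shape and slope parameters

n = gamma_dsd(D, n0=8000.0, mu=mu, lam=lam)

# Mass-weighted mean diameter: Dm = (integral D^4 N dD) / (integral D^3 N dD).
dm_numeric = integrate(D**4 * n, D) / integrate(D**3 * n, D)
dm_analytic = (4.0 + mu) / lam       # closed form for the gamma DSD
print(round(dm_numeric, 2), dm_analytic)  # prints: 2.4 2.4
```

Because Dm shrinks as μ grows at fixed λ-μ relation, diagnosing μr rather than fixing it shifts both the rainwater content and the mass-weighted terminal velocity, the two direct causes the abstract identifies.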

  15. Quality index of radiological devices: results of one year of use.

    PubMed

    Tofani, Alessandro; Imbordino, Patrizia; Lecci, Antonio; Bonannini, Claudia; Del Corona, Alberto; Pizzi, Stefano

    2003-01-01

    The physical quality index (QI) of radiological devices summarises in a single numerical value between 0 and 1 the results of constancy tests. The aim of this paper is to illustrate the results of using such an index on all public radiological devices in the Livorno province over one year. The quality index was calculated for 82 radiological devices of a wide range of types by implementing its algorithm in spreadsheet-based software for the automatic handling of quality control data. The distribution of quality index values was computed together with the associated statistical quantities. This distribution is strongly asymmetrical, with a sharp peak near the highest QI values. The mean quality index values for the different types of device show some inhomogeneity: in particular, mammography and panoramic dental radiography devices show far lower quality than other devices. In addition, our analysis identified the parameters that most frequently fail the quality tests for each type of device. Finally, we sought some correlation between the quality and the age of the device, but this correlation was only weakly significant. The quality index proved to be a useful tool providing an overview of the physical condition of radiological devices. By selecting adequate QI threshold values, it also helps to decide whether a given device should be upgraded or replaced. The identification of critical parameters for each type of device may be used to improve the definition of the QI by attributing greater weights to critical parameters, so as to better target the maintenance of radiological devices.
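
The abstract does not give the QI algorithm; one plausible minimal sketch consistent with its description (a single value in [0, 1] with greater weights on critical parameters) is a weighted mean of per-parameter constancy-test scores. The function, weights and scores below are entirely hypothetical, not the authors' formula:

```python
# Hypothetical sketch of a device quality index: weighted mean of
# per-parameter constancy-test scores, each normalized to [0, 1].
def quality_index(scores, weights):
    assert len(scores) == len(weights)
    assert all(0.0 <= s <= 1.0 for s in scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Example: critical parameters (e.g. dose output) weighted more heavily.
scores = [1.0, 0.8, 0.9, 0.5]     # test results for four parameters
weights = [3.0, 1.0, 1.0, 2.0]    # greater weight on the critical parameters
print(round(quality_index(scores, weights), 3))  # prints: 0.814
```

Reweighting toward the critical parameters identified per device type is exactly the refinement the authors propose at the end of the abstract.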

  16. Research on the semi-distributed monthly rainfall runoff model at the Lancang River basin based on DEM

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Zhao, Rong; Liu, Jiping; Zhang, Qingpu

    2007-06-01

    The Lancang River Basin is long and narrow, and its hydrological and meteorological conditions are highly variable. Rainfall, evaporation, glacial meltwater and groundwater all contribute to runoff, and the relative importance of these sources changes notably with the seasons in different parts of the basin. The characteristics of different kinds of distributed and conceptual hydrological models are analyzed. A semi-distributed hydrological model relating monthly runoff to rainfall, temperature and soil type has been built for Changdu County based on Visual Basic and ArcObjects. The model uses the discretization approach of distributed hydrological models while also taking the principles of conceptual models into account. The Changdu sub-catchment is divided into regular cells, and hydrological and meteorological information, land-use classes and slope extracted from 1:250,000 digital elevation models are distributed to each cell. The model does not represent the rainfall-runoff hydro-physical process explicitly but uses a conceptual model to simulate the total contribution to the runoff of the area, while the effects of evapotranspiration loss and groundwater are taken into account at the same time. The spatial distribution characteristics of monthly runoff in the area are simulated and analyzed with only a few parameters.

  17. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations (Version 2)

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2017-05-01

    GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision -Version 2 of Basics - makes mostly minor additions to functionality and includes some simplifying name changes.

  18. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, estimates of the parameters β and Σ are obtained from the expected values of their marginal posterior distributions, which are multivariate normal for β and inverse Wishart for Σ. However, calculating these expected values involves integrals of functions whose values are difficult to determine. An approximation is therefore needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
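
The Gibbs sampler named above alternates between the standard full conditionals for Y = XB + E under the Jeffreys prior p(B, Σ) ∝ |Σ|^(−(m+1)/2): B | Σ is matrix normal around the OLS estimate, and Σ | B is inverse Wishart on the residual cross-product. The following NumPy/SciPy sketch is illustrative (synthetic data, chain length and all settings are assumptions, not the authors' code):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)

# Synthetic data: Y = X B + E, rows of E ~ N(0, Sigma_true).
n, p, m = 200, 2, 2                      # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=n)])
B_true = np.array([[1.0, -1.0], [2.0, 0.5]])
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
Y = X @ B_true + rng.multivariate_normal(np.zeros(m), Sigma_true, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                # OLS estimate (conditional posterior mean)

# Gibbs sampler under the Jeffreys prior p(B, Sigma) ~ |Sigma|^-(m+1)/2.
Sigma = np.cov(Y - X @ B_hat, rowvar=False)
draws_B = []
for it in range(600):
    # B | Sigma, Y: vec(B) ~ N(vec(B_hat), Sigma kron (X'X)^-1), column-stacked vec.
    vecB = rng.multivariate_normal(B_hat.flatten(order="F"), np.kron(Sigma, XtX_inv))
    B = vecB.reshape(p, m, order="F")
    # Sigma | B, Y: inverse Wishart, df = n, scale = residual cross-product.
    resid = Y - X @ B
    Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
    if it >= 100:                        # discard burn-in
        draws_B.append(B)

B_post = np.mean(draws_B, axis=0)        # posterior-mean estimate of B
print(np.round(B_post, 1))
```

Averaging the retained draws approximates the marginal posterior expectation that the abstract says cannot be computed in closed form.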

  19. Probabilistic inversion of AVO seismic data for reservoir properties and related uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zunino, Andrea; Mosegaard, Klaus

    2017-04-01

    The sought-after reservoir properties of interest are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools for studying the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step, complex, nonlinear forward modeling that combines: 1) a rock physics model, 2) the solution of the full Zoeppritz equations and 3) convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy is particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and for estimating the related uncertainty. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This gives us a certain flexibility, in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.

  20. Simulating Bubble Plumes from Breaking Waves with a Forced-Air Venturi

    NASA Astrophysics Data System (ADS)

    Long, M. S.; Keene, W. C.; Maben, J. R.; Chang, R. Y. W.; Duplessis, P.; Kieber, D. J.; Beaupre, S. R.; Frossard, A. A.; Kinsey, J. D.; Zhu, Y.; Lu, X.; Bisgrove, J.

    2017-12-01

    It has been hypothesized that the size distribution of bubbles in subsurface seawater is a major factor that modulates the corresponding size distribution of primary marine aerosol (PMA) generated when those bubbles burst at the air-water interface. A primary physical control of the bubble size distribution produced by wave breaking is the associated turbulence that disintegrates larger bubbles into smaller ones. This leads to two characteristic features of bubble size distributions: (1) the Hinze scale which reflects a bubble size above which disintegration is possible based on turbulence intensity and (2) the slopes of log-linear regressions of the size distribution on either side of the Hinze scale that indicate the state of plume evolution or age. A Venturi with tunable seawater and forced air flow rates was designed and deployed in an artificial PMA generator to produce bubble plumes representative of breaking waves. This approach provides direct control of turbulence intensity and, thus, the resulting bubble size distribution characterizable by observations of the Hinze scale and the simulated plume age over a range of known air detrainment rates. Evaluation of performance in different seawater types over the western North Atlantic demonstrated that the Venturi produced bubble plumes with parameter values that bracket the range of those observed in laboratory and field experiments. Specifically, the seawater flow rate modulated the value of the Hinze scale while the forced-air flow rate modulated the plume age parameters. Results indicate that the size distribution of sub-surface bubbles within the generator did not significantly modulate the corresponding number size distribution of PMA produced via bubble bursting.

  1. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics and psychology. Normal variables have a constant kurtosis (κ = 3), independent of the values of their two parameters, the mean and the variance. In fact, the excess kurtosis is defined as κ - 3, so the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; the excess kurtosis lies in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
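
The endpoint of the independent case can be checked by simulation (an illustrative sketch, not the authors' analytical derivation): for two independent standard normals, the product Z = XY has E[Z⁴] = E[X⁴]E[Y⁴] = 9 and unit variance, hence excess kurtosis 6.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

# Product of two independent standard normal variables.
z = rng.normal(size=n) * rng.normal(size=n)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a normal distribution)."""
    xc = x - x.mean()
    return float(np.mean(xc**4) / np.mean(xc**2) ** 2 - 3.0)

# Theory for independent standard normals: excess kurtosis = 9/1 - 3 = 6.
print(excess_kurtosis(z))
```

Shifting the means of the factors away from zero pulls the excess kurtosis down toward 0, which is how the independent case spans the interval [0, 6] quoted above.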

  2. Two modelling approaches to water-quality simulation in a flooded iron-ore mine (Saizerais, Lorraine, France): a semi-distributed chemical reactor model and a physically based distributed reactive transport pipe network model.

    PubMed

    Hamm, V; Collon-Drouaillet, P; Fabriol, R

    2008-02-19

    The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches, using different numerical tools, that we tested on the Saizerais flooded iron-ore mine (Lorraine, France). The first approach considers the Saizerais Mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with the EPANET 2 software, which considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. However, in order to obtain a robust PNM, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information on the physics controlling the effects of flow and low-flow zones and on the time to removal of solid sulphate, whereas the NCR model will underestimate the clean-up time due to its complete-mixing assumption.
    In conclusion, the detailed NCR model gives a first assessment of the chemical processes at the overflow, and the PNM then provides more detailed information on the flow and chemical behaviour (dissolved sulphate concentrations, remaining mass of solid sulphate) in the network. Nevertheless, both modelling methods require hydrological and chemical parameters (recharge flow rate, outflows, volume of mine voids, mass of solids, kinetic constants of the dissolution-precipitation reactions) that are commonly not available for a mine and therefore call for calibration data.

  3. Detection of cancerous cervical cells using physical adhesion of fluorescent silica particles and centripetal force.

    PubMed

    Gaikwad, Ravi M; Dokukin, Maxim E; Iyer, K Swaminathan; Woodworth, Craig D; Volkov, Dmytro O; Sokolov, Igor

    2011-04-07

    Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish based on physical adhesion between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of the concept that physical interactions can be used to accurately discriminate normal and cancer cells. © The Royal Society of Chemistry 2011

  4. Study of the statistical physics bases on superstatistics from the β-fluctuated to the T-fluctuated form

    NASA Astrophysics Data System (ADS)

    Sargolzaeipor, S.; Hassanabadi, H.; Chung, W. S.

    2018-04-01

    In this paper, we study the T-fluctuated form of superstatistics. In this form, thermodynamic quantities such as the Helmholtz energy, the entropy and the internal energy are expressed in the T-fluctuated form for a canonical ensemble. In addition, the partition functions in this formalism for 2-level and 3-level distributions are derived. We then apply T-fluctuated superstatistics to the quantum harmonic oscillator problem, and the thermal properties of the system are calculated for Bose-Einstein, Maxwell-Boltzmann and Fermi-Dirac statistics. The effect of the deformation parameter on these properties is examined. All results recover the well-known expressions when the deformation parameter is removed.

  5. Analysis of Fractional Flow for Transient Two-Phase Flow in Fractal Porous Medium

    NASA Astrophysics Data System (ADS)

    Lu, Ting; Duan, Yonggang; Fang, Quantang; Dai, Xiaolu; Wu, Jinsui

    2016-03-01

    Prediction of fractional flow in fractal porous media is important for reservoir engineering and chemical engineering as well as hydrology. A physical, conceptual fractional-flow model of transient two-phase flow is developed for fractal porous media based on the fractal characteristics of the pore-size distribution and on the approximation that the porous medium consists of a bundle of tortuous capillaries. The analytical expression for the fractional flow of the wetting phase is presented; the proposed expression is a function of structural parameters (such as the tortuosity fractal dimension, pore fractal dimension, and maximum and minimum capillary diameters) and fluid properties (such as contact angle, viscosity and interfacial tension) of the fractal porous medium. The sensitive parameters that influence fractional flow and its derivative are formulated, and their impacts on fractional flow are discussed.

  6. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
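
The hierarchical filtering idea can be sketched generically (the parameters, surrogate outputs and thresholds below are hypothetical, not the study's DHSVM setup): draw many parameter sets, then successively discard those violating regional, observation-driven, and expert-knowledge constraints, tracking how many behavioral sets remain.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sets = 10_000

# Hypothetical three-parameter model; each row is one candidate parameter set.
params = rng.uniform(0.0, 1.0, size=(n_sets, 3))

# Stand-ins for model outputs scored against each constraint class.
runoff_ratio = 0.2 + 0.6 * params[:, 0] + rng.normal(0.0, 0.05, n_sets)
nse = 1.0 - np.abs(params[:, 1] - 0.5) - rng.uniform(0.0, 0.2, n_sets)
water_table_ok = params[:, 2] < 0.7

constraints = [
    (runoff_ratio > 0.3) & (runoff_ratio < 0.6),  # regional signature
    nse > 0.7,                                    # observation-driven (hydrograph fit)
    water_table_ok,                               # expert-knowledge pattern
]

behavioral = np.ones(n_sets, dtype=bool)
counts = []
for mask in constraints:
    behavioral &= mask
    counts.append(int(behavioral.sum()))

print(counts)  # behavioral sets remaining after each successive constraint
```

Each added constraint class can only shrink the behavioral set, which is the mechanism by which the hierarchy reduces equifinality while leaving a final subset for spatiotemporal analysis.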

  7. Atmosphere, Ocean, Land, and Solar Irradiance Data Sets

    NASA Technical Reports Server (NTRS)

    Johnson, James; Ahmad, Suraiya

    2003-01-01

    The report presents the atmosphere, ocean color, land, and solar irradiance data sets. The data presented include total ozone, aerosols, cloud optical and physical parameters, temperature and humidity profiles, radiances, rainfall, and drop size distribution.

  8. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  9. A discrete element method-based approach to predict the breakage of coal

    DOE PAGES

    Gupta, Varun; Sun, Xin; Xu, Wei; ...

    2017-08-05

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  10. On the superposition principle in interference experiments.

    PubMed

    Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi

    2015-05-14

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
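    The cancellation that defines the Sorkin parameter is easy to verify numerically. Under the naive superposition principle (Born rule applied to summed path amplitudes), epsilon vanishes identically, so a measured nonzero value quantifies the deviation; the amplitudes below are arbitrary illustrative values:

```python
import cmath

def intensity(*amps):
    """Born rule: detected intensity is |sum of open-path amplitudes|^2."""
    return abs(sum(amps)) ** 2

def sorkin_epsilon(a, b, c):
    # I_ABC - I_AB - I_BC - I_AC + I_A + I_B + I_C
    return (intensity(a, b, c)
            - intensity(a, b) - intensity(b, c) - intensity(a, c)
            + intensity(a) + intensity(b) + intensity(c))

# Arbitrary complex path amplitudes with different magnitudes and phases.
a = 1.00 * cmath.exp(1j * 0.3)
b = 0.70 * cmath.exp(1j * 1.9)
c = 0.40 * cmath.exp(1j * 4.2)

print(sorkin_epsilon(a, b, c))
```

    Expanding the squares shows every cross term cancels, so the result is zero up to floating-point round-off; corrections such as boundary effects are what make the measured parameter nonzero.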

  11. General solution of the chemical master equation and modality of marginal distributions for hierarchic first-order reaction networks.

    PubMed

    Reis, Matthias; Kromer, Justus A; Klipp, Edda

    2018-01-20

    Multimodality is a phenomenon which complicates the analysis of statistical data based exclusively on mean and variance. Here, we present criteria for multimodality in hierarchic first-order reaction networks, consisting of catalytic and splitting reactions. Those networks are characterized by independent and dependent subnetworks. First, we prove the general solvability of the Chemical Master Equation (CME) for this type of reaction network and thereby extend the class of solvable CME's. Our general solution is analytical in the sense that it allows for a detailed analysis of its statistical properties. Given Poisson/deterministic initial conditions, we then prove the independent species to be Poisson/binomially distributed, while the dependent species exhibit generalized Poisson/Khatri Type B distributions. Generalized Poisson/Khatri Type B distributions are multimodal for an appropriate choice of parameters. We illustrate our criteria for multimodality by several basic models, as well as the well-known two-stage transcription-translation network and Bateman's model from nuclear physics. For both examples, multimodality was previously not reported.
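    The Poisson result can be checked on the simplest first-order network, a birth-death process, with a short Gillespie simulation (rates illustrative): the stationary copy-number distribution should have variance equal to its mean, kb/kd.

```python
import random

random.seed(2)

# Birth-death process: 0 -> A at rate kb, A -> 0 at rate kd per molecule.
# The CME stationary solution is Poisson with mean kb/kd; the rates are
# illustrative, not taken from the paper.
kb, kd = 8.0, 1.0
t, t_end, x = 0.0, 2000.0, 0
w_sum = m1 = m2 = 0.0
while t < t_end:
    a_total = kb + kd * x
    dt = random.expovariate(a_total)
    if t > 100.0:                      # time-weighted moments after burn-in
        w_sum += dt
        m1 += x * dt
        m2 += x * x * dt
    t += dt
    if random.random() < kb / a_total: # pick birth or death proportionally
        x += 1
    else:
        x -= 1

mean = m1 / w_sum
var = m2 / w_sum - mean ** 2
print(round(mean, 2), round(var, 2))
```

    A Fano factor (variance over mean) near 1 is the Poisson signature; multimodal generalized Poisson/Khatri Type B behavior only appears for the dependent species of larger hierarchic networks.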

  12. Magneto hall effect on unsteady elastico-viscous nanofluid slip flow in a channel in presence of thermal radiation and heat generation with Brownian motion

    NASA Astrophysics Data System (ADS)

    Karim, M. Enamul; Samad, M. Abdus; Ferdows, M.

    2017-06-01

    The present note investigates the magneto Hall effect on the unsteady flow of an elastico-viscous nanofluid in a channel with slip boundaries, considering the presence of thermal radiation and heat generation with Brownian motion. Numerical results are obtained by solving the governing equations with the implicit Finite Difference Method (FDM), yielding the primary and secondary velocities, temperature, nanoparticle volume fraction and concentration distributions within the boundary layer. The influences of several parameters of interest, such as the elastico-viscous parameter, magnetic field, Hall parameter, heat generation, thermal radiation and Brownian motion parameters, on the velocity, heat and mass transfer characteristics of the fluid flow are discussed with the help of graphs. The effects on quantities of physical and engineering interest, namely the skin friction coefficient, Nusselt number and Sherwood number, are also sorted out. It is found that the flow field and the other quantities of physical concern are significantly influenced by these parameters.
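    As a generic illustration of the solution technique (not the authors' coupled momentum/energy/concentration system), an implicit finite difference step for a 1D diffusion equation u_t = D u_xx reduces to a tridiagonal solve; a minimal sketch with invented grid and initial data:

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a sub-, b main, c super-diagonal, d right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

D, dx, dt, nx, steps = 1.0, 0.05, 0.01, 21, 50
r = D * dt / dx ** 2                       # r = 4 here
u = [1.0 if 8 <= i <= 12 else 0.0 for i in range(nx)]  # initial hot band
for _ in range(steps):
    a = [-r] * nx
    b = [1.0 + 2.0 * r] * nx
    c = [-r] * nx
    b[0] = b[-1] = 1.0                     # Dirichlet u = 0 at both ends
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    d = u[:]
    d[0] = d[-1] = 0.0
    u = solve_tridiagonal(a, b, c, d)

print(max(u))
```

    Because the scheme is implicit, it remains stable even though r = 4, well above the limit of 0.5 that an explicit scheme would require.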

  13. Distributed activation energy model parameters of some Turkish coals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunes, M.; Gunes, S.K.

    2008-07-01

    A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of distributed activation energy model were calculated via computer program developed for this purpose. It was observed that the values of mean of activation energy distribution vary between 218 and 248 kJ/mol, and the values of standard deviation of activation energy distribution vary between 32 and 70 kJ/mol. The correlations between kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.
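    The model's central integral can be evaluated numerically in a few lines. This is a hedged sketch of the isothermal DAEM, assuming a Gaussian distribution of activation energies; k0, T and the Gaussian parameters are illustrative values chosen within the ranges reported above, not the paper's fitted constants:

```python
import math

R = 8.314          # gas constant, J/(mol K)
k0 = 1.0e13        # pre-exponential factor, 1/s (assumed)
T = 800.0          # temperature, K (assumed)
E_mean = 230e3     # mean activation energy, J/mol (within 218-248 kJ/mol)
E_sigma = 50e3     # standard deviation, J/mol (within 32-70 kJ/mol)

def unreacted_fraction(t, n=2000):
    """Trapezoid rule for the DAEM integral of exp(-k0 t e^{-E/RT}) over f(E)."""
    lo, hi = max(0.0, E_mean - 6 * E_sigma), E_mean + 6 * E_sigma
    dE = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        E = lo + i * dE
        f = math.exp(-0.5 * ((E - E_mean) / E_sigma) ** 2) \
            / (E_sigma * math.sqrt(2 * math.pi))
        decay = math.exp(-k0 * t * math.exp(-E / (R * T)))
        w = 0.5 if i in (0, n) else 1.0
        total += w * f * decay * dE
    return total

for t in (1.0, 10.0, 100.0):
    print(t, unreacted_fraction(t))
```

    Low-energy components decompose almost instantly while high-energy components barely react, which is why a single first-order rate cannot reproduce the observed conversion curves.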

  14. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

    The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate for many real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model in which the surface is described as a distribution of facets, each consisting of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model (a distribution of strictly specular facets) and the Oren-Nayar model (a distribution of strictly Lambertian facets) appear as special cases.

  15. Comparison of Two Conceptually Different Physically-based Hydrological Models - Looking Beyond Streamflows

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.

    2015-12-01

    Most physically-based hydrological models simulate to various extents the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method) as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check, beyond streamflows, whether different modeling approaches could simulate the other watershed-scale processes in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while equally representing other processes, such as evapotranspiration, snow accumulation/melt and infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model. These differences should be the object of further analyses to definitively confirm or reject the modeling hypotheses.

  16. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
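    The D-optimality criterion can be sketched for a single selection step (a deliberately simplified, hypothetical setup: three 2PL field-test items, point estimates instead of posterior distributions, and no MCMC). Pick the item whose added Fisher information about its own parameters (a, b) maximizes the determinant of the accumulated information matrix:

```python
import math

# For a 2PL item with discrimination a and difficulty b answered by an
# examinee at ability theta, the per-response information matrix for (a, b)
# is p(1-p) * [[(theta-b)^2, -a(theta-b)], [-a(theta-b), a^2]].
def item_info(a, b, theta):
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    t = theta - b
    return [[w * t * t, -w * a * t], [-w * a * t, w * a * a]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def d_optimal_choice(field_items, accumulated, theta):
    """Pick the field-test item maximizing det(accumulated + new info)."""
    best_idx, best_det = None, -1.0
    for idx, (a, b) in enumerate(field_items):
        inf = item_info(a, b, theta)
        A = accumulated[idx]
        total = [[A[r][c] + inf[r][c] for c in range(2)] for r in range(2)]
        if det2(total) > best_det:
            best_idx, best_det = idx, det2(total)
    return best_idx

# Three hypothetical candidate items (a, b), each with the same small
# accumulated (prior) information so far.
items = [(1.2, -1.0), (1.2, 0.0), (1.2, 3.0)]
prior = [[[0.1, 0.0], [0.0, 0.1]] for _ in items]
print(d_optimal_choice(items, prior, theta=0.0))
```

    In the full design, theta and the item parameters would be posterior samples and the determinant an expectation over them; this sketch keeps only the optimality criterion itself.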

  17. Physically-Based Models for the Reflection, Transmission and Subsurface Scattering of Light by Smooth and Rough Surfaces, with Applications to Realistic Image Synthesis

    NASA Astrophysics Data System (ADS)

    He, Xiao Dong

    This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically-based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on the Kirchhoff theory, and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realistic image generation can be achieved thanks to the physically-correct treatment of the scattering processes by the reflectance model.

  18. Physical parameters in long-decay coronal enhancements. [from Skylab X ray observations

    NASA Technical Reports Server (NTRS)

    Maccombie, W. J.; Rust, D. M.

    1979-01-01

    Four well-observed long-decay X-ray enhancements (LDEs) are examined which were associated with filament eruptions, white-light transients, and loop prominence systems. In each case the physical parameters of the X-ray-emitting plasma are determined, including the spatial distribution and temporal evolution of temperature and density. The results and recent analyses of other aspects of the four LDEs are compared with current models of loop prominence systems. It is concluded that only a magnetic-reconnection model, such as that proposed by Kopp and Pneuman (1976) is consistent with the observations.

  19. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
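    A toy version of the PC update can be sketched as follows (a simplified stand-in, assuming a Boltzmann re-weighting of each agent's marginal rather than the paper's exact maxent functional; the objective, temperature and sample sizes are invented):

```python
import math
import random

random.seed(1)

# Each of n agents controls one categorical marginal p_i over K values; the
# product distribution is sampled, the expected cost conditional on each
# value is estimated from the population, and each marginal is re-weighted
# by a Boltzmann factor so p becomes peaked about good solutions.
n, K, T = 3, 3, 0.3
target = [0, 2, 1]                         # optimum of the toy objective
def G(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

p = [[1.0 / K] * K for _ in range(n)]      # start from a uniform product

for _ in range(50):
    samples = [[random.choices(range(K), weights=p[i])[0] for i in range(n)]
               for _ in range(1000)]
    costs = [G(x) for x in samples]
    for i in range(n):
        expect = []
        for v in range(K):
            cs = [c for x, c in zip(samples, costs) if x[i] == v]
            expect.append(sum(cs) / len(cs) if cs else max(costs))
        w = [math.exp(-e / T) for e in expect]
        z = sum(w)
        # Mix with a uniform floor so every value keeps being sampled.
        p[i] = [0.8 * wv / z + 0.2 / K for wv in w]

best = [max(range(K), key=lambda v: p[i][v]) for i in range(n)]
print(best)
```

    Note that no population of solutions survives between iterations; only the product distribution p does, which is the structural difference from a GA highlighted above.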

  20. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
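    The PQ non-linearity itself is compact enough to sketch. The constants below are the published SMPTE ST 2084 values; the function maps absolute luminance (cd/m^2) to the perceptually uniform signal used by the perceptual transforms:

```python
# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance):
    """Inverse EOTF: absolute luminance in cd/m^2 -> PQ signal in [0, 1]."""
    y = max(luminance, 0.0) / 10000.0      # normalize to the 10,000-nit peak
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

for nits in (0.0, 100.0, 1000.0, 10000.0):
    print(nits, round(pq_encode(nits), 4))
```

    For reference, 100 cd/m^2 encodes to roughly 0.508 and 10,000 cd/m^2 to exactly 1.0, which is why equal PQ steps are approximately equal perceptual steps across the luminance range.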

  1. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
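    The flavor of the marginalization can be shown in the simplest special case (equal weights; this is not the paper's general Lauricella-function result): with N Monte Carlo events of common weight wgt, marginalizing the Poisson mean over a Gamma(N + 1, 1) posterior (flat prior on the MC expectation) turns the plug-in Poisson into a wider negative binomial predictive distribution:

```python
import math

N, wgt = 25, 0.4   # illustrative MC sample size and per-event weight

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def marginalized_pmf(k):
    # Integrating Pois(k | wgt*lam) against Gamma(lam; N+1, 1) gives a
    # negative binomial with r = N + 1 and success prob p = 1/(1 + wgt).
    r, p = N + 1, 1.0 / (1.0 + wgt)
    return math.comb(k + r - 1, k) * (p ** r) * ((1.0 - p) ** k)

lam_hat = wgt * N                      # plug-in Poisson mean
ks = range(0, 60)
var_poisson = sum(k * k * poisson_pmf(k, lam_hat) for k in ks) - lam_hat ** 2
mean_marg = sum(k * marginalized_pmf(k) for k in ks)
var_marg = sum(k * k * marginalized_pmf(k) for k in ks) - mean_marg ** 2
print(round(var_poisson, 2), round(var_marg, 2))
```

    The extra variance is exactly the finite-MC uncertainty that the plug-in likelihood discards; neglecting it is what biases fits when the sample is small.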

  2. Peristaltic blood flow with gold nanoparticles as a third grade nanofluid in catheter: Application of cancer therapy

    NASA Astrophysics Data System (ADS)

    Mekheimer, Kh. S.; Hasona, W. M.; Abo-Elkhair, R. E.; Zaher, A. Z.

    2018-01-01

    Cancer is dangerous and deadly to most of its patients. Recent studies have shown that gold nanoparticles can help cure and overcome it, because these particles have a high atomic number, which produces heat and leads to the treatment of malignant tumors. A motivation of this article is to study the effect of heat transfer on blood flow (a non-Newtonian model) containing gold nanoparticles in the gap between two coaxial tubes, where the outer tube has a sinusoidal wave traveling down its wall and the inner tube is rigid. The governing equations of the third-grade fluid, along with total mass, thermal energy and nanoparticle balances, are simplified by using the long wavelength assumption. Exact solutions have been evaluated for the temperature distribution and nanoparticle concentration, while approximate analytical solutions are found for the velocity distribution using the regular perturbation method with a small third-grade parameter. The influence of physical parameters such as the third-grade parameter, Brownian motion parameter and thermophoresis parameter on the velocity profile, temperature distribution and nanoparticle concentration is considered. The results indicate that gold nanoparticles are effective for drug carrying and drug delivery systems because they control the velocity through the Brownian motion parameter Nb and thermophoresis parameter Nt. Gold nanoparticles also increase the temperature distribution, making it possible to destroy cancer cells.

  3. Sediment Acoustics: Wideband Model, Reflection Loss and Ambient Noise Inversion

    DTIC Science & Technology

    2010-01-01

    Physically sound models of acoustic interaction with the ocean floor, including penetration, reflection and scattering, in support of MCM and ASW needs. OBJECTIVES: (1) Consolidation of the BIC08 model of sediment acoustics, its verification in a variety of sediment types, parameter reduction and ...

  4. A pilot study of physical activity and sedentary behavior distribution patterns in older women.

    PubMed

    Fortune, Emma; Mundell, Benjamin; Amin, Shreyasee; Kaufman, Kenton

    2017-09-01

    The study aims were to investigate free-living physical activity and sedentary behavior distribution patterns in a group of older women, and assess the cross-sectional associations with body mass index (BMI). Eleven older women (mean (SD) age: 77 (9) yrs) wore custom-built activity monitors, each containing a tri-axial accelerometer (±16g, 100Hz), on the waist and ankle for lab-based walking trials and 4 days in free-living. Daily active time, step counts, cadence, and sedentary break number were estimated from acceleration data. The sedentary bout length distribution and sedentary time accumulation pattern, using the Gini index, were investigated. Associations of the parameters' total daily values and coefficients of variation (CVs) of their hourly values with BMI were assessed using linear regression. The algorithm demonstrated median sensitivity, positive predictive value, and agreement values >98% and <1% mean error in cadence calculations with video identification during lab trials. Participants' sedentary bouts were found to be power law distributed with 56% of their sedentary time occurring in 20min bouts or longer. Meaningful associations were detectable in the relationships of total active time, step count, sedentary break number and their CVs with BMI. Active time and step counts had moderate negative associations with BMI while sedentary break number had a strong negative association. Active time, step count and sedentary break number CVs also had strong positive associations with BMI. The results highlight the importance of measuring sedentary behavior and suggest a more even distribution of physical activity throughout the day is associated with lower BMI. Copyright © 2017 Elsevier B.V. All rights reserved.
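    The accumulation-pattern summary can be sketched directly. A minimal Gini index over bout durations (the bout values below are invented; the study's bouts come from accelerometer-detected sedentary periods):

```python
# Gini index of how total time is spread across bouts: 0 means perfectly
# even accumulation, values near 1 mean a few long bouts dominate.
def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

even_bouts = [30, 30, 30, 30]        # minutes; time accumulated evenly
skewed_bouts = [5, 5, 5, 5, 100]     # one long bout dominates
print(gini(even_bouts), gini(skewed_bouts))
```

    Higher Gini values flag the prolonged-sitting pattern that the hourly coefficients of variation in the study were also designed to capture.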

  5. Multivariate Statistical Analysis of Cigarette Design Feature Influence on ISO TNCO Yields.

    PubMed

    Agnew-Heard, Kimberly A; Lancaster, Vicki A; Bravo, Roberto; Watson, Clifford; Walters, Matthew J; Holman, Matthew R

    2016-06-20

    The aim of this study is to explore how differences in cigarette physical design parameters influence tar, nicotine, and carbon monoxide (TNCO) yields in mainstream smoke (MSS) using the International Organization for Standardization (ISO) smoking regimen. Standardized smoking methods were used to evaluate 50 U.S. domestic brand cigarettes and a reference cigarette representing a range of TNCO yields in MSS collected from linear smoking machines using a nonintense smoking regimen. Multivariate statistical methods were used to form clusters of cigarettes based on their ISO TNCO yields and then to explore the relationship between the ISO generated TNCO yields and the nine cigarette physical design parameters between and within each cluster simultaneously. The ISO generated TNCO yields in MSS are 1.1-17.0 mg tar/cigarette, 0.1-2.2 mg nicotine/cigarette, and 1.6-17.3 mg CO/cigarette. Cluster analysis divided the 51 cigarettes into five discrete clusters based on their ISO TNCO yields. No one physical parameter dominated across all clusters. Predicting ISO machine generated TNCO yields based on these nine physical design parameters is complex due to the correlation among and between the nine physical design parameters and TNCO yields. From these analyses, it is estimated that approximately 20% of the variability in the ISO generated TNCO yields comes from other parameters (e.g., filter material, filter type, inclusion of expanded or reconstituted tobacco, and tobacco blend composition, along with differences in tobacco leaf origin and stalk positions and added ingredients). A future article will examine the influence of these physical design parameters on TNCO yields under a Canadian Intense (CI) smoking regimen. Together, these papers will provide a more robust picture of the design features that contribute to TNCO exposure across the range of real world smoking patterns.
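    The clustering step can be illustrated with a plain k-means on synthetic (tar, nicotine, CO) triples; the two groups below are invented stand-ins for low- and high-yield products, not the study's 51 cigarettes:

```python
import math
import random

random.seed(3)

# Synthetic (tar, nicotine, CO) yield triples in mg/cigarette.
low = [(random.gauss(3.0, 0.5), random.gauss(0.3, 0.05), random.gauss(3.0, 0.5))
       for _ in range(25)]
high = [(random.gauss(15.0, 1.0), random.gauss(1.8, 0.2), random.gauss(15.0, 1.0))
        for _ in range(25)]
points = low + high

def kmeans(data, k, iters=50):
    centers = random.sample(data, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for pt in data:                     # assign to nearest center
            j = min(range(k), key=lambda c: math.dist(pt, centers[c]))
            groups[j].append(pt)
        centers = [tuple(sum(col) / len(grp) for col in zip(*grp)) if grp
                   else centers[j] for j, grp in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(points, k=2)
print(sorted(len(grp) for grp in groups))
```

    The study's five-cluster solution follows the same principle in the three-dimensional TNCO space, with the design-parameter regressions then run within and between the resulting clusters.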

  6. An application of synthetic seismicity in earthquake statistics - The Middle America Trench

    NASA Technical Reports Server (NTRS)

    Ward, Steven N.

    1992-01-01

    It is shown how seismicity calculations based on the concept of fault segmentation, which incorporate the physics of faulting through static dislocation theory, can improve earthquake recurrence statistics and hone the probabilities of hazard. For the Middle America Trench, the spread parameters of the best-fitting lognormal or Weibull distributions (about 0.75) are much larger than the 0.21 intrinsic spread proposed in the Nishenko-Buland (1987) hypothesis. Stress interaction between fault segments disrupts time or slip predictability and causes earthquake recurrence to be far more aperiodic than has been suggested.
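    The lognormal spread parameter quoted above is simply the standard deviation of log recurrence intervals, so it can be recovered from synthetic data by the method of moments (the interval scale and sample size here are illustrative):

```python
import math
import random

random.seed(7)

# Draw synthetic recurrence intervals with a lognormal spread of 0.75, the
# order reported for the Middle America Trench, and recover the parameter.
mean_interval, spread = 35.0, 0.75     # years (illustrative), log-space sigma
mu = math.log(mean_interval)
intervals = [random.lognormvariate(mu, spread) for _ in range(5000)]

logs = [math.log(x) for x in intervals]
m = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((v - m) ** 2 for v in logs) / (len(logs) - 1))
print(round(sigma_hat, 3))
```

    A spread near 0.75 makes recurrence nearly unpredictable in practice, whereas the 0.21 value would imply usefully narrow conditional forecasts; that contrast is the paper's central point.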

  7. Influence of thermal stratification and slip conditions on stagnation point flow towards variable thicked Riga plate

    NASA Astrophysics Data System (ADS)

    Anjum, A.; Mir, N. A.; Farooq, M.; Khan, M. Ijaz; Hayat, T.

    2018-06-01

    This article addresses the thermally stratified stagnation point flow of a viscous fluid induced by a non-linearly variable-thickness Riga plate. Velocity and thermal slip effects are incorporated in the flow analysis. A solar thermal radiation phenomenon is implemented to address the characteristics of heat transfer. The variations of the horizontal velocity and temperature distributions with different physical parameters are described through graphs. Graphical interpretations of the skin friction coefficient (drag force at the surface) and the Nusselt number (rate of heat transfer) are also addressed. Both the modified Hartmann number and the thermal stratification parameter reduce the temperature distribution.

  8. The Gaussian CLs method for searches of new physics

    DOE PAGES

    Qian, X.; Tan, A.; Ling, J. J.; ...

    2016-04-23

    Here we describe a method based on the CLs approach to present results in searches of new physics, under the condition that the relevant parameter space is continuous. Our method relies on a class of test statistics developed for non-nested hypotheses testing problems, denoted by ΔT, which has a Gaussian approximation to its parent distribution when the sample size is large. This leads to a simple procedure of forming exclusion sets for the parameters of interest, which we call the Gaussian CLs method. Our work provides a self-contained mathematical proof for the Gaussian CLs method that explicitly outlines the required conditions. These conditions are milder than those required by Wilks' theorem to set confidence intervals (CIs). We illustrate the Gaussian CLs method in an example of searching for a sterile neutrino, where the CLs approach was rarely used before. We also compare data analysis results produced by the Gaussian CLs method and various CI methods to showcase their differences.
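    When the test statistic ΔT is approximately Gaussian under both hypotheses, the CLs ratio reduces to a ratio of two Gaussian survival functions; a minimal sketch with illustrative means and width (not values from the paper):

```python
import math

def gaussian_sf(x, mu, sigma):
    """Survival function P(X >= x) of a normal distribution."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def cls(observed, mu_h1, mu_h0, sigma):
    """CLs = P(DeltaT >= obs | H1) / P(DeltaT >= obs | H0)."""
    return gaussian_sf(observed, mu_h1, sigma) / gaussian_sf(observed, mu_h0, sigma)

# Illustrative: the tested parameter point (H1) predicts DeltaT near -2,
# the alternative (H0) near +2, both with unit width.
value = cls(observed=1.0, mu_h1=-2.0, mu_h0=2.0, sigma=1.0)
print(value)
```

    A parameter point is conventionally excluded at 95% confidence when CLs falls below 0.05; dividing by the denominator protects against excluding points the experiment has no power to test.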

  9. Characterizing the size and shape of sea ice floes

    PubMed Central

    Gherardi, Marco; Lagomarsino, Marco Cosentino

    2015-01-01

    Monitoring drift ice in the Arctic and Antarctic regions directly and by remote sensing is important for the study of climate, but a unified modeling framework is lacking. Hence, interpretation of the data, as well as the decision of what to measure, represent a challenge for different fields of science. To address this point, we analyzed, using statistical physics tools, satellite images of sea ice from four different locations in both the northern and southern hemispheres, and measured the size and the elongation of ice floes (floating pieces of ice). We find that (i) floe size follows a distribution that can be characterized with good approximation by a single length scale, which we discuss in the framework of stochastic fragmentation models, and (ii) the deviation of their shape from circularity is reproduced with remarkable precision by a geometric model of coalescence by freezing, based on random Voronoi tessellations, with a single free parameter expressing the shape disorder. Although the physical interpretations remain open, this advocates these two parameters as independent indicators of the environment in the polar regions, which are easily accessible by remote sensing. PMID:26014797

  10. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Treesearch

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  11. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. 
Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
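
    The two-stage recipe in this abstract — characterize the posterior over static parameters with MCMC, then push posterior samples together with draws of the variable parameters through the model to obtain a predictive distribution of a scalar quantity of interest — can be sketched on a toy scalar problem. The linear "flowline model", the Gaussian likelihood, and all numbers below are illustrative assumptions, not the authors' setup:

```python
import math, random

random.seed(0)

# Toy "flowline model": ice volume as a function of a static parameter b
# (e.g. basal topography, inferred from data) and a variable parameter a
# (e.g. future surface mass balance). Everything here is a stand-in.
def volume(b, a):
    return 10.0 + 2.0 * b + 5.0 * a

def forward(b):
    # observation operator: the data depend on the static parameter only
    return 3.0 * b

b_true, sigma = 1.2, 0.1
y_obs = forward(b_true)          # synthetic, noise-free observation

def log_post(b):
    # Gaussian likelihood with a flat prior
    return -0.5 * ((y_obs - forward(b)) / sigma) ** 2

# Metropolis sampler for the static parameter b
samples, b = [], 0.0
lp = log_post(b)
for i in range(20000):
    prop = b + random.gauss(0.0, 0.2)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        b, lp = prop, lp_prop
    if i >= 5000:                # discard burn-in
        samples.append(b)

# Predictive distribution of ice volume: posterior samples of b combined
# with draws of the variable parameter a; mean_a acts as the climate
# "hyperparameter" of the abstract.
mean_a = 0.3
pred = [volume(bs, random.gauss(mean_a, 0.05)) for bs in samples]
pred_mean = sum(pred) / len(pred)
```

    A surrogate, as the authors build, would replace the expensive model evaluations inside the predictive loop with a cheap approximation that is refined as sampling proceeds.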

  12. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter's ability to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
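
    A minimal sketch of the two estimator families compared here, applied to simulated Nakagami envelopes. The Greenwood-Durand closed form below is a common stand-in for the gamma-shape MLE applied to the intensity and may differ in detail from the paper's MLEgw:

```python
import math, random

random.seed(1)

# Simulate Nakagami-m envelope samples: if I ~ Gamma(shape=m, scale=omega/m),
# then R = sqrt(I) is Nakagami(m, omega).
m_true, omega = 1.5, 2.0
R = [math.sqrt(random.gammavariate(m_true, omega / m_true))
     for _ in range(20000)]

def nakagami_mbe(env):
    """Moment-based estimator: m = E[I]^2 / Var(I), with I = R^2."""
    I = [r * r for r in env]
    mean_I = sum(I) / len(I)
    var_I = sum((x - mean_I) ** 2 for x in I) / len(I)
    return mean_I ** 2 / var_I

def nakagami_mle_gw(env):
    """Greenwood-Durand style approximation to the gamma-shape MLE,
    applied to the intensity I = R^2 (an assumed stand-in; the paper's
    MLEgw may differ in detail)."""
    I = [r * r for r in env]
    delta = math.log(sum(I) / len(I)) - sum(math.log(x) for x in I) / len(I)
    return (3.0 - delta + math.sqrt((delta - 3.0) ** 2 + 24.0 * delta)) / \
           (12.0 * delta)
```

    With large samples both estimates land close to the true m; the differences the paper reports emerge at small sample sizes.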

  13. Microbubble Sizing and Shell Characterization Using Flow Cytometry

    PubMed Central

    Tu, Juan; Swalwell, Jarred E.; Giraud, David; Cui, Weicheng; Chen, Weizhong; Matula, Thomas J.

    2015-01-01

    Experiments were performed to size, count, and obtain shell parameters for individual ultrasound contrast microbubbles using a modified flow cytometer. Light scattering was modeled using Mie theory, and applied to calibration beads to calibrate the system. The size distribution and population were measured directly from the flow cytometer. The shell parameters (shear modulus and shear viscosity) were quantified at different acoustic pressures (from 95 to 333 kPa) by fitting microbubble response data to a bubble dynamics model. The size distribution of the contrast agent microbubbles is consistent with manufacturer specifications. The shell shear viscosity increases with increasing equilibrium microbubble size, and decreases with increasing shear rate. The observed trends are independent of driving pressure amplitude. The shell elasticity does not vary with microbubble size. The results suggest that a modified flow cytometer can be an effective tool to characterize the physical properties of microbubbles, including size distribution, population, and shell parameters. PMID:21622051

  14. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    NASA Astrophysics Data System (ADS)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  15. Voronoi cell patterns: Theoretical model and applications

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
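
    For the 1D case, the cell-size statistics are easy to reproduce directly: for a homogeneous (Poisson-like) point set, each Voronoi cell is the average of two independent exponential gaps, so the normalized sizes follow a Gamma(2) law with variance 1/2. A quick stdlib check (illustrative only, not the fragmentation model itself):

```python
import random

random.seed(2)

# 1D Voronoi cells of a homogeneous point set on a circle of length L:
# each cell runs between the midpoints to the two neighbouring points.
n, L = 20000, 20000.0            # unit density
pts = sorted(random.uniform(0.0, L) for _ in range(n))

cells = []
for i in range(n):
    left = pts[i - 1] if i > 0 else pts[-1] - L      # periodic boundary
    right = pts[i + 1] if i < n - 1 else pts[0] + L
    cells.append((right - left) / 2.0)               # midpoint-to-midpoint

mean = sum(cells) / n            # cells tile the circle, so mean = L/n
norm = [c / mean for c in cells]
var = sum((c - 1.0) ** 2 for c in norm) / n
# Poisson prediction: c/<c> ~ Gamma(k=2), so Var(c/<c>) = 1/2.
```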

  16. Voronoi Cell Patterns: theoretical model and application to submonolayer growth

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2012-02-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.

  17. A Physical Model to Estimate Snowfall over Land using AMSU-B Observations

    NASA Technical Reports Server (NTRS)

    Kim, Min-Jeong; Weinman, J. A.; Olson, W. S.; Chang, D.-E.; Skofronick-Jackson, G.; Wang, J. R.

    2008-01-01

    In this study, we present an improved physical model to retrieve snowfall rate over land using brightness temperature observations from the National Oceanic and Atmospheric Administration's (NOAA) Advanced Microwave Sounding Unit-B (AMSU-B) at 89 GHz, 150 GHz, 183.3 ± 1 GHz, 183.3 ± 3 GHz, and 183.3 ± 7 GHz. The retrieval model is applied to the New England blizzard of March 5, 2001, which deposited about 75 cm of snow over much of Vermont, New Hampshire, and northern New York. In this improved physical model, prior retrieval assumptions about snowflake shape, particle size distributions, environmental conditions, and optimization methodology have been updated. Here, single scattering parameters for snow particles are calculated with the Discrete-Dipole Approximation (DDA) method instead of assuming spherical shapes. Five different snow particle models (hexagonal columns, hexagonal plates, and three different kinds of aggregates) are considered. Snow particle size distributions are assumed to vary with air temperature and to follow aircraft measurements described by previous studies. Brightness temperatures at AMSU-B frequencies for the New England blizzard are calculated using these DDA-calculated single scattering parameters and particle size distributions. The vertical profiles of pressure, temperature, relative humidity and hydrometeors are provided by MM5 model simulations. These profiles are treated as the a priori database in the Bayesian retrieval algorithm. In algorithm applications to the blizzard data, calculated brightness temperatures associated with selected database profiles agree with AMSU-B observations to within about ±5 K at all five frequencies. Retrieved snowfall rates compare favorably with the near-concurrent National Weather Service (NWS) radar reflectivity measurements.
The relationships between the NWS radar-measured reflectivities Zₑ and the retrieved snowfall rate R for a given snow particle model are derived by a histogram matching technique. All of these Zₑ-R relationships fall within the range of previously established Zₑ-R relationships for snowfall. This suggests that the physical model developed in this study can reliably estimate the snowfall rate over land using the AMSU-B measured brightness temperatures.
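
    The histogram matching used to derive Zₑ-R relationships can be illustrated as quantile matching: pair the q-th quantile of the reflectivity sample with the q-th quantile of the retrieved snowfall-rate sample, then fit a power law. The synthetic data and the Zₑ = 100 R² form below are invented for the sketch:

```python
import math, random

random.seed(3)

# Toy data: snowfall rates R (mm/h) and an independent reflectivity
# sample drawn so that Ze = 100 * R^2 holds in distribution.
R_sorted = sorted(random.uniform(0.2, 3.0) for _ in range(5000))
Ze_sorted = sorted(100.0 * random.uniform(0.2, 3.0) ** 2
                   for _ in range(5000))

# Histogram (CDF) matching: pair equal quantiles, then fit Ze = a * R^b
# by least squares in log-log space.
x = [math.log(r) for r in R_sorted]
y = [math.log(z) for z in Ze_sorted]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = math.exp(my - b * mx)        # recovered coefficients of Ze = a * R^b
```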

  18. Application of a fully integrated surface-subsurface physically based flow model for evaluating groundwater recharge from a flash flood event

    NASA Astrophysics Data System (ADS)

    Pino, Cristian; Herrera, Paulo; Therrien, René

    2017-04-01

    In many arid regions around the world, groundwater recharge occurs during flash floods. This transient, spatially and temporally concentrated flood-recharge process takes place through the variably saturated zone between the surface and the usually deep groundwater table. These flood events are characterized by rapid and extreme changes in surface flow depth and velocity and in soil moisture conditions. Infiltration rates change over time, controlled by the hydraulic gradients and the unsaturated hydraulic conductivity at the surface-subsurface interface. Assessing the spatial and temporal distribution of groundwater recharge from flash flood events under real field conditions at different scales in arid areas remains a challenge. We apply an integrated surface-subsurface variably saturated physically based flow model at the watershed scale to assess the recharge process during and after a flash flood event registered in an arid fluvial valley in northern Chile. With a calibrated model, we are able to reproduce reasonably well the observed groundwater levels and surface flow discharges during and after the flood. We also investigate the magnitude and spatio-temporal distribution of recharge and the response of the system to variations in surface and subsurface parameters, initial soil moisture content, groundwater table depths, and surface flow conditions. We demonstrate how an integrated physically based model allows the exploration of different spatial and temporal system states, and how the analysis of the simulation results helps us improve our understanding of recharge processes in similar systems, which are common in arid areas around the world.
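
    As a much simpler stand-in for the variably saturated infiltration dynamics described here, a Green-Ampt sketch shows how infiltration capacity decays as cumulative infiltration F grows, so that a flashy, high-intensity supply quickly becomes capacity-limited. K, psi and dtheta are invented textbook-style values, not the calibrated model's parameters:

```python
# Green-Ampt infiltration capacity f = K * (1 + psi * dtheta / F): a
# sharp-front textbook model, not the integrated surface-subsurface
# model used in the study. All parameter values are illustrative.
K = 10.0        # saturated hydraulic conductivity (mm/h)
psi = 110.0     # wetting-front suction head (mm)
dtheta = 0.3    # moisture deficit (porosity - initial water content)

def infiltration(rate_mmh, hours, dt=0.01):
    """Cumulative infiltration (mm) under a constant water supply.

    Water infiltrates at the supply rate until that exceeds the
    Green-Ampt capacity, after which infiltration is capacity-limited.
    """
    F, t = 1e-6, 0.0
    while t < hours:
        capacity = K * (1.0 + psi * dtheta / F)
        F += min(rate_mmh, capacity) * dt
        t += dt
    return F

low = infiltration(5.0, 2.0)     # gentle supply: everything infiltrates
high = infiltration(80.0, 2.0)   # flash flood: becomes capacity-limited
```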

  19. Transforming for Distribution Based Logistics

    DTIC Science & Technology

    2005-05-26

    distribution process, and extracts elements of distribution and distribution management. Finally, characteristics of an effective Army distribution...eventually evolve into a Distribution Management Element. Each organization is examined based on its ability to provide centralized command, with an...distribution and distribution management that together form the distribution system. Clearly all of the physical distribution activities including

  20. Time scale variations of the physical parameters of the Si IV resonance lines in the case of the Be star HD 50138

    NASA Astrophysics Data System (ADS)

    Stathopoulos, D.

    2012-01-01

    As is well known, many lines in the spectra of hot emission stars (Be and Oe) present peculiar and very complex profiles. As a result, we cannot find a classical theoretical distribution to fit these profiles, and we are therefore unable to calculate the physical parameters of the regions where these lines are created. In this paper, using the Gauss-Rotation model (GR model; Danezis et al.), which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs), we study the UV Si IV (λλ 1393.755, 1402.77 Å) resonance lines of the Be star HD 50138 in three different periods. From this analysis we calculate the values of a group of physical parameters: the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM) and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral line. Finally, we calculate the time-scale variations of the above physical parameters.
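
    The decomposition idea — a complex profile built from independent absorption components, each with its own FWHM and absorbed energy — can be sketched numerically. The Gaussian component shape and the wavelengths of the "satellite" component below are illustrative stand-ins for the GR model's actual rotational kernel:

```python
import math

# A line profile as the product of independent absorption components,
# in the spirit of the DACs/SACs decomposition (schematic only).
def gaussian_component(lam, lam0, depth, sigma):
    return depth * math.exp(-0.5 * ((lam - lam0) / sigma) ** 2)

def profile(lam):
    # main component at the 1393.755 A line plus one blueshifted
    # satellite component (invented depths and widths)
    comps = [(1393.755, 0.6, 0.15), (1393.35, 0.3, 0.10)]
    flux = 1.0
    for lam0, depth, sigma in comps:
        flux *= 1.0 - gaussian_component(lam, lam0, depth, sigma)
    return flux

# Measure the FWHM of the main component numerically on a fine grid.
lam0, depth, sigma = 1393.755, 0.6, 0.15
half = 1.0 - depth / 2.0
grid = [lam0 - 1.0 + i * 1e-4 for i in range(20001)]
absorbed = [l for l in grid
            if 1.0 - gaussian_component(l, lam0, depth, sigma) <= half]
fwhm = max(absorbed) - min(absorbed)
# analytic check: FWHM = 2 * sqrt(2 ln 2) * sigma for a Gaussian
```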

  1. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  2. The computation of lipophilicities of ⁶⁴Cu PET systems based on a novel approach for fluctuating charges.

    PubMed

    Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger

    2013-08-21

    A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.

  3. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  4. Statistical mechanics in the context of special relativity. II.

    PubMed

    Kaniadakis, G

    2005-09-01

    The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
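
    The one-parameter deformation mentioned here is the κ-exponential, exp_κ(u) = (√(1+κ²u²) + κu)^(1/κ), whose tails decay as a power law |u|^(−1/κ) rather than exponentially, and which reduces to the ordinary exponential as κ → 0. A quick numerical check of both properties:

```python
import math

# kappa-exponential of Kaniadakis statistics
def exp_kappa(u, k):
    return (math.sqrt(1.0 + k * k * u * u) + k * u) ** (1.0 / k)

k = 0.5

# local log-log slope of the tail f(x) = exp_kappa(-x, k); should
# approach -1/k, i.e. a power-law tail with exponent 1/k
x1, x2 = 1e3, 1e4
slope = (math.log(exp_kappa(-x2, k)) - math.log(exp_kappa(-x1, k))) / \
        (math.log(x2) - math.log(x1))

# classical limit: exp_kappa -> exp as k -> 0
classical = exp_kappa(-2.0, 1e-6)
```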

  5. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. 
At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
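
    The simplest member of the family of stability models used in such studies, the infinite-slope factor of safety, makes the one-at-a-time sensitivity idea concrete; all parameter values below are illustrative, not the calibrated Laternser valley values:

```python
import math

# Infinite-slope factor of safety (textbook form); values illustrative.
def factor_of_safety(c, phi_deg, gamma=19.0, z=2.0, beta_deg=35.0, u=5.0):
    """c: effective cohesion (kPa), phi_deg: effective friction angle (deg),
    gamma: unit weight (kN/m^3), z: soil depth (m), beta_deg: slope angle,
    u: pore-water pressure (kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# One-at-a-time sensitivity: perturb each parameter by +10% and record
# the relative change in FS.
base = factor_of_safety(5.0, 30.0)
sens = {
    "cohesion": factor_of_safety(5.5, 30.0) / base - 1.0,
    "friction": factor_of_safety(5.0, 33.0) / base - 1.0,
}
```

    Repeating the perturbation for each parameter ranks their influence; the hydraulic parameters enter through the pore-water pressure term u.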

  6. A sensitivity analysis of a surface energy balance model to LAI (Leaf Area Index)

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Cannarozzo, M.; Capodici, F.; La Loggia, G.; Santangelo, T.

    2008-10-01

    The LAI is a key parameter in hydrological processes, especially in physically based distributed models. It is a critical ecosystem attribute, since physiological processes such as photosynthesis, transpiration and evaporation depend on it. The diffusion of water vapor, momentum, heat and light through the canopy is regulated by the distribution and density of the leaves, branches, twigs and stems. The LAI influences the sensible heat flux H in single-source surface energy balance models through the calculation of the roughness length and of the displacement height. The aerodynamic resistance between the soil and the within-canopy source height is a function of the LAI through the roughness length. In this research, a sensitivity analysis of some of the most important parameters of surface energy balance models to the time variation of LAI was carried out, in order to account for the effects of LAI variation over the phenological period. Finally, empirically retrieved relationships between field spectroradiometric data and field LAI measured with a light-sensitive instrument are presented for a cereal field.
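
    How the roughness length feeds into the aerodynamic resistance, and hence into H, can be sketched with the textbook log-profile formula. The LAI link is only gestured at (a denser canopy gets a larger z0), and the relations d = 2h/3 and z0 = 0.123h are standard illustrative choices, not the paper's parameterization:

```python
import math

# Aerodynamic resistance from the neutral log wind profile; all values
# (canopy height, wind speed, rho*cp) are illustrative.
def r_a(z0, h=1.0, z=10.0, u=3.0, k=0.41):
    d = 2.0 * h / 3.0                  # displacement height
    return math.log((z - d) / z0) ** 2 / (k ** 2 * u)

def sensible_heat(z0, dT=5.0, rho_cp=1200.0):
    """H = rho*cp*(Ts - Ta)/r_a, in W/m^2."""
    return rho_cp * dT / r_a(z0)

base = sensible_heat(0.123 * 1.0)      # z0 = 0.123 h for a 1 m canopy
denser = sensible_heat(0.15)           # rougher canopy (e.g. higher LAI)
```

    A larger z0 lowers r_a and raises H, which is the sensitivity pathway the abstract describes.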

  7. Occurrence of Toxic Cyanobacterial Blooms in Rio de la Plata Estuary, Argentina: Field Study and Data Analysis

    PubMed Central

    Giannuzzi, L.; Carvajal, G.; Corradini, M. G.; Araujo Andrade, C.; Echenique, R.; Andrinolo, D.

    2012-01-01

    Water samples were collected during 3 years (2004–2007) at three sampling sites in the Rio de la Plata estuary. Thirteen biological, physical, and chemical parameters were determined on the water samples. The presence of microcystin-LR in the reservoir samples, and also in domestic water samples, was confirmed and quantified. Microcystin-LR concentration ranged between 0.02 and 8.6 μg·L⁻¹. Principal components analysis was used to identify the factors promoting cyanobacteria growth. The proliferation of cyanobacteria was accompanied by high counts of total and fecal coliform bacteria (>1500 MPN/100 mL), temperature ≥25°C, and total phosphorus content ≥1.24 mg·L⁻¹. The observed fluctuating patterns of Microcystis aeruginosa, total coliforms, and microcystin-LR were also described by probabilistic models based on the log-normal and extreme value distributions. The sampling sites were compared in terms of the distribution parameters and the probability of observing high concentrations of Microcystis aeruginosa, total coliforms, and microcystin-LR. PMID:22523486
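
    Fitting a log-normal to concentration data and reading off an exceedance probability — the kind of probabilistic description applied here — can be sketched with synthetic data. The 1 μg·L⁻¹ threshold echoes the WHO guideline value for microcystin-LR in drinking water; all data below are invented:

```python
import math, random

random.seed(4)

# Synthetic toxin concentrations drawn from a log-normal
conc = [random.lognormvariate(-1.0, 1.2) for _ in range(5000)]

# Fit the log-normal by the moments of log-concentration
logs = [math.log(c) for c in conc]
mu = sum(logs) / len(logs)
sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))

def exceedance(threshold):
    """P(C > threshold) under the fitted log-normal."""
    z = (math.log(threshold) - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = exceedance(1.0)    # probability of exceeding 1 ug/L
```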

  8. Multiscale power analysis for heart rate variability

    NASA Astrophysics Data System (ADS)

    Zeng, Peng; Liu, Hongxing; Ni, Huangjing; Zhou, Jing; Xia, Lan; Ning, Xinbao

    2015-06-01

    We first introduce the multiscale power (MSP) method to assess the power distribution of physiological signals on multiple time scales. Simulations on synthetic data and experiments on heart rate variability (HRV) are presented to support the approach. Results show that both physical and psychological changes influence the power distribution significantly. A quantitative parameter, termed the power difference (PD), is introduced to evaluate the degree of power distribution alteration. We find that the dynamical correlation of HRV is destroyed completely when PD > 0.7.
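
    One simple reading of the MSP idea is to coarse-grain the series at each scale, take the variance of the coarse-grained series as the power, and compare normalized power-vs-scale distributions; the PD below is a total-variation-style distance and may differ from the paper's exact definition:

```python
import random

random.seed(5)

def multiscale_power(x, scales):
    """Variance of the coarse-grained series at each scale, normalised
    to a distribution over scales (an assumed reading of MSP)."""
    powers = []
    for s in scales:
        grains = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        m = sum(grains) / len(grains)
        powers.append(sum((g - m) ** 2 for g in grains) / len(grains))
    total = sum(powers)
    return [p / total for p in powers]

def power_difference(p, q):
    """Total-variation-style distance between two power distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

scales = [1, 2, 4, 8, 16]
white = [random.gauss(0, 1) for _ in range(20000)]   # uncorrelated signal
walk, v = [], 0.0                                    # strongly correlated
for _ in range(20000):
    v += random.gauss(0, 1)
    walk.append(v)

pd = power_difference(multiscale_power(white, scales),
                      multiscale_power(walk, scales))
```

    White noise concentrates power at the finest scale (variance falls as 1/s), while the random walk spreads it almost uniformly, so the two signals are clearly separated by PD.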

  9. Effects of Variable Thermal Conductivity and Non-linear Thermal Radiation Past an Eyring Powell Nanofluid Flow with Chemical Reaction

    NASA Astrophysics Data System (ADS)

    Ramzan, M.; Bilal, M.; Kanwal, Shamsa; Chung, Jae Dong

    2017-06-01

    The present analysis discusses the boundary layer flow of an Eyring-Powell nanofluid past a constantly moving surface under the influence of nonlinear thermal radiation. Heat and mass transfer mechanisms are examined under a physically suitable convective boundary condition. Effects of variable thermal conductivity and chemical reaction are also considered. Series solutions of all involved distributions are obtained using the Homotopy Analysis Method (HAM). Impacts of the dominant embedded flow parameters are discussed through graphical illustrations. It is observed that the thermal radiation parameter has an increasing effect on the temperature profile, whereas the chemical reaction parameter has a decreasing effect on the concentration distribution. Supported by the World Class 300 Project (No. S2367878) of the SMBA (Korea).

  10. Assessment of catchments' flooding potential: a physically-based analytical tool

    NASA Astrophysics Data System (ADS)

    Botter, G.; Basso, S.; Schirmer, M.

    2016-12-01

    The assessment of the flooding potential of river catchments is critical in many research and applied fields, ranging from river science and geomorphology to urban planning and the insurance industry. Predicting magnitude and frequency of floods is key to prevent and mitigate the negative effects of high flows, and has therefore long been the focus of hydrologic research. Here, the recurrence intervals of seasonal flow maxima are estimated through a novel physically-based analytic approach, which links the extremal distribution of streamflows to the stochastic dynamics of daily discharge. An analytical expression of the seasonal flood-frequency curve is provided, whose parameters embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which expresses catchment saturation prior to rainfall events, needs to be calibrated on the observed maxima. The method has been tested in a set of catchments featuring heterogeneous daily flow regimes. The model is able to reproduce characteristic shapes of flood-frequency curves emerging in erratic and persistent flow regimes and provides good estimates of seasonal flow maxima in different climatic regions. Performances are steady when the magnitude of events with return times longer than the available sample size is estimated. This makes the approach especially valuable for regions affected by data scarcity.
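
    The empirical counterpart of such a flood-frequency curve — recurrence intervals of seasonal maxima from Weibull plotting positions T = (n+1)/rank — can be computed in a few lines (synthetic maxima, purely illustrative):

```python
import random

random.seed(6)

# 40 seasons of synthetic seasonal flow maxima (m3/s)
maxima = sorted((random.expovariate(1.0 / 50.0) for _ in range(40)),
                reverse=True)

# Weibull plotting positions: empirical return period (in seasons) of
# the maximum of rank r is T = (n + 1) / r.
recurrence = [(len(maxima) + 1) / rank
              for rank in range(1, len(maxima) + 1)]

# pair each observed maximum with its empirical return period
curve = list(zip(recurrence, maxima))
```

    The analytic curve of the abstract would be compared against (and, for its one calibrated parameter, fitted to) exactly this kind of empirical curve.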

  11. Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition.

    PubMed

    Lippa, Katherine D; Feufel, Markus A; Robinson, F Eric; Shalin, Valerie L

    2017-06-01

    Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, making actionable decisions Based on the distribution of access to information and actions, we then identify four related patterns: physician dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggests that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.

  12. A Comparison of Filter-based Approaches for Model-based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai

    2012-01-01

    Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
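
    Of the three filters reviewed, the particle filter is the easiest to sketch end-to-end. Below is a minimal bootstrap particle filter doing joint state-parameter estimation on a toy scalar AR(1) system — a schematic stand-in, not the centrifugal pump model:

```python
import bisect, math, random

random.seed(7)

# Process model with unknown parameter theta; constant input 1.0 makes
# theta identifiable from the observed level (stationary mean 1/(1-theta)).
def step(x, theta):
    return theta * x + 1.0 + random.gauss(0.0, 0.3)

theta_true, sigma_obs = 0.9, 0.2

# simulate "observed" data
x_true, ys = 10.0, []
for _ in range(60):
    x_true = step(x_true, theta_true)
    ys.append(x_true + random.gauss(0.0, sigma_obs))

# joint (state, parameter) particles
N = 2000
particles = [(random.uniform(8.0, 12.0), random.uniform(0.5, 1.0))
             for _ in range(N)]

for y in ys:
    # propagate, with small parameter jitter to keep diversity
    particles = [(step(x, th), th + random.gauss(0.0, 0.002))
                 for x, th in particles]
    # weight each particle by the Gaussian observation likelihood
    w = [math.exp(-0.5 * ((y - x) / sigma_obs) ** 2) for x, _ in particles]
    total = sum(w)
    cum, acc = [], 0.0
    for wi in w:
        acc += wi / total
        cum.append(acc)
    # multinomial resampling from the cumulative weights
    particles = [particles[min(bisect.bisect_left(cum, random.random()),
                               N - 1)] for _ in range(N)]

theta_est = sum(th for _, th in particles) / N   # parameter estimate
x_est = sum(x for x, _ in particles) / N         # filtered state estimate
```

    For prognosis, the final (state, parameter) cloud would then be simulated forward without observations until each particle crosses a failure threshold, yielding a remaining-useful-life distribution.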

  13. Constraints on the near-Earth asteroid obliquity distribution from the Yarkovsky effect

    NASA Astrophysics Data System (ADS)

    Tardioli, C.; Farnocchia, D.; Rozitis, B.; Cotto-Figueroa, D.; Chesley, S. R.; Statler, T. S.; Vasile, M.

    2017-12-01

    Aims: From light curve and radar data we know the spin axis of only 43 near-Earth asteroids. In this paper we attempt to constrain the spin axis obliquity distribution of near-Earth asteroids by leveraging the Yarkovsky effect and its dependence on an asteroid's obliquity. Methods: By modeling the physical parameters driving the Yarkovsky effect, we solve an inverse problem where we test different simple parametric obliquity distributions. Each distribution results in a predicted Yarkovsky effect distribution that we compare with a χ2 test to a dataset of 125 Yarkovsky estimates. Results: We find different obliquity distributions that are statistically satisfactory. In particular, among the considered models, the best-fit solution is a quadratic function, which only depends on two parameters, favors extreme obliquities consistent with the expected outcomes from the YORP effect, has a 2:1 ratio between retrograde and direct rotators, which is in agreement with theoretical predictions, and is statistically consistent with the distribution of known spin axes of near-Earth asteroids.

  14. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work is to explore the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterization in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas in terms of the area under the ROC curve (AUCROC), and in terms of the fraction of the area affected by the release of landslides. Thereby we test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and to optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to the variation of the geotechnical parameterization within much of the tested ranges.
    Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more conservative than those yielded with the infinite slope stability model. The sensitivity of AUCROC to variations in the geohydraulic parameters remains small as long as the calculated degree of saturation of the soils is sufficient to result in the prediction of a significant amount of landslide release pixels. Due to the poor sensitivity of AUCROC to variations of the geotechnical and geohydraulic parameters, it is hard to optimize the parameters by means of statistics. Instead, the results produced with many different combinations of parameters correspond reasonably well with the distribution of the observed landslide release areas, even though they vary considerably in terms of their conservativeness. Considering the uncertainty inherent in all geotechnical and geohydraulic data, and the impossibility of capturing the spatial distribution of the parameters by means of laboratory tests in sufficient detail, we conclude that landslide susceptibility maps yielded by catchment-scale physically-based models should not be interpreted in absolute terms. Building on the assumption that our findings are generally valid, we suggest that efforts to develop better strategies for dealing with the uncertainties in the spatial variation of the key parameters should be given priority in future slope stability modelling efforts.
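The infinite-slope limb of such models reduces to a one-line factor-of-safety formula. The following is a textbook sketch, not the r.slope.stability or TRIGRS implementation; the default depth and unit weights are illustrative assumptions (the 2 m depth is motivated by the 1-3 m soils mentioned above).

```python
import math

def factor_of_safety(c_eff, phi_eff_deg, slope_deg, z=2.0,
                     gamma=19e3, gamma_w=9.81e3, m=1.0):
    """Infinite-slope factor of safety (textbook form, illustrative only).

    c_eff        effective cohesion [Pa]
    phi_eff_deg  effective internal friction angle [deg]
    slope_deg    slope angle [deg]
    z            slab thickness [m] (assumed 2 m here)
    gamma        unit weight of soil [N/m^3]
    gamma_w      unit weight of water [N/m^3]
    m            degree of saturation of the slab (1 = fully saturated)
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_eff_deg)
    # Shear strength: cohesion plus effective normal stress times tan(phi').
    resisting = c_eff + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    # Downslope component of the slab weight.
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

Sweeping `c_eff` and `phi_eff_deg` over their literature ranges and counting the fraction of combinations with a factor of safety below 1 gives exactly the susceptibility measure described above.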

  15. Disentangling rotational velocity distribution of stars

    NASA Astrophysics Data System (ADS)

    Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra

    2017-11-01

    Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; the measurable quantity is its projection vsin(i), the product of the rotational speed and the sine of the (generally unknown) inclination angle. The problem can be described by a Fredholm integral equation of the first kind. A new method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability density function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass-ratio distribution of extrasolar planets and brown dwarfs in binary systems (Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, with the possibility of ``disentangling'' simultaneously both the rotational speed distribution and the orientation of the rotational axes.
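The forward direction of this convolution is easy to simulate: for axes uniformly distributed over the sphere, the inclination has density p(i) = sin(i), and the expected projection factor is <sin i> = π/4. A sketch with an assumed (Rayleigh) speed distribution, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward model of the v sin(i) convolution: draw true rotation speeds v
# (an assumed Rayleigh distribution, for illustration only) and, for
# uniformly distributed spin axes, inclinations with p(i) = sin(i).
n = 200_000
v_true = rng.rayleigh(scale=150.0, size=n)        # km/s, illustrative
cos_i = rng.uniform(-1.0, 1.0, size=n)            # isotropic axes
sin_i = np.sqrt(1.0 - cos_i ** 2)
v_observed = v_true * sin_i                       # what spectroscopy sees

# For isotropic axes <sin i> = pi/4, so <v sin i> = (pi/4) <v>.
ratio = v_observed.mean() / v_true.mean()
```

The deconvolution methods cited above solve the much harder inverse direction: recovering the distribution of `v_true` from a sample of `v_observed` alone.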

  16. Impact of baryonic physics on intrinsic alignments

    DOE PAGES

    Tenneti, Ananth; Gnedin, Nickolay Y.; Feng, Yu

    2017-01-11

    We explore the effects of specific assumptions in the subgrid models of star formation and stellar and AGN feedback on intrinsic alignments of galaxies in cosmological simulations of the "MassiveBlack-II" family. Using smaller-volume simulations, we explored the parameter space of the subgrid star formation and feedback model and found remarkable robustness of the observable statistical measures to the details of subgrid physics. The one observational probe most sensitive to modeling details is the distribution of misalignment angles. We hypothesize that the amount of angular momentum carried away by the galactic wind is the primary physical quantity that controls the orientation of the stellar distribution. Finally, our results are also consistent with a similar study by the EAGLE simulation team.

  17. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    NASA Astrophysics Data System (ADS)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
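One ingredient of such an analysis, fitting a GEV with a linear time trend in the location parameter, can be sketched by maximum likelihood (the paper uses a Bayesian methodology; this simpler MLE sketch, with synthetic data, is only an illustration of the time-varying-parameter idea):

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def fit_nonstationary_gev(x, t):
    """Fit a GEV with mu(t) = mu0 + mu1 * t by maximum likelihood.
    Note scipy's shape convention: c = -xi relative to the usual GEV xi."""
    def nll(params):
        mu0, mu1, log_sigma, c = params
        return -genextreme.logpdf(x, c, loc=mu0 + mu1 * t,
                                  scale=np.exp(log_sigma)).sum()
    start = [x.mean(), 0.0, np.log(x.std()), 0.1]
    res = minimize(nll, start, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    mu0, mu1, log_sigma, c = res.x
    return mu0, mu1, np.exp(log_sigma), c

# Synthetic 60-year annual-maximum series with an upward location trend.
t = np.arange(60, dtype=float)
x = genextreme.rvs(0.1, loc=50.0 + 0.3 * t, scale=8.0,
                   size=t.size, random_state=rng)
mu0, mu1, sigma, c = fit_nonstationary_gev(x, t)
```

Comparing the maximized likelihood of this model against the stationary fit (mu1 fixed at 0), e.g. with a likelihood-ratio test, is one standard way to decide whether the nonstationary model is warranted.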

  18. The spectroscopic orbits and physical parameters of GG Carinae

    NASA Astrophysics Data System (ADS)

    Marchiano, P.; Brandi, E.; Muratore, M. F.; Quiroga, C.; Ferrer, O. E.; García, L. G.

    2012-04-01

    Aims: GG Car is an eclipsing binary classified as a B[e] supergiant star. The aims of our study are to improve the orbital elements of the binary system in order to obtain the actual orbital period of this system. We also compare the spectral energy distribution of the observed fluxes over a wide wavelength range with a model of a circumstellar envelope composed of gas and dust. This fitting allows us to derive the physical parameters of the system and its environment, as well as to obtain an estimate of the distance to GG Car. Methods: We analyzed about 55 optical and near-infrared spectrograms taken during 1996-2010. The spectroscopic orbits were obtained by measuring the radial velocities of the blueshifted absorptions of the He I P-Cygni profiles, which are very representative of the orbital motion of both stars. On the other hand, we modeled the spectral energy distribution of GG Car, proposing a simple model of a spherical envelope consisting of a layer close to the central star composed of ionized gas and other outermost layers composed of dust. Its effect on the spectral energy distribution considering a central B-type star is presented. Comparing the model with the observed continuum energy distribution of GG Car, we can derive fundamental parameters of the system, as well as global physical properties of the gas and dust envelope. It is also possible to estimate the distance by taking into account the spectral regions where the theoretical data fit the observational data very well, using the set of parameters obtained and the value of the observed flux at different wavelengths. Results: For the first time, we have determined the orbits for both components of the binary through a detailed study of the He I lines, at λλ4471, 5875, 6678, and 7065 Å, thereby obtaining an orbital period of 31.033 days. An eccentric orbit with e = 0.28 and a mass ratio q = 2.2 ± 0.9 were calculated.
    Comparing the model with the observed continuum energy distribution of GG Car, we obtain Teff = 23 000 K and log g = 3. The central star is surrounded by a spherical envelope consisting of a layer of 3.5 stellar radii composed of ionized gas and other outermost dust layers with E(B-V) = 0.39. These calculations are not strongly modified if we consider two similar B-type stars instead of a central star, since our model suggests that the second star might contribute less than 10% of the primary flux. The calculated effective temperature is consistent with a spectral type B0-B2, and a distance to the object of 5 ± 1 kpc was determined. Based on observations taken at Complejo Astronómico EL LEONCITO, operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina and the National Universities of La Plata, Córdoba, and San Juan.

  19. Three-Dimensional Electron Beam Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Shiu, Almon Sowchee

    The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations for radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm; one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate to use for the dose calculation in an inhomogeneous slab phantom, and (2) the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport by redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made.
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site specific treatment planning problems.

  20. Design of Magnetic Charged Particle Lens Using Analytical Potential Formula

    NASA Astrophysics Data System (ADS)

    Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.

    2018-05-01

    The aim of the current research was to use the potential of two cylindrical electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, Simulink models have been built to determine the distribution of the target function and the related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value, Bmax, and the half width, W, of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely the bore radius R.

  1. A Comprehensive Physical Impedance Model of Polymer Electrolyte Fuel Cell Cathodes in Oxygen-free Atmosphere.

    PubMed

    Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill

    2018-03-21

    Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, where all the equivalent-circuit parameters have a well-defined physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximating the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H 2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each of which has a clear physical meaning. It was used to investigate the cathode catalyst layer and the double layer capacitance at the interface between the ionomer/membrane and Pt-electrocatalyst. The model has demonstrated excellent results in fitting and interpretation of the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.
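Ingredient (ii), De Levie's transmission-line result for a single cylindrical pore with a purely capacitive (blocking, oxygen-free) wall, can be written down directly. The parameter values used below are illustrative assumptions, not the paper's fitted values, and this sketch omits the sulfonate-adsorption and H2-crossover terms of the full model.

```python
import numpy as np

def de_levie_pore_impedance(freq, r_ion, c_dl, length):
    """De Levie impedance of one cylindrical pore with a blocking wall:
        Z(w) = z0 * coth(gamma * L),
    with z0 = sqrt(r_ion / (j*w*c_dl)) and gamma = sqrt(j*w*r_ion*c_dl).

    freq    frequency [Hz] (scalar or array)
    r_ion   ionic resistance per unit pore length [Ohm/m]
    c_dl    double-layer capacitance per unit pore length [F/m]
    length  pore length [m]
    """
    w = 2.0 * np.pi * np.asarray(freq)
    gamma = np.sqrt(1j * w * r_ion * c_dl)     # propagation constant
    z0 = np.sqrt(r_ion / (1j * w * c_dl))      # characteristic impedance
    return z0 / np.tanh(gamma * length)        # coth = 1 / tanh
```

The limits reproduce the familiar EIS shapes: at low frequency Z → 1/(jωC_total) + R_ion,total/3 (a capacitive tail offset by one third of the pore resistance), and at high frequency Z → z0, the 45° transmission-line branch.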

  2. Physical Foundations of Plasma Microwave Sources Based on Anomalous Doppler Effect

    DTIC Science & Technology

    2007-09-17

    International Science and Technology Center (ISTC), Moscow. ISTC Project A-1512p (Registration No. A-1512p): Physical Foundations of Plasma Microwave Sources Based on Anomalous Doppler Effect. Reporting period ending 31-Aug-07. Approved for public release; distribution is unlimited.

  3. Modeling uncertainty and correlation in soil properties using Restricted Pairing and implications for ensemble-based hillslope-scale soil moisture and temperature estimation

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Entekhabi, D.; Bras, R. L.

    2007-12-01

    Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model, we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties.
Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
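The rank-correlation-preserving sampling can be sketched with an Iman-Conover-style reordering of Latin Hypercube columns. This is a generic stand-in for the RP algorithm, not the authors' exact implementation, and the two marginals (a log-normal conductivity and a normal porosity) are hypothetical choices for illustration.

```python
import numpy as np
from scipy.stats import lognorm, norm, rankdata

rng = np.random.default_rng(4)

def lh_with_rank_correlation(marginal_ppfs, target_corr, n):
    """Latin Hypercube sample with an imposed rank correlation, via an
    Iman-Conover-style reordering (a sketch of the restricted-pairing idea)."""
    k = len(marginal_ppfs)
    # Stratified (Latin Hypercube) uniforms: one stratum per row, per column.
    strata = np.tile(np.arange(n, dtype=float), (k, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, k))) / n
    # Correlated Gaussian scores carrying the target dependence structure.
    z = rng.standard_normal((n, k)) @ np.linalg.cholesky(target_corr).T
    # Reorder each LH column so its ranks match those of the scores.
    x = np.empty((n, k))
    for j in range(k):
        ranks = rankdata(z[:, j], method="ordinal").astype(int) - 1
        x[:, j] = marginal_ppfs[j](np.sort(u[:, j])[ranks])
    return x

# Hypothetical marginals: log-normal saturated conductivity [m/s] and
# normal porosity [-], negatively rank-correlated for illustration.
target = np.array([[1.0, -0.6],
                   [-0.6, 1.0]])
sample = lh_with_rank_correlation(
    [lambda q: lognorm.ppf(q, s=1.0, scale=1e-5),
     lambda q: norm.ppf(q, loc=0.4, scale=0.05)],
    target, n=1000)
```

Each row of `sample` is one physically plausible parameter combination; a scheme that shuffled the columns independently would preserve the marginals but destroy exactly the cross-property correlation whose neglect the abstract warns about.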

  4. Quasar Spectral Energy Distributions As A Function Of Physical Property

    NASA Astrophysics Data System (ADS)

    Townsend, Shonda; Ganguly, R.; Stark, M. A.; Derseweh, J. A.; Richmond, J. M.

    2012-05-01

    Galaxy evolution models have shown that quasars are a crucial ingredient in the evolution of massive galaxies. Outflows play a key role in the story of quasars and their host galaxies, by helping regulate the accretion process, the star-formation rate, and the mass of the host galaxy (i.e., feedback). The prescription for modeling outflows as a contributor to feedback requires knowledge of the outflow velocity, geometry, and column density. In particular, we need to understand how these depend on physical parameters and how much is determined stochastically (and with what distribution). In turn, models of outflows have shown particular sensitivity to the shape of the spectral energy distribution (SED), depending on the UV luminosity to transfer momentum to the gas, the X-ray luminosity to regulate how efficient that transfer can be, etc. To investigate how the SED changes with physical properties, we follow up on Richards et al. (2006), who constructed SEDs with varying luminosity. Here, we construct SEDs as a function of redshift and physical property (black hole mass, bolometric luminosity, Eddington ratio) for volume-limited samples drawn from the Sloan Digital Sky Survey, with photometry supplemented from 2MASS, WISE, GALEX, ROSAT, and Chandra. To estimate black hole masses, we adopt the scaling relations from Greene & Ho (2005) based on the H-alpha emission line FWHM. This requires redshifts less than 0.4. To construct volume-limited subsamples, we begin by adopting g=19.8 as a nominal limiting magnitude over which we are guaranteed to detect z<0.4 quasars. At redshift 0.4, we are complete down to Mg=-21.8, which yields 3300 objects from Data Release 7. At z=0.1, we are complete down to Mg=-18.5. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. 09-ADP09-0016 issued through the Astrophysics Data Analysis Program.

  5. Density-based global sensitivity analysis of sheet-flow travel time: Kinematic wave-based formulations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2018-04-01

    Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF is provided, including kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with the Darcy-Weisbach friction formula (K-D) over single and multiple planes. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of K-M considering zero-upstream and uniform flow depth (so-called K-M1 and K-M2), and the K-D formula to estimate tSHF over a single plane surface were assessed using several sets of experimental data collected from previous studies. The compatibility of the developed models to estimate tSHF over multiple planes considering the temporal rainfall distributions of the Natural Resources Conservation Service, NRCS (I, Ia, II, and III), is scrutinized by several real-world examples. The results obtained demonstrated that the main controlling parameters of tSHF through the K-D and K-M formulae are the length of the surface plane (mean sensitivity index T̂i = 0.72) and flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i of 0.11 and 0.12, respectively). The significant role of the flow regime in the estimation of tSHF over a single plane and a cascade of planes is also demonstrated. Results reveal that the K-D formulation provides more precise tSHF over the single plane surface with an average percentage of error, APE, equal to 9.23% (the APE for the K-M1 and K-M2 formulae were 13.8% and 36.33%, respectively).
    The superiority of the Darcy-Weisbach-based (K-D) formula in estimating tSHF is due to its incorporation of effects from different flow regimes as flow moves downgradient, which is affected by one or more factors including high excess rainfall intensities, low flow resistance, high degrees of imperviousness, long surfaces, steep slopes, and the domination of rainfall distributions of NRCS Type I, II, or III.
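The K-M travel time has a closed form in consistent SI units, obtained by combining the kinematic-wave equilibrium discharge with Manning's equation. This is a textbook sketch of the idea, not the exact K-M1/K-M2 formulations tested above, and the example numbers are illustrative.

```python
def sheet_flow_travel_time(L, n, S, i_e):
    """Kinematic-wave sheet-flow travel time with Manning roughness
    (consistent SI units; a textbook sketch).

    At equilibrium the outflow per unit width is q = i_e * L; Manning's
    equation gives q = h**(5/3) * sqrt(S) / n for the depth h; and the
    travel time is the time to accumulate that depth, t = h / i_e, i.e.

        t = n**0.6 * L**0.6 / (i_e**0.4 * S**0.3)    [seconds]

    L    plane length [m]
    n    Manning roughness [s m^(-1/3)]
    S    slope [m/m]
    i_e  excess rainfall intensity [m/s]
    """
    h = (n * i_e * L / S ** 0.5) ** 0.6   # equilibrium flow depth [m]
    return h / i_e
```

The exponents make the GSA result above plausible: travel time grows as L^0.6 and n^0.6 (the two dominant parameters), but only as i_e^-0.4.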

  6. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  7. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
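The basic PoD-curve construction can be sketched as a hit/miss logistic fit against log defect size, in the spirit of the common MIL-HDBK-1823 approach. This is a stand-in for the authors' Bayesian, simulation-assisted procedure, and the synthetic inspection data and notch sizes below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def fit_pod_hit_miss(sizes, hits):
    """Fit PoD(s) = logistic(a + b * ln s) to hit/miss inspection data
    by maximum likelihood; also return a90, the size with PoD = 0.90."""
    x = np.log(sizes)
    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log-likelihood
        return -(hits * np.log(p) + (1 - hits) * np.log(1 - p)).sum()
    a, b = minimize(nll, [0.0, 1.0], method="Nelder-Mead").x
    pod = lambda s: 1.0 / (1.0 + np.exp(-(a + b * np.log(s))))
    # PoD = 0.9  <=>  a + b ln s = ln(0.9/0.1) = ln 9.
    a90 = np.exp((np.log(9.0) - a) / b)
    return pod, a90

# Synthetic inspection trials: detection probability grows with notch depth.
sizes = rng.uniform(0.2, 5.0, 400)                       # depth [mm]
true_p = 1.0 / (1.0 + np.exp(-(-2.0 + 3.0 * np.log(sizes))))
hits = (rng.random(400) < true_p).astype(float)
pod, a90 = fit_pod_hit_miss(sizes, hits)
```

In the simulation-assisted setting, the hit/miss trials come from Finite Element runs rather than physical inspections, and the Bayesian treatment replaces the point estimates of `a` and `b` with posterior distributions, from which confidence bounds such as a90/95 follow.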

  8. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
    For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
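The linearized covariance analysis behind such statements can be sketched in a few lines: given a sensitivity (Jacobian) matrix J of the observations with respect to the parameters, the least-squares parameter covariance is approximately Cov(p) = σ² (JᵀJ)⁻¹. The two-parameter exponential model below is a generic stand-in, not the Henry-problem model.

```python
import numpy as np

def parameter_covariance(jacobian, obs_variance):
    """Linearized covariance of least-squares parameter estimates,
    Cov(p) = sigma^2 * (J^T J)^{-1}, from a sensitivity matrix."""
    return obs_variance * np.linalg.inv(jacobian.T @ jacobian)

# Toy two-parameter model y = p1 * exp(-p2 * x): analytic sensitivities
# evaluated at the estimate p = (2, 0.5) over 20 observation locations.
x = np.linspace(0.0, 4.0, 20)
p1, p2 = 2.0, 0.5
J = np.column_stack([np.exp(-p2 * x),             # dy/dp1
                     -p1 * x * np.exp(-p2 * x)])  # dy/dp2
cov = parameter_covariance(J, obs_variance=0.01)
std = np.sqrt(np.diag(cov))
corr = cov[0, 1] / (std[0] * std[1])
```

Repeating this with candidate observation locations and weights, and comparing the resulting variances and correlations, is precisely how measurement networks can be ranked before any field data are collected.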

  9. Theory, development, and applicability of the surface water hydrologic model CASC2D

    NASA Astrophysics Data System (ADS)

    Downer, Charles W.; Ogden, Fred L.; Martin, William D.; Harmon, Russell S.

    2002-02-01

    Numerical tests indicate that Hortonian runoff mechanisms benefit from scaling effects that non-Hortonian runoff mechanisms do not share. This potentially makes Hortonian watersheds more amenable to physically based modelling provided that the physically based model employed properly accounts for rainfall distribution and initial soil moisture conditions, to which these types of model are highly sensitive. The distributed Hortonian runoff model CASC2D has been developed and tested for the US Army over the past decade. The purpose of the model is to provide the Army with superior predictions of runoff and stream-flow compared with the standard lumped parameter model HEC-1. The model is also to be used to help minimize negative effects on the landscape caused by US armed forces training activities. Development of the CASC2D model is complete and the model has been tested and applied at several locations. These applications indicate that the model can realistically reproduce hydrographs when properly applied. These applications also indicate that there may be many situations where the model is inadequate. Because of this, the Army is pursuing development of a new model, GSSHA, that will provide improved numerical stability and incorporate additional stream-flow-producing mechanisms and improved hydraulics.

  10. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation-rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter that uses 63Ni isotope radiation. Using Monte Carlo methods from the GEANT4 toolkit with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function, and the optimal pore configuration was estimated.
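
The exponential approximation mentioned above can be illustrated with a toy fit; the depth profile and constants below are invented for illustration, not the GEANT4 results:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical generation-rate samples following G(z) = G0 * exp(-z/lam)
z = np.linspace(0.0, 5.0, 20)      # depth (illustrative units)
G0_true, lam_true = 1.0e3, 1.2     # assumed "true" values
G = G0_true * np.exp(-z / lam_true)

def expo(z, G0, lam):
    # Exponential approximation of the pair-generation depth profile
    return G0 * np.exp(-z / lam)

popt, _ = curve_fit(expo, z, G, p0=(500.0, 1.0))
G0_fit, lam_fit = popt
print(f"fitted G0={G0_fit:.1f}, lambda={lam_fit:.3f}")
```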

  11. Continuum-based DFN-consistent numerical framework for the simulation of oxygen infiltration into fractured crystalline rocks

    NASA Astrophysics Data System (ADS)

    Trinchero, Paolo; Puigdomenech, Ignasi; Molinero, Jorge; Ebrahimi, Hedieh; Gylling, Björn; Svensson, Urban; Bosbach, Dirk; Deissmann, Guido

    2017-05-01

    We present an enhanced continuum-based approach for modelling groundwater flow coupled with reactive transport in crystalline fractured rocks. In the proposed formulation, flow, transport, and geochemical parameters are mapped onto a numerical grid using parameters derived from a Discrete Fracture Network (DFN). The geochemical reactions are further constrained by field observations of mineral distribution. To illustrate how the approach can be used to include physical and geochemical complexity in reactive transport calculations, we analyse the potential ingress of oxygenated glacial meltwater into a heterogeneous fractured rock, using the Forsmark site (Sweden) as an example. The results of high-performance reactive transport calculations show that, after a rapid oxygen penetration, steady-state conditions are attained in which abiotic reactions (i.e. the dissolution of chlorite and the homogeneous oxidation of aqueous iron(II) ions) counterbalance advective oxygen fluxes. The results show that most of the chlorite becomes depleted in the highly conductive deformation zones, where higher mineral surface areas are available for reaction.
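
The counterbalance between advective supply and reactive consumption described above can be caricatured with a 1-D steady advection-reaction balance, v dC/dx = -kC, whose solution C(x) = C0 exp(-kx/v) has an e-folding penetration depth of v/k. The velocity, rate constant, and concentration below are illustrative, not Forsmark values:

```python
import numpy as np

v = 1.0e-6    # m/s, assumed advective velocity along a deformation zone
k = 1.0e-8    # 1/s, assumed effective first-order O2 consumption rate
C0 = 0.3      # mol/m^3, assumed inflow oxygen concentration

x = np.linspace(0.0, 500.0, 6)       # distance along the flow path, m
C = C0 * np.exp(-k * x / v)          # steady-state oxygen profile

penetration_depth = v / k            # e-folding depth of oxygen ingress
print("penetration depth [m]:", penetration_depth)
print("C(x):", C)
```

Faster consumption (larger k) or slower flow (smaller v) shortens the ingress, which is the steady state the abstract describes.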

  12. SU-E-P-05: Electronic Brachytherapy: A Physics Perspective On Field Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pai, S; Ayyalasomayajula, S; Lee, S

    2015-06-15

    Purpose: We summarize our experience implementing a successful electronic brachytherapy program at several dermatology clinics with the help of cloud-based software, in order to define the key program parameters and capture the physics QA aspects. Optimally developed software helps the physicist in peer review and in qualifying the physical parameters. Methods: Using the XOFT™ Axxent™ electronic brachytherapy system in conjunction with cloud-based software, a process was set up to capture and record treatments. It was implemented initially at about 10 sites in California. For dosimetric purposes, the software facilitated storage of the physics parameters of the surface applicators used in treatment and other source calibration parameters. In addition, the patient prescription, pathology, and other setup considerations were input by the radiation oncologist and the therapist. This facilitated physics planning of the treatment parameters and an independent check of the dwell time. From 2013–2014, nearly 1500 such calculations were completed by a group of physicists. A total of 800 patients with multiple lesions were treated successfully during this period. The treatment log files were uploaded and documented in the software, which facilitated physics peer review of treatments per the standards in place by the AAPM and ACR. Results: The program model was implemented successfully at multiple sites. The cloud-based software allowed for proper peer review and compliance of the program at 10 clinical sites. Dosimetry was done on 800 patients and executed in a timely fashion to suit clinical needs. Accumulated physics data in the software from the clinics allows for robust analysis and future development. Conclusion: The electronic brachytherapy implementation experience from a quality assurance perspective was greatly enhanced by using cloud-based software. The comprehensive database will pave the way for future developments to yield superior physics outcomes.

  13. Evaluating CONUS-Scale Runoff Simulation across the National Water Model WRF-Hydro Implementation to Disentangle Regional Controls on Streamflow Generation and Model Error Contribution

    NASA Astrophysics Data System (ADS)

    Dugger, A. L.; Rafieeinasab, A.; Gochis, D.; Yu, W.; McCreight, J. L.; Karsten, L. R.; Pan, L.; Zhang, Y.; Sampson, K. M.; Cosgrove, B.

    2016-12-01

    Evaluation of physically-based hydrologic models applied across large regions can provide insight into dominant controls on runoff generation and how these controls vary based on climatic, biological, and geophysical setting. To make this leap, however, we need to combine knowledge of regional forcing skill, model parameter and physics assumptions, and hydrologic theory. If we can successfully do this, we also gain information on how well our current approximations of these dominant physical processes are represented in continental-scale models. In this study, we apply this diagnostic approach to a 5-year retrospective implementation of the WRF-Hydro community model configured for the U.S. National Weather Service's National Water Model (NWM). The NWM is a water prediction model in operations over the contiguous U.S. as of summer 2016, providing real-time estimates and forecasts out to 30 days of streamflow across 2.7 million stream reaches as well as distributed snowpack, soil moisture, and evapotranspiration at 1-km resolution. The WRF-Hydro system permits not only the standard simulation of vertical energy and water fluxes common in continental-scale models, but augments these processes with lateral redistribution of surface and subsurface water, simple groundwater dynamics, and channel routing. We evaluate 5 years of NLDAS-2 precipitation forcing and WRF-Hydro streamflow and evapotranspiration simulation across the contiguous U.S. at a range of spatial (gage, basin, ecoregion) and temporal (hourly, daily, monthly) scales and look for consistencies and inconsistencies in performance in terms of bias, timing, and extremes. Leveraging results from other CONUS-scale hydrologic evaluation studies, we translate our performance metrics into a matrix of likely dominant process controls and error sources (forcings, parameter estimates, and model physics). 
We test our hypotheses in a series of controlled model experiments on a subset of representative basins from distinct "problem" environments (Southeast U.S. Coastal Plain, Central and Coastal Texas, Northern Plains, and Arid Southwest). The results from these longer-term model diagnostics will inform future improvements in forcing bias correction, parameter calibration, and physics developments in the National Water Model.
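
Two of the metrics typically used in such streamflow evaluations (the abstract names bias among them) can be computed in a few lines; the observed and simulated series below are invented:

```python
import numpy as np

# Illustrative daily streamflow series (m^3/s); values are made up.
obs = np.array([10.0, 12.0, 30.0, 55.0, 22.0, 15.0, 11.0])
sim = np.array([ 9.0, 14.0, 25.0, 60.0, 26.0, 14.0, 10.0])

# Percent bias: overall volume error (positive = overestimation).
pbias = 100.0 * (sim - obs).sum() / obs.sum()

# Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean.
nse = 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

print(f"PBIAS = {pbias:.1f}%  NSE = {nse:.3f}")
```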

  14. Practical device-independent quantum cryptography via entropy accumulation.

    PubMed

    Arnon-Friedman, Rotem; Dupuis, Frédéric; Fawzi, Omar; Renner, Renato; Vidick, Thomas

    2018-01-31

    Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed "entropy accumulation", which asserts that the total amount of entropy of a large system is the sum of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.

  15. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    NASA Astrophysics Data System (ADS)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  16. Dynamical evolution of differentiated asteroid families

    NASA Astrophysics Data System (ADS)

    Martins-Filho, W. S.; Carvano, J.; Mothe-Diniz, T.; Roig, F.

    2014-10-01

    The project aims to study the dynamical evolution of a family of asteroids formed from a fully differentiated parent body, considering family members with different physical properties consistent with what is expected from the break-up of a body formed by a metallic nucleus surrounded by a rocky mantle. Initially, we study the effects of variations in density, Bond albedo, and thermal inertia on the semi-major axis drift caused by the Yarkovsky effect. The Yarkovsky effect is a non-conservative force caused by the thermal re-radiation of solar radiation by an irregular body. In Solar System bodies, it is known to cause changes in orbital motions (Peterson, 1976), eventually bringing asteroids into transport routes to near-Earth space, such as some mean-motion resonances. We expressed the equations of variation of the semi-major axis directly in terms of physical properties (such as the mean motion, rotation frequency, conductivity, thermal parameter, specific heat, obliquity, and Bond albedo). This development was based on the original formalism for the Yarkovsky effect (i.e., Bottke et al., 2006 and references therein). The derivation of the above equations allowed us to study closely the variation of the semi-major axis individually for each physical parameter, clearly showing that the change in semi-major axis for silicate bodies is two to three times greater than for metallic bodies. The next step was to calculate the orbital elements of a synthetic family after the break-up. That was accomplished assuming that the catastrophic disruption energy is given by the formalism described by Stewart and Leinhardt (2009) and assuming an isotropic distribution of velocities for the fragments of the nucleus and the mantle. Finally, the orbital evolution of the fragments is computed using a symplectic integrator, and the result compared with the distribution of real asteroid families.

  17. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
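
A three-parameter log-normal fit of the kind used above can be sketched with `scipy.stats.lognorm`, whose shape, location, and scale arguments play the role of the three parameters; the synthetic "void radii" below are illustrative, not catalog data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "void radii" drawn from a three-parameter log-normal:
# shape (sigma), location (shift) and scale, as in the distribution above.
true_sigma, true_loc, true_scale = 0.5, 2.0, 10.0
radii = stats.lognorm.rvs(true_sigma, loc=true_loc, scale=true_scale,
                          size=5000, random_state=rng)

# Maximum-likelihood fit of all three parameters.
sigma_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)
print(f"shape={sigma_hat:.3f} loc={loc_hat:.3f} scale={scale_hat:.3f}")

# Skewness of the fitted distribution (one of the shape statistics the
# abstract relates to the maximum tree depth).
skew = stats.lognorm.stats(sigma_hat, loc=loc_hat, scale=scale_hat,
                           moments='s')
print("fitted skewness:", float(skew))
```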

  18. Renewal models and coseismic stress transfer in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Falcone, Giuseppe; Karakostas, Vassilis; Murru, Maura; Papadimitriou, Eleftheria; Rhoades, David

    2013-07-01

    We model interevent times and Coulomb static stress transfer on the rupture segments along the Corinth Gulf extension zone, a region with a wealth of observations on strong-earthquake recurrence behavior. From the available information on past seismic activity, we have identified eight segments without significant overlap that are aligned along the southern boundary of the Corinth rift. We aim to test whether strong earthquakes on these segments are characterized by some kind of time-predictable behavior, rather than by complete randomness. The rationale for time-predictable behavior is based on the characteristic earthquake hypothesis, the necessary ingredients of which are a known faulting geometry and slip rate. The tectonic loading rate is characterized by slip of 6 mm/yr on the westernmost fault segment, diminishing to 4 mm/yr on the easternmost segment, based on the most reliable geodetic data. In this study, we employ statistical and physical modeling to account for stress transfer among these fault segments. The statistical modeling is based on the definition of a probability density distribution of the interevent times for each segment. Both the Brownian Passage-Time (BPT) and Weibull distributions are tested. The time-dependent hazard rate thus obtained is then modified by the inclusion of a permanent physical effect due to the Coulomb static stress change caused by failure of neighboring faults since the latest characteristic earthquake on the fault of interest. The validity of the renewal model is assessed retrospectively, using the data of the last 300 years, by comparison with a plain time-independent Poisson model, by means of statistical tools including the Relative Operating Characteristic diagram, the R-score, the probability gain and the log-likelihood ratio.
We treat the uncertainties in the parameters of each examined fault source, such as linear dimensions, depth of the fault center, focal mechanism, recurrence time, coseismic slip, and aperiodicity of the statistical distribution, by a Monte Carlo technique. The Monte Carlo samples for all these parameters are drawn from a uniform distribution within their uncertainty limits. We find that the BPT and the Weibull renewal models yield comparable results, and both of them perform significantly better than the Poisson hypothesis. No clear performance enhancement is achieved by the introduction of the Coulomb static stress change into the renewal model.
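
The contrast between a BPT renewal model and a time-independent Poisson model can be sketched as follows. SciPy's inverse-Gaussian distribution is reparameterized into the (mean recurrence, aperiodicity) form used in earthquake forecasting; the numbers are illustrative, not Corinth Gulf values:

```python
import numpy as np
from scipy import stats

mu = 300.0     # mean recurrence interval, years (illustrative)
alpha = 0.5    # aperiodicity of the BPT distribution (illustrative)

# scipy's invgauss(mu_s, scale=s) has mean mu_s*s and squared coefficient
# of variation mu_s, so BPT(mu, alpha) maps to mu_s = alpha^2, s = mu/alpha^2.
bpt = stats.invgauss(alpha ** 2, scale=mu / alpha ** 2)

t = np.array([100.0, 300.0, 600.0])       # years elapsed since last event
hazard_bpt = bpt.pdf(t) / bpt.sf(t)       # time-dependent hazard rate
hazard_poisson = 1.0 / mu                 # memoryless reference model

print("BPT hazard:", hazard_bpt)
print("Poisson hazard:", hazard_poisson)
```

Early in the cycle the BPT hazard sits below the Poisson rate and late in the cycle above it, which is the time-predictable behavior the study tests for.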

  19. Deriving movement properties and the effect of the environment from the Brownian bridge movement model in monkeys and birds.

    PubMed

    Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P

    2015-01-01

    The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data, may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but acutely different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would have not been detected without using the BBMM. 
Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research.
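
The core of a Brownian bridge can be sketched in one dimension in a few lines; real BBMM implementations are two-dimensional and include location-error variance, which this toy omits, and all numbers are invented:

```python
import numpy as np

# Minimal 1-D Brownian bridge between two telemetry fixes.
a, b = 0.0, 100.0        # positions of consecutive fixes (metres)
T = 600.0                # seconds between the fixes
sigma_m2 = 2.0           # Brownian motion variance parameter (assumed)

def bridge_mean_var(t):
    """Mean and variance of the animal's position at time t in (0, T)."""
    frac = t / T
    mean = a + frac * (b - a)
    var = sigma_m2 * t * (T - t) / T    # maximal halfway between fixes
    return mean, var

mean_mid, var_mid = bridge_mean_var(T / 2.0)
print("midpoint mean, variance:", mean_mid, var_mid)

# A derived movement parameter of the kind discussed above:
# the mean speed implied by the bridge between the two fixes.
mean_speed = abs(b - a) / T
print("mean speed [m/s]:", mean_speed)
```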

  20. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas.
Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.
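
The property that makes such design possible, namely that the estimation variance of a kriging-type estimator depends only on the sample configuration and the covariance model, never on the measured values, can be demonstrated with simple kriging in one dimension; the covariance model and locations below are illustrative:

```python
import numpy as np

def cov(h, sill=1.0, rng_=10.0):
    """Assumed exponential covariance model."""
    return sill * np.exp(-np.abs(h) / rng_)

def kriging_variance(x_samples, x0):
    # Simple-kriging variance at target x0: sill - c0^T C^{-1} c0.
    # Note: no measured values appear anywhere in this computation.
    d = np.abs(x_samples[:, None] - x_samples[None, :])
    C = cov(d)                           # sample-to-sample covariances
    c0 = cov(np.abs(x_samples - x0))     # sample-to-target covariances
    return cov(0.0) - c0 @ np.linalg.solve(C, c0)

sparse = np.array([0.0, 20.0, 40.0])
dense = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
x0 = 12.0

var_sparse = kriging_variance(sparse, x0)
var_dense = kriging_variance(dense, x0)
print("sparse-network variance:", var_sparse)
print("dense-network variance:", var_dense)
```

Densifying the network lowers the estimation variance, which is exactly the index a network designer can optimize before any field data exist.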

  1. Analysing and correcting the differences between multi-source and multi-scale spatial remote sensing observations.

    PubMed

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among the analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models, or methods. These differences can mainly be described quantitatively from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, theories of the Gaussian distribution were selected to correct the multiple surface reflectance datasets based on the physical characteristics and mathematical distribution properties obtained above, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that the differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation.
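
Under the Gaussian assumption described above, correcting one dataset against the small-scale baseline reduces to matching its first two moments; a minimal sketch with synthetic reflectance samples (the numbers are invented, not the satellite data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reflectance samples from two sensors/scales.
fine = rng.normal(0.25, 0.03, size=1000)     # fine-scale baseline
coarse = rng.normal(0.30, 0.05, size=1000)   # biased, noisier observations

# Standardize the coarse data, then impose the baseline mean and spread.
corrected = (coarse - coarse.mean()) / coarse.std() * fine.std() + fine.mean()

print("baseline mean/std:", fine.mean(), fine.std())
print("corrected mean/std:", corrected.mean(), corrected.std())
```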

  2. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    PubMed Central

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among the analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models, or methods. These differences can mainly be described quantitatively from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, theories of the Gaussian distribution were selected to correct the multiple surface reflectance datasets based on the physical characteristics and mathematical distribution properties obtained above, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that the differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation. PMID:25405760

  3. A modified microdosimetric kinetic model for relative biological effectiveness calculation

    NASA Astrophysics Data System (ADS)

    Chen, Yizheng; Li, Junli; Li, Chunyan; Qiu, Rui; Wu, Zhen

    2018-01-01

    In heavy ion therapy, not only the distribution of the physical absorbed dose but also the relative biological effectiveness (RBE)-weighted dose needs to be taken into account. The microdosimetric kinetic model (MKM) can predict the RBE value of heavy ions from the saturation-corrected dose-mean specific energy, and has been used in clinical treatment planning at the National Institute of Radiological Sciences. In the theoretical assumptions of the MKM, the yield of the primary lesion is independent of radiation quality, while experimental data show that the DNA double-strand break (DSB) yield, considered the main primary lesion, depends on the LET of the particle. Moreover, as a result of this assumption the β parameter of the MKM is constant with LET, which also differs from the experimental findings. In this study, a modified MKM, named the MMKM, was developed. Based on the experimental DSB yields of mammalian cells under irradiation by ions with different LETs, an RBEDSB (RBE for the induction of DSB)-LET curve was fitted as a correction factor to modify the primary lesion yield in the MKM, so that the variation of the primary lesion yield with LET is considered in the MMKM. Compared with the present MKM, not only does the α parameter of the MMKM for mono-energetic ions agree with the experimental data, but the β parameter also varies with LET, and the trend of the experimental results can be reproduced on the whole. A spread-out Bragg peak (SOBP) distribution of physical dose was then simulated with the Geant4 Monte Carlo code, and the biological and clinical dose distributions under carbon-ion irradiation were calculated. The results show that the clinical dose distribution calculated with the MMKM is close to that of the MKM within the SOBP, while the discrepancies before and after the SOBP are both within 10%. Moreover, the MKM might overestimate the clinical dose at the distal end of the SOBP by more than 5% because of its constant β value, while a minimal β value is calculated with the MMKM at this position. In addition, the discrepancy in the averaged cell survival fraction within the SOBP calculated with the two models is more than 15% at high dose levels. The MMKM may provide a reference for the accurate calculation of RBE values in heavy ion therapy.
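
The linear-quadratic backbone of the MKM can be sketched as follows; the α₀, β, and specific-energy values are illustrative placeholders rather than the clinical constants, and the RBE_DSB correction introduced by the MMKM is not reproduced here:

```python
import numpy as np

# Minimal linear-quadratic sketch of the MKM dose response.
alpha0 = 0.13   # Gy^-1, alpha in the limit of zero specific energy (assumed)
beta = 0.05     # Gy^-2, taken constant with LET in the original MKM (assumed)
z_star = 2.5    # Gy, saturation-corrected dose-mean specific energy (assumed)

# In the MKM the LET dependence enters through z_star:
alpha = alpha0 + beta * z_star

D = np.linspace(0.0, 6.0, 4)               # physical dose, Gy
survival = np.exp(-(alpha * D + beta * D ** 2))

print("alpha =", alpha)
print("survival:", survival)
```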

  4. Estimation of interfacial heat transfer coefficient in inverse heat conduction problems based on artificial fish swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaowei; Li, Huiping; Li, Zhichao

    2018-04-01

    The interfacial heat transfer coefficient (IHTC) is one of the most important thermophysical parameters, with significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA), a global optimization method, was used to evaluate the IHTC between a heated sample and the quenchant in a one-dimensional heat conduction problem. In order to speed up convergence, a hybrid method combining AFSA with a normal-distribution method (ZAFSA) is presented. The IHTC values evaluated by ZAFSA were compared with those obtained by AFSA and by the advance-retreat and golden-section methods. The results show that a reasonable IHTC is obtained using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA not only accelerates convergence but also reduces numerical oscillation in the evaluation of the IHTC.
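
A full AFSA is beyond a short example, but the hybrid idea of a coarse global search refined by normal-distribution sampling around the incumbent can be sketched on a toy inverse problem. The lumped-capacitance forward model and every number below are invented stand-ins, not the authors' 1-D conduction setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: T(t) = Tq + (T0 - Tq) * exp(-h*t/C), with C lumping
# mass, specific heat and area. We recover h from "measured" temperatures.
T0, Tq, C = 850.0, 25.0, 4.0e4
t = np.linspace(1.0, 60.0, 30)

def forward(h):
    return Tq + (T0 - Tq) * np.exp(-h * t / C)

h_true = 1530.0                     # W/(m^2 K), value to be recovered
T_meas = forward(h_true)

def sse(h):
    return ((forward(h) - T_meas) ** 2).sum()

# Stage 1: coarse global sweep (stands in for the swarm's exploration).
candidates = np.linspace(100.0, 5000.0, 50)
h_best = candidates[np.argmin([sse(h) for h in candidates])]

# Stage 2: normal-distribution refinement around the incumbent, with a
# shrinking spread (loosely, the "Z" part of the hybrid).
for spread in (200.0, 50.0, 10.0, 2.0):
    trials = rng.normal(h_best, spread, size=200)
    trials = trials[trials > 0]
    h_new = trials[np.argmin([sse(h) for h in trials])]
    if sse(h_new) < sse(h_best):
        h_best = h_new

print("recovered h =", h_best)
```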

  5. Measurement and analysis of electron-neutral collision frequency in the calibrated cutoff probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, K. H.; Seo, B. H.; Kim, J. H.

    2016-03-15

    As collisions between electrons and neutral particles constitute one of the most representative physical phenomena in weakly ionized plasma, the electron-neutral (e-n) collision frequency is a very important plasma parameter for understanding the physics of this medium. In this paper, we measured the e-n collision frequency in the plasma using a calibrated cutoff probe. A highly accurate reactance spectrum of the plasma/cutoff-probe system, as expected from previous cutoff-probe circuit simulations [Kim et al., Appl. Phys. Lett. 99, 131502 (2011)], is obtained using the calibrated cutoff-probe method, and the e-n collision frequency is calculated based on the cutoff-probe circuit model together with the high-frequency conductance model. The e-n collision frequency measured by the calibrated cutoff probe is compared and analyzed with that obtained using a Langmuir probe, the latter being calculated from the measured electron-energy distribution functions, over a wide range of gas pressures.

  6. Application of identified sensitive physical parameters in reducing the uncertainty of numerical simulation

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2016-04-01

    An important source of uncertainty in numerical simulations, which then propagates into their results, resides in the parameters describing physical processes in numerical models. There are many physical parameters in the numerical models used in the atmospheric and oceanic sciences, and it would cost a great deal to reduce the uncertainties in all of them. Therefore, finding the subset of relatively more sensitive and important parameters, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameters (CNOP-P) method. The approach provides a framework to identify the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamic global vegetation model was used to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in the arid and semi-arid regions of China, compared with northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors in the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identifying relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
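
The CNOP-P idea, finding the parameter perturbation within an amplitude constraint that maximally disturbs the simulation, can be sketched on a toy two-parameter model with an interaction term; everything below is illustrative and has nothing to do with the LPJ model:

```python
import numpy as np

# Toy "model" with a nonlinear parameter interaction (invented).
def model(p1, p2):
    return np.tanh(p1) + 0.2 * p2 + 1.5 * p1 * p2

p_ref = np.array([0.5, 0.3])     # reference parameter values (invented)
y_ref = model(*p_ref)
delta = 0.1                      # amplitude constraint on the perturbation

# Brute-force search over the constraint circle; a real CNOP-P solver
# would use constrained nonlinear optimization instead.
theta = np.linspace(0.0, 2.0 * np.pi, 721)
pert = delta * np.stack([np.cos(theta), np.sin(theta)], axis=1)
departures = np.abs(
    model(p_ref[0] + pert[:, 0], p_ref[1] + pert[:, 1]) - y_ref)
worst = pert[np.argmax(departures)]

print("CNOP-P perturbation:", worst, "departure:", departures.max())
```

The direction of the worst-case perturbation points at the sensitive parameter combination; because of the interaction term, both components contribute jointly rather than independently.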

  7. QCD Precision Measurements and Structure Function Extraction at a High Statistics, High Energy Neutrino Scattering Experiment:. NuSOnG

    NASA Astrophysics Data System (ADS)

    Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.

    We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.

  8. Foundations for statistical-physical precipitation retrieval from passive microwave satellite measurements. I - Brightness-temperature properties of a time-dependent cloud-radiation model

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu

    1992-01-01

    The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.

  9. Radon decay products in realistic living rooms and their activity distributions in human respiratory system.

    PubMed

    Mohery, M; Abdallah, A M; Baz, S S; Al-Amoudi, Z M

    2014-12-01

    In this study, the individual activity concentrations of attached short-lived radon decay products ((218)Po, (214)Pb and (214)Po) in aerosol particles were measured in ten poorly ventilated realistic living rooms. Using standard methodologies, the samples were collected using a filter-holder technique coupled to an alpha spectrometer. The mean air activity concentrations of these radionuclides were found to be 5.3±0.8, 4.5±0.5 and 3.9±0.4 Bq m(-3), respectively. Based on the physical properties of the attached decay products and physiological parameters of light work activity for an adult human male recommended by ICRP 66, and considering the parameters of activity size distribution (AMD = 0.25 μm and σ(g) = 2.5) given by NRC, the total and regional deposition fractions in each airway generation could be evaluated. Moreover, the total and regional equivalent doses in the human respiratory tract could be estimated. In addition, the surface activity distribution per generation is calculated for the bronchial region (BB) and the bronchiolar region (bb) of the respiratory system. The maximum values of these activities were found in the upper bronchial airway generations.

  10. New shape models of asteroids reconstructed from sparse-in-time photometry

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna

    2015-08-01

    Asteroid physical parameters - the shape, the sidereal rotation period, and the spin axis orientation - can be reconstructed from disk-integrated photometry, either dense (classical lightcurves) or sparse in time, by the lightcurve inversion method. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be efficiently solved by splitting the period parameter space into small parts that are sent to computers of volunteers and processed in parallel. We will show how this approach of distributed computing works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show the results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of the spin obliquity on the size of an asteroid, and show examples of spin-axis distribution in asteroid families that confirm the Yarkovsky/YORP evolution scenario.
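    The period-space splitting described above can be sketched as follows; the function name and numbers are hypothetical, not the actual Asteroids@home work-unit generator:

```python
def period_work_units(p_min, p_max, n_units):
    """Split the trial-period interval [p_min, p_max] into n_units
    contiguous chunks that independent volunteer hosts can scan in
    parallel; each chunk is an independent work unit."""
    step = (p_max - p_min) / n_units
    return [(p_min + i * step, p_min + (i + 1) * step)
            for i in range(n_units)]

# e.g. scan trial periods between 2 and 100 hours in 49 work units
units = period_work_units(2.0, 100.0, 49)
```

Because each trial-period chunk is scanned independently, the search parallelizes with essentially no coordination between hosts.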

  11. A new approach to interpretation of heterogeneity of fluorescence decay in complex biological systems

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Jakub; Kierdaszuk, Borys

    2005-08-01

    Decays of tyrosine fluorescence in protein-ligand complexes are described by a model of continuous distribution of fluorescence lifetimes. The resulting analytical power-like decay function provides good fits to highly complex fluorescence kinetics. Moreover, it is a manifestation of the so-called Tsallis q-exponential function, which is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied to the analysis of fluorescence decays of tyrosine in a protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli (the product of the deoD gene), free in aqueous solution and in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate). The power-like function provides new information about enzyme-ligand complex formation based on a physically justified heterogeneity parameter directly related to the lifetime distribution. A measure of the heterogeneity parameter in the enzyme systems is provided by the variance of the fluorescence lifetime distribution. The possible number of deactivation channels and the excited-state mean lifetime can be easily derived without a priori knowledge of the complexity of the studied system. Moreover, the proposed model is simpler than the traditional multi-exponential one and better describes the heterogeneous nature of the studied systems.
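    A hedged sketch of the power-like (Tsallis q-exponential) decay, showing that it reduces to a single exponential as q approaches 1; the parameter values are illustrative, not fitted to the tyrosine data:

```python
import numpy as np

def q_exponential_decay(t, i0, tau, q):
    """Tsallis q-exponential (power-like) decay
    i(t) = i0 * [1 + (q - 1) t / tau]^(1 / (1 - q));
    for q -> 1 it reduces to the ordinary i0 * exp(-t / tau)."""
    if abs(q - 1.0) < 1e-12:
        return i0 * np.exp(-t / tau)
    return i0 * (1.0 + (q - 1.0) * t / tau) ** (1.0 / (1.0 - q))

t = np.linspace(0.0, 10.0, 5)
mono = q_exponential_decay(t, 1.0, 2.0, 1.0)   # plain exponential
heavy = q_exponential_decay(t, 1.0, 2.0, 1.3)  # power-like, heavier tail
```

The single heterogeneity parameter q thus replaces the growing list of amplitudes and lifetimes a multi-exponential fit would need.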

  12. Global Aerosol Remote Sensing from MODIS

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kaufman, Yoram J.; Remer, Lorraine A.; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Martins, Jose V.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The physical characteristics, composition, abundance, spatial distribution and dynamics of global aerosols are still very poorly known, and new data from satellite sensors have long been awaited to improve current understanding and to give a boost to the effort in future climate predictions. The derivation of aerosol parameters from the MODerate resolution Imaging Spectro-radiometer (MODIS) sensors aboard the Earth Observing System (EOS) Terra and Aqua polar-orbiting satellites ushers in a new era in aerosol remote sensing from space. Terra and Aqua were launched on December 18, 1999 and May 4, 2002 respectively, with daytime equator crossing times of approximately 10:30 am and 1:30 pm respectively. Several aerosol parameters are retrieved at 10-km spatial resolution (level 2) from MODIS daytime data. The MODIS aerosol algorithm employs different approaches to retrieve parameters over land and ocean surfaces, because of the inherent differences in the solar spectral radiance interaction with these surfaces. The parameters retrieved include: aerosol optical thickness (AOT) at 0.47, 0.55 and 0.66 micron wavelengths over land, and at 0.47, 0.55, 0.66, 0.87, 1.2, 1.6, and 2.1 micron over ocean; Angstrom exponent over land and ocean; and effective radii, and the proportion of AOT contributed by the small mode aerosols over ocean. To ensure the quality of these parameters, a substantial part of the Terra-MODIS aerosol products were validated globally and regionally, based on cross correlation with corresponding parameters derived from ground-based measurements from AERONET (AErosol RObotic NETwork) sun photometers. Similar validation efforts are planned for the Aqua-MODIS aerosol products. The MODIS level 2 aerosol products are operationally aggregated to generate global daily, eight-day (weekly), and monthly products at one-degree spatial resolution (level 3). 
MODIS aerosol data are used for the detailed study of local, regional, and global aerosol concentration, distribution, and temporal dynamics, as well as for radiative forcing calculations. We show several examples of these results and comparisons with model output.

  13. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
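    The distribution's convenience can be shown in a short sketch: the CDF and hazard of the exponentiated exponential are in closed form even for non-integer shape, and a conditional recurrence probability follows directly. Parameter values below are invented, not the MLE/MOME estimates from the catalogue:

```python
import math

def gexp_cdf(t, alpha, lam, mu):
    """CDF of the three-parameter generalized (exponentiated)
    exponential distribution: F(t) = (1 - exp(-lam (t - mu)))^alpha."""
    if t <= mu:
        return 0.0
    return (1.0 - math.exp(-lam * (t - mu))) ** alpha

def gexp_hazard(t, alpha, lam, mu):
    """Hazard h(t) = f(t) / (1 - F(t)); closed-form even for
    non-integer shape alpha, unlike the gamma distribution."""
    z = math.exp(-lam * (t - mu))
    f = alpha * lam * z * (1.0 - z) ** (alpha - 1.0)
    return f / (1.0 - gexp_cdf(t, alpha, lam, mu))

# Conditional probability of an event within the next 10 years, given
# an elapsed time of 17 years (toy parameter values):
alpha, lam, mu = 1.5, 0.12, 0.0
p = ((gexp_cdf(27.0, alpha, lam, mu) - gexp_cdf(17.0, alpha, lam, mu))
     / (1.0 - gexp_cdf(17.0, alpha, lam, mu)))
```
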

  14. Studies of the 3D surface roughness height

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avisane, Anita; Rudzitis, Janis; Kumermanis, Maris

    2013-12-16

    Nowadays nano-coatings occupy a more and more significant place in technology. Innovative, functional coatings acquire new aspects from the point of view of modern technologies, considering the aggregate of physical properties that can be achieved by manipulating the properties of coatings' surfaces on the micro- and nano-level in the production process. Nano-coatings are applied on machine parts, friction surfaces, contacting parts, corrosion surfaces, transparent conducting films (TCF), etc. The equipment available at present for the production of transparent conducting oxide (TCO) coatings with the highest quality is based on expensive indium tin oxide (ITO) material; therefore cheaper alternatives are being searched for. One such alternative is zinc oxide (ZnO) nano-coatings. Evaluating the TCF physical and mechanical properties, and in view of the new ISO standard (EN ISO 25178) on the introduction of surface texture (3D surface roughness) in engineering calculations, it is necessary to examine the height of 3D surface roughness, which is one of the most significant roughness parameters. The given paper studies the average values of 3D surface roughness height under the most often applied distribution laws, namely the normal distribution and the Rayleigh distribution. The 3D surface is simulated by a normal random field.

  15. Soil mechanics: breaking ground.

    PubMed

    Einav, Itai

    2007-12-15

    In soil mechanics, 'student's models' are classified as simple models that teach us unexplained elements of behaviour; an example is the Cam clay constitutive model of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends; this is usually done by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to consumption of energy, which enables us to predict how grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model physically clarifies what CSSM cannot for sand: the dependency of yielding and critical state on the initial gsd and void ratio.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Surajit; Ladpli, Purim; Chang, Fu-Kuo

    Accurate interpretation of in-situ piezoelectric sensor signals is a challenging task. This article presents the development of a numerical compensation model based on physical insight to address the influence of structural loads on piezo-sensor signals. The model requires knowledge of in-situ strain and temperature distribution in a structure while acquiring sensor signals. The parameters of the numerical model are obtained using experiments on flat aluminum plate under uniaxial tensile loading. It is shown that the model parameters obtained experimentally can be used for different structures, and sensor layout. Furthermore, the combined effects of load and temperature on the piezo-sensor response are also investigated and it is observed that both of these factors have a coupled effect on the sensor signals. It is proposed to obtain compensation model parameters under a range of operating temperatures to address this coupling effect. An important outcome of this study is a new load monitoring concept using in-situ piezoelectric sensor signals to track changes in the load paths in a structure.

  17. Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System

    NASA Technical Reports Server (NTRS)

    Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.

    2013-01-01

    Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.

  18. Controls on the variability of net infiltration to desert sandstone

    USGS Publications Warehouse

    Heilweil, Victor M.; McKinney, Tim S.; Zhdanov, Michael S.; Watt, Dennis E.

    2007-01-01

    As populations grow in arid climates and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration becomes critically important for accurately inventorying water resources and mapping contamination vulnerability. This paper presents a conceptual model of net infiltration to desert sandstone and then develops an empirical equation for its spatial quantification at the watershed scale using linear least squares inversion methods for evaluating controlling parameters (independent variables) based on estimated net infiltration rates (dependent variables). Net infiltration rates used for this regression analysis were calculated from environmental tracers in boreholes and more than 3000 linear meters of vadose zone excavations in an upland basin in southwestern Utah underlain by Navajo sandstone. Soil coarseness, distance to upgradient outcrop, and topographic slope were shown to be the primary physical parameters controlling the spatial variability of net infiltration. Although the method should be transferable to other desert sandstone settings for determining the relative spatial distribution of net infiltration, further study is needed to evaluate the effects of other potential parameters such as slope aspect, outcrop parameters, and climate on absolute net infiltration rates.
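    The linear least-squares inversion step can be sketched as below; the site values are fabricated placeholders, not the Utah data:

```python
import numpy as np

# Hypothetical observations: net infiltration (mm/yr, the dependent
# variable) at five sites, with the candidate controlling parameters
# as columns: soil coarseness index, distance to upgradient outcrop
# (m), and topographic slope (%).
X = np.array([
    [0.8, 120.0, 3.0],
    [0.5, 300.0, 8.0],
    [0.9,  60.0, 2.0],
    [0.3, 450.0, 12.0],
    [0.7, 150.0, 5.0],
])
y = np.array([22.0, 9.0, 28.0, 3.0, 17.0])

# Append an intercept column and solve the linear least-squares
# inversion for the empirical regression coefficients.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coeffs
```

The fitted coefficients quantify how much each physical control contributes, which is what allows the equation to be transferred to other desert sandstone settings for relative mapping.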

  19. Influence of the bracket on bonding and physical behavior of orthodontic resin cements.

    PubMed

    Bolaños-Carmona, Victoria; Zein, Bilal; Menéndez-Núñez, Mario; Sánchez-Sánchez, Purificación; Ceballos-García, Laura; González-López, Santiago

    2015-01-01

    The aim of the study is to determine the influence of the type of bracket on the bond strength, microhardness and conversion degree (CD) of four orthodontic resin cements. A micro-tensile bond strength (µTBS) test between the bracket base and the cement was carried out on hourglass-shaped specimens (n=20). Vickers Hardness Number (VHN) and micro-Raman spectra were recorded in situ under the bracket base. Weibull distribution, ANOVA and non-parametric tests were applied for data analysis (p<0.05). The highest values of η as well as of the β Weibull parameter were obtained for metallic brackets, with Transbond™ plastic brackets combined with the self-curing cement showing the worst performance. The CD ranged from 80% to 62.5%.
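    A common way to obtain Weibull parameters of the kind reported here is linear regression on median-rank plotting positions; the sketch below uses invented strength values, not the study's measurements:

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate the Weibull modulus (beta) and characteristic strength
    (eta) of a sample by linear regression of ln(-ln(1 - F)) on
    ln(sigma), using median-rank plotting positions."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    ranks = np.arange(1, n + 1)
    f = (ranks - 0.3) / (n + 0.4)          # median-rank estimate of F
    x = np.log(s)
    y = np.log(-np.log(1.0 - f))
    beta, intercept = np.polyfit(x, y, 1)  # slope is the modulus
    eta = np.exp(-intercept / beta)        # 63.2nd-percentile strength
    return beta, eta

# Hypothetical bond-strength sample (MPa):
beta, eta = weibull_fit([18.2, 21.5, 15.4, 24.1, 19.8, 22.7, 17.3])
```

A higher modulus beta indicates less scatter in bond strength, which is why the Weibull parameters, not just the mean, are compared across bracket types.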

  20. A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks

    PubMed Central

    Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.

    2017-01-01

    This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results for the false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented for different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
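    The combination of per-layer trust metrics might look like the following sketch; the equal weighting and the detection threshold are assumptions for illustration, not the paper's actual rule:

```python
def overall_trust(phy, mac, net, weights=(1/3, 1/3, 1/3)):
    """Combine per-layer trust values (each in [0, 1]) into one overall
    metric by a weighted average; equal weights are an assumption."""
    w_phy, w_mac, w_net = weights
    return w_phy * phy + w_mac * mac + w_net * net

def is_intruder(trust, threshold=0.5):
    # Flag a node whose combined trust falls below a detection
    # threshold (hypothetical value).
    return trust < threshold

t = overall_trust(0.9, 0.8, 0.7)
```

Keeping the layers separate until the final combination is what lets an attack that only disturbs, say, MAC-layer parameters still drag down the overall metric.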

  1. A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks.

    PubMed

    Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O

    2017-05-27

    This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results for the false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented for different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false probability, demonstrating its usefulness for detecting cross-layer attacks.

  2. Radiation Parameters of High Dose Rate Iridium -192 Sources

    NASA Astrophysics Data System (ADS)

    Podgorsak, Matthew B.

    A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.
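    The sensitivity of dose delivery to the half-life value can be sketched directly from the decay law; the 73.83-day figure is a commonly quoted Ir-192 half-life, and the shifted value is an invented perturbation, not the thesis's measurement:

```python
def decayed_activity(a0, t_days, half_life_days):
    # Exponential source decay: A(t) = A0 * 2^(-t / T_half).
    return a0 * 2.0 ** (-t_days / half_life_days)

# Illustrative: the relative activity (and hence dose-rate) error
# 30 days after source calibration caused by a small error in the
# assumed half-life.
a_nominal = decayed_activity(10.0, 30.0, 73.83)  # nominal T_half (d)
a_shifted = decayed_activity(10.0, 30.0, 74.2)   # perturbed T_half
relative_error = abs(a_shifted - a_nominal) / a_nominal
```

The error grows with elapsed time since calibration, which is why even a sub-percent half-life uncertainty matters for sources used over many weeks.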

  3. Determination of material distribution in heading process of small bimetallic bar

    NASA Astrophysics Data System (ADS)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. This creates new conditions in the riveting process, because a bimetallic object is riveted. In the analyzed example it is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interaction of Plastic Deformations, and a method for its determination are proposed. The parameter is determined based on two-parameter stress-strain curves and is a function of these parameters and of the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to guide the selection of a pair of materials to achieve the desired distribution.

  4. Time scale variation of NV resonance line profiles of HD203064

    NASA Astrophysics Data System (ADS)

    Strantzalis, A.

    2012-01-01

    Hot emission stars, such as Be and Oe, present many spectral lines with very complex and peculiar profiles. Therefore, we cannot fit a classical distribution to those physical line profiles, and many physical parameters of the regions where the spectral lines are created are difficult to estimate. Here, in this poster paper, we study the UV NV (λλ 1238.821, 1242.804 Å) resonance lines of the Be star HD 203064 at three different dates. We use the Gauss-Rotation model, which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs). Our purpose is to calculate the values of a group of physical parameters, such as the apparent rotational, radial, and random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM), the column density, and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral lines. In addition, we determine the time-scale variations of the above physical parameters.

  5. Time scale variation of MgII resonance lines of HD 41335 in UV region

    NASA Astrophysics Data System (ADS)

    Nikolaou, I.

    2012-01-01

    It is known that hot emission stars (Be and Oe) present peculiar and very complex spectral line profiles. Because of these perplexing lines, it is difficult to fit a classical distribution to the physical profiles, and therefore many physical parameters of the regions where these lines are created cannot be determined. In this paper, we study the Ultraviolet (UV) MgII (λλ 2795.523, 2802.698 Å) resonance lines of the star HD 41335 at three different periods. Considering that these profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs), we use the Gauss-Rotation model (GR-model). From this analysis we can estimate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM), the column density and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral lines. Eventually, we calculate the time-scale variations of the above physical parameters.

  6. Neutron diffraction measurements and micromechanical modelling of temperature-dependent variations in TATB lattice parameters

    DOE PAGES

    Yeager, John D.; Luscher, Darby J.; Vogel, Sven C.; ...

    2016-02-02

    Triaminotrinitrobenzene (TATB) is a highly anisotropic molecular crystal used in several plastic-bonded explosive (PBX) formulations. TATB-based explosives exhibit irreversible volume expansion ("ratchet growth") when thermally cycled. A theoretical understanding of the relationship between the anisotropy of the crystal, the crystal orientation distribution (texture) of polycrystalline aggregates, and the intergranular interactions leading to this irreversible growth is necessary to accurately develop physics-based predictive models for TATB-based PBXs in various thermal environments. In this work, TATB lattice parameters were measured using neutron diffraction during thermal cycling of loose powder and a pressed pellet. The measured lattice parameters help clarify conflicting reports in the literature, as these new results are more consistent with one set of previous results than another. The lattice parameters of pressed TATB were also measured as a function of temperature, showing some differences from the powder. These data are used along with anisotropic single-crystal stiffness moduli reported in the literature to model the nominal stresses associated with intergranular constraints during thermal expansion. The texture of both specimens was characterized, and the pressed pellet exhibits preferential orientation of (001) poles along the pressing direction, whereas no preferred orientation was found for the loose powder. Lastly, thermal strains for single-crystal TATB computed from the lattice parameter data for the powder are input to a self-consistent micromechanical model, which predicts the lattice parameters of the constrained TATB crystals within the pellet. The agreement of these model results with the diffraction data obtained from the pellet is discussed along with future directions of research.
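    Lattice-parameter data of this kind convert to thermal strains in one line; the numbers below are invented to illustrate the a- versus c-axis anisotropy of a layered crystal like TATB, not the measured values:

```python
def thermal_strain(p_t, p_ref):
    """Thermal strain of one lattice parameter relative to its value
    at a reference temperature: eps = (p(T) - p_ref) / p_ref."""
    return (p_t - p_ref) / p_ref

# Anisotropy: in a layered crystal the weakly bonded c-axis expands
# far more than the in-plane a-axis (illustrative numbers, Angstroms).
eps_a = thermal_strain(9.012, 9.010)
eps_c = thermal_strain(6.870, 6.812)
```

These single-crystal strains are the quantities fed into the self-consistent micromechanical model, which then accounts for how neighboring grains constrain each other.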

  7. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
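    The acceptance/rejection step that the jump moves generalize is the classic Metropolis-Hastings rule; a minimal 1-D random-walk sketch (not the jump-diffusion algorithm itself, which also changes dimension and diffuses along the posterior gradient):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior
    given by its log density (up to an additive constant)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)  # symmetric proposal
        lp_cand = log_post(cand)
        # Accept with probability min(1, p(cand) / p(x)).
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Sample a standard-normal "posterior" as a sanity check.
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because only the log-density difference enters the acceptance test, the posterior never needs to be normalized, which is what makes the method practical for the scene posteriors described above.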

  8. Study of the physical properties of Ge-S-Ga glassy alloy

    NASA Astrophysics Data System (ADS)

    Rana, Anjli; Sharma, Raman

    2018-05-01

    In the present work, we have studied the effect of Ga doping on the physical properties of the Ge20S80-xGax glassy alloy. The basic physical parameters which play an important role in determining the structure and strength of the material, viz. average coordination number, lone-pair electrons, mean bond energy, glass transition temperature, electronegativity, bond distribution probabilities and cohesive energy, have been computed theoretically for the Ge-S-Ga glassy alloy. Here, the glass transition temperature and mean bond energy have been investigated using the Tichy-Ticha approach. The cohesive energy has been calculated using the chemical bond approach (CBA) method. It has been found that while the average coordination number increases, all the other parameters decrease with the increase in Ga content in the Ge-S-Ga system.
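
    For illustration, the average coordination number of Ge20S80-xGax can be computed from the atomic fractions, assuming the coordination numbers commonly assigned in chalcogenide-glass studies (Ge = 4, S = 2, Ga = 3; some authors use 4 for Ga, so treat that value as an assumption):

```python
def avg_coordination(x, n_ge=4, n_s=2, n_ga=3):
    """Average coordination number <r> of Ge20 S(80-x) Ga(x) in at.%.
    Coordination numbers are common chalcogenide-glass assignments
    (Ge = 4, S = 2, Ga = 3); the Ga value is an assumption here."""
    return (20 * n_ge + (80 - x) * n_s + x * n_ga) / 100.0

for x in (0, 5, 10, 15):
    print(f"x = {x:2d}  <r> = {avg_coordination(x):.3f}")
```

Consistent with the abstract, <r> rises (from 2.4 at x = 0 to 2.5 at x = 10) as three-coordinated Ga replaces two-coordinated S.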

  9. Comparison of Radiation Pressure Perturbations on Rocket Bodies and Debris at Geosynchronous Earth Orbit

    DTIC Science & Technology

    2014-09-01

    has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper ... seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in ... orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a

  10. The geomagnetically trapped radiation environment: A radiological point of view

    NASA Technical Reports Server (NTRS)

    Holly, F. E.

    1972-01-01

    The regions of naturally occurring, geomagnetically trapped radiation are briefly reviewed in terms of physical parameters such as particle types, fluxes, spectra, and spatial distributions. The major emphasis is placed upon a description of this environment in terms of the radiobiologically relevant parameters of absorbed dose and dose-rate and a discussion of the radiological implications in terms of the possible impact on space vehicle design and mission planning.

  11. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated from failure data, and represents an approximation of the cumulative distribution function (CDF) for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
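
    A minimal sketch of the EDF-based idea: compute the Kolmogorov-Smirnov distance between the empirical distribution function and a candidate three-parameter Weibull CDF, then search for parameters that minimize it. The failure data are synthetic, and only the shape parameter is searched here for brevity (the paper minimizes over all three parameters with Powell's method):

```python
import math

def weibull_cdf(x, shape, scale, loc=0.0):
    """CDF of the three-parameter Weibull distribution."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-(((x - loc) / scale) ** shape))

def ks_statistic(data, shape, scale, loc=0.0):
    """Kolmogorov-Smirnov distance between the EDF of `data` and the
    candidate Weibull CDF: the goodness-of-fit measure to be minimized."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = weibull_cdf(x, shape, scale, loc)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Synthetic failure data; crude 1-D grid search over the shape parameter
# with scale and location held fixed (purely for illustration).
data = [0.61, 0.84, 0.97, 1.10, 1.25, 1.42, 1.63, 1.90]
best_d, best_shape = min(
    (ks_statistic(data, m, 1.3), m) for m in [1.0 + 0.1 * k for k in range(40)]
)
print(f"minimum D = {best_d:.4f} at shape = {best_shape:.1f}")
```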

  12. The notion of snow grain shape: Ambiguous definitions, retrievalfrom tomography and implications on remote sensing

    NASA Astrophysics Data System (ADS)

    Krol, Q. E.; Loewe, H.

    2016-12-01

    Grain shape is known to influence the effective physical properties of snow and is therefore included in the international classification of seasonal snow. Accordingly, snowpack models account for phenomenological shape parameters (sphericity, dendricity) to capture shape variations. These parameters are however difficult to validate due to the lack of clear-cut definitions from the 3D microstructure and insufficient links to physical properties. While the definition of traditional shape was tailored to the requirements of observers, a more objective definition should be tailored to the requirements of physical properties, by analyzing geometrical (shape) corrections in existing theoretical formulations directly. To this end we revisited the autocorrelation function (ACF) and the chord length distribution (CLD) of snow. Both functions capture size distributions of the microstructure, can be calculated from X-ray tomography and are related to various physical properties. Both functions involve the optical equivalent diameter as the dominant quantity; however, the respective higher-order geometrical corrections differ. We have analyzed these corrections, namely interfacial curvatures for the ACF and the second moment for the CLD, using an existing data set of 165 tomography samples. To unify the notion of shape, we derived various statistical relations between the length scales. Our analysis bears three key practical implications. First, we derived a significantly improved relation between the exponential correlation length and the optical diameter by taking curvatures into account. This adds to the understanding of linking the "microwave grain size" and "optical grain size" of snow for remote sensing. Second, we retrieve the optical shape parameter (commonly referred to as B) from tomography images via the second moment of the CLD. Third, shape variations seen by observers do not necessarily correspond to shape variations probed by physical properties.

  13. An approach for modelling snowcover ablation and snowmelt runoff in cold region environments

    NASA Astrophysics Data System (ADS)

    Dornes, Pablo Fernando

    Reliable hydrological model simulations are the result of numerous complex interactions among hydrological inputs, landscape properties, and initial conditions. Determination of the effects of these factors is one of the main challenges in hydrological modelling. This situation becomes even more difficult in cold regions due to the ungauged nature of subarctic and arctic environments. This research work is an attempt to apply a new approach for modelling snowcover ablation and snowmelt runoff in complex subarctic environments with limited data while retaining integrity in the process representations. The modelling strategy is based on the incorporation of both detailed process understanding and inputs along with information gained from observations of basin-wide streamflow phenomena; essentially a combination of deductive and inductive approaches. The study was conducted in the Wolf Creek Research Basin, Yukon Territory, using three models: a small-scale physically based hydrological model, a land surface scheme, and a land surface hydrological model. The spatial representation was based on previous research studies and observations, and was accomplished by incorporating landscape units, defined according to topography and vegetation, as the spatial model elements. Comparisons between distributed and aggregated modelling approaches showed that simulations incorporating distributed initial snowcover and corrected solar radiation were able to properly simulate snowcover ablation and snowmelt runoff, whereas the aggregated modelling approaches were unable to represent the differential snowmelt rates and complex snowmelt runoff dynamics. Similarly, the inclusion of spatially distributed information in a land surface scheme clearly improved simulations of snowcover ablation.
    Application of the same modelling approach at a larger scale using the same landscape-based parameterisation showed satisfactory results in simulating snowcover ablation and snowmelt runoff with minimal calibration. Verification of this approach in an arctic basin illustrated that landscape-based parameters are a feasible regionalisation framework for distributed and physically based models. In summary, the proposed modelling philosophy, based on the combination of inductive and deductive reasoning, is a suitable strategy for reliable predictions of snowcover ablation and snowmelt runoff in cold regions and complex environments.

  14. Modification of Gaussian mixture models for data classification in high energy physics

    NASA Astrophysics Data System (ADS)

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role for the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
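
    The training phase described above amounts to fitting a Gaussian mixture by Expectation-Maximization. A self-contained one-dimensional, two-component sketch on toy data (not collider events):

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """Two-component 1D Gaussian mixture fitted by Expectation-Maximization,
    the estimation step underlying the Model Based Clustering method."""
    mu = [min(data), max(data)]          # simple deterministic initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: component responsibilities via Bayes' rule
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                 / math.sqrt(2.0 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

# Toy "signal" and "background" samples
random.seed(1)
data = ([random.gauss(-2.0, 0.5) for _ in range(200)]
        + [random.gauss(3.0, 0.8) for _ in range(200)])
w, mu, var = em_gmm_1d(data)
```

In the testing phase, the fitted component densities and weights give class posteriors via Bayes' rule for each new event.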

  15. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability.

    PubMed

    van Albada, S J; Robinson, P A

    2007-04-15

    Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
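
    A simple rank-based variant of such a normalizing transformation maps each observation through the empirical CDF and then through the inverse normal CDF. This sketch assumes a continuous sample without ties (tied values would need averaged ranks), and is an illustration of the general idea rather than the paper's exact construction:

```python
from statistics import NormalDist

def to_normal(values):
    """Map a continuous sample onto the standard normal through its empirical
    CDF: x -> Phi^{-1}(rank / (n + 1)). Assumes no tied values."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

# A strongly right-skewed (highly non-normal) sample
skewed = [x ** 3 for x in range(1, 100)]
z = to_normal(skewed)
```

The transform is monotone, so it preserves the ordering of observations while forcing the marginal distribution toward the Gaussian.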

  16. Etalon (standard) for surface potential distribution produced by electric activity of the heart.

    PubMed

    Szathmáry, V; Ruttkay-Nedecký, I

    1981-01-01

    The authors submit etalon (standard) equipotential maps as an aid in the evaluation of maps of surface potential distributions in living subjects. They were obtained by measuring potentials on the surface of an electrolytic tank shaped like the thorax. The individual etalon maps were determined in such a way that the parameters of the physical dipole forming the source of the electric field in the tank corresponded to the mean vectorcardiographic parameters measured in a healthy population sample. The technique also allows a quantitative estimate of the degree of non-dipolarity of the heart as the source of the electric field.

  17. Distributed dual-parameter optical fiber sensor based on cascaded microfiber Fabry-Pérot interferometers

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Luo, Yiyang; Zhang, Wei; Liu, Deming; Sun, Qizhen

    2017-04-01

    We propose and demonstrate a distributed fiber sensor based on cascaded microfiber Fabry-Pérot interferometers (MFPIs) for simultaneous measurement of surrounding refractive index (SRI) and temperature. By employing an MFPI fabricated by taper-drawing the center of a uniform fiber Bragg grating (FBG) on standard fiber into a section of microfiber, dual parameters, namely SRI and temperature, can be detected by demodulating the reflection spectrum of the MFPI. Further, wavelength-division multiplexing (WDM) is applied to realize a distributed dual-parameter fiber sensor by using cascaded MFPIs with different Bragg wavelengths. A prototype sensor system with 5 cascaded MFPIs is constructed to experimentally demonstrate the sensing performance.

  18. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  19. Chemically reactive species in squeezed flow through modified Fourier's and Fick's laws

    NASA Astrophysics Data System (ADS)

    Farooq, M.; Ahmad, S.; Javed, M.; Anjum, Aisha

    2018-02-01

    The squeezing flow of a Newtonian fluid with variable viscosity over a stretchable sheet embedded in a Darcy porous medium is addressed. Cattaneo-Christov double diffusion models are adopted to disclose the salient features of heat and mass transport via variable thermal conductivity and variable mass diffusivity instead of the conventional Fourier's and Fick's laws. Further, a heat generation/absorption coefficient and a first-order chemical reaction are also imposed to illustrate the characteristics of heat and mass transfer. The highly nonlinear equations are developed in dimensionless form and analyzed via the homotopic technique. The variations of the velocity, concentration, and temperature distributions with the flow parameters are sketched and interpreted physically. The results show that both the concentration and temperature distributions decay for higher solutal and thermal relaxation parameters, respectively. Moreover, a higher chemical reaction parameter results in a reduction of the concentration field, whereas the temperature profile is enhanced for a higher heat generation/absorption parameter.

  20. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are increasingly used as the primary tool for watershed planning and management due to advances in computational power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin, multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) modelling system across various hydroclimatic conditions in Canada, including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing the computational burden of calibration, and reducing prediction uncertainty.
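
    The variogram idea behind VARS can be illustrated in stripped-down form: for a given lag h along one parameter axis, compute half the mean squared change in model response over random base points. This toy sketch shows only that central quantity, not the VARS algorithm itself:

```python
import random

def directional_variogram(model, idx, h, bounds, n=200, seed=0):
    """Sketch of a directional response-surface variogram: half the mean
    squared change in model output when parameter `idx` is perturbed by
    lag h, averaged over random base points drawn inside `bounds`."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        xp = list(x)
        xp[idx] += h
        acc += (model(xp) - model(x)) ** 2
    return 0.5 * acc / n

# Toy model: the response depends strongly on p0 and weakly on p1,
# so the variogram along p0 should dominate at any lag.
model = lambda p: 10.0 * p[0] + 0.1 * p[1]
bounds = [(0.0, 1.0), (0.0, 1.0)]
g0 = directional_variogram(model, 0, 0.1, bounds)
g1 = directional_variogram(model, 1, 0.1, bounds)
```

Sweeping h then yields the spectrum of scale-dependent sensitivity information that VARS exploits.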

  1. A new approach to identify the sensitivity and importance of physical parameters combination within numerical models using the Lund-Potsdam-Jena (LPJ) model as an example

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2017-05-01

    An important source of uncertainty, which causes further uncertainty in numerical simulations, resides in the parameters describing physical processes in numerical models. Therefore, identifying a subset of relatively more sensitive and important parameters among the numerous physical parameters in atmospheric and oceanic models, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also makes it viable to apply "target observations" to reduce the uncertainties in model parameters.

  2. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
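
    The gamma prior is conjugate to the Poisson likelihood, so the Bayes estimator has a closed form, and the mean-squared-error comparison can be reproduced by Monte Carlo. A sketch with illustrative settings (here the prior happens to be centered on the true rate, which favors the Bayes estimator, in line with the paper's finding):

```python
import math
import random

def bayes_poisson_mean(xs, a=2.0, b=1.0):
    """Posterior mean of a Poisson rate under a Gamma(a, b) prior (shape a,
    rate b): conjugacy gives the posterior Gamma(a + sum(x), b + n)."""
    return (a + sum(xs)) / (b + len(xs))

def mle_poisson_mean(xs):
    """Maximum likelihood (and minimum variance unbiased) estimator."""
    return sum(xs) / len(xs)

def poisson_draw(lam, rng):
    """Knuth's multiplication method; adequate for small rates."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Monte Carlo mean-squared-error comparison at an assumed true rate of 2.0
rng = random.Random(42)
true_lam, n, trials = 2.0, 5, 2000
se_bayes = se_mle = 0.0
for _ in range(trials):
    xs = [poisson_draw(true_lam, rng) for _ in range(n)]
    se_bayes += (bayes_poisson_mean(xs) - true_lam) ** 2
    se_mle += (mle_poisson_mean(xs) - true_lam) ** 2
print(f"MSE(Bayes) = {se_bayes / trials:.3f}, MSE(MLE) = {se_mle / trials:.3f}")
```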

  3. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.

  4. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and inaccurate estimates of model parameters may lead to misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is captured by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to the seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  5. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution were illustrated with an uncensored data set, and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
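
    The information criteria used for such comparisons follow directly from the maximized log-likelihood and the parameter count. A sketch with purely illustrative log-likelihood values (not the paper's rainfall results):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (smaller is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (smaller is better)."""
    return k * math.log(n) - 2 * loglik

# Hypothetical maximized log-likelihoods for two competing fits on n = 100
# observations; the extra EETE parameter must improve the fit enough to
# offset its complexity penalty.
n = 100
ll_ete, k_ete = -262.4, 2
ll_eete, k_eete = -255.1, 3

print("AIC:", aic(ll_ete, k_ete), "vs", aic(ll_eete, k_eete))
print("BIC:", bic(ll_ete, k_ete, n), "vs", bic(ll_eete, k_eete, n))
```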

  6. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE PAGES

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  9. Physics of vascular brachytherapy.

    PubMed

    Jani, S K

    1999-08-01

    Basic physics plays an important role in understanding the clinical utility of radioisotopes in brachytherapy. Vascular brachytherapy is a unique application of localized radiation in that dose levels very close to the source are employed to treat tissues within the arterial wall. This article covers the basic physics of radioactivity and differentiates between beta and gamma radiation. Physical parameters such as activity, half-life, exposure and absorbed dose are explained. Finally, the dose distribution around a point source and a linear source is described. The principles of basic physics are likely to play an important role in shaping the emerging technology and its application in vascular brachytherapy.

  10. A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range

    PubMed Central

    Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.

    2017-01-01

    Abstract A new paradigm of simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km2). The algorithms were chosen from among widely used sediment models, including empirically based: monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based: the Load Estimator (LOADEST); conceptually based: the Hydrologic Simulation Program—Fortran (HSPF); and physically based: the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated from the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm and optimized parameter sets were validated over an excluded period, as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance showed consistent decreases when parameter sets were applied to periods with greatly differing SSL variability relative to the calibration period. Of interest was a joint calibration of all sediment algorithm and streamflow parameters simultaneously, from which trade‐offs between streamflow performance and partitioning of runoff and base flow to optimize SSL timing were noted, decreasing the flexibility and robustness of the streamflow to adapt to different time periods. Parameter transferability to another catchment was most successful in more process‐oriented algorithms, the HSPF and the DHSVM. This first‐of‐its‐kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade‐offs and uncertainties across a range of algorithm structures. PMID:29399268
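
    The simplest of the five algorithms, the monovariate rating curve SSL = a * Q^b, is typically fitted by least squares in log-log space. A minimal sketch on synthetic data (the coefficients are illustrative, not calibrated values from the study):

```python
import math

def fit_rating_curve(q, ssl):
    """Fit the monovariate rating curve SSL = a * Q**b by ordinary least
    squares in log-log space (log SSL = log a + b log Q)."""
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in ssl]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic, noise-free data generated from SSL = 0.5 * Q^1.8
q = [1.0, 2.0, 5.0, 10.0, 20.0]
ssl = [0.5 * v ** 1.8 for v in q]
a, b = fit_rating_curve(q, ssl)
print(f"a = {a:.3f}, b = {b:.3f}")
```

The more process-oriented algorithms in the comparison (HSPF, DHSVM) replace this single power law with physically based erosion and transport equations.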

  11. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution of the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainties that characterize the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
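    The spectral idea can be illustrated with a toy one-dimensional sketch (not the authors' code): approximate a smooth function by a truncated Chebyshev series and check the accuracy of the resulting semi-analytic surrogate. The test function and degree are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate u(x) = exp(-x) * sin(3x) on [-1, 1] by a truncated
# Chebyshev series; the test function and degree are arbitrary.
def chebyshev_fit(f, degree, n_nodes=64):
    # Chebyshev-Gauss nodes give well-conditioned least-squares fits.
    x = np.cos(np.pi * (np.arange(n_nodes) + 0.5) / n_nodes)
    return C.chebfit(x, f(x), degree)

f = lambda x: np.exp(-x) * np.sin(3.0 * x)
coeffs = chebyshev_fit(f, degree=15)

# The coefficient vector is a semi-analytic surrogate: it can be
# evaluated anywhere in the domain without re-solving anything.
x_test = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(x_test, coeffs) - f(x_test)))
print(f"max abs error of degree-15 surrogate: {err:.2e}")
```

    Once such coefficients are obtained (in the paper, for the solution's dependence on an input parameter), sweeping the parameter costs only a polynomial evaluation rather than a new simulation.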

  12. Application of the Junge- and Pankow-equation for estimating indoor gas/particle distribution and exposure to SVOCs

    NASA Astrophysics Data System (ADS)

    Salthammer, Tunga; Schripp, Tobias

    2015-04-01

    In the indoor environment, the distribution and dynamics of an organic compound between the gas phase, the particle phase and settled dust must be known to estimate human exposure. This, however, requires a detailed understanding of the environmentally important compound parameters, of their interrelation and of the algorithms for calculating partitioning coefficients. The parameters of major concern are: (I) the saturation vapor pressure (PS) (of the subcooled liquid); (II) the Henry's law constant (H); (III) the octanol/water partition coefficient (KOW); (IV) the octanol/air partition coefficient (KOA); (V) the air/water partition coefficient (KAW); and (VI) settled dust properties such as density and organic content. For most of the relevant compounds, reliable experimental data are not available, and calculated gas/particle distributions can differ widely due to the uncertainty in predicted PS and KOA values. This is not a serious problem if the target compound is of low (<10-6 Pa) or high (>10-2 Pa) volatility, but in the intermediate region even small changes in PS or KOA have a strong impact on the result. Moreover, the related physical processes might bear large uncertainties. The KOA value can only be used to describe particle absorption from the gas phase if the organic portion of the particle or dust is high. The Junge and Pankow equations for calculating the gas/particle distribution coefficient KP do not consider the physical and chemical properties of the particle surface. It is demonstrated by error propagation theory and Monte-Carlo simulations that parameter uncertainties from estimation methods for molecular properties, together with variations of indoor conditions, can strongly influence the calculated distribution behavior of compounds in the indoor environment.
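    A minimal sketch of the Monte-Carlo propagation the abstract describes, assuming a standard KOA-absorption relation (log10 KP = log10 KOA + log10 f_om - 11.91) and illustrative, not measured, input values:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Propagate an uncertain log10 KOA into the gas/particle partition
# coefficient KP with the KOA-absorption relation
#   log10 KP = log10 KOA + log10 f_om - 11.91,
# where f_om is the organic fraction of the particles. All numeric
# inputs are illustrative, not measured values.
log_koa = rng.normal(loc=10.0, scale=0.5, size=N)  # estimate +/- 0.5 log units
f_om = 0.6
log_kp = log_koa + np.log10(f_om) - 11.91          # KP in m^3/ug

# Particle-bound fraction phi at a given TSP concentration (ug/m^3).
tsp = 20.0
kp_tsp = 10.0 ** log_kp * tsp
phi = kp_tsp / (1.0 + kp_tsp)

print(f"median phi = {np.median(phi):.3f}")
print(f"5-95% range: {np.percentile(phi, 5):.3f} - {np.percentile(phi, 95):.3f}")
```

    The wide percentile spread illustrates the abstract's point: in the intermediate-volatility region, a modest uncertainty in KOA translates into a large uncertainty in the particle-bound fraction.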

  13. Material parameter computation for multi-layered vocal fold models.

    PubMed

    Schmidt, Bastian; Stingl, Michael; Leugering, Günter; Berry, David A; Döllinger, Michael

    2011-04-01

    Today, the prevention and treatment of voice disorders is an ever-increasing health concern. Since many occupations rely on verbal communication, vocal health is necessary just to maintain one's livelihood. Commonly applied models for studying vocal fold vibrations and air flow distributions are self-sustained physical models of the larynx composed of artificial silicone vocal folds. Choosing appropriate mechanical parameters for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In the present work, a combination of experimental and numerical approaches to compute material parameters for synthetic vocal fold models is presented. The material parameters are derived from deformation behaviors of excised human larynges. The resulting deformations are used as reference displacements for a tracking functional to be optimized. Material optimization was applied to three-dimensional vocal fold models based on isotropic and transverse-isotropic material laws, considering both a layered model with homogeneous material properties in each layer and an inhomogeneous model. The best results were obtained with a transverse-isotropic, inhomogeneous (i.e., not manufacturable) model. For the homogeneous three-layer model, the transverse-isotropic material parameters were also computed for each layer, yielding deformations similar to the measured human vocal fold deformations.

  14. POTENTIAL OF BIOLOGICAL MONITORING SYSTEMS TO DETECT TOXICITY IN A FINISHED MATRIX

    EPA Science Inventory

    Distribution systems of the U.S. are vulnerable to natural and anthropogenic factors affecting quality for use as drinking water. Important factors include physical parameters such as increased turbidity, ecological cycles such as algal blooms, and episodic contamination events ...

  15. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
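    Fitting one of these candidate models can be sketched with SciPy; the sketch below fits the lognormal model to synthetic rain rates with arbitrary, not climatological, parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Lognormal rain-rate model fitted by maximum likelihood; the "true"
# parameters here are arbitrary illustrations, not climate estimates.
true_sigma, true_scale = 1.2, 5.0     # scale = exp(mu), rates in mm/h
rates = stats.lognorm.rvs(true_sigma, scale=true_scale,
                          size=5_000, random_state=rng)

# Fit with the location fixed at zero, as is usual for rain rates.
sigma_hat, _, scale_hat = stats.lognorm.fit(rates, floc=0)
print(f"sigma = {sigma_hat:.3f} (true {true_sigma}), "
      f"scale = {scale_hat:.3f} (true {true_scale})")

# The fitted model then yields attenuation-relevant exceedance
# statistics, e.g. P(R > 25 mm/h).
p_exceed = stats.lognorm.sf(25.0, sigma_hat, loc=0, scale=scale_hat)
print(f"P(R > 25 mm/h) = {p_exceed:.4f}")
```

    In the study's terms, only the two fitted numbers per site would need to be mapped globally, rather than full multi-year EDFs.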

  16. Simulation of radar reflectivity and surface measurements of rainfall

    NASA Technical Reports Server (NTRS)

    Chandrasekar, V.; Bringi, V. N.

    1987-01-01

    Raindrop size distributions (RSDs) are often estimated using surface raindrop sampling devices (e.g., disdrometers) or optical array (2D-PMS) probes. A number of authors have used these measured distributions to compute certain higher-order RSD moments that correspond to radar reflectivity, attenuation, optical extinction, etc. Scatter plots of these RSD moments versus disdrometer-measured rainrates are then used to deduce physical relationships between rainrate and radar reflectivity, attenuation, etc., which are measured by independent instruments (e.g., radar). In this paper, RSDs of the gamma form as well as radar reflectivity (via time series simulation) are simulated to study the correlation structure of radar estimates versus rainrate, as opposed to RSD moment estimates versus rainrate. The parameters N0, D0 and m of a gamma distribution are varied over the range normally found in rainfall, as is the device sampling volume. The simulations are used to explain some possible features related to discrepancies which can arise when radar rainfall measurements are compared with surface or aircraft-based sampling devices.
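    The link between the gamma-distribution parameters and a radar quantity can be sketched as follows; reflectivity is taken as the 6th RSD moment (Rayleigh regime), and the parameter values are merely illustrative points within the ranges the abstract mentions.

```python
import numpy as np
from math import gamma as gamma_fn
from scipy.integrate import quad

# Moments of a gamma-form raindrop size distribution,
# N(D) = N0 * D**m * exp(-Lambda * D), with Lambda = (3.67 + m) / D0.
# Parameter values are illustrative, not the paper's simulation inputs.
N0, D0, m = 8000.0, 1.5, 2.0          # intercept, median volume diameter (mm), shape
Lam = (3.67 + m) / D0

def dsd_moment(n):
    # Analytic n-th moment: N0 * Gamma(m + n + 1) / Lambda**(m + n + 1)
    return N0 * gamma_fn(m + n + 1) / Lam ** (m + n + 1)

# Radar reflectivity is proportional to the 6th moment (Rayleigh regime).
Z = dsd_moment(6)                      # mm^6 m^-3
Z_dBZ = 10 * np.log10(Z)

# Cross-check the analytic moment against numerical integration.
Z_num, _ = quad(lambda D: N0 * D**m * np.exp(-Lam * D) * D**6, 0, np.inf)
print(f"Z = {Z:.1f} mm^6/m^3 ({Z_dBZ:.1f} dBZ), numerical check {Z_num:.1f}")
```

    Sweeping N0, D0 and m over their natural ranges, as the paper does, shows how differently the 6th-moment (radar) and lower-moment (rainrate) estimates respond to the same RSD variability.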

  17. Influence of technological factors on characteristics of hybrid fluid-film bearings

    NASA Astrophysics Data System (ADS)

    Koltsov, A.; Prosekova, A.; Rodichev, A.; Savin, L.

    2017-08-01

    The influence of the parameters of micro- and macro-unevenness on the characteristics of a hybrid bearing with slotted throttling is considered in the present paper. The calculation of pressure distribution, load capacity, lubricant flow rate and friction power loss in a radial hybrid bearing with slotted throttling takes into account inaccuracies in the shape, dimensions and roughness of the support surfaces. Numerical simulation of processes in the lubricating layer is based on the finite-difference solution of the Reynolds equation using a non-uniform orthogonal computational grid with adaptive local refinement. The results of computational and physical experiments are presented.

  18. [Cluster analysis applicability to fitness evaluation of cosmonauts on long-term missions of the International space station].

    PubMed

    Egorov, A D; Stepantsov, V I; Nosovskiĭ, A M; Shipov, A A

    2009-01-01

    Cluster analysis was applied to evaluate the locomotion training (running, and running alternated with walking) of 13 cosmonauts on long-term ISS missions using the parameters of duration (min), distance (m) and intensity (km/h). Based on the results of the analyses, the cosmonauts fell into three stable groups of 2, 5 and 6 persons. Distance and speed showed a statistically significant rise (p < 0.03) from group 1 to group 3. Duration of physical locomotion training was not statistically different among the groups (p = 0.125). Therefore, cluster analysis is an adequate method for evaluating the fitness of cosmonauts on long-term missions.

  19. Derived distribution of floods based on the concept of partial area coverage with a climatic appeal

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro

    2000-02-01

    A new rationale for deriving the probability distribution of floods, which also helps in understanding the physical processes underlying the distribution itself, is presented. On this basis, a model resting on a number of new assumptions is developed. The basic ideas are as follows: (1) The peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density function of a times the density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency, as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge, and suggested a new key to understanding the climatic control of the probability distribution of floods.
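    A minimal Monte-Carlo sketch of the product structure Q = ua * a, assuming (for simplicity, unlike the model) independent gamma and Weibull draws and purely illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Peak streamflow as the product Q = ua * a: the contributing area a is
# gamma-distributed (truncated at the basin area A) and the unit runoff
# ua is Weibull-distributed. For simplicity ua is drawn independently of
# a, whereas the model conditions ua on a; all numbers are illustrative.
A = 500.0                                    # basin area, km^2
a = np.clip(rng.gamma(shape=2.0, scale=A / 6.0, size=N), 0.0, A)
ua = 0.5 * rng.weibull(1.5, size=N)          # m^3/s per km^2
Q = ua * a

# Empirical 1%-exceedance peak flow from the simulated sample.
q99 = np.quantile(Q, 0.99)
print(f"mean Q = {Q.mean():.1f} m^3/s, 1% exceedance flow = {q99:.1f} m^3/s")
```

    The derived-distribution approach replaces this sampling with the analytic integral over a of the gamma density times the conditional Weibull density, but the Monte-Carlo version makes the product structure concrete.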

  20. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
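    A hedged sketch of fitting a three-parameter (shifted) log-normal with SciPy; the "void radii" below are synthetic and the true parameters are invented for illustration, not catalog values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Three-parameter (shifted) log-normal fit to synthetic "void radii";
# shape/location/scale values are invented, not catalog values.
true_shape, true_loc, true_scale = 0.4, 2.0, 8.0
radii = stats.lognorm.rvs(true_shape, loc=true_loc, scale=true_scale,
                          size=20_000, random_state=rng)

# Maximum-likelihood fit with all three parameters free.
shape, loc, scale = stats.lognorm.fit(radii)
print(f"shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")

# Skewness of the fitted distribution, the quantity the abstract
# relates to the maximum tree depth of substructure.
skew = float(stats.lognorm.stats(shape, loc=loc, scale=scale, moments="s"))
print(f"fitted skewness = {skew:.3f}")
```

    Note that three-parameter log-normal likelihoods are known to be delicate near loc -> min(data); in practice the fitted median is a more stable summary than the individual parameters.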

  1. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To that end, it is important to identify, prior to satellite-based calibration, the model parameters that can change simulated spatial patterns. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters, and it includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone does not reduce the spatial errors in AET; it improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case combining spatial and streamflow metrics.

  2. Grouping of Bulgarian wines according to grape variety by using statistical methods

    NASA Astrophysics Data System (ADS)

    Milev, M.; Nikolova, Kr.; Ivanova, Ir.; Minkova, St.; Evtimov, T.; Krustev, St.

    2017-12-01

    68 different types of Bulgarian wines were studied with respect to 9 optical parameters: color parameters in the XYZ and CIE Lab color systems, lightness, hue angle, chroma, fluorescence intensity and emission wavelength. The main objective of this research is to use hierarchical cluster analysis to evaluate the similarity and distance between the examined types of Bulgarian wines and to group them based on physical parameters. We found that the wines are grouped into clusters according to the degree of identity between them. There are two main clusters, each with two subclusters. The first contains the white wines and Sira, the second contains the red wines and rosé. The results of the cluster analysis are presented graphically as a dendrogram. The other statistical technique used is factor analysis performed by the method of Principal Component Analysis (PCA). The aim is to reduce the large number of variables to a few factors by grouping the correlated variables into one factor and subdividing the uncorrelated variables into different factors. Moreover, the factor analysis made it possible to determine the parameters with the greatest influence on the distribution of samples among the clusters. In our study, after rotation of the factors with the Varimax method, the parameters combined into two factors, which together explain about 80% of the total variation. The first explains 61.49% and correlates with the color characteristics; the second explains 18.34% of the variation and correlates with the parameters connected with fluorescence spectroscopy.
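    The hierarchical-clustering step can be sketched with SciPy's Ward linkage on synthetic standardized data; two latent groups stand in for "white-like" and "red-like" wines, and none of the numbers are the measured ones.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# 68 samples with 9 optical variables, built from two well-separated
# synthetic groups (illustrative stand-ins, not the measured wines).
n_per = 34
group_a = rng.normal(loc=0.0, scale=0.3, size=(n_per, 9))
group_b = rng.normal(loc=2.0, scale=0.3, size=(n_per, 9))
X = np.vstack([group_a, group_b])

# Standardize, then cluster on Euclidean distances with Ward linkage.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Z = linkage(Xs, method="ward")

# Cut the dendrogram into the two main clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

    The linkage matrix Z is exactly what scipy.cluster.hierarchy.dendrogram would plot, mirroring the dendrogram presentation in the study.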

  3. Relationship between suicide and myocardial infarction with regard to changing physical environmental conditions

    NASA Astrophysics Data System (ADS)

    Stoupel, Eliahu; Abramson, Eugeny; Sulkes, Jaqueline; Martfel, Joseph; Stein, Nechama; Handelman, Meir; Shimshoni, Michael; Zadka, Pnina; Gabbay, Uri

    1995-12-01

    In recent years, the possible association of changes in mortality from cardiovascular disease and myocardial infarction (MI) with deaths related to violence and the suicide rate has been repeatedly discussed. This study examined the relationship between cosmic physical changes (solar, geomagnetic and other space activity parameters) and changes in the total number of in-hospital and MI-related deaths and deaths from suicide, to determine whether a relationship exists between the distribution of total and MI-related deaths and suicide over time; some differences in the serotonergic mechanisms involved in the pathogenesis of MI and suicide were also taken into account. All suicides (n=2359) registered in the State of Israel from 1981 to 1989 (108 months) were analysed and compared with the total number of deaths (n=15601) and deaths from MI (n=1573) in a large university hospital over 180 months (1974-1989). The main results were as follows. (1) The monthly suicide rate was correlated with space proton flux (r=0.42, P=0.0001) and with geomagnetic activity (r=-0.22, P=0.03). (2) Total hospital and MI-related deaths were correlated with solar activity parameters (r=0.35, P<0.001) and radiowave propagation (r=0.52-0.44, P<0.001), and with proton flux (r=-0.3 to -0.26, P<0.01). (3) The monthly suicide distribution over 108 months was correlated with MI (r=-0.33, P=0.0005) and total hospital mortality (r=-0.22, P=0.024). (4) Gender differences were prominent. We conclude that the monthly distributions of suicides and deaths from MI are adversely related to many environmental physical parameters and negatively correlated with each other.

  4. Hydrodynamic modeling of juvenile mussel dispersal in a large river: The potential effects of bed shear stress and other parameters

    USGS Publications Warehouse

    Daraio, J.A.; Weber, L.J.; Newton, T.J.

    2010-01-01

    Because unionid mussels have a parasitic larval stage, they are able to disperse upstream and downstream as larvae while attached to their host fish, and with the flow as juveniles after excystment from the host. Understanding unionid population ecology requires knowledge of the processes that affect juvenile dispersal prior to establishment. We examined presettlement (transport and dispersion with flow) and early postsettlement (bed shear stress) hydraulic processes as negative censoring mechanisms. Our approach was to model dispersal using particle tracking through a 3-dimensional flow field output from hydrodynamic models of a reach of the Upper Mississippi River. We tested the potential effects of bed shear stress (τb) at 5 flow rates on juvenile mussel dispersal and quantified the magnitude of these effects as a function of flow rate. We explored the reach-scale relationships of Froude number (Fr), water depth (H), local bed slope (S), and unit stream power (QS) with the likelihood of juvenile settling (λ). We ran multiple dispersal simulations at each flow rate to estimate λ, the parameter of a Poisson distribution, from the number of juveniles settling in each grid cell, and calculated dispersal distances. Virtual juveniles that settled in areas of the river where τb exceeded a critical shear stress (τc) were resuspended in the flow and transported further downstream, so we ran simulations under 3 different conditions for τc (no resuspension, τc = 0.1 N/m2, and τc = 0.05 N/m2). Differences in virtual juvenile dispersal distance were significantly dependent upon τc and flow rate, and the effects of τb on the settling distribution were dependent upon τc. Most simulations resulted in positive correlations between λ and τb, a result suggesting that during early postsettlement τb might be the primary determinant of the juvenile settling distribution. Negative correlations between λ and τb occurred in some simulations, a result suggesting that physical or biological presettlement processes might determine juvenile settling distributions. Field data are needed to test these hypotheses. The results support the idea that flow patterns and τb can act as negative censoring mechanisms controlling settling distributions. Furthermore, a river reach probably has a quantifiable threshold range of flow rates. Above the upper threshold, τb probably is the primary determinant of the juvenile settling distribution. Relationships of λ with H, Fr, S, and QS were relatively weak. Important physical processes that affect dispersal probably are not captured by approximations based on large-scale hydraulic parameters, such as Fr and H. © 2010 The North American Benthological Society.
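    Estimating a per-cell Poisson settling parameter from repeated dispersal runs can be sketched as follows; the grid size, run count, "true" rates and the synthetic shear-stress covariate are all illustrative, not the study's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Per-cell Poisson settling parameter lambda estimated from repeated
# dispersal runs; every number below is an illustrative stand-in.
n_cells, n_runs = 50, 30
true_lam = rng.gamma(2.0, 1.5, size=n_cells)            # settling rate per cell
counts = rng.poisson(true_lam, size=(n_runs, n_cells))  # settled juveniles per run

lam_hat = counts.mean(axis=0)   # the MLE of a Poisson rate is the sample mean

# Correlate the estimated settling rates with a covariate playing the
# role of bed shear stress tau_b (synthetic, positively coupled here).
tau_b = 0.02 * true_lam + 0.01 * rng.normal(size=n_cells)
r, p = stats.pearsonr(lam_hat, tau_b)
print(f"corr(lambda_hat, tau_b) = {r:.2f}")
```

    The sign of such correlations is what the study uses to distinguish postsettlement shear-stress control (positive) from presettlement control (negative).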

  5. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study wall shear stress (WSS) distributions in image-based, patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently, with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-run the simulation for small changes in the system parameters.
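    A minimal snapshot-POD sketch: the reduced basis is obtained from the SVD of a synthetic snapshot matrix standing in for the CFD training fields, and a low-rank reconstruction plays the role of the ROM evaluation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Snapshot matrix: two coherent space-time structures plus small noise,
# a purely synthetic stand-in for the CFD training fields.
n_points, n_snap = 2000, 60
t = np.linspace(0.0, 2.0 * np.pi, n_snap)
x = np.linspace(0.0, 1.0, n_points)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(t)
             + 0.5 * np.cos(4 * np.pi * x) * np.sin(2 * t)
             + 0.01 * rng.normal(size=(n_points, n_snap)))

# POD modes are the left singular vectors of the snapshot matrix;
# the squared singular values measure the "energy" each mode carries.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99) + 1)
print(f"{n_modes} modes capture {energy[n_modes - 1]:.2%} of the energy")

# Low-rank reconstruction: the cheap surrogate evaluated in place of a
# full CFD re-run.
recon = U[:, :n_modes] * s[:n_modes] @ Vt[:n_modes]
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"relative reconstruction error: {rel_err:.3f}")
```

    The same mechanics carry over to the WSS fields: a handful of modes capture the coherent dynamics, and parameter sweeps reduce to recombining modal coefficients.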

  6. Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall

    NASA Technical Reports Server (NTRS)

    Clayton, D. D.

    1984-01-01

    Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.

  7. A hydroclimatological approach to predicting regional landslide probability using Landlab

    NASA Astrophysics Data System (ADS)

    Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.

    2018-02-01

    We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates on a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used to estimate the parameters of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in the probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
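    The Monte-Carlo-over-parameters idea can be sketched with the infinite-slope factor of safety; every distribution below is illustrative, not one of the paper's calibrated gridded inputs.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 50_000

# Infinite-slope factor of safety sampled over uncertain soil strength
# and wetness; all parameter distributions are illustrative.
theta = np.deg2rad(35.0)                       # slope angle
d = 1.5                                        # soil depth, m
gamma_s, gamma_w = 17e3, 9.81e3                # soil / water unit weight, N/m^3
C = rng.lognormal(np.log(4e3), 0.4, N)         # total cohesion, Pa
phi = np.deg2rad(rng.uniform(28.0, 38.0, N))   # friction angle
w = rng.uniform(0.0, 1.0, N)                   # relative wetness h_w / d

# FS = [C + cos^2(theta) * (gamma_s - w * gamma_w) * d * tan(phi)]
#      / [gamma_s * d * sin(theta) * cos(theta)]
fs = (C + np.cos(theta) ** 2 * (gamma_s - w * gamma_w) * d * np.tan(phi)) \
     / (gamma_s * d * np.sin(theta) * np.cos(theta))

p_fail = np.mean(fs < 1.0)
print(f"probability of initiation (FS < 1): {p_fail:.3f}")
```

    In the full model this sampling is repeated per grid cell, with the wetness term supplied by the hydrologic recharge forcing rather than drawn uniformly.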

  8. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Qsct), absorption efficiency (Qabs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.

  9. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  10. A micromechanical interpretation of the temperature dependence of Beremin model parameters for French RPV steel

    NASA Astrophysics Data System (ADS)

    Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier

    2010-11-01

    The local approach to brittle fracture in low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equ. ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary-volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental measurements of inter-phase and inter-granular stress heterogeneities. The fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on a realistic carbide repartition. This defect distribution is then deterministically correlated to the stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows Weibull parameters to be computed on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e., surface energy plus the energy plastically dissipated in blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.

  11. A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the Mid-Atlantic Region

    NASA Technical Reports Server (NTRS)

    Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.

    2016-01-01

    The spatial variability of parameters of the raindrop size distribution and its derivatives is investigated through a field study where collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting it to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h⁻¹ for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected Global Precipitation Measurement mission's spaceborne radar minimum detectable signals. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
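    The fitting step described above lends itself to a compact sketch. Below is a minimal, illustrative pure-Python version, assuming the commonly used correlation model ρ(d) = nugget · exp(−(d/d0)^s0) with the nugget fixed at 0.99; d0 and s0 are found by grid search minimizing the RMSE. The grids, separation distances, and correlation values are invented for illustration and are not the study's data.

```python
import math

def model_corr(d, d0, s0, nugget=0.99):
    """Three-parameter exponential correlation: rho(d) = nugget * exp(-(d/d0)**s0)."""
    return nugget * math.exp(-((d / d0) ** s0))

def fit_corr(distances, corrs, nugget=0.99):
    """Grid-search d0 (km) and s0, minimizing RMSE against observed correlations."""
    best = None
    for d0 in [0.5 + 0.05 * i for i in range(200)]:      # 0.5 .. 10.45 km
        for s0 in [0.5 + 0.05 * j for j in range(31)]:   # 0.5 .. 2.0
            rmse = math.sqrt(sum((model_corr(d, d0, s0, nugget) - c) ** 2
                                 for d, c in zip(distances, corrs)) / len(distances))
            if best is None or rmse < best[0]:
                best = (rmse, d0, s0)
    return best  # (rmse, d0, s0)

# Synthetic pairwise correlations generated from d0 = 4.5 km, s0 = 1.1
dists = [0.2, 0.5, 1.0, 1.5, 2.0, 2.3]
obs = [model_corr(d, 4.5, 1.1) for d in dists]
rmse, d0_hat, s0_hat = fit_corr(dists, obs)
```

A grid search is used here only to keep the sketch dependency-free; a gradient-based least-squares fit would be the usual choice in practice.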

  12. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data-intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies by providing high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  13. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data-intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies by providing high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  14. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution, the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions; once this is done, the inverse method will compute the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.
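    The training machinery named at the end, a multilayered feed-forward network with back-propagation, can be sketched in a few dozen lines. The following is a generic, self-contained toy (one hidden layer, sigmoid units, squared-error loss); the "pressure-distribution feature" vectors and labels are invented for illustration and have no connection to the paper's database.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(data, n_in=3, n_hid=4, epochs=2000, lr=0.5, seed=1):
    """Train a 1-hidden-layer feed-forward net by per-sample back-propagation."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
    b1 = [0.0] * n_hid
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
    b2 = 0.0
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            # forward pass
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
            total += (y - t) ** 2
            # backward pass: gradients of squared error through sigmoid units
            dy = 2 * (y - t) * y * (1 - y)
            for j in range(n_hid):
                dh = dy * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * dy * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * dy
        losses.append(total / len(data))
    return losses

# Toy "pressure-distribution feature" vectors -> regime label (illustrative only)
data = [([0.9, 0.1, 0.2], 1.0), ([0.8, 0.2, 0.1], 1.0),
        ([0.1, 0.9, 0.8], 0.0), ([0.2, 0.8, 0.9], 0.0)]
losses = train(data)
```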

  15. Incorporation of nonlinear thermorheological complexity into the phenomenologies of structural relaxation.

    PubMed

    Hodge, Ian M

    2005-09-22

    A distribution of activation energies is introduced into the nonlinear Adam-Gibbs ("Hodge-Scherer") phenomenology for structural relaxation. The resulting dependencies of the stretched exponential beta parameter on thermodynamic temperature and fictive temperature (nonlinear thermorheological complexity) are derived. No additional adjustable parameters are introduced, and contact is made with the predictions of the random first-order transition theory of aging of Lubchenko and Wolynes [J. Chem. Phys. 121, 2852 (2004)].

  16. A physically based analytical model of flood frequency curves

    NASA Astrophysics Data System (ADS)

    Basso, S.; Schirmer, M.; Botter, G.

    2016-09-01

    Predicting the magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture the diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performance does not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data-scarce regions of the world.
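    For context, the empirical baseline that such an analytical flood frequency curve is typically compared against is the classical plotting-position estimate: rank the observed seasonal maxima and assign the m-th largest a recurrence interval T = (n + 1)/m (Weibull plotting positions). This is standard hydrological practice, not the paper's method; the discharge values below are invented.

```python
def empirical_recurrence(maxima):
    """Weibull plotting positions: rank seasonal maxima in decreasing order;
    the m-th largest of n values gets recurrence interval T = (n + 1) / m
    (in seasons). Returns (T, discharge) pairs, largest event first."""
    ranked = sorted(maxima, reverse=True)
    n = len(ranked)
    return [((n + 1) / m, q) for m, q in enumerate(ranked, start=1)]

# Illustrative seasonal maximum discharges (m^3/s)
pairs = empirical_recurrence([120.0, 85.0, 310.0, 150.0, 95.0, 210.0, 130.0])
```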

  17. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency, and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks. PMID:28677617
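    The two steps the abstract names, estimating each station's transmission interval from what the AP saw in the last beacon interval and then assigning stations to RAW slots, can be sketched as follows. This is an illustrative simplification, not the authors' actual TAROA algorithm: here the interval estimate is just the mean inter-packet gap, and slot assignment is a greedy load-balancing heuristic over estimated traffic rates.

```python
def estimate_interval(timestamps):
    """Mean gap between a station's packets observed in the last beacon interval."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def assign_raw_slots(station_ts, n_slots):
    """Greedy balancing: heaviest estimated senders first, each placed into
    the RAW slot with the smallest accumulated traffic rate so far."""
    load = [0.0] * n_slots
    slots = [[] for _ in range(n_slots)]
    for sta in sorted(station_ts, key=lambda s: estimate_interval(station_ts[s])):
        iv = estimate_interval(station_ts[sta])
        rate = 0.0 if iv == float("inf") else 1.0 / iv
        k = min(range(n_slots), key=lambda i: load[i])
        slots[k].append(sta)
        load[k] += rate
    return slots

# Packet arrival times (ms) per station, as seen by the AP in one beacon interval
obs = {"sta1": [0, 10, 20], "sta2": [0, 5, 10, 15],
       "sta3": [0, 40], "sta4": [0, 8, 16]}
groups = assign_raw_slots(obs, 2)
```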

  18. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency, and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.

  19. Generalization of symmetric α-stable Lévy distributions for q > 1

    NASA Astrophysics Data System (ADS)

    Umarov, Sabir; Tsallis, Constantino; Gell-Mann, Murray; Steinberg, Stanly

    2010-03-01

    The α-stable distributions introduced by Lévy play an important role in probabilistic theoretical studies and their various applications, e.g., in statistical physics, life sciences, and economics. In the present paper we study sequences of long-range dependent random variables whose distributions have asymptotic power-law decay, and which are called (q,α)-stable distributions. These sequences are generalizations of independent and identically distributed α-stable distributions and have not been previously studied. Long-range dependent (q,α)-stable distributions might arise in the description of anomalous processes in nonextensive statistical mechanics, cell biology, and finance. The parameter q controls dependence: if q = 1, they reduce to the classical independent and identically distributed α-stable Lévy distributions. In the present paper we establish basic properties of (q,α)-stable distributions and generalize the result of Umarov et al. [Milan J. Math. 76, 307 (2008)], where the particular case α = 2, q ∈ [1,3) was considered, to the whole range of stability and nonextensivity parameters α ∈ (0,2] and q ∈ [1,3), respectively. We also discuss possible further extensions of the results that we obtain and formulate some conjectures.
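    As standard background on the α = 2 special case (from the nonextensive statistical mechanics literature generally, not from this paper): the attractors there are q-Gaussians, commonly parameterized as

```latex
G_q(x) \;\propto\; \bigl[\,1 - (1-q)\,\beta x^{2}\,\bigr]_{+}^{\frac{1}{1-q}},
\qquad
\lim_{q \to 1} G_q(x) \;\propto\; e^{-\beta x^{2}},
```

where [z]_+ = max(z, 0) and β > 0 is a width parameter; the ordinary Gaussian is recovered in the limit q → 1, while q > 1 yields the asymptotic power-law tails mentioned above.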

  20. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement, and stress.
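    The numerical inversion step mentioned above, an inverse Laplace transform via Fourier expansion, can be sketched compactly. The following is a generic Dubner-Abate-type Fourier-series inversion (a standard method of this family, not necessarily the authors' exact implementation); the contour abscissa a, half-period T, and truncation N are illustrative choices.

```python
import cmath, math

def invert_laplace(F, t, T=10.0, a=0.5, N=100000):
    """Fourier-series inversion of a Laplace transform F(s), valid for 0 < t < 2T.
    Discretizes the Bromwich integral along Re(s) = a with step pi/T:
    f(t) ~ (e^{a t}/T) [ F(a)/2 + sum_k Re( F(a + i k pi/T) e^{i k pi t/T} ) ]."""
    acc = 0.5 * F(complex(a, 0.0)).real
    for k in range(1, N + 1):
        s = complex(a, k * math.pi / T)
        acc += (F(s) * cmath.exp(1j * k * math.pi * t / T)).real
    return math.exp(a * t) / T * acc

# Check against a transform with a known inverse: F(s) = 1/(s+1)  ->  f(t) = exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

The abscissa a must lie to the right of all singularities of F; larger a reduces aliasing from the 2T-periodization but amplifies round-off through the e^{at} factor.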

  1. Walkability parameters, active transportation and objective physical activity: moderating and mediating effects of motor vehicle ownership in a cross-sectional study

    PubMed Central

    2012-01-01

    Background: Neighborhood walkability has been associated with physical activity in several studies. However, as environmental correlates of physical activity may be context specific, walkability parameters need to be investigated separately in various countries and contexts. Furthermore, the mechanisms by which walkability affects physical activity have been less investigated. Based on previous research, we hypothesized that vehicle ownership is a potential mediator. We investigated the associations between walkability parameters and physical activity, and the mediating and moderating effects of vehicle ownership on these associations, in a large sample of Swedish adults. Methods: Residential density, street connectivity and land use mix were assessed within polygon-based network buffers (using Geographic Information Systems) for 2,178 men and women. Time spent in moderate to vigorous physical activity was assessed by accelerometers, and walking and cycling for transportation were assessed by the International Physical Activity Questionnaire. Associations were examined by linear regression and adjusted for socio-demographic characteristics. The product of coefficients approach was used to investigate the mediating effect of vehicle ownership. Results: Residential density and land use mix, but not street connectivity, were significantly associated with time spent in moderate to vigorous physical activity and walking for transportation. Cycling for transportation was not associated with any of the walkability parameters. Vehicle ownership mediated a significant proportion of the association between the walkability parameters and physical activity outcomes. For residential density, vehicle ownership mediated 25% of the association with moderate to vigorous physical activity and 20% of the association with the amount of walking for transportation. For land use mix, the corresponding proportions were 34% and 14%. Vehicle ownership did not moderate any of the associations between the walkability parameters and physical activity outcomes. Conclusions: Residential density and land use mix were associated with time spent in moderate to vigorous physical activity and walking for transportation. Vehicle ownership was a mediator but not a moderator of these associations. The present findings may be useful for policy makers and city planners when designing neighborhoods that promote physical activity. PMID:23035633
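    The product-of-coefficients approach used in the Methods can be illustrated in miniature. The sketch below handles a single exposure X, mediator M, and outcome Y with ordinary least squares (the paper's models additionally adjust for socio-demographic covariates); the synthetic data are constructed so the true proportion mediated is exactly 0.5.

```python
def slope(x, y):
    """OLS slope of y on x (with intercept), via centered sums."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_pred_slopes(x, m, y):
    """OLS slopes of y on (x, m) with intercept, via 2x2 normal equations."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    cx = [a - mx for a in x]; cm = [a - mm for a in m]; cy = [a - my for a in y]
    sxx = sum(a * a for a in cx); smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy)); smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm * sxm
    bx = (sxy * smm - smy * sxm) / det   # direct effect c'
    bm = (smy * sxx - sxy * sxm) / det   # path b (M -> Y adjusted for X)
    return bx, bm

# Synthetic example: M = 0.5*X + e with e orthogonal to X; Y = 0.2*X + 0.4*M
X = list(range(1, 9))
e = [1, -1, -1, 1] * 2
M = [0.5 * x + ei for x, ei in zip(X, e)]
Y = [0.2 * x + 0.4 * m for x, m in zip(X, M)]
a = slope(X, M)                      # path a (X -> M)
c = slope(X, Y)                      # total effect
c_prime, b = two_pred_slopes(X, M, Y)
prop_mediated = a * b / c            # product of coefficients / total effect
```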

  2. Walkability parameters, active transportation and objective physical activity: moderating and mediating effects of motor vehicle ownership in a cross-sectional study.

    PubMed

    Eriksson, Ulf; Arvidsson, Daniel; Gebel, Klaus; Ohlsson, Henrik; Sundquist, Kristina

    2012-10-05

    Neighborhood walkability has been associated with physical activity in several studies. However, as environmental correlates of physical activity may be context specific, walkability parameters need to be investigated separately in various countries and contexts. Furthermore, the mechanisms by which walkability affects physical activity have been less investigated. Based on previous research, we hypothesized that vehicle ownership is a potential mediator. We investigated the associations between walkability parameters and physical activity, and the mediating and moderating effects of vehicle ownership on these associations in a large sample of Swedish adults. Residential density, street connectivity and land use mix were assessed within polygon-based network buffers (using Geographic Information Systems) for 2,178 men and women. Time spent in moderate to vigorous physical activity was assessed by accelerometers, and walking and cycling for transportation were assessed by the International Physical Activity Questionnaire. Associations were examined by linear regression and adjusted for socio-demographic characteristics. The product of coefficients approach was used to investigate the mediating effect of vehicle ownership. Residential density and land use mix, but not street connectivity, were significantly associated with time spent in moderate to vigorous physical activity and walking for transportation. Cycling for transportation was not associated with any of the walkability parameters. Vehicle ownership mediated a significant proportion of the association between the walkability parameters and physical activity outcomes. For residential density, vehicle ownership mediated 25% of the association with moderate to vigorous physical activity and 20% of the association with the amount of walking for transportation. For land use mix, the corresponding proportions were 34% and 14%. 
Vehicle ownership did not moderate any of the associations between the walkability parameters and physical activity outcomes. Residential density and land use mix were associated with time spent in moderate to vigorous physical activity and walking for transportation. Vehicle ownership was a mediator but not a moderator of these associations. The present findings may be useful for policy makers and city planners when designing neighborhoods that promote physical activity.

  3. Multidrug efflux transporter activity in sea urchin embryos: Does localization provide a diffusive advantage?

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima; Cole, Bryan; Hamdoun, Amro; Epel, David

    2008-03-01

    Experiments have shown upregulation of multidrug efflux transporter activity approximately 30 min after fertilization in the sea urchin embryo [1]. These ATP-hydrolyzing transporter proteins pump moderately hydrophobic molecules out of the cell and represent the cell's first line of defense against exogenous toxins. It has also been shown that transporters are moved in vesicles along microfilaments and localized to tips of microvilli prior to activation. We have constructed a geometrically realistic model of the embryo, including microvilli, to explore the functional role of this localization in the efficient elimination of toxins from the standpoint of diffusion. We compute diffusion of toxins in extracellular, membrane and intracellular spaces coupled with transporter activity, using experimentally derived values for physical parameters. For transporters uniformly distributed along microvilli and for tip-localized transporters, we compare regions in parameter space where each distribution provides a diffusive advantage, and comment on the physically expected conditions. [1] A. M. Hamdoun, G. N. Cherr, T. A. Roepke and D. Epel, Developmental Biology 276, 452 (2004).

  4. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. 
Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.
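    The mixed deterministic-probabilistic structure described above, a deterministic rate evolution conditioned on an event duration that is modeled stochastically from past observations, can be sketched as a Monte Carlo forecast. Everything below is illustrative: the rate reduction factor, the recovery time constant tau, the background rate q0, and the sample of past event durations are invented, not Kīlauea values.

```python
import math, random

def forecast_volume(q0, tau, horizon_h, past_durations, n_sim=5000, seed=42):
    """Monte Carlo forecast of cumulative erupted volume over `horizon_h` hours.
    Event duration is drawn from previously observed durations; eruption rate is
    reduced during the event and recovers quasi-exponentially afterwards."""
    rng = random.Random(seed)
    volumes = []
    for _ in range(n_sim):
        d = rng.choice(past_durations)                   # stochastic duration (h)
        v = 0.0
        for h in range(horizon_h):
            if h < d:
                rate = 0.2 * q0                          # reduced supply during event
            else:
                rate = q0 * (1.0 - 0.8 * math.exp(-(h - d) / tau))
            v += rate                                    # hourly volume increment
        volumes.append(v)
    volumes.sort()
    return (volumes[int(0.05 * n_sim)],                  # 5th percentile
            volumes[int(0.50 * n_sim)],                  # median
            volumes[int(0.95 * n_sim)])                  # 95th percentile

p5, p50, p95 = forecast_volume(q0=100.0, tau=6.0, horizon_h=48,
                               past_durations=[6, 8, 12, 18, 24, 30])
```

The output is a probability distribution over cumulative volume, summarized here by three quantiles, mirroring the forecast form described in the abstract.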

  5. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

    The symmetric radar scattering matrix of a reciprocal target is projected on the circular polarization basis and is decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also pointed out. A validation using both anechoic chamber data and airborne EMISAR data of DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) for the decomposition of distributed targets into nine meaningful parameters.

  6. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    NASA Astrophysics Data System (ADS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy compatible with rigorous sensor models. At present, the main method to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity, but can also locate the parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning problem of the RFM and to find the multicollinearity in the normal matrix.
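    The CIVDP diagnostics themselves are standard (Belsley-style): scale the design matrix columns to unit length, eigen-decompose the cross-product matrix, form condition indices sqrt(λmax/λk), and compute each coefficient's variance-decomposition proportions across the eigen-directions. The sketch below is a generic, dependency-free illustration on an invented near-collinear design, using a small Jacobi eigensolver; it is not the paper's RFM computation.

```python
import math

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def jacobi_eig(S, max_rot=100, tol=1e-12):
    """Eigen-decomposition of a small symmetric matrix by Jacobi rotations."""
    n = len(S)
    A = [row[:] for row in S]
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        p, q, big = 0, 1, 0.0                      # largest off-diagonal entry
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    p, q, big = i, j, abs(A[i][j])
        if big < tol:
            break
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        J = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        J[p][p] = c; J[q][q] = c; J[p][q] = s; J[q][p] = -s
        JT = [[J[j][i] for j in range(n)] for i in range(n)]
        A = matmul(matmul(JT, A), J)               # A <- J^T A J
        V = matmul(V, J)                           # accumulate eigenvectors
    return [A[i][i] for i in range(n)], V

def civdp(X):
    """Condition indices and variance-decomposition proportions of a design matrix."""
    n, k = len(X), len(X[0])
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(n))) for j in range(k)]
    Z = [[X[i][j] / norms[j] for j in range(k)] for i in range(n)]
    ZtZ = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    lam, V = jacobi_eig(ZtZ)
    lmax = max(lam)
    cond_idx = [math.sqrt(lmax / l) for l in lam]
    vdp = []
    for j in range(k):                             # proportions for coefficient j
        parts = [V[j][m] ** 2 / lam[m] for m in range(k)]
        tot = sum(parts)
        vdp.append([p / tot for p in parts])
    return cond_idx, vdp

# Near-collinear design: third column is approximately 2x the second
X = [[1.0, 1.0, 2.01], [1.0, 2.0, 4.02], [1.0, 3.0, 5.98], [1.0, 4.0, 8.01]]
cond_idx, vdp = civdp(X)
```

A large condition index paired with two or more coefficients whose variance proportions concentrate on that same eigen-direction is the CIVDP signature that locates the collinear columns.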

  7. Surface characteristics modeling and performance evaluation of urban building materials using LiDAR data.

    PubMed

    Li, Xiaolu; Liang, Yu

    2015-05-20

    Analysis of light detection and ranging (LiDAR) intensity data to extract surface features is of great interest in remote sensing research. One potential application of LiDAR intensity data is target classification. A new bidirectional reflectance distribution function (BRDF) model is derived for target characterization of rough and smooth surfaces. Based on the geometry of our coaxial full-waveform LiDAR system, the integration method is improved through coordinate transformation to establish the relationship between the BRDF model and intensity data of LiDAR. A series of experiments using typical urban building materials are implemented to validate the proposed BRDF model and integration method. The fitting results show that three parameters extracted from the proposed BRDF model can distinguish the urban building materials from perspectives of roughness, specular reflectance, and diffuse reflectance. A comprehensive analysis of these parameters will help characterize surface features in a physically rigorous manner.

  8. Acoustic parameters inversion and sediment properties in the Yellow River reservoir

    NASA Astrophysics Data System (ADS)

    Li, Chang-Zheng; Yang, Yong; Wang, Rui; Yan, Xiao-Fei

    2018-03-01

    The physical properties of silt in river reservoirs are important to river dynamics. Unfortunately, traditional techniques yield insufficient data. Based on porous media acoustic theory, we invert the acoustic parameters of the top river-bottom sediments. An explicit form of the acoustic reflection coefficient at the water-sediment interface is derived based on Biot's theory. The choice of parameters in the Biot model is discussed, and the relation between acoustic and geological parameters is studied, including that between the reflection coefficient and porosity and between the attenuation coefficient and permeability. The attenuation coefficient of the sound wave in the sediments is obtained by analyzing the shift of the signal frequency. The acoustic reflection coefficient at the water-sediment interface is extracted from the sonar signal. Thus, an inversion method for the physical parameters of the river-bottom surface sediments is proposed. The results of an experiment at the Sanmenxia reservoir suggest that the estimated grain size is close to the actual data. This demonstrates the ability of the proposed method to determine the physical parameters of sediments and estimate the grain size.
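    The paper derives its reflection coefficient from full Biot theory; as a much simpler first-order reference point (often used as a sanity check), the normal-incidence fluid-fluid impedance formula R = (ρ2c2 − ρ1c1)/(ρ2c2 + ρ1c1) can be computed and inverted directly. The densities and sound speeds below are generic illustrative values.

```python
def reflection_coeff(rho1, c1, rho2, c2):
    """Normal-incidence reflection coefficient at a water-sediment interface
    (fluid-fluid impedance approximation, not the full Biot-theory form)."""
    z1, z2 = rho1 * c1, rho2 * c2          # acoustic impedances
    return (z2 - z1) / (z2 + z1)

def invert_density(R, rho1, c1, c2):
    """Invert the same formula for sediment bulk density given a measured R."""
    z1 = rho1 * c1
    return z1 * (1 + R) / ((1 - R) * c2)

# Water (1000 kg/m^3, 1500 m/s) over silty sediment (1800 kg/m^3, 1650 m/s)
R = reflection_coeff(1000.0, 1500.0, 1800.0, 1650.0)
rho2 = invert_density(R, 1000.0, 1500.0, 1650.0)
```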

  9. Characterization technique for inhomogeneous 4H-SiC Schottky contacts: A practical model for high temperature behavior

    NASA Astrophysics Data System (ADS)

    Brezeanu, G.; Pristavu, G.; Draghici, F.; Badila, M.; Pascu, R.

    2017-08-01

    In this paper, a characterization technique for 4H-SiC Schottky diodes with varying levels of metal-semiconductor contact inhomogeneity is proposed. A macro-model, suitable for high-temperature evaluation of SiC Schottky contacts with discrete barrier height non-uniformity, is introduced in order to determine the temperature interval and bias domain where the electrical behavior of the devices can be described by thermionic emission theory (i.e., where they exhibit quasi-ideal performance). A minimal set of parameters is associated: the effective barrier height and the non-uniformity factor peff. Model-extracted parameters are discussed in comparison with literature-reported results based on existing inhomogeneity approaches, in terms of complexity and physical relevance. Special consideration is given to models based on a Gaussian distribution of barrier heights on the contact surface. The proposed methodology is validated by electrical characterization of nickel silicide Schottky contacts on silicon carbide (4H-SiC), where a discrete barrier distribution can be considered. The same method is applied to inhomogeneous Pt/4H-SiC contacts. The forward characteristics measured at different temperatures are accurately reproduced using this inhomogeneous barrier model. A quasi-ideal behavior is identified for intervals spanning 200 °C for all measured Schottky samples, with Ni and Pt contact metals. A predictable exponential current-voltage variation over at least 2 orders of magnitude is also proven, with a stable barrier height and effective area for temperatures up to 400 °C. This application-oriented characterization technique is confirmed by using the model parameters to fit a SiC Schottky high-temperature sensor's response.
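    The discrete-barrier idea can be illustrated as a forward current that is an area-weighted sum of two thermionic emission components: a fraction peff of the contact at a lower barrier and the remainder at the nominal barrier. This sketch uses the ideal thermionic diode equation with illustrative barrier heights, weight, and area; it is not the paper's extraction procedure, and the model details are an assumption of this example.

```python
import math

K_B = 8.617e-5          # Boltzmann constant (eV/K)
A_STAR = 146.0          # Richardson constant for 4H-SiC (A cm^-2 K^-2), literature value

def diode_current(v, t_k, phi0=1.60, phi_low=1.30, p_eff=0.02, area=1e-3):
    """Forward current (A) of a Schottky contact modeled as two discrete barrier
    patches: a fraction p_eff at phi_low (eV), the rest at phi0 (thermionic
    emission; ideality factor of 1 assumed for simplicity)."""
    kt = K_B * t_k
    sat = A_STAR * t_k ** 2 * area * (
        (1 - p_eff) * math.exp(-phi0 / kt) + p_eff * math.exp(-phi_low / kt))
    return sat * (math.exp(v / kt) - 1.0)

i1 = diode_current(0.8, 300.0)       # forward current at 0.8 V, 300 K
i2 = diode_current(1.0, 300.0)       # 0.2 V higher: many decades more current
i_hot = diode_current(0.8, 500.0)    # same bias, higher temperature
```

At low temperature the low-barrier patches dominate the saturation current even at peff = 0.02, which is the qualitative signature the abstract's model captures.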

  10. Retrieval of land parameters by multi-sensor information using the Earth Observation Land Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Chernetskiy, Maxim; Gobron, Nadine; Gomez-Dans, Jose; Disney, Mathias

    2016-07-01

Upcoming satellite constellations will substantially increase the amount of Earth Observation (EO) data, and present us with the challenge of consistently using all this available information to infer the state of the land surface, parameterised through Essential Climate Variables (ECVs). A promising approach to this problem is the use of physically based models that describe the processes that generate the images, using e.g. radiative transfer (RT) theory. However, these models need to be inverted to infer the land surface parameters from the observations, and there is often not enough information in the EO data to satisfactorily achieve this. Data assimilation (DA) approaches supplement the EO data with prior information in the form of models or prior parameter distributions, and have the potential to solve the inversion problem. These methods, however, are computationally expensive. In this study, we show the use of fast surrogate models of the RT codes (emulators) based on Gaussian Processes (Gomez-Dans et al, 2016) embedded within the Earth Observation Land Data Assimilation System (EO-LDAS) framework (Lewis et al 2012) in order to estimate the state of the land surface from a heterogeneous set of optical observations. The study uses time series of moderate spatial resolution observations from MODIS (250 m), MERIS (300 m) and MISR (275 m) over one site to infer the temporal evolution of a number of land surface parameters (and associated uncertainties) related to vegetation: leaf area index (LAI), leaf chlorophyll content, etc. These parameter estimates are then used as input to an RT model (semidiscrete or PROSAIL, for example) to calculate fluxes such as broad-band albedo or fAPAR. The study demonstrates that blending different sensors in a consistent way using physical models results in a rich and coherent set of retrieved land surface parameters, with quantified uncertainties.
The use of RT models also allows for the consistent prediction of fluxes, with a simple mechanism for propagating the uncertainty in the land surface parameters to the flux estimates.
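The emulator idea can be sketched with a minimal Gaussian-process regression: a handful of runs of an expensive model (a toy function here, standing in for an RT code) are interpolated by an RBF-kernel GP, which is then cheap to evaluate inside the assimilation loop. The kernel hyperparameters and the toy model are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_emulate(x_train, y_train, x_new, noise=1e-6):
    """Exact GP regression mean: emulates a model from a few training runs."""
    k = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_new, x_train)
    alpha = np.linalg.solve(k, y_train)
    return k_star @ alpha

# Emulate a toy "RT model" y = sin(3x) from 12 runs, predict on a fine grid.
xt = np.linspace(0.0, 1.0, 12)
yt = np.sin(3.0 * xt)
xs = np.linspace(0.0, 1.0, 50)
ys = gp_emulate(xt, yt, xs)
```

In a real DA setting the GP would also supply predictive variances, so the emulator's own approximation error can be folded into the retrieval uncertainties.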

  11. A physical parameter method for the design of broad-band X-ray imaging systems to do coronal plasma diagnostics

    NASA Technical Reports Server (NTRS)

    Kahler, S.; Krieger, A. S.

    1978-01-01

    The technique commonly used for the analysis of data from broad-band X-ray imaging systems for plasma diagnostics is the filter ratio method. This requires the use of two or more broad-band filters to derive temperatures and line-of-sight emission integrals or emission measure distributions as a function of temperature. Here an alternative analytical approach is proposed in which the temperature response of the imaging system is matched to the physical parameter being investigated. The temperature response of a system designed to measure the total radiated power along the line of sight of any coronal structure is calculated. Other examples are discussed.
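The filter-ratio method described above can be sketched numerically: the ratio of count rates in two broad-band filters is a monotonic function of temperature, so an observed ratio can be inverted to an isothermal temperature by table lookup. The two response curves below are assumed illustrative shapes, not calibrated instrument responses:

```python
import numpy as np

# Assumed (illustrative) temperature responses of two broad-band filters:
# counts per unit emission measure as a function of log10 T.
log_t = np.linspace(5.5, 7.5, 201)
resp1 = np.exp(-(log_t - 6.2) ** 2 / 0.5)   # softer filter
resp2 = np.exp(-(log_t - 6.9) ** 2 / 0.5)   # harder filter

ratio = resp2 / resp1   # monotonically increasing in T for these two curves

def temperature_from_ratio(observed_ratio):
    """Invert the (monotonic) filter ratio to an isothermal log10 temperature."""
    return np.interp(observed_ratio, ratio, log_t)

# The emission integral then follows from either filter's count rate,
# e.g. EM = counts1 / resp1(T).
```

The alternative proposed in the paper is to shape the system's temperature response directly to the physical parameter of interest, rather than inverting ratios after the fact.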

  12. Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP

    NASA Astrophysics Data System (ADS)

    Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.

    2017-12-01

The results of numerical calculations and measurements of some reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of relative power distribution and efficiency of the scram system and separate groups of control rods of the control and protection system are also presented. The calculations are performed using several codes, including precision ones.

  13. Fiber optic sensors for sub-centimeter spatially resolved measurements: Review and biomedical applications

    NASA Astrophysics Data System (ADS)

    Tosi, Daniele; Schena, Emiliano; Molardi, Carlo; Korganbayev, Sanzhar

    2018-07-01

One of the current frontiers of optical fiber sensors, and a unique asset of this sensing technology, is the possibility of using a whole optical fiber, or optical fiber device, as a sensor. This solution shifts the whole sensing paradigm from the measurement of a single physical parameter (such as temperature, strain, vibration, or pressure) to the measurement of a spatial distribution, or profile, of a physical parameter along the fiber length. In recent years, several technologies have achieved this task with unprecedentedly narrow spatial resolution, ranging from the sub-millimeter to the centimeter level. In this work, we review the main fiber optic sensing technologies that achieve a narrow spatial resolution: dense Fiber Bragg Grating (FBG) arrays, chirped FBG (CFBG) sensors, optical frequency domain reflectometry (OFDR) based on either Rayleigh scattering or reflective elements, and microwave photonics (MWP). In the second part of the work, we present the impact of spatially dense fiber optic sensors in biomedical applications, where they have their greatest impact, presenting key results obtained in thermo-therapy monitoring, high-resolution diagnostics, catheter monitoring, smart textiles, and other emerging application fields.

  14. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
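The generative model can be sketched as follows: each analysis window's variance is drawn from an inverse gamma distribution, and the EMG samples are zero-mean Gaussian given that variance. The shape and scale values below are arbitrary illustrations, not parameters estimated from EMG data:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_emg(alpha, beta, n_windows, samples_per_window):
    """EMG as zero-mean Gaussian noise whose variance is itself random,
    following an inverse gamma distribution (shape alpha, scale beta)."""
    # An inverse gamma draw is 1 / (Gamma draw with shape alpha, scale 1/beta).
    variances = 1.0 / rng.gamma(alpha, 1.0 / beta, size=n_windows)
    windows = [rng.normal(0.0, np.sqrt(v), samples_per_window) for v in variances]
    return windows, variances

signal, variances = simulate_emg(alpha=4.0, beta=3.0,
                                 n_windows=2000, samples_per_window=64)

# The inverse gamma mean is beta / (alpha - 1) for alpha > 1,
# so here the window variances average near 3 / 3 = 1.
```

Marginalizing the Gaussian over this variance distribution is what gives the heavy-tailed amplitude behaviour the model is designed to capture.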

  15. Further development and implementation of the DIWA distributed hydrological model-based integrated hydroinformatics system in the Danube River Basin for supporting decision making in water management

    NASA Astrophysics Data System (ADS)

    Szabó, J. A.; Réti, G. Z.; Tóth, T.

    2012-04-01

Today, the most significant mission of decision makers on integrated water management issues is to carry out sustainable management, sharing the resources between a variety of users and the environment under considerable uncertainty (such as climate, land use and population change). In light of this increasing water management complexity, we consider that the most pressing need is to develop and implement up-to-date Spatial Decision Support Systems (SDSS) for aiding decision-making processes to improve water management. One of the most important parts of such an SDSS is a distributed hydrological model-based integrated hydroinformatics system to analyze the different scenarios. The less successful statistical and/or empirical model experiments of earlier decades have highlighted the importance of a paradigm shift in hydrological modelling towards physically based distributed models, to better describe the complex hydrological processes even on catchments of more than ten thousand square kilometres. Answers to questions like what are the effects of human actions in the catchment area (e.g. forestation or deforestation), or of changing climate or land use, on floods, droughts, or water scarcity, or what is the optimal strategy for planning and/or operating reservoirs, have become increasingly important. Nowadays the answers to these kinds of questions can be provided more easily than before. The progress of applied mathematical methods, the advanced state of computer technology, and the development of remote sensing and meteorological radar technology have accelerated research capable of answering these questions using well-designed integrated hydroinformatics systems. Building on recent years of extensive scientific and computational development, HYDROInform UnLtd developed a distributed hydrological model-based integrated hydroinformatics system for supporting various decisions in water management. 
The integrated model has two basic pillars: the DIWA (DIstributed WAtershed) hydrological model and the well-known HEC-RAS hydraulic model. DIWA is a dynamic water-balance model, distributed both in space and in its parameters, which was developed along combined principles but is mostly based on physical foundations. According to the philosophy of the distributed model approach, the catchment is divided into basic elements (cells), where the basin characteristics, parameters, physical properties, and boundary conditions are applied at the centre of the cell, and the cell is assumed to be homogeneous within its boundaries. The neighbouring cells are connected to each other according to runoff hierarchy (local drain direction). Applying the hydrological mass balance and the adequate dynamic equations to these cells, the result is a distributed hydrological model on a continuous, 3D gridded domain. To calculate water levels as well, the HEC-RAS hydraulic model has been embedded into the DIWA model. In this integration, the DIWA model provides the upper boundary conditions for HEC-RAS, and HEC-RAS then provides the water levels along the lowland parts of the river network. In this presentation, our recently developed integrated hydroinformatics system and its implementation for the middle-upper part of the Danube River Basin will be reported. Following an outline of the background, an overview of DIWA and the integrated model system will be given. The implementation of this integrated hydroinformatics system in the Danube River Basin will also be presented, including a summary of the 1 km resolution geo-dataset developed for the modelling. Then some demonstrative results of the use of the pre-calibrated system will be discussed. Finally, an outline of the future steps of the development will be given.

  16. MAFIA Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiland, T.; Bartsch, M.; Becker, U.

    1997-02-01

MAFIA Version 4.0 is an almost completely new version of the general-purpose electromagnetic simulator that has been known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in the frequency domain, particle beam tracking in linear accelerators, acoustics and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, for example the complex eigenmode solver and the 2D-3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with extremely low reflection even near the cutoff frequency, and concentrated elements are available as well as a variety of signal processing options. Probably the most valuable addition is the recursive sub-grid capability that enables modeling of very small details in large structures. © 1997 American Institute of Physics.

  17. Investigating the performance of LiDAR-derived biomass information in hydromechanic slope stability modelling

    NASA Astrophysics Data System (ADS)

    Schmaltz, Elmar; Steger, Stefan; Bogaard, Thom; Van Beek, Rens; Glade, Thomas

    2017-04-01

Hydromechanic slope stability models are often used to assess the landslide susceptibility of hillslopes. Some of these models are able to account for vegetation-related effects when assessing slope stability. However, spatial information on the required vegetation parameters (especially of woodland), which are defined by land cover type, tree species and stand density, is mostly underrepresented compared to hydropedological and geomechanical parameters. The aim of this study is to assess how LiDAR-derived biomass information can help to distinguish distinct tree stand-immanent properties (e.g. stand density and diversity) and further improve the performance of hydromechanic slope stability models. We used spatial vegetation data produced from sophisticated algorithms that are able to separate single trees within a stand based on LiDAR point clouds and thus allow an extraordinarily detailed determination of the aboveground biomass. Further, this information is used to estimate the species- and stand-related distribution of the subsurface biomass using an innovative approach to approximate root system architecture and development. The hydrological tree-soil interactions and their impact on the geotechnical stability of the soil mantle are then reproduced in the dynamic and spatially distributed slope stability model STARWARS/PROBSTAB. This study highlights first advances in the approximation of the biomechanical reinforcement potential of tree root systems in tree stands. Based on our findings, we address the advantages and limitations of highly detailed biomass information in hydromechanic modelling and physically based slope failure prediction.
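The way root reinforcement enters slope stability can be illustrated with the classical infinite-slope factor of safety plus an added root-cohesion term. This is a textbook simplification, not the STARWARS/PROBSTAB formulation, and all parameter values below are assumed:

```python
import math

def factor_of_safety(slope_deg, soil_depth, water_depth,
                     c_soil=2000.0, c_root=1500.0, phi_deg=30.0,
                     gamma_soil=18000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety with root cohesion.

    SI units: cohesions in Pa, depths in m, unit weights in N/m^3.
    FS = (c_soil + c_root + (gamma*z*cos^2(b) - gamma_w*h) * tan(phi))
         / (gamma * z * sin(b) * cos(b))
    """
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = (c_soil + c_root
                 + (gamma_soil * soil_depth * math.cos(b) ** 2
                    - gamma_w * water_depth) * math.tan(phi))
    driving = gamma_soil * soil_depth * math.sin(b) * math.cos(b)
    return resisting / driving

# Root cohesion raises the factor of safety for the same slope and wetness.
fs_rooted = factor_of_safety(35.0, 1.5, 0.5)
fs_bare = factor_of_safety(35.0, 1.5, 0.5, c_root=0.0)
```

The LiDAR-derived stand information enters such a formulation chiefly through the spatial pattern of the root-cohesion term (and, hydrologically, through interception and transpiration affecting the water depth).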

  18. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
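The variance-based ranking step can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol' indices on a toy two-parameter model (HSAMI itself is not reproduced here; the model and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(x):
    """Toy stand-in for the hydrological model: output dominated by x1."""
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

def sobol_first_order(f, dim, n=200_000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol' indices S_i."""
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    ya = f(a)
    var_y = ya.var()
    s = np.empty(dim)
    for i in range(dim):
        c = b.copy()
        c[:, i] = a[:, i]          # "freeze" factor i at its A-sample values
        yc = f(c)
        s[i] = (np.mean(ya * yc) - np.mean(ya) * np.mean(yc)) / var_y
    return s

s = sobol_first_order(toy_model, dim=2)
# Analytically S1 = 0.9 and S2 = 0.1 here, so parameter 2 would be a
# candidate for fixing, exactly the kind of screening applied to HSAMI.
```

Parameters whose indices stay near zero contribute little output variance and can be fixed at nominal values with limited loss of skill, which is the rationale behind the sequential fixing in the paper.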

  19. Considerations and Architectures for Inter-Satellite Communications in Distributed Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Edwards, Bernard; Horne, William; Israel, David; Kwadrat, Carl; Bauer, Frank H. (Technical Monitor)

    2001-01-01

This paper will identify the important characteristics and requirements necessary for inter-satellite communications in distributed spacecraft systems and present analysis results focusing on architectural and protocol comparisons. Emerging spacecraft systems plan to deploy multiple satellites in various "distributed" configurations ranging from close-proximity formation flying to widely separated constellations. Distributed spacecraft configurations provide advantages for science exploration and operations, since many activities useful for missions may be better served by distributing them between spacecraft. For example, many scientific observations can be enhanced through spatially separated platforms, such as for deep space interferometry. Operating multiple distributed spacecraft as a mission requires coordination that may be best provided through inter-satellite communications. For example, several future distributed spacecraft systems envision autonomous operations requiring relative navigational calculations and coordinated attitude and position corrections. To conduct these operations, data must be exchanged between spacecraft. Direct cross-links between satellites provide an efficient and practical method for transferring data and commands. Unlike existing "bent-pipe" relay networks supporting space missions, no standard or widely used method exists for cross-link communications. Consequently, to support these future missions, the characteristics necessary for inter-satellite communications need to be examined. At first glance, all of the missions look extremely different. Some missions call for tens to hundreds of nano-satellites in constant communication in close proximity to each other. Other missions call for a handful of satellites communicating very slowly over thousands to hundreds of thousands of kilometers. 
The paper will first classify distributed spacecraft missions to help guide the evaluation and definition of cross-link architectures and approaches. Based on this general classification, the paper will examine general physical layer parameters, such as frequency bands and data rates, necessary to support the missions. The paper will also identify classes of communication architectures that may be employed, ranging from fully distributed to centralized topologies. Numerous factors, such as the number of spacecraft, must be evaluated when attempting to pick a communications architecture. Also important is the stability of the formation from a communications standpoint. For example, do all of the spacecraft require equal bandwidth, and are spacecraft allowed to enter and leave a formation? The type of science mission being attempted may also heavily influence the communications architecture. In addition, the paper will assess various parameters and characteristics typically associated with the data link layer. The paper will analyze the performance of various multiple access techniques given the operational scenario, requirements, and communication topologies envisioned for missions. This assessment will also include a survey of existing standards and their applicability to distributed spacecraft systems. An important consideration is the interoperability of the lower layers (physical and data link) examined in this paper with the higher layer protocols (network) envisioned for future space internetworking. Finally, the paper will define a suggested path, including preliminary recommendations, for defining and developing a standard for inter-satellite communications based on the classes of distributed spacecraft missions and the analysis results.
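The physical-layer trade between close-proximity formations and widely separated constellations can be illustrated with a minimal Friis link budget; the transmitter power, antenna gains and frequency below are illustrative assumptions, not values from the paper:

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, freq_hz):
    """Friis link budget in dB terms (no implementation margins included)."""
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - free_space_path_loss_db(distance_m, freq_hz))

# 1 W transmitter, modest antennas, S-band: 100 km vs 100,000 km cross-link.
close = received_power_dbm(30.0, 6.0, 6.0, 100e3, 2.2e9)
far = received_power_dbm(30.0, 6.0, 6.0, 100e6, 2.2e9)
# Every 10x increase in range costs 20 dB, so widely separated constellations
# need higher-gain antennas or far lower data rates than close formations.
```

This is one concrete reason the mission classification above drives the choice of frequency band, data rate and multiple-access scheme.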

  20. Modeling of the radiation belt magnetosphere in decisional timeframes

    DOEpatents

    Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.

    2013-04-23

Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as a physics-based model, but at many times the speed, by developing a surrogate model trained to emulate the physics-based model. The trained model can then beneficially process input data falling within the training range of the surrogate model. The surrogate model can be a feedforward neural network and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use the parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles; accordingly, a surrogate model having a closed drift shell can be selected from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden-layer units, and at least one output unit for the value of L*.
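The surrogate's functional form, a one-hidden-layer feedforward network, can be sketched as a forward pass; the weights here are random placeholders, not a network trained on the TSK03 model, and the layer sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer feedforward network: the surrogate's functional form.

    x: (n_samples, n_inputs) physics-model parameters plus location data.
    Returns one L*-like output value per sample.
    """
    h = np.tanh(x @ w1 + b1)      # hidden layer
    return h @ w2 + b2            # linear output unit

# Illustrative sizes: 5 input parameters, 20 hidden units, 1 output (L*).
w1 = rng.normal(0.0, 0.5, (5, 20))
b1 = np.zeros(20)
w2 = rng.normal(0.0, 0.5, (20, 1))
b2 = np.zeros(1)

l_star = mlp_forward(rng.random((8, 5)), w1, b1, w2, b2)
```

Once trained, each evaluation is a couple of matrix multiplies, which is what makes the surrogate usable in decisional timeframes where the physics-based model is not.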

  1. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  2. Stream Discharge and Evapotranspiration Responses to Climate Change and Their Associated Uncertainties in a Large Semi-Arid Basin

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2017-12-01

Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are two important components in watershed water balance estimation, as the former is the most commonly used indicator of the overall water budget, and the latter is the second biggest component of the water budget (the biggest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model prediction, which can arise from errors in model parameters and input meteorological data, or from errors in the model's representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically calibrated, physically based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.

  3. Simulation studies of plasma waves in the electron foreshock - The generation of Langmuir waves by a gentle bump-on-tail electron distribution

    NASA Technical Reports Server (NTRS)

    Dum, C. T.

    1990-01-01

    Particle simulation experiments were used to study the basic physical ingredients needed for building a global model of foreshock wave phenomena. In particular, the generation of Langmuir waves by a gentle bump-on-tail electron distribution is analyzed. It is shown that, with appropriately designed simulations experiments, quasi-linear theory can be quantitatively verified for parameters corresponding to the electron foreshock.

  4. Electron-hole pair generation rate estimation in silicon irradiated by isotope Nickel-63 using GEANT4

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.

    2015-10-01

To optimize the parameters of a beta-electrical converter of isotope Nickel-63 radiation, a model of the distribution of the EHP generation rate in the semiconductor must be derived. Using Monte-Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon is calculated and approximated with a Gaussian function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation are estimated.
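Approximating a depth profile of the generation rate with a Gaussian can be sketched by a quadratic fit in log space; the profile below is synthetic, not GEANT4 output, and the amplitude, centre and width are arbitrary:

```python
import numpy as np

def fit_gaussian(depth, rate):
    """Fit rate ~ a*exp(-(depth-mu)^2/(2*sigma^2)) via a quadratic in log space.

    log(rate) is a quadratic in depth, so np.polyfit recovers the three
    Gaussian parameters from its coefficients.
    """
    c2, c1, c0 = np.polyfit(depth, np.log(rate), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * sigma**2
    a = np.exp(c0 + mu**2 / (2.0 * sigma**2))
    return a, mu, sigma

# Synthetic generation-rate profile (illustrative, not GEANT4 output).
x = np.linspace(0.0, 10.0, 50)       # depth, arbitrary units
y = 5.0 * np.exp(-(x - 3.0) ** 2 / (2.0 * 1.5 ** 2))
a, mu, sigma = fit_gaussian(x, y)
```

On noisy Monte-Carlo histograms the same fit would be weighted toward the well-populated bins, since log-space residuals overweight the tails.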

  5. Measurement Comparisons Towards Improving the Understanding of Aerosol-Cloud Processing

    NASA Astrophysics Data System (ADS)

    Noble, Stephen R.

Cloud processing of aerosol is an aerosol-cloud interaction that is not heavily researched but could have implications for climate. The three types of cloud processing are chemical processing, collision and coalescence processing, and Brownian capture of interstitial particles. All types improve cloud condensation nuclei (CCN) in size or hygroscopicity (kappa). These improved CCN affect subsequent clouds. This dissertation focuses on measurement comparisons to improve our observations and understanding of aerosol-cloud processing. Particle size distributions measured at the continental Southern Great Plains (SGP) site were compared with ground-based measurements of cloud fraction (CF) and cloud base altitude (CBA). Particle size distributions were described by a new objective shape parameter to define bimodality rather than an old subjective one. Cloudy conditions at SGP were found to be correlated with lagged shape parameter. Horizontal wind speed and regional CF explained 42%+ of this lag time. Many of these surface particle size distributions were influenced by aerosol-cloud processing. Thus, cloud processing may be more widespread, with more implications, than previously thought. Particle size distributions measured during two aircraft field campaigns (MArine Stratus/stratocumulus Experiment, MASE, and Ice in Cloud Experiment-Tropical, ICE-T) were compared to CCN distributions. Tuning particle size to critical supersaturation revealed hygroscopicity, expressed as kappa, when the distributions were overlain. Distributions near cumulus clouds (ICE-T) had a higher frequency of the same kappas (48% in ICE-T vs. 42% in MASE) between the accumulation (processed) and Aitken (unprocessed) modes. This suggested that physical processing dominated in ICE-T. More MASE (stratus cloud) kappa differences between modes pointed to chemical cloud processing. Chemistry measurements made in MASE showed increases in sulfates and nitrates in distributions that were more processed. 
This supported chemical cloud processing in MASE. This new method to determine kappa provides the needed information without interrupting ambient measurements. MODIS derived cloud optical thickness (COT), cloud liquid water path (LWP), and cloud effective radius (re) were compared to the same in situ derived variables from cloud probe measurements of two stratus/stratocumulus cloud campaigns (MASE and Physics Of Stratocumulus Tops; POST). In situ data were from complete vertical cloud penetrations, while MODIS data were from pixels along the aircraft penetration path. Comparisons were well correlated except that MODIS LWP (14-36%) and re (20-30%) were biased high. The LWP bias was from re bias and was not improved by using the vertically stratified assumption. MODIS re bias was almost removed when compared to cloud top maximum in situ re, but, that does not describe re for the full depth of the cloud. COT is validated by in situ COT. High correlations suggest that MODIS variables are useful in self-comparisons such as gradient changes in stratus cloud re during aerosol-cloud processing.

  6. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
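The surrogate-plus-MCMC machinery can be sketched in miniature: a cheap polynomial stands in for the expensive model, and a random-walk Metropolis sampler recovers its single parameter from 48 noisy monthly "observations". Everything below (model, truth, noise level, prior bounds) is synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate(theta, t):
    """Cheap polynomial stand-in for the expensive model."""
    return theta * t + 0.1 * t**2

t = np.linspace(0.0, 1.0, 48)                          # 48 "months"
obs = surrogate(2.5, t) + rng.normal(0.0, 0.05, t.size)  # truth: theta = 2.5

def log_post(theta, sigma=0.05):
    """Gaussian likelihood with a flat prior on [0, 5]."""
    if not 0.0 <= theta <= 5.0:
        return -np.inf
    r = obs - surrogate(theta, t)
    return -0.5 * np.sum(r**2) / sigma**2

def metropolis(n=20_000, step=0.05):
    """Random-walk Metropolis sampler over the single parameter."""
    chain = np.empty(n)
    theta, lp = 2.0, log_post(2.0)
    for i in range(n):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis()
# After burn-in, the chain's mean should sit near the true value of 2.5.
```

Replacing `surrogate` with a trained polynomial or Gaussian-process emulator of the full model is what makes the tens of thousands of MCMC evaluations affordable.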

  7. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and during 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; executing the experiment, including distributed feedback control; and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  8. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 1: A cloud ensemble/radiative parameterization for sensor response (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Raymond, William H.

    1990-01-01

    The physical retrieval of geophysical parameters based upon remotely sensed data requires a sensor response model which relates the upwelling radiances that the sensor observes to the parameters to be retrieved. In the retrieval of precipitation water contents from satellite passive microwave observations, the sensor response model has two basic components. First, a description of the radiative transfer of microwaves through a precipitating atmosphere must be considered, because it is necessary to establish the physical relationship between precipitation water content and upwelling microwave brightness temperature. Also, the spatial response of the satellite microwave sensor (or antenna pattern) must be included in the description of sensor response, since precipitation and the associated brightness temperature field can vary over a typical microwave sensor resolution footprint. A 'population' of convective cells, as well as stratiform clouds, is simulated using a computationally efficient multi-cylinder cloud model. Ensembles of clouds selected at random from the population, distributed over a 25 km x 25 km model domain, serve as the basis for radiative transfer calculations of upwelling brightness temperatures at the SSM/I frequencies. Sensor spatial response is treated explicitly by convolving the upwelling brightness temperature with the domain-integrated SSM/I antenna patterns. The sensor response model is utilized in precipitation water content retrievals.
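
    The antenna-convolution step described above can be sketched as follows. The Gaussian "antenna pattern" and the single rain cell on a 25 km x 25 km grid are illustrative stand-ins chosen here, not the actual SSM/I gains or a cloud-model output:

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 25 km x 25 km brightness-temperature field (1 km grid) with a rain cell
x = np.arange(25.0)
X, Y = np.meshgrid(x, x)
tb = 220.0 + 60.0 * np.exp(-((X - 12) ** 2 + (Y - 12) ** 2) / 20.0)

# Gaussian stand-in for the SSM/I antenna gain pattern (not the real pattern)
gain = np.exp(-((X - 12) ** 2 + (Y - 12) ** 2) / 50.0)
gain /= gain.sum()                       # normalize gain to unit integral

# Sensor-observed field: scene convolved with the antenna pattern, which
# smooths the peak brightness over the footprint
tb_obs = convolve2d(tb, gain, mode="same", boundary="symm")
```

    Because the normalized pattern acts as a weighted average, the observed peak is lower than the scene peak, which is exactly the beam-filling effect the sensor response model has to account for.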

  9. Mathematical Modeling of Fluid Flow in a Water Physical Model of an Aluminum Degassing Ladle Equipped with an Impeller-Injector

    NASA Astrophysics Data System (ADS)

    Gómez, Eudoxio Ramos; Zenit, Roberto; Rivera, Carlos González; Trápaga, Gerardo; Ramírez-Argáez, Marco A.

    2013-04-01

    In this work, a 3D numerical simulation using a Euler-Euler-based model implemented into a commercial CFD code was used to simulate fluid flow and turbulence structure in a water physical model of an aluminum ladle equipped with an impeller for degassing treatment. The effect of critical process parameters such as rotor speed, gas flow rate, and the point of gas injection (conventional injection through the shaft vs a novel injection through the bottom of the ladle) on the fluid flow and vortex formation was analyzed with this model. The commercial CFD code PHOENICS 3.4 was used to solve all conservation equations governing the process for this two-phase fluid flow system. The mathematical model was reasonably well validated against experimentally measured liquid velocity and vortex sizes in a water physical model built specifically for this investigation. From the results, it was concluded that the angular speed of the impeller is the most important parameter in promoting better stirred baths and creating smaller and better distributed bubbles in the liquid. The pumping effect of the impeller is increased as the impeller rotation speed increases. Gas flow rate is detrimental to bath stirring and diminishes the pumping effect of the impeller. Finally, although the injection point was the least significant variable, it was found that the "novel" injection improves stirring in the ladle.

  10. Modeling and validation of photometric characteristics of space targets oriented to space-based observation.

    PubMed

    Wang, Hongyuan; Zhang, Wei; Dong, Aotuo

    2012-11-10

    A modeling and validation method of photometric characteristics of the space target was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry characteristics of the target were illustrated by the surface equations based on its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which considers the character of surface Gauss statistics and microscale self-shadow and is obtained by measurement and modeling in advance. The contributing surfaces of the target to observation system were determined by coordinate transformation according to the relative position of the space-based target, the background radiation sources, and the observation platform. Then a mathematical model on photometric characteristics of the space target was built by summing reflection components of all the surfaces. Photometric characteristics simulation of the space-based target was achieved according to its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was made based on the scale model of the satellite. The calculated results fit well with the measured results, which indicates the modeling method of photometric characteristics of the space target is correct.
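
    The summation of per-surface reflection components can be sketched as follows. A simple Lambertian BRDF stands in for the measured Gaussian-statistics BRDF model of the paper, and the sun/observer geometry and panel list are hypothetical:

```python
import numpy as np

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal diffuse surface."""
    return albedo / np.pi

def surface_flux(normal, sun_dir, obs_dir, area, albedo, solar_irr=1361.0):
    """Reflected flux contribution of one flat surface toward the observer.
    Surfaces facing away from the sun or the observer contribute nothing."""
    n = normal / np.linalg.norm(normal)
    cos_i = max(np.dot(n, sun_dir), 0.0)   # illumination angle factor
    cos_r = max(np.dot(n, obs_dir), 0.0)   # viewing angle factor
    return solar_irr * lambertian_brdf(albedo) * cos_i * cos_r * area

sun = np.array([0.0, 0.0, 1.0])            # unit vector toward the sun
obs = np.array([0.0, 0.6, 0.8])            # unit vector toward the observer
# Hypothetical target: three flat panels as (outward normal, area m^2, albedo)
panels = [
    (np.array([0.0, 0.0, 1.0]), 2.0, 0.3),
    (np.array([0.0, 1.0, 0.0]), 1.0, 0.2),
    (np.array([0.0, 0.0, -1.0]), 2.0, 0.3),  # back panel: contributes nothing
]
total = sum(surface_flux(n, sun, obs, a, alb) for n, a, alb in panels)
```

    In the paper the contributing surfaces are first selected by coordinate transformation; here that selection is implicit in the clamped cosine factors.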

  11. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation aims to discuss the results from a recent analysis where a suite of statistical tools was applied to evaluate rigorously the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media were generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.
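
    The kind of rigorous tail test advocated here can be sketched with a Clauset-style maximum-likelihood fit of the PL exponent followed by a log-likelihood comparison against a lognormal. This is a simplified sketch (the lognormal is fitted untruncated to the tail rather than as a proper truncated model); the synthetic data are lognormal, which often visually resembles a PL in double-log plots:

```python
import numpy as np

def fit_powerlaw_tail(t, t_min):
    """Continuous power-law MLE for p(t) ~ t^(-alpha), t >= t_min."""
    tail = t[t >= t_min]
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / t_min))
    return alpha, tail

def loglik_powerlaw(tail, t_min, alpha):
    return len(tail) * np.log((alpha - 1.0) / t_min) - alpha * np.sum(np.log(tail / t_min))

def loglik_lognormal(tail):
    # Untruncated lognormal MLE on the tail sample (simplification)
    mu, sigma = np.mean(np.log(tail)), np.std(np.log(tail))
    z = (np.log(tail) - mu) / sigma
    return np.sum(-np.log(tail * sigma * np.sqrt(2.0 * np.pi)) - 0.5 * z ** 2)

rng = np.random.default_rng(0)
# Synthetic late-time data: lognormal, i.e. NOT a power law
t = rng.lognormal(mean=2.0, sigma=1.0, size=5000)
t_min = np.quantile(t, 0.5)
alpha, tail = fit_powerlaw_tail(t, t_min)
# Log-likelihood ratio: positive favors the power law, negative the lognormal
R = loglik_powerlaw(tail, t_min, alpha) - loglik_powerlaw(tail, t_min, alpha) * 0 + (
    loglik_powerlaw(tail, t_min, alpha) - loglik_lognormal(tail)) * 0 + (
    loglik_powerlaw(tail, t_min, alpha) - loglik_lognormal(tail))
R = loglik_powerlaw(tail, t_min, alpha) - loglik_lognormal(tail)
```

    Even on purely lognormal data the PL fit returns a plausible-looking exponent, which is precisely why a visual log-log assessment alone is unreliable.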

  12. A convective study of Al2O3-H2O and Cu- H2O nano-liquid films sprayed over a stretching cylinder with viscous dissipation

    NASA Astrophysics Data System (ADS)

    Alshomrani, Ali Saleh; Gul, Taza

    2017-11-01

    This study analyzes the spray distribution of a nanofluid thin film over the slippery, stretching surface of a cylinder with thermal radiation. The distribution of the spray rate is designated as a function of the nanolayer thickness. The applied temperature used during the spray phenomenon has been assumed as a reference temperature with the addition of the viscous dissipation term. The diverse behavior of the thermal radiation with magnetic and chemical reaction effects has been carefully observed, which causes variations in the spray distribution and heat transmission. Water-based nanofluids, Al2O3-H2O and Cu-H2O, have been examined under momentum and thermal slip boundary conditions. The basic equations have been transformed into a set of nonlinear equations by using suitable variables for alteration. The approximate results of the problem have been achieved by using the optimal approach of the Homotopy Analysis Method (HAM). We demonstrate our results with the help of the numerical (ND-Solve) method, and a close agreement between the two methods is confirmed through graphs and tables. The rate of the spray pattern under the applied pressure term has also been obtained. The maximum cooling performance has been obtained by using the Cu-water nanofluid at small values of the magnetic parameter and alumina at large values of the magnetic parameter. The outcomes of the Cu-H2O and Al2O3-H2O nanofluids have been linked to the published results in the literature. The impact of the physical parameters, such as the skin friction coefficient and the local Nusselt number, has also been observed and compared with the published work. The effects of the momentum slip, thermal slip, thermal radiation, magnetic, and heat generation/absorption parameters on the spray rate have been calculated and discussed.

  13. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
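
    The first step, extracting a parameter time-series by Bayesian filtering, can be illustrated with a much simpler related technique: an ensemble Kalman filter on a state augmented with the unknown parameter. The scalar toy model below is an assumption for illustration, not the Lorenz-96 setup or the diffusion-forecast machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "physics-based model" with one hidden parameter a (a_true = 0.9)
def model(x, a):
    return a * np.cos(x)

# Generate synthetic noisy observations of the state x
a_true, obs_sd = 0.9, 0.1
x, obs = 1.0, []
for _ in range(200):
    x = model(x, a_true) + rng.normal(0.0, 0.02)
    obs.append(x + rng.normal(0.0, obs_sd))

# Ensemble Kalman filter on the augmented state (x, a): the parameter is an
# extra state component that persists between steps (plus a small jitter to
# keep ensemble spread), so the filter yields parameter estimates over time
N = 200
ens = np.stack([rng.normal(1.0, 0.5, N), rng.normal(0.5, 0.3, N)])  # rows: x, a
for y in obs:
    ens[0] = model(ens[0], ens[1]) + rng.normal(0.0, 0.02, N)  # forecast x
    ens[1] = ens[1] + rng.normal(0.0, 0.005, N)                # parameter jitter
    C = np.cov(ens)                                            # ensemble covariance
    K = C[:, 0] / (C[0, 0] + obs_sd ** 2)                      # gain (x observed)
    innov = y + rng.normal(0.0, obs_sd, N) - ens[0]            # perturbed observations
    ens = ens + K[:, None] * innov

a_est = ens[1].mean()   # filtered estimate of the hidden parameter
```

    The cross-covariance between the parameter and the observed state is what transfers information from the data to the parameter, mirroring the role of the filtering step in the paper's pipeline.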

  14. Structural Physics of Bee Honeycomb

    NASA Astrophysics Data System (ADS)

    Kaatz, Forrest; Bultheel, Adhemar; Egami, Takeshi

    2008-03-01

    Honeybee combs have aroused interest in the ability of honeybees to form regular hexagonal geometric constructs since ancient times. Here we use a real space technique based on the pair distribution function (PDF) and radial distribution function (RDF), and a reciprocal space method utilizing the Debye-Waller Factor (DWF) to quantify the order for a range of honeycombs made by Apis mellifera. The PDFs and RDFs are fit with a series of Gaussian curves. We characterize the order in the honeycomb using a real space order parameter, OP3, to describe the order in the combs and a two-dimensional Fourier transform from which a Debye-Waller order parameter, u, is derived. Both OP3 and u take values from [0, 1] where the value one represents perfect order. The analyzed combs have values of OP3 from 0.33 to 0.60 and values of u from 0.83 to 0.98. RDF fits of honeycomb histograms show that naturally made comb can be crystalline in a 2D ordered structural sense, yet is more `liquid-like' than cells made on `foundation' wax. We show that with the assistance of man-made foundation wax, honeybees can manufacture highly ordered arrays of hexagonal cells.
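
    The PDF/RDF construction can be sketched for an idealized, perfectly ordered comb. The hexagonal lattice of cell centers below (unit nearest-neighbor spacing) is an illustrative stand-in for measured comb coordinates:

```python
import numpy as np

def pair_distances(points):
    """All unique pairwise distances between 2D points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)
    return dist[iu]

# Idealized honeycomb cell centers: a 10 x 10 hexagonal lattice, unit spacing
pts = np.array([(j + 0.5 * (i % 2), i * np.sqrt(3) / 2)
                for i in range(10) for j in range(10)])

r = pair_distances(pts)

# Radial distribution function: histogram of distances normalized by shell
# area, so the nearest-neighbor peak stands out (bin edges offset so that
# the lattice distance 1.0 falls mid-bin rather than on an edge)
edges = np.arange(0.45, 5.0, 0.1)
hist, _ = np.histogram(r, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
g = hist / shell_area

first_peak = centers[np.argmax(g)]   # nearest-neighbor spacing for a perfect lattice
```

    For a real comb the Gaussian widths of these peaks, rather than their positions, carry the disorder information quantified by OP3 and the Debye-Waller parameter.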

  15. A gridded global description of the ionosphere and thermosphere for 1996 - 2000

    NASA Astrophysics Data System (ADS)

    Ridley, A.; Kihn, E.; Kroehl, H.

    The modeling and simulation community has asked for a realistic representation of the near-Earth space environment covering a significant number of years to be used in scientific and engineering applications. The data, data management systems, assimilation techniques, physical models, and computer resources are now available to construct a realistic description of the ionosphere and thermosphere over a 5 year period. DMSP and NOAA POES satellite data and solar emissions were used to compute Hall and Pedersen conductances in the ionosphere. Interplanetary magnetic field measurements on the ACE satellite define average electrostatic potential patterns over the northern and southern polar regions. These conductances, electric field patterns, and ground-based magnetometer data were input to the Assimilative Mapping of Ionospheric Electrodynamics model to compute the distribution of electric fields and currents in the ionosphere. The Global Thermosphere Ionosphere Model (GITM) used the ionospheric electrodynamic parameters to compute the distribution of particles and fields in the ionosphere and thermosphere. GITM uses a general circulation approach to solve the fundamental equations. Model results offer a unique opportunity to assess the relative importance of different forcing terms under a variety of conditions as well as the accuracies of different estimates of ionospheric electrodynamic parameters.

  16. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large scale structures are not testable in ground based facilities, the decision on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modelling knowledge available prior to operation is inadequate for achieving peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed component environment?

  17. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.

  18. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We presented a novel optical imaging-based method to measure such signals. Using a digital camera and ambient light, cardiovascular pulse waves were reliably extracted from color videos of human faces, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured HRs were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for the noninvasive measurement of multiple physiological parameters.
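
    A minimal version of such a pipeline (mean green-channel trace, detrending, dominant spectral peak in the physiological band) can be sketched as follows; the signal-weighted analysis itself is not reproduced, and the synthetic trace is an assumption standing in for real video data:

```python
import numpy as np

def heart_rate_bpm(green_trace, fps):
    """Estimate heart rate from a mean-green-channel trace: remove slow
    illumination drift, then take the dominant FFT frequency in 0.7-4 Hz."""
    n = np.arange(len(green_trace))
    trend = np.polyval(np.polyfit(n, green_trace, 1), n)   # linear drift
    x = green_trace - trend
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)                 # physiological band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 30 fps trace: a 72 bpm (1.2 Hz) pulse, linear drift, sensor noise
fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps
rng = np.random.default_rng(2)
trace = (0.02 * np.sin(2 * np.pi * 1.2 * t)   # pulse component
         + 0.1 * t                            # slow illumination drift
         + 0.005 * rng.normal(size=t.size))   # noise
bpm = heart_rate_bpm(trace, fps)
```

    Restricting the peak search to the physiological band is what keeps residual low-frequency illumination changes from being mistaken for the pulse.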

  19. PRMS-IV, the precipitation-runoff modeling system, version 4

    USGS Publications Warehouse

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

    Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  20. Global regionalized seismicity in view of Non-Extensive Statistical Physics

    NASA Astrophysics Data System (ADS)

    Chochlaki, Kalliopi; Vallianatos, Filippos; Michas, Georgios

    2018-03-01

    In the present work we study the distribution of Earth's shallow seismicity in different seismic zones, as it occurred from 1981 to 2011, extracted from the Centroid Moment Tensor (CMT) catalog. Our analysis is based on the subdivision of the Earth's surface into seismic zones that are homogeneous with regard to seismic activity and the orientation of the predominant stress field. For this, we use the Flinn-Engdahl regionalization (FE) (Flinn and Engdahl, 1965), which consists of fifty seismic zones, as modified by Lombardi and Marzocchi (2007). The latter authors grouped the 50 FE zones into larger tectonically homogeneous ones, utilizing the cumulative moment tensor method, resulting in thirty-nine seismic zones. In each of these seismic zones we study the distribution of seismicity in terms of the frequency-magnitude distribution and the inter-event time distribution between successive earthquakes, a task that is essential for hazard assessments and for better understanding the global and regional geodynamics. In our analysis we use non-extensive statistical physics (NESP), which seems to be one of the most adequate and promising methodological tools for analyzing complex systems, such as the Earth's seismicity, introducing the q-exponential formulation as the expression of the probability distribution function that maximizes the Sq entropy as defined by Tsallis (1988). The qE parameter is significantly greater than one for all the seismic regions analyzed, with values ranging from 1.294 to 1.504, indicating that magnitude correlations are particularly strong. Furthermore, the qT parameter shows some temporal correlations, and variations with cut-off magnitude show greater temporal correlations when the smaller magnitude earthquakes are included. The qT for earthquakes with magnitude greater than 5 takes values from 1.043 to 1.353, and as we increase the cut-off magnitude to 5.5 and 6 the qT value ranges from 1.001 to 1.242 and from 1.001 to 1.181 respectively, showing a significant decrease. Our findings support the idea of universality within the Tsallis approach to describing Earth's seismicity and present strong evidence of temporal clustering and long-range correlations of seismicity in each of the tectonic zones analyzed.
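
    The estimation of a qT-like parameter can be sketched by fitting a q-exponential survival function to inter-event times. The grid-search fit of the empirical CCDF below is a simple stand-in for the NESP estimation procedures used in such studies, and the synthetic data are an assumption:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def fit_q_ccdf(times, q_grid, tau_grid):
    """Grid-search least-squares fit of P(>t) = q_exp(-t/tau) to the
    empirical CCDF in log space."""
    t = np.sort(times)
    log_ccdf = np.log(1.0 - np.arange(len(t)) / len(t))
    best, best_err = None, np.inf
    for q in q_grid:
        for tau in tau_grid:
            model = np.maximum(q_exp(-t / tau, q), 1e-12)
            err = np.sum((log_ccdf - np.log(model)) ** 2)
            if err < best_err:
                best, best_err = (q, tau), err
    return best

# Synthetic inter-event times drawn from a q-exponential CCDF with q = 1.3:
# inverting P(>t) = [1 - (1-q) t/tau]^(1/(1-q)) gives t = tau (u^(1-q) - 1)/(q-1)
rng = np.random.default_rng(3)
q_true, tau_true = 1.3, 10.0
u = rng.uniform(size=4000)
times = tau_true * (u ** (1.0 - q_true) - 1.0) / (q_true - 1.0)

q_est, tau_est = fit_q_ccdf(times, np.arange(1.05, 1.6, 0.01), np.arange(5.0, 16.0, 0.5))
```

    For q > 1 the q-exponential has a power-law tail with exponent 1/(q - 1), which is why values of qT above one signal long-range temporal correlations.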

  1. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
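
    The two-step search (coarse grid traverse, then PSO inside the best grid cell) can be sketched on a toy objective. The smooth two-parameter function below is an assumption standing in for the SVR cross-validation error over hyperparameters; training a real SVR is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for SVR validation error over two hyperparameters (log space)
def objective(p):
    c, g = p
    return (c - 1.5) ** 2 + 2.0 * (g + 0.5) ** 2 + 0.1 * np.sin(5 * c) * np.sin(5 * g)

# Step 1: grid traverse narrows the global space to a local region
lo, hi = np.array([-3.0, -3.0]), np.array([3.0, 3.0])
axes = np.linspace(lo[0], hi[0], 13)
grid = [np.array([c, g]) for c in axes for g in axes]
center = min(grid, key=objective)        # best coarse grid point
span = (hi - lo) / 12.0                  # one grid cell in each direction

# Step 2: PSO refines the parameters inside the local region
n, iters = 20, 60
pos = center + rng.uniform(-1.0, 1.0, size=(n, 2)) * span
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.uniform(size=(n, 2)), rng.uniform(size=(n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, center - span, center + span)
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

    The grid step keeps PSO from being trapped far from the global optimum, while PSO supplies the fine resolution a feasible grid cannot afford.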

  2. Land use and urban morphology parameters for Vienna required for initialisation of the urban canopy model TEB derived via the concept of "local climate zones"

    NASA Astrophysics Data System (ADS)

    Trimmel, Heidelinde; Weihs, Philipp; Oswald, Sandro M.; Masson, Valéry; Schoetter, Robert

    2017-04-01

    Urban settlements are generally known for their high fractions of impermeable surfaces, large heat capacity and low humidity compared to rural areas, which results in the well known phenomenon of urban heat islands. Urbanized areas are growing, which can amplify the intensity and frequency of situations with heat stress. The urban heat island is not uniformly distributed, however, because the urban environment is highly diverse in its morphology: building heights, building contiguity and the configuration of open spaces and trees vary, causing changes in the aerodynamic resistance for heat transfer and in the drag coefficients for momentum. Furthermore, cities are characterized by highly variable physical surface properties such as albedo, emissivity, heat capacity and thermal conductivity. The distribution of the urban heat island is influenced by these morphological and physical parameters as well as by the distribution of unsealed soil and vegetation. These aspects influence the urban climate on the micro- and mesoscale. For the greater Vienna area, high-resolution vector and raster geodatasets were processed to derive land use surface fractions and building morphology parameters on block scale following the methodology of Cordeau (2016). A dataset of building age and typology was cross-checked and extended using satellite visual and thermal bands and linked to a database joining building age and typology with typical physical building parameters obtained from different studies (Berger et al. 2012; Amtmann and Altmann-Mavaddat 2014) and the OIB (Österreichisches Institut für Bautechnik). Using the dominant parameters obtained from these high-resolution, mainly ground-based data sets (building height, built-area fraction, unsealed fraction, sky view factor), a local climate zone classification was produced algorithmically. The threshold values were chosen according to Stewart and Oke (2012).
    This approach is compared to results obtained with the methodology of Bechtel et al. (2015), which is based on machine learning algorithms depending on satellite imagery and expert knowledge. The data on urban land use and morphology are used for initialisation of the town energy balance scheme TEB, but are also useful for other urban canopy models or for studies related to urban planning or modelling of the urban system. The sensitivity of canyon air and surface temperatures, air specific humidity and horizontal wind simulated by the town energy balance scheme TEB (Masson, 2000) to the dominant parameters, within the range determined for the present urban structure of Vienna and the expected changes (MA 18 2011, 2014a, 2014b; PGO 2011; Amtmann and Altmann-Mavaddat 2014), was calculated for different land cover zones. While building heights have a standard deviation of 3.2 m, which is 15% of the maximum average building height of a block, the built and unsealed surface fractions vary more strongly, with around 30% standard deviation. The pre-1919 structure of Vienna is rather uniform and easier to describe; the later building structure is more diverse in both morphological and physical building parameters. The largest uncertainties are therefore expected at the urban fringes, where the most development is also anticipated, and the analysis will focus on these areas. References: Amtmann M and Altmann-Mavaddat N (2014) Eine Typologie österreichischer Wohngebäude, Österreichische Energieagentur - Austrian Energy Agency, TABULA/EPISCOPE. Bechtel B, Alexander P, Böhner J, et al (2015) Mapping Local Climate Zones for a Worldwide Database of the Form and Function of Cities. ISPRS Int J Geo-Inf 4:199-219. doi: 10.3390/ijgi4010199. Berger T, Formayer H, Smutny R, Neururer C, Passawa R (2012) Auswirkungen des Klimawandels auf den thermischen Komfort in Bürogebäuden, Berichte aus Energie- und Umweltforschung. Cordeau E (2016) Les îlots morphologiques urbains (IMU), IAU îdF. Magistratsabteilung 18 - Stadtentwicklung und Stadtplanung, Wien - MA 18 (2011) Siedlungsformen für die Stadterweiterung. MA 18 (2014a) Smart City Wien - Rahmenstrategie. MA 18 (2014b) Stadtentwicklungsplan STEP 2025, www.step.wien.at. Masson V (2000) A physically-based scheme for the urban energy budget in atmospheric models. Bound-Layer Meteorol 94:357-397. doi: 10.1023/A:1002463829265. PGO (Planungsgemeinschaft Ost) (2011) stadtregion+, Planungskooperation zur räumlichen Entwicklung der Stadtregion Wien Niederösterreich Burgenland. Stewart ID, Oke TR (2012) Local climate zones for urban temperature studies. Bull Am Meteorol Soc 93:1879-1900.
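
    A threshold-based classification of blocks into climate-zone-like classes can be sketched as follows. The class names and threshold values below are illustrative placeholders only, not the actual Stewart and Oke (2012) local climate zone definitions:

```python
def classify_lcz(mean_height_m, built_fraction, unsealed_fraction):
    """Toy threshold classifier over block-scale dominant parameters.
    Thresholds are hypothetical, for illustration of the approach only."""
    if built_fraction < 0.1 and unsealed_fraction > 0.6:
        return "natural / vegetated"
    if mean_height_m > 25 and built_fraction > 0.4:
        return "compact high-rise"
    if 10 < mean_height_m <= 25 and built_fraction > 0.4:
        return "compact mid-rise"
    if built_fraction > 0.4:
        return "compact low-rise"
    return "open arrangement"
```

    Applied block by block to the derived building height, built-area fraction and unsealed fraction, rules of this form yield a complete zone map without any training data, which is the practical appeal of the threshold approach.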

  3. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    PubMed

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
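
    The recommended bootstrap approach can be sketched as follows: resample patients with replacement and refit the parametric distribution each time, so the spread of the refitted parameter pairs (including their correlation) represents parameter uncertainty. The Weibull model and the synthetic patient data are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical individual patient time-to-event data (months)
patients = stats.weibull_min.rvs(1.4, scale=12.0, size=150, random_state=rng)

# Nonparametric bootstrap: resample patients, refit the Weibull each time.
# Each refitted (shape, scale) pair still defines a full distribution for
# stochastic (patient-level) uncertainty; the spread across pairs is the
# parameter uncertainty to carry into probabilistic sensitivity analysis.
B = 200
fits = []
for _ in range(B):
    sample = rng.choice(patients, size=len(patients), replace=True)
    shape, loc, scale = stats.weibull_min.fit(sample, floc=0)
    fits.append((shape, scale))
fits = np.array(fits)

mean_shape, mean_scale = fits.mean(axis=0)
corr = np.corrcoef(fits[:, 0], fits[:, 1])[0, 1]   # parameter correlation
```

    Because each bootstrap replicate refits both parameters jointly, their correlation is preserved automatically, which is the property the multivariate Normal alternative has to impose explicitly.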

  4. An uncertainty model of acoustic metamaterials with random parameters

    NASA Astrophysics Data System (ADS)

    He, Z. C.; Hu, J. Y.; Li, Eric

    2018-01-01

    Acoustic metamaterials (AMs) are man-made composite materials. However, random uncertainties are unavoidable in applications of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes and frequency response function of AMs, are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using the first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are successfully validated against the Monte Carlo method.
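
    The core idea (first-order Taylor linearization plus change of variable, checked against Monte Carlo) can be sketched on a scalar toy response. The tanh "response" and parameter distribution are assumptions for illustration, not a real finite element AM model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "physical response" (e.g. a band-edge frequency) as a smooth function
# of one random material parameter theta
def response(theta):
    return 100.0 + 30.0 * np.tanh(theta)

mu, sigma = 0.2, 0.05                      # theta ~ N(mu, sigma)

# First-order perturbation: linearize the response about theta = mu
y0 = response(mu)
slope = 30.0 / np.cosh(mu) ** 2            # d(response)/d(theta) at mu

# Change of variable: a linear function of a Gaussian is Gaussian, so the
# predicted response pdf is N(y0, (slope * sigma)^2)
def pdf_y(y):
    z = (y - y0) / (slope * sigma)
    return np.exp(-0.5 * z ** 2) / (slope * sigma * np.sqrt(2.0 * np.pi))

# Monte Carlo check of the perturbation / change-of-variable prediction
samples = response(rng.normal(mu, sigma, size=100_000))
```

    The agreement of the Monte Carlo mean and standard deviation with y0 and slope*sigma is what validates the linearization for small parameter variance; for larger sigma the neglected curvature terms would show up as bias.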

  5. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models of physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
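
    The core idea, replacing a fixed model parameter with probability distributions and propagating them by Monte Carlo, can be sketched as follows; the toy power-law spectrum model and the distributions below are assumptions for illustration, not the actual shock-noise model:

```python
import random
import statistics

random.seed(0)

def spectral_level(amplitude, exponent, freq=2.0):
    """Toy stand-in for a noise spectrum model: level = A * f**(-n)."""
    return amplitude * freq ** (-exponent)

# Deterministic prediction with fixed parameters...
fixed = spectral_level(100.0, 1.5)
# ...versus nondeterministic parameters drawn from assumed distributions.
samples = sorted(spectral_level(random.gauss(100.0, 10.0),
                                random.uniform(1.3, 1.7))
                 for _ in range(5000))
lo, hi = samples[125], samples[4875]   # ~95% interval on the output
print(round(fixed, 1), round(statistics.mean(samples), 1),
      round(lo, 1), round(hi, 1))
```

    The fixed-parameter prediction becomes one point inside an output distribution, which is the quantity the global sensitivity analysis then decomposes across parameters.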

  6. Elucidation of Iron Gettering Mechanisms in Boron-Implanted Silicon Solar Cells

    DOE PAGES

    Laine, Hannu S.; Vahanissi, Ville; Liu, Zhengjun; ...

    2017-12-15

    To facilitate cost-effective manufacturing of boron-implanted silicon solar cells as an alternative to BBr3 diffusion, we performed a quantitative test of the gettering induced by solar-typical boron-implants with the potential for low saturation current density emitters (<50 fA/cm²). We show that depending on the contamination level and the gettering anneal chosen, such boron-implanted emitters can induce more than a 99.9% reduction in bulk iron point defect concentration. The iron point defect results as well as synchrotron-based nano-X-ray-fluorescence investigations of iron precipitates formed in the implanted layer imply that, with the chosen experimental parameters, iron precipitation is the dominant gettering mechanism, with segregation-based gettering playing a smaller role. We reproduce the measured iron point defect and precipitate distributions via kinetics modeling. First, we simulate the structural defect distribution created by the implantation process, and then we model these structural defects as heterogeneous precipitation sites for iron. Unlike previous theoretical work on gettering via boron- or phosphorus-implantation, our model is free of adjustable simulation parameters. The close agreement between the model and experimental results indicates that the model successfully captures the necessary physics to describe the iron gettering mechanisms operating in boron-implanted silicon. Furthermore, this modeling capability allows high-performance, cost-effective implanted silicon solar cells to be designed.

  7. Statistical Characteristics of Cloud over Beijing, China Obtained From Ka-band Doppler Radar Observation

    NASA Astrophysics Data System (ADS)

    LIU, J.; Bi, Y.; Duan, S.; Lu, D.

    2017-12-01

    It is well known that cloud characteristics, such as top and base heights, the layering structure of microphysical parameters, spatial coverage, and temporal duration, are important factors influencing both the radiation budget and its vertical partitioning, as well as the hydrological cycle through precipitation. Cloud structure, its statistical distribution, and its typical values also vary with geography and season. Ka-band radar is a powerful tool for obtaining the above parameters around the world, one example being the ARM cloud radar in Oklahoma, US. Since 2006, CloudSat, part of NASA's A-Train satellite constellation, has continuously observed cloud structure with global coverage, but it monitors clouds over a given local site only twice a day at fixed local times. Using the IAP Ka-band Doppler radar, which has operated continuously since early 2013 on the roof of the IAP building in Beijing, we obtained the statistical characteristics of clouds, including cloud layering, cloud top and base heights, and the thickness of each cloud layer and its distribution, and analyzed their monthly, seasonal, and diurnal variation; a statistical analysis of cloud reflectivity profiles was also made. The analysis covers both non-precipitating and precipitating clouds. Preliminary comparisons of the results with CloudSat/CALIPSO products for the same period and area are also presented.

  8. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
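
    The reservoir idea can be sketched as a one-line difference equation with stochastic daily inputs and a dissipation (output) parameter k; the input distribution and parameter values below are illustrative, not estimates from the study's data:

```python
import random

random.seed(7)

def simulate_reservoir(days=200, k=0.3, mean_input=1.0):
    """Euler steps of dL/dt = input(t) - k * L(t): stressors add to the
    level, while a fraction k is dissipated ("blown off") each day."""
    level, history = 0.0, []
    for _ in range(days):
        inflow = random.expovariate(1.0 / mean_input)  # daily stressor input
        level += inflow - k * level
        history.append(level)
    return history

traj = simulate_reservoir()
# After a burn-in, the level fluctuates around the equilibrium mean_input / k.
steady = sum(traj[50:]) / len(traj[50:])
print(round(steady, 2))
```

    The equilibrium level depends on both the input statistics and the output parameter k, which is why the model can separate a person's exposure to stressors from their ability to dissipate them.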

  9. Elucidation of Iron Gettering Mechanisms in Boron-Implanted Silicon Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laine, Hannu S.; Vahanissi, Ville; Liu, Zhengjun

    To facilitate cost-effective manufacturing of boron-implanted silicon solar cells as an alternative to BBr3 diffusion, we performed a quantitative test of the gettering induced by solar-typical boron-implants with the potential for low saturation current density emitters (<50 fA/cm²). We show that depending on the contamination level and the gettering anneal chosen, such boron-implanted emitters can induce more than a 99.9% reduction in bulk iron point defect concentration. The iron point defect results as well as synchrotron-based nano-X-ray-fluorescence investigations of iron precipitates formed in the implanted layer imply that, with the chosen experimental parameters, iron precipitation is the dominant gettering mechanism, with segregation-based gettering playing a smaller role. We reproduce the measured iron point defect and precipitate distributions via kinetics modeling. First, we simulate the structural defect distribution created by the implantation process, and then we model these structural defects as heterogeneous precipitation sites for iron. Unlike previous theoretical work on gettering via boron- or phosphorus-implantation, our model is free of adjustable simulation parameters. The close agreement between the model and experimental results indicates that the model successfully captures the necessary physics to describe the iron gettering mechanisms operating in boron-implanted silicon. Furthermore, this modeling capability allows high-performance, cost-effective implanted silicon solar cells to be designed.

  10. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  11. Recovering the Physical Properties of Molecular Gas in Galaxies from CO SLED Modeling

    NASA Astrophysics Data System (ADS)

    Kamenetzky, J.; Privon, G. C.; Narayanan, D.

    2018-05-01

    Modeling of the spectral line energy distribution (SLED) of the CO molecule can reveal the physical conditions (temperature and density) of molecular gas in Galactic clouds and other galaxies. Recently, the Herschel Space Observatory and ALMA have offered, for the first time, a comprehensive view of the rotational J = 4‑3 through J = 13‑12 lines, which arise from a complex, diverse range of physical conditions that must be simplified to one, two, or three components when modeled. Here we investigate the recoverability of physical conditions from SLEDs produced by galaxy evolution simulations containing a large dynamical range in physical properties. These simulated SLEDs were generally fit well by one component of gas whose properties largely resemble or slightly underestimate the luminosity-weighted properties of the simulations when clumping due to nonthermal velocity dispersion is taken into account. If only modeling the first three rotational lines, the median values of the marginalized parameter distributions better represent the luminosity-weighted properties of the simulations, but the uncertainties in the fitted parameters are nearly an order of magnitude, compared to approximately 0.2 dex in the “best-case” scenario of a fully sampled SLED through J = 10‑9. This study demonstrates that while common CO SLED modeling techniques cannot reveal the underlying complexities of the molecular gas, they can distinguish bulk luminosity-weighted properties that vary with star formation surface densities and galaxy evolution, if a sufficient number of lines are detected and modeled.

  12. Predictive Simulation of Gas Adsorption in Fixed-Beds and Limitations due to the Ill-Posed Danckwerts Boundary Condition

    NASA Technical Reports Server (NTRS)

    Knox, James Clinton

    2016-01-01

    The 1-D axially dispersed plug flow model is a mathematical model widely used for the simulation of adsorption processes. Lumped mass transfer coefficients such as the Glueckauf linear driving force (LDF) term and the axial dispersion coefficient are generally obtained by fitting simulation results to the experimental breakthrough test data. An approach is introduced where these parameters, along with the only free parameter in the energy balance equations, are individually fit to specific test data that isolates the appropriate physics. It is shown that with this approach this model provides excellent simulation results for the CO2 on zeolite 5A sorbent/sorbate system; however, for the H2O on zeolite 5A system, non-physical deviations from constant pattern behavior occur when fitting dispersive experimental results with a large axial dispersion coefficient. A method has also been developed that determines a priori what values of the LDF and axial dispersion terms will result in non-physical simulation results for a specific sorbent/sorbate system when using the one-dimensional axially dispersed plug flow model. A relationship between the steepness of the adsorption equilibrium isotherm as indicated by the distribution factor, the magnitude of the axial dispersion and mass transfer coefficient, and the resulting non-physical behavior is derived. This relationship is intended to provide a guide for avoiding non-physical behavior by limiting the magnitude of the axial dispersion term on the basis of the mass transfer coefficient and distribution factor.

  13. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated against the spatial distribution of millennial-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters, calculated with a Bayesian inversion scheme, show ranges comparable to previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  14. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.

  15. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in the physical space based on the time-marching technique and the four-stage Runge-Kutta time integration, for a given discrete ordinate. Roe's second-order upwind difference scheme is used to discretize the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in the physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated by this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement. This demonstrates that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.

  16. Banding of NMR-derived Methyl Order Parameters: Implications for Protein Dynamics

    PubMed Central

    Sharp, Kim A.; Kasinath, Vignesh; Wand, A. Joshua

    2014-01-01

    Our understanding of protein folding, stability and function has begun to more explicitly incorporate dynamical aspects. Nuclear magnetic resonance has emerged as a powerful experimental method for obtaining comprehensive site-resolved insight into protein motion. It has been observed that methyl-group motion tends to cluster into three “classes” when expressed in terms of the popular Lipari-Szabo model-free squared generalized order parameter. Here the origins of the three classes or bands in the distribution of order parameters are examined. As a first step, a Bayesian-based approach, which makes no a priori assumption about the existence or number of bands, is developed to detect the banding of O²axis values derived either from NMR experiments or molecular dynamics simulations. The analysis is applied to seven proteins, with extensive molecular dynamics simulations of these proteins in explicit water used to examine the relationship between O²axis and fine details of the motion of methyl-bearing side chains. All of the proteins studied display banding, with some subtle differences. We propose a very simple yet plausible physical mechanism for banding. Finally, our Bayesian method is used to analyze the measured distributions of methyl group motions in the catabolite activating protein and several of its mutants in various liganded states and to discuss the functional implications of the observed banding to protein dynamics and function. PMID:24677353
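
    Band detection with no a priori component count can be sketched by scoring 1-D Gaussian mixtures with BIC, a simple stand-in for the paper's Bayesian machinery; the three band locations in the synthetic order-parameter data below are assumptions for illustration:

```python
import math
import random

random.seed(2)

def gmm_bic(data, k, iters=200):
    """Fit a k-component 1-D Gaussian mixture by EM and return its BIC
    (lower is better); no a priori assumption on the number of bands."""
    n = len(data)
    mus = random.sample(data, k)
    sigmas = [0.1] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: component responsibilities for each observation.
        resp = []
        for x in data:
            ps = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                  for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(ps) or 1e-300
            resp.append([p / tot for p in ps])
        # M-step: re-estimate weights, means, and (floored) std deviations.
        for j in range(k):
            nj = sum(r[j] for r in resp) + 1e-12
            weights[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sigmas[j] = max(0.02, math.sqrt(
                sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj))
    loglik = sum(math.log(max(1e-300, sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, mus, sigmas)))) for x in data)
    return -2.0 * loglik + (3 * k - 1) * math.log(n)

# Synthetic order-parameter values drawn from three assumed bands.
data = ([random.gauss(0.15, 0.05) for _ in range(60)] +
        [random.gauss(0.55, 0.05) for _ in range(60)] +
        [random.gauss(0.85, 0.04) for _ in range(60)])
data = [min(max(x, 0.001), 0.999) for x in data]
best = min(range(1, 5), key=lambda k: gmm_bic(data, k))
print(best)
```

    BIC penalizes extra components, so a multi-band fit is preferred only when the data genuinely cluster, which mirrors the "no a priori number of bands" requirement.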

  17. Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison

    NASA Astrophysics Data System (ADS)

    Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team

    2016-09-01

    Experiments at RHIC and LHC study a special state of matter called the Quark Gluon Plasma (QGP), where quarks and gluons roam freely, by colliding relativistic heavy-ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, under two categories: those solely using relativistic viscous hydrodynamics (pure hydro model) and those that in addition couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid model). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive but necessary for knowledge extraction and determination of the models' quality. Our group has developed an analysis technique based on Bayesian statistics to perform the model calibration and to extract probability distributions for each model parameter. Following the previous work that applies the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by Duke University Goldman Sachs Research Fellowship.

  18. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    This manuscript presents a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  19. Synoptic thermal and oceanographic parameter distributions in the New York Bight Apex

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.; Bahn, G. S.; Thomas, J. P.

    1981-01-01

    Concurrent surface water measurements made from a moving oceanographic research vessel were used to calibrate and interpret remotely sensed data collected over a plume in the New York Bight Apex on 23 June 1977. Multiple regression techniques were used to develop equations to map synoptic distributions of chlorophyll a and total suspended matter in the remotely sensed scene. Thermal (which did not have surface calibration values) and water quality parameter distributions indicated a cold mass of water in the Bight Apex with an overflowing nutrient-rich warm water plume that originated in the Sandy Hook Bay and flowed south near the New Jersey shoreline. Data analysis indicates that remotely sensed data may be particularly useful for studying physical and biological processes in the top several metres of surface water at plume boundaries.

  20. Top-quark pairs at high invariant mass: a model-independent discriminator of new physics at the Large Hadron Collider.

    PubMed

    Barger, Vernon; Han, Tao; Walker, Devin G E

    2008-01-25

    We study top-quark pair production to probe new physics at the CERN Large Hadron Collider. We propose reconstruction methods for tt̄ semileptonic events and use them to reconstruct the tt̄ invariant mass. The angular distribution of top quarks in their c.m. frame can determine the spin and production subprocess for each new physics resonance. Forward-backward asymmetry and CP-odd variables can be constructed to further delineate the nature of new physics. We parametrize the new resonances with a few generic parameters and show high invariant mass top pair production may provide an early indicator for new physics beyond the standard model.

  1. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
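
    Principle (4), that sensitivity to the dispersion coefficient is far weaker than sensitivity to velocity, can be checked directly from an analytical solution. The sketch below uses the leading Ogata-Banks term for a continuous inlet (an assumption about the exact boundary conditions) and central finite differences for the scaled sensitivities; the observation point and parameter values are illustrative:

```python
import math

def conc(x, t, v, D):
    """Leading Ogata-Banks term of the 1-D advection-dispersion equation
    for a continuous unit-concentration inlet at x = 0."""
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

def scaled_sensitivity(param, x=100.0, t=90.0, v=1.0, D=1.0, h=1e-4):
    """Central-difference scaled sensitivity p * dc/dp at one point."""
    base = dict(v=v, D=D)
    p = base[param]
    up = dict(base, **{param: p * (1 + h)})
    dn = dict(base, **{param: p * (1 - h)})
    dcdp = (conc(x, t, **up) - conc(x, t, **dn)) / (2.0 * p * h)
    return p * dcdp

sv = abs(scaled_sensitivity("v"))
sD = abs(scaled_sensitivity("D"))
print(round(sv / sD, 1))  # velocity sensitivity dominates (~18x at this point)
```

    During passage of the solute front the velocity sensitivity peaks sharply, while the dispersion sensitivity stays at least an order of magnitude smaller, matching principles (1) and (4) above.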

  2. New Geophysical Technique for Mineral Exploration and Mineral Discrimination Based on Electromagnetic Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael S. Zhdanov

    2005-03-09

    The research during the first year of the project was focused on developing the foundations of a new geophysical technique for mineral exploration and mineral discrimination, based on electromagnetic (EM) methods. The proposed new technique is based on examining the spectral induced polarization effects in electromagnetic data using modern distributed acquisition systems and advanced methods of 3-D inversion. The analysis of IP phenomena is usually based on models with frequency dependent complex conductivity distribution. One of the most popular is the Cole-Cole relaxation model. In this progress report we have constructed and analyzed a different physical and mathematical model of the IP effect based on the effective-medium theory. We have developed a rigorous mathematical model of multi-phase conductive media, which can provide a quantitative tool for evaluation of the type of mineralization, using the conductivity relaxation model parameters. The parameters of the new conductivity relaxation model can be used for discrimination of the different types of rock formations, which is an important goal in mineral exploration. The solution of this problem requires development of an effective numerical method for EM forward modeling in 3-D inhomogeneous media. During the first year of the project we have developed a prototype 3-D IP modeling algorithm using the integral equation (IE) method. Our IE forward modeling code INTEM3DIP is based on the contraction IE method, which improves the convergence rate of the iterative solvers. This code can handle various types of sources and receivers to compute the effect of a complex resistivity model. We have tested the working version of the INTEM3DIP code for computer simulation of the IP data for several models including a southwest US porphyry model and a Kambalda-style nickel sulfide deposit.
The numerical modeling study clearly demonstrates how the various complex resistivity models manifest differently in the observed EM data. These modeling studies lay a background for future development of the IP inversion method, directed at determining the electrical conductivity and the intrinsic chargeability distributions, as well as the other parameters of the relaxation model simultaneously. The new technology envisioned in this proposal will be used for the discrimination of different rocks, and in this way will provide an ability to distinguish between uneconomic mineral deposits and the location of zones of economic mineralization and geothermal resources.
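
    For reference, the Cole-Cole relaxation model mentioned above can be evaluated in a few lines; the parameter values below (chargeability, relaxation time, frequency exponent) are illustrative, not fitted to any deposit:

```python
import cmath
import math

def cole_cole_resistivity(freq, rho0=100.0, eta=0.2, tau=0.01, c=0.5):
    """Cole-Cole complex resistivity:
    rho(w) = rho0 * (1 - eta * (1 - 1 / (1 + (i*w*tau)**c))),
    with eta the intrinsic chargeability, tau the relaxation time,
    and c the frequency exponent."""
    w = 2.0 * math.pi * freq
    return rho0 * (1.0 - eta * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

# Amplitude drops from rho0 toward rho0*(1 - eta) with frequency, and the
# phase peak in between is the IP signature used for discrimination.
for f in (0.1, 10.0, 1000.0):
    rho = cole_cole_resistivity(f)
    print(f, round(abs(rho), 2), round(math.degrees(-cmath.phase(rho)), 2))
```

    Inverting for (rho0, eta, tau, c) from multi-frequency data is what allows different rock formations to be discriminated, as described above.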

  3. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distributions of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
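
    The first (Gelman-style) change of variables can be sketched with a softmax/logistic transform: the filter updates k-1 unconstrained, normally distributed variables, and every ensemble member maps back to a valid sum-to-one preference vector. The dimensions and values below are illustrative assumptions:

```python
import math
import random

random.seed(3)

def to_simplex(z):
    """Map k-1 unconstrained (normally distributed) variables to k positive,
    sum-to-one grazing preferences via the softmax/logistic transform."""
    e = [math.exp(v) for v in z] + [1.0]   # last component pinned for identifiability
    s = sum(e)
    return [v / s for v in e]

# An ensemble filter can update the unconstrained z with Gaussian assumptions;
# each member still yields a valid preference vector after the transform.
ensemble_z = [[random.gauss(0.0, 0.5), random.gauss(-0.5, 0.5)] for _ in range(5)]
for z in ensemble_z:
    p = to_simplex(z)
    print([round(v, 3) for v in p], round(sum(p), 6))
```

    Because the Kalman update acts on z rather than on the preferences themselves, positiveness and the sum-to-one constraint hold by construction for every analysis member.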

  4. Viticulture microzoning: a functional approach aiming to grape and wine qualities

    NASA Astrophysics Data System (ADS)

    Bonfante, A.; Agrillo, A.; Albrizio, R.; Basile, A.; Buonomo, R.; De Mascellis, R.; Gambuti, A.; Giorio, P.; Guida, G.; Langella, G.; Manna, P.; Minieri, L.; Moio, L.; Siani, T.; Terribile, F.

    2014-12-01

    This paper aims to test a new physically oriented approach to viticulture zoning at the farm scale, strongly rooted in hydropedology and aimed at a better use of environmental features with respect to plant requirements and wine production. The physical core of our approach is the use of soil-plant-atmosphere simulation models that apply physically based equations to describe soil hydrological processes and solve for the soil-plant water status. This study (ZOVISA project) was conducted on a farm devoted to high-quality wine production (Aglianico DOC), located in southern Italy (Campania region, Mirabella Eclano-AV). The soil spatial distribution was obtained from a standard soil survey informed by a geophysical survey. Two Homogeneous Zones (HZs) were identified; in each of these a physically based model was applied to solve the soil water balance and estimate the soil functional behaviour (crop water stress index, CWSI), defining the functional Homogeneous Zones (fHZs). In the fHZs, experimental plots were established and monitored to investigate soil-plant water status, crop development (biometric and physiological parameters) and daily climate variables (temperature, solar radiation, rainfall, wind). The effects of crop water status on must and wine quality were then evaluated in the fHZs. This was performed by comparing crop water stress with (i) crop physiological measurements (leaf gas exchange, chlorophyll a fluorescence, leaf water potential, chlorophyll content, LAI), (ii) grape bunch measurements (berry weight, sugar content, titratable acidity, etc.) and (iii) wine quality (aromatic response). This experiment proved the usefulness of the physically based approach for mapping viticulture microzones.
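    A seasonal crop water stress index of the kind used to define the fHZs can be computed from the simulated water balance as one minus the ratio of actual to potential transpiration. The definition below is a common water-balance form and an assumption on our part; the paper may use a variant:

```python
def seasonal_cwsi(actual_transp, potential_transp):
    """Seasonal crop water stress index from daily actual and potential
    transpiration series (e.g. mm/day): CWSI = 1 - sum(Ta)/sum(Tp).
    0 means no water stress; values near 1 mean severe stress.
    Assumed water-balance definition, for illustration only."""
    return 1.0 - sum(actual_transp) / sum(potential_transp)
```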

  5. Simulated Seasonal Spatio-Temporal Patterns of Soil Moisture, Temperature, and Net Radiation in a Deciduous Forest

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Howington, Stacy E.; Cinnella, Pasquale; Smith, James A.

    2011-01-01

    The temperature and moisture regimes in a forest are key components of forest ecosystem dynamics. Observations and studies indicate that the internal temperature distribution and moisture content of a tree influence not only growth and development but also the onset and cessation of cambial activity [1] and resistance to insect predation [2], and even affect the population dynamics of the insects [3]. Moreover, temperature directly affects uptake and metabolism from the soil into the tree tissue [4]. Additional studies show that soil and atmospheric temperatures are significant parameters that limit the growth of trees and impose treeline elevation limits [5]. Directional thermal infrared radiance effects have long been observed in natural backgrounds [6]. In earlier work, we illustrated the use of physically based models to simulate directional effects in thermal imaging [7-8]. In this paper, we illustrate the use of physically based models to simulate directional thermal effects and net radiation in a deciduous forest using our recently developed three-dimensional, macro-scale computational tool that simulates heat and mass transfer interactions in a soil-root-stem system (SRSS). The SRSS model couples existing heat and mass transport tools to simulate the diurnal internal and external temperatures, internal fluid flow and moisture distribution, and heat flow in the system.

  6. Quantitative groundwater modelling for a sustainable water resource exploitation in a Mediterranean alluvial aquifer

    NASA Astrophysics Data System (ADS)

    Laïssaoui, Mounir; Mesbah, Mohamed; Madani, Khodir; Kiniouar, Hocine

    2018-05-01

    To analyze the water budget under human influence in the Isser wadi alluvial aquifer in northeastern Algeria, we built a mathematical model that can be used to better manage groundwater exploitation. A modular three-dimensional finite-difference groundwater flow model (MODFLOW) was used. The modelling system is largely based on physical laws and employs the finite-difference numerical method to simulate water movement and fluxes in a horizontally discretized field. After steady-state calibration, the model reproduced the initial heads with rather good precision. It enabled us to quantify the terms of the aquifer water balance and to obtain a distribution of conductivity zones. The model also highlighted the relevant role of the Isser wadi, which constitutes a drain of great importance for the aquifer, alone ensuring almost all outflows. The scenarios explored in transient simulations showed that an increase in pumping would only lower the groundwater levels further and disrupt the natural balance of the aquifer. However, this situation depends primarily on the position of the pumping wells in the plain as well as on the extracted volumes of water. As its promising results show, this physically based, distributed-parameter model is a valuable contribution to the ever-advancing technology of hydrological modelling and water resources assessment.
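    The finite-difference idea underlying MODFLOW can be illustrated with a minimal one-dimensional sketch: for steady-state flow with homogeneous conductivity, each cell's head is the average of its neighbours, and iterating this to convergence recovers the expected linear head profile between two fixed-head boundaries. This toy solver is for illustration only, not MODFLOW code (names and simplifications are ours):

```python
def steady_heads_1d(h_left, h_right, n_cells, iters=20000):
    """Gauss-Seidel solution of 1-D steady-state groundwater flow
    (Laplace equation) on a uniform grid with fixed-head boundaries,
    mimicking one row of a MODFLOW-style finite-difference grid."""
    h = [h_left] + [0.0] * n_cells + [h_right]
    for _ in range(iters):
        for i in range(1, n_cells + 1):
            # homogeneous conductivity cancels from the cell balance
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h
```

    With heterogeneous conductivity, the averages become conductance-weighted; the real model adds the second horizontal dimension, layers, storage, and source/sink terms.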

  7. Time scale variations of the CIV resonance lines in HD 24534

    NASA Astrophysics Data System (ADS)

    Tsatsi, A.

    2012-01-01

    Many lines in the spectra of hot emission stars (Be and Oe) present peculiar and very complex profiles. As a result, we cannot find a classical theoretical distribution that fits these profiles, and many physical parameters of the regions where these lines are created are therefore difficult to determine. In this paper we adopt the Gauss-Rotation model (GR model), which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs). The model is applied to the CIV (λλ 1548.187, 1550.772 Å) resonance lines in the spectra of HD 24534 (X Persei), taken by I.U.E. at three different periods. From this analysis we calculate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM) and the absorbed energy of the independent regions of matter that produce the main and satellite components of the studied spectral lines. Finally, we calculate the time-scale variation of these physical parameters.

  8. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available to determine statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)). Built-in MATLAB distributions were fitted to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
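    The distribution-fitting step described above can be illustrated with a log-normal maximum-likelihood fit, which recovers the location (mu) and scale (sigma) parameters as the mean and standard deviation of the log-transformed samples. This stands in for the built-in MATLAB fits mentioned in the abstract (a sketch, not CPAS code):

```python
import math

def fit_lognormal(samples):
    """Maximum-likelihood fit of a log-normal distribution to positive
    samples: mu and sigma are the mean and standard deviation of
    log(x). Stand-in for a built-in MATLAB distribution fit."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma
```

    In practice one would fit several candidate families this way and compare them against the histogram before applying engineering judgment, as the abstract describes.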

  9. Virtual design and optimization studies for industrial silicon microphones applying tailored system-level modeling

    NASA Astrophysics Data System (ADS)

    Kuenzig, Thomas; Dehé, Alfons; Krumbein, Ulrich; Schrag, Gabriele

    2018-05-01

    Pushing technological limits to satisfy customers' demands and obtain the best performance of micro-devices and -systems is a challenge for today's manufacturers. Dedicated system simulation is key to investigating the potential of device and system concepts in order to identify the best design with respect to the given requirements. We present a tailored, physics-based system-level modeling approach combining lumped with distributed models that provides detailed insight into device and system operation at low computational expense. The resulting transparent, scalable (i.e. reusable) and modularly composed models explicitly contain the physical dependency on all relevant parameters and are thus well suited for the dedicated investigation and optimization of MEMS devices and systems. This is demonstrated for an industrial capacitive silicon microphone. The performance of such microphones is determined by distributed effects like viscous damping and inhomogeneous capacitance variation across the membrane, as well as by system-level phenomena like package-induced acoustic effects and the impact of the electronic circuitry for biasing and read-out. The model presented here covers all relevant figures of merit and thus makes it possible to evaluate the optimization potential of silicon microphones for high-fidelity applications. This work was carried out at the Technical University of Munich, Chair for Physics of Electrotechnology. Thomas Kuenzig is now with Infineon Technologies AG, Neubiberg.

  10. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher-quality source image by improving the observational input data (e.g. using more, and higher-quality, near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) into the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate its effectiveness by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Moneli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) for a single-window slip velocity function. We search for local rupture velocity directly in the inversion and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one searched for directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we regularize the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the parameters of the pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
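    The cross-correlation structure in the prior can be illustrated by drawing jointly Gaussian kinematic parameters with a prescribed correlation coefficient via a 2x2 Cholesky factor. This is a generic sketch of correlated prior sampling, not the authors' implementation (parameter names and values are hypothetical):

```python
import math
import random

def correlated_pair(mu1, mu2, sd1, sd2, rho, rng):
    """Draw one (slip, rupture-velocity) pair from a bivariate normal
    with correlation rho, using the 2x2 Cholesky factor of the
    correlation matrix. Illustrates building cross-correlation between
    kinematic source parameters into a prior."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    x1 = mu1 + sd1 * z1
    x2 = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return x1, x2
```

    Repeated draws reproduce the target correlation; in a real inversion the same idea extends to the full spatial auto- and cross-covariance of the source parameter fields.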

  11. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagne, MC; Archambault, L; CHU de Quebec, Quebec, Quebec

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organ-at-risk (OAR) sparing. We previously developed metrics that correlate doses to OARs with specific patients' morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head-and-neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 samples of organ data. The mean RMS value of the error of new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  12. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and their radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  13. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. The modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. The performance of the different estimators is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic is found to provide efficient and precise parameter estimates compared to the other estimators considered. The simulation results were further confirmed using two real-life examples, where maximum likelihood and moment estimators were also considered.
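    For context, the traditional percentile estimator that the modified versions build on inverts the Pareto CDF F(x) = 1 - (x_m/x)^a at two probability levels: matching two sample quantiles to their theoretical counterparts yields closed-form shape and scale estimates. A minimal sketch (our notation, not the paper's):

```python
import math

def pareto_percentile_fit(q1, q2, p1, p2):
    """Classical two-percentile estimator for the Pareto distribution.

    Given sample quantiles q1 ~ Q(p1) and q2 ~ Q(p2) with p1 < p2,
    invert Q(p) = scale * (1 - p)**(-1/shape) for shape and scale.
    The paper's modifications replace these plug-in quantiles with
    median/geometric-mean based quantities."""
    shape = math.log((1.0 - p1) / (1.0 - p2)) / math.log(q2 / q1)
    scale = q1 * (1.0 - p1) ** (1.0 / shape)
    return shape, scale
```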

  14. Bluetooth-based distributed measurement system

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng

    2007-07-01

    A novel distributed wireless measurement system, consisting of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instruments, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are done so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed with a measurement flow chart for a Bluetooth-based distributed measurement system, and the advantages and disadvantages of the system are analyzed at the end of the paper. The measurement system has been used successfully in the Daqing oilfield, China, for the measurement of parameters such as temperature, flow rate and oil pressure at an electromotor-pump unit.

  15. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  16. Mapping ground water vulnerability to pesticide leaching with a process-based metamodel of EuroPEARL.

    PubMed

    Tiktak, A; Boesten, J J T I; van der Linden, A M A; Vanclooster, M

    2006-01-01

    To support EU policy, indicators of pesticide leaching at the European level are required. For this reason, a metamodel of the spatially distributed European pesticide leaching model EuroPEARL was developed. EuroPEARL considers transient flow and solute transport and assumes Freundlich adsorption, first-order degradation and passive plant uptake of pesticides. Physical parameters are depth dependent, while (bio)chemical parameters are depth, temperature, and moisture dependent. The metamodel is based on an analytical expression that describes the mass fraction of pesticide leached. The metamodel ignores vertical parameter variations and assumes steady flow. The calibration dataset was generated with EuroPEARL and consisted of approximately 60,000 simulations for 56 pesticides with different half-lives and partitioning coefficients. The target variable was the 80th percentile of the annual average leaching concentration at 1-m depth from a 20-yr time series. The metamodel explains over 90% of the variation of the original model with only four independent spatial attributes. These attributes are available in European soil and climate databases, so the calibrated metamodel could be applied to generate maps of the predicted leaching concentration in the European Union. Maps generated with the metamodel showed good similarity to the maps obtained with EuroPEARL, which was confirmed by means of quantitative performance indicators.
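    The abstract does not give the analytical expression behind the metamodel; a screening-level stand-in, assumed here purely for illustration, is the fraction of pesticide surviving first-order degradation (half-life DT50) during convective travel to the 1-m reporting depth:

```python
import math

def leached_fraction(dt50_days, travel_time_days):
    """Screening-level leaching metamodel (assumed form, NOT the
    EuroPEARL expression): mass fraction surviving first-order decay,
    exp(-k * t), with k = ln(2)/DT50 and t the convective travel time
    to the reporting depth."""
    k = math.log(2.0) / dt50_days
    return math.exp(-k * travel_time_days)
```

    Real metamodels of this kind fold sorption and climate into the effective travel time, which is how a handful of spatial attributes can reproduce most of the full model's variation.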

  17. Discrete element method as an approach to model the wheat milling process

    USDA-ARS?s Scientific Manuscript database

    It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...

  18. Superstatistics with different kinds of distributions in the deformed formalism

    NASA Astrophysics Data System (ADS)

    Sargolzaeipor, S.; Hassanabadi, H.; Chung, W. S.

    2018-03-01

    In this article, after first introducing superstatistics, the effective Boltzmann factor in a deformed formalism is derived for modified Dirac delta, uniform, two-level and Gamma distributions. We then apply superstatistics to four important problems in physics and calculate the thermodynamic properties of the systems. All results reduce to those of ordinary statistical mechanics in the appropriate limit. Furthermore, the effects of all parameters in the problems are calculated and shown graphically.
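    The effective Boltzmann factor of superstatistics, B(E) = ∫ f(β) exp(-βE) dβ, can be checked numerically for the Gamma case, where the closed form is the Tsallis-like factor (1 + (q-1)β₀E)^(-1/(q-1)). A quadrature sketch (our parameterization, for the undeformed Gamma case only):

```python
import math

def effective_boltzmann_gamma(E, beta0, q, n=200000):
    """Superstatistical effective Boltzmann factor for a Gamma-distributed
    inverse temperature with mean beta0, evaluated by trapezoidal
    quadrature; should match (1 + (q-1)*beta0*E)**(-1/(q-1))."""
    k = 1.0 / (q - 1.0)            # Gamma shape (k > 1 assumed here)
    theta = (q - 1.0) * beta0      # Gamma scale, so mean = k*theta = beta0
    norm = math.gamma(k) * theta ** k
    bmax = 50.0 * beta0            # truncate the integration domain
    h = bmax / n
    total = 0.0
    for i in range(1, n):          # endpoints contribute ~0 for k > 1
        b = i * h
        total += b ** (k - 1.0) * math.exp(-b / theta - b * E)
    return total * h / norm
```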

  19. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the parameters of interest. We have applied a modern methodology, Sequential Approximate Bayesian Computation (S-ABC), to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems, and sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e. its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where model parameters best fitted to observable data are sought.
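    The core accept/reject idea behind ABC can be sketched in a few lines; the S-ABC of the paper adds sequential refinement of the tolerance, which is omitted here. The function names and the toy forward model in the usage below are ours:

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_trials, rng):
    """Plain ABC rejection sampling: draw parameters from the prior,
    run the forward model, and keep draws whose simulated observation
    lies within tolerance eps of the real observation. The accepted
    draws approximate the posterior without evaluating a likelihood."""
    accepted = []
    for _ in range(n_trials):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) <= eps:
            accepted.append(theta)
    return accepted
```

    In the source-tracing setting, `theta` would be the vector of source location, release rate, and timing, and `simulate` a dispersion model producing sensor concentrations, with a distance over the whole sensor record replacing the scalar comparison.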

  20. A geometric theory for Lévy distributions

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2014-08-01

    Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.

  1. A geometric theory for Lévy distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2014-08-15

    Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.

  2. Continuum-based DFN-consistent numerical framework for the simulation of oxygen infiltration into fractured crystalline rocks.

    PubMed

    Trinchero, Paolo; Puigdomenech, Ignasi; Molinero, Jorge; Ebrahimi, Hedieh; Gylling, Björn; Svensson, Urban; Bosbach, Dirk; Deissmann, Guido

    2017-05-01

    We present an enhanced continuum-based approach for the modelling of groundwater flow coupled with reactive transport in crystalline fractured rocks. In the proposed formulation, flow, transport and geochemical parameters are represented on a numerical grid using Discrete Fracture Network (DFN)-derived parameters. The geochemical reactions are further constrained by field observations of mineral distribution. To illustrate how the approach can be used to include physical and geochemical complexities in reactive transport calculations, we have analysed the potential ingress of oxygenated glacial meltwater into a heterogeneous fractured rock, using the Forsmark site (Sweden) as an example. The results of high-performance reactive transport calculations show that, after a quick oxygen penetration, steady-state conditions are attained in which abiotic reactions (i.e. the dissolution of chlorite and the homogeneous oxidation of aqueous iron(II) ions) counterbalance advective oxygen fluxes. The results show that most of the chlorite becomes depleted in the highly conductive deformation zones, where higher mineral surface areas are available for reactions. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions: The influence of the level-density parameter and the neck thickness

    NASA Astrophysics Data System (ADS)

    Naderi, D.; Pahlavani, M. R.; Alavi, S. A.

    2013-05-01

    Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on the neutron multiplicity and the anisotropy of the angular distribution of fission fragments was investigated. We used level-density parameters based on the liquid-drop model with two different prescriptions: the Bartel approach and the Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by the level-density parameter and the neck thickness. The calculations were performed for the 16O+208Pb and 20Ne+209Bi reactions. Results obtained with the two-dimensional Langevin equations and the level-density parameter of Bartel and co-workers are in better agreement with the experimental data.

  4. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
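    Ingredient (1) of the methodology, Latin hypercube sampling, can be sketched directly: each parameter's range is split into equal strata, each stratum is sampled exactly once, and strata are paired across parameters by random permutation. A standard-library sketch (not the authors' code):

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """Latin hypercube sample on the unit cube: per parameter, the
    interval [0, 1) is split into n_samples equal strata, each stratum
    is hit exactly once, and strata are paired across parameters by
    an independent random permutation."""
    sample = [[0.0] * n_params for _ in range(n_samples)]
    for j in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # one uniform draw inside the assigned stratum
            sample[i][j] = (strata[i] + rng.random()) / n_samples
    return sample
```

    Each unit-cube point is then rescaled to the physical ranges of the model parameters before running the forecast model.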

  5. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  6. Imposition of physical parameters in dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Mai-Duy, N.; Phan-Thien, N.; Tran-Cong, T.

    2017-12-01

    In mesoscale simulations by dissipative particle dynamics (DPD), the motion of a fluid is modelled by a set of particles interacting in a pairwise manner; the resulting flow has been shown to be governed by the Navier-Stokes equation, with its physical properties, such as viscosity, Schmidt number, isothermal compressibility, and relaxation and inertia time scales (in fact its whole rheology), resulting from the choice of the DPD model parameters. In this work, we explore the response of a DPD fluid with respect to its parameter space, where the model input parameters can be chosen in advance so that (i) the ratio between the relaxation and inertia time scales is fixed; (ii) the isothermal compressibility of water at room temperature is enforced; and (iii) the viscosity and Schmidt number can be specified as inputs. These impositions are possible with some extra degrees of freedom in the weighting functions for the conservative and dissipative forces. Numerical experiments show an improvement in the solution quality over conventional DPD parameters/weighting functions, particularly for the number density distribution and computed stresses.
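The pairwise interaction underlying DPD has a standard three-part form; the sketch below uses the conventional weighting function and textbook parameter values, not the generalized weighting functions with extra degrees of freedom that this paper introduces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative standard DPD parameters (not the paper's generalized scheme)
a, gamma, kBT, rc, dt = 25.0, 4.5, 1.0, 1.0, 0.01
sigma = np.sqrt(2.0 * gamma * kBT)   # fluctuation-dissipation relation

def dpd_force(ri, rj, vi, vj):
    """Pairwise DPD force: conservative + dissipative + random parts."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc:
        return np.zeros(3)           # interactions vanish beyond the cutoff
    e = rij / r
    w = 1.0 - r / rc                 # conventional weighting function w(r)
    fC = a * w * e                                     # soft repulsion
    fD = -gamma * w**2 * np.dot(e, vi - vj) * e        # friction on relative velocity
    fR = sigma * w * rng.normal() * e / np.sqrt(dt)    # thermal noise
    return fC + fD + fR
```

The paper's point is that allowing the conservative and dissipative weighting functions to differ from this fixed `w(r)` provides enough freedom to impose viscosity, Schmidt number, and compressibility as inputs rather than outcomes.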

  7. Distributed traffic signal control using fuzzy logic

    NASA Technical Reports Server (NTRS)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
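A single fuzzy update of one timing parameter can be sketched as follows; the membership functions and rule base are illustrative stand-ins (the paper's rules also use adjacent-intersection timing and cover phase split and offset), while the bounded change fraction mirrors the smooth-transition constraint described above:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adjust_cycle_time(cycle, occupancy, max_change_frac=0.1):
    """One fuzzy update of cycle time from local lane occupancy in [0, 1].

    Illustrative rules:
      IF occupancy LOW  THEN shorten cycle
      IF occupancy MED  THEN keep cycle
      IF occupancy HIGH THEN lengthen cycle
    """
    low  = tri(occupancy, -0.4, 0.0, 0.4)
    med  = tri(occupancy,  0.1, 0.5, 0.9)
    high = tri(occupancy,  0.6, 1.0, 1.4)
    # Weighted-average defuzzification over the signed change direction
    change = (low * -1.0 + med * 0.0 + high * 1.0) / (low + med + high)
    # Cap the per-cycle change at a small fraction for smooth transitions
    change = max(-1.0, min(1.0, change)) * max_change_frac
    return cycle * (1.0 + change)
```

For example, a fully congested approach (occupancy 1.0) lengthens a 60 s cycle by at most the 10% cap, to 66 s.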

  8. Double stratified radiative Jeffery magneto nanofluid flow along an inclined stretched cylinder with chemical reaction and slip condition

    NASA Astrophysics Data System (ADS)

    Ramzan, M.; Gul, Hina; Dong Chung, Jae

    2017-11-01

    A mathematical model is designed to describe the flow of an MHD Jeffery nanofluid past a vertically inclined stretched cylinder near a stagnation point. The flow analysis is performed in the presence of thermal radiation, mixed convection and chemical reaction. The influence of thermal and solutal stratification with a slip boundary condition is also considered. Appropriate transformations are employed to convert the nonlinear partial differential equations to highly nonlinear ordinary differential equations. Convergent series solutions of the problem are established via the renowned Homotopy Analysis Method (HAM). Graphical illustrations are plotted to depict the effects of the prominent arising parameters on all involved distributions. Numerical tables of important physical quantities, such as the skin friction, Nusselt and Sherwood numbers, are also given. Comparative studies (with a previously examined work) are included to endorse our results. It is noticed that the thermal stratification parameter has a diminishing effect on the temperature distribution. Moreover, the velocity field is an increasing and a decreasing function of the curvature and slip parameters, respectively.

  9. New Educational Modules Using a Cyber-Distribution System Testbed

    DOE PAGES

    Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching; ...

    2018-03-30

    At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. Here, this paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.

  10. New Educational Modules Using a Cyber-Distribution System Testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching

    At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. Here, this paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.

  11. Study of energy partitioning using a set of related explosive formulations

    NASA Astrophysics Data System (ADS)

    Lieber, Mark; Foster, Joseph C.; Stewart, D. Scott

    2012-03-01

    Condensed phase high explosives convert potential energy stored in the electromagnetic field structure of complex molecules to high power output during the detonation process. Historically, the explosive design problem has focused on intramolecular energy storage. The molecules of interest are derived via molecular synthesis providing near stoichiometric balance on the physical scale of the molecule. This approach provides prompt reactions based on transport physics at the molecular scale. Modern material design has evolved to approaches that employ intermolecular ingredients to alter the spatial and temporal distribution of energy release. State of the art continuum methods have been used to study this approach to materials design. Cheetah has been used to produce data for a set of fictitious explosive formulations based on C-4 to study the partitioning of the available energy between internal and kinetic energy in the detonation. The equation of state information from Cheetah has been used in ALE3D to develop an understanding of the relationship between variations in the formulation parameters and the internal energy cycle in the products.

  12. Physical Kinetics of Electrons in a High-Voltage Pulsed High-Pressure Discharge with Cylindrical Geometry

    NASA Astrophysics Data System (ADS)

    Kozhevnikov, V. Yu.; Kozyrev, A. V.; Semeniuk, N. S.

    2017-12-01

    Results of theoretical modeling of the phenomenon of a high-voltage discharge in nitrogen at atmospheric pressure are presented, based on a consistent kinetic theory of the electrons. A mathematical model of a nonstationary high-pressure discharge has been constructed for the first time, based on a description of the electron component from first principles. The physical kinetics of the electrons is described with the help of the Boltzmann kinetic equation for the electron distribution function over momenta, with only ionization and elastic collisions taken into account. A detailed spatiotemporal picture of a nonstationary discharge with runaway electrons under conditions of coaxial geometry of the gas diode is presented. The model describes in a self-consistent way both the process of formation of the runaway electron flux in the discharge and the influence of this flux on the rate of ionization processes in the gas. Total energy spectra of the electron flux incident on the anode are calculated. The obtained parameters of the current pulse of the beam of fast electrons correlate well with the known experimental data.

  13. Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots

    NASA Astrophysics Data System (ADS)

    Chellal, R.; Cuvillon, L.; Laroche, E.

    2017-04-01

    This paper presents methodologies for the identification and control of 6-degrees of freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently and prior to the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper makes an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated by simulations and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.

  14. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. 
The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
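The tempering idea behind transitional MCMC, which CATMIP builds on, can be illustrated with a minimal serial sketch; the toy Gaussian likelihood stands in for a forward model, and CATMIP itself additionally resamples between tempering levels and runs many Metropolis chains in parallel:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_like(theta):
    # Toy 1-D Gaussian "data fit" (stand-in for an expensive forward model)
    return -0.5 * ((theta - 3.0) / 0.5) ** 2

def tempered_metropolis(n=4000, betas=(0.1, 0.4, 1.0), step=0.5):
    """Sample p(theta) proportional to exp(beta * log_like), raising beta in stages.

    Early stages (small beta) flatten the target so the chain explores broadly;
    the final stage (beta = 1) samples the full posterior.
    """
    theta = rng.normal(0.0, 5.0)       # start from a broad prior draw
    for beta in betas:
        for _ in range(n):
            prop = theta + step * rng.normal()
            if np.log(rng.random()) < beta * (log_like(prop) - log_like(theta)):
                theta = prop
    return theta
```

Repeated independent runs yield draws concentrated near the likelihood peak at theta = 3, which is what makes the staged flattening useful when the posterior is far from the starting distribution.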

  15. Development of a transient, lumped hydrologic model for geomorphologic units in a geomorphology based rainfall-runoff modelling framework

    NASA Astrophysics Data System (ADS)

    Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.

    2010-05-01

    We propose a modelling framework for distributed hydrological modelling of 10^3-10^5 km2 catchments by discretizing the catchment in geomorphologic units. Each of these units is modelled using a lumped model representative for the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically-based, rainfall-runoff model, we create artificial rainfall events and resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate of a real-world dataset. The model is run for approximately 10^5 scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario-run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. 
The model is capable of giving an appropriate representation of the transient rainfall-runoff relations that exist in the artificial data set generated with the high-resolution model. The third step is to find the values of empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated by using the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated; the outputs of the lumped forward model are compared with the results of the high-resolution model. Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced for most cases except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
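A three-parameter lumped model of this kind can be sketched as a lagged linear reservoir; the function below is an illustrative reading of the ponding factor, reservoir parameter, and lag time, not the paper's exact equations:

```python
import numpy as np

def lumped_runoff(rain, ponding, k, lag, dt=1.0):
    """Hortonian runoff from a geomorphologic unit as a lagged linear reservoir.

    Illustrative parameter roles:
      ponding : fraction of rainfall that becomes effective runoff-producing water
      k       : linear-reservoir constant [1/time]; outflow = k * storage
      lag     : delay [time units] before runoff reaches the unit outlet
    """
    storage, q = 0.0, []
    for p in rain:
        storage += ponding * p * dt    # effective rainfall fills the store
        out = k * storage * dt         # linear-reservoir release
        storage -= out
        q.append(out / dt)
    # Shift the hydrograph by the lag, keeping the series length fixed
    lag_steps = int(round(lag / dt))
    return np.concatenate([np.zeros(lag_steps), np.asarray(q)])[:len(q)]
```

Because the release is linear in storage, total discharge converges to the ponded fraction of total rainfall, which is consistent with the mass-balance behaviour a fitted lumped model must reproduce from the high-resolution runs.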

  16. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.

  17. Study of Carrying Capacity Assessment for Natural Fisheries in Jatibarang Reservoir In Semarang City

    NASA Astrophysics Data System (ADS)

    Sujono, Bambang; Anggoro, Sutrisno

    2018-02-01

    Jatibarang reservoir serves as a water supply in the dry season and controls flooding in Semarang City. The reservoir dams the Kreo River, with a catchment area of 54 km2, a pool area of 110 ha and a volume of 20 billion m3. The reservoir has potential for development as a natural fisheries area. The goals of this research were to explore the existing condition of physical, biological and chemical parameters; to assess the carrying capacity for natural fisheries; and to determine appropriate fish species to be developed in Jatibarang reservoir. The research followed a descriptive explorative scheme. A field survey and laboratory analyses were conducted to identify the physical, chemical and biological parameters of the water. The physical parameters measured were temperature and water brightness. The chemical parameters measured were pH, DO, phosphate, ammonia, nitrite and nitrate, while the biological parameter measured was chlorophyll-a concentration. The carrying capacity analysis was done with reference to Government Regulation Number 82, 2001, which regulates the management of water quality and the control of water pollution. The research showed that the existing physical, chemical and biological conditions were still suitable for natural fisheries. Based on the TSI index, the water is classified as eutrophic. Furthermore, tilapia (Oreochromis mossambicus), Nile tilapia (Oreochromis niloticus), tawes (Barbonymus gonionotus) and carp (Cyprinus carpio) were considered the best species for natural fisheries in Jatibarang Reservoir.
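The eutrophic classification above is the kind of result given by Carlson's trophic state index; assuming the chlorophyll-a variant of the index was used, the calculation is:

```python
import math

def tsi_chlorophyll(chl_ug_per_l):
    """Carlson's trophic state index from chlorophyll-a concentration (ug/L)."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def trophic_class(tsi):
    """Conventional TSI class boundaries; eutrophic is roughly TSI 50-70."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"
```

For instance, a chlorophyll-a concentration of 10 ug/L gives a TSI of about 53, which falls in the eutrophic class.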

  18. Using HMXBs to Probe Massive Binary Evolution

    NASA Astrophysics Data System (ADS)

    Garofali, Kristen

    2017-09-01

    We propose using deep archival Chandra data of M33 to characterize the distribution of physical parameters for the high-mass X-ray binary (HMXB) population from X-ray spectra, X-ray lightcurves, and identified optical counterparts coupled with ground-based spectroscopy. Our analysis will provide the largest clean sample of HMXBs in M33, including hardness, short- and long-term variability, luminosity, and ages. These measurements will be compared across M33 and to HMXB studies in other nearby galaxies to test correlations between HMXB population and host properties such as metallicity and star formation rate. Furthermore, our measurements will yield empirical constraints on prescriptions for models of the formation and evolution of massive stars in binaries.

  19. Characterizing ceramics and the interfacial adhesion to resin: I - The relationship of microstructure, composition, properties and fractography.

    PubMed

    Della Bona, Alvaro

    2005-03-01

    The appeal of ceramics as structural dental materials is based on their light weight, high hardness values, chemical inertness, and anticipated unique tribological characteristics. A major goal of current ceramic research and development is to produce tough, strong ceramics that can provide reliable performance in dental applications. Quantifying microstructural parameters is important to develop structure/property relationships. Quantitative microstructural analysis provides an association among the constitution, physical properties, and structural characteristics of materials. Structural reliability of dental ceramics is a major factor in the clinical success of ceramic restorations. Complex stress distributions are present in most practical conditions and strength data alone cannot be directly extrapolated to predict structural performance.

  20. Aerosol physical properties in the stratosphere (APPS) radiometer design

    NASA Technical Reports Server (NTRS)

    Gray, C. R.; Woodin, E. A.; Anderson, T. J.; Magee, R. J.; Karthas, G. W.

    1977-01-01

    The measurement concepts and radiometer design developed to obtain earth-limb spectral radiance measurements for the Aerosol Physical Properties in the Stratosphere (APPS) measurement program are presented. The measurements made by a radiometer of this design can be inverted to yield vertical profiles of Rayleigh scatterers, ozone, nitrogen dioxide, aerosol extinction, and aerosol physical properties, including a Junge size-distribution parameter, and a real and imaginary index of refraction. The radiometer design provides the capacity for remote sensing of stratospheric constituents from space on platforms such as the space shuttle and satellites, and therefore provides for global measurements on a daily basis.
