NASA Technical Reports Server (NTRS)
McClelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
NASA Astrophysics Data System (ADS)
Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.
2017-10-01
In groundwater modelling, robust parameterisation of sub-surface parameters is crucial for obtaining satisfactory model performance. The pilot point method is an alternative in the parameterisation step for correctly configuring the distribution of parameters in a model. However, the methodologies offered by current studies are considered impractical for application to real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution technique, the Head Zonation-based (HZB) technique, based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) were constructed with the HZB technique for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers insights into the trade-off between restricting and maximising the number of pilot points, and proposes a new methodology for selecting pilot point properties and the distribution method in the development of a physically-based groundwater model.
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
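A minimal sketch of the unifying idea, that count and presence-absence data can both be viewed as observations of one underlying point process: the covariate raster, the log-linear intensity, and all coefficient values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical environmental covariate raster (one value per grid cell)
z = rng.normal(size=(100, 100))
beta0, beta1 = -4.0, 1.2                 # assumed log-linear coefficients
lam = np.exp(beta0 + beta1 * z)          # point-process intensity per cell

counts = rng.poisson(lam)                # the "count data" view
presence = counts > 0                    # the "presence-absence" view
# P(presence) = 1 - exp(-lam): a binary model with a cloglog link recovers
# the same intensity surface that generates the counts
```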
Statistical prescission point model of fission fragment angular distributions
NASA Astrophysics Data System (ADS)
John, Bency; Kataria, S. K.
1998-03-01
In light of recent developments in fission studies such as slow saddle to scission motion and spin equilibration near the scission point, the theory of fission fragment angular distribution is examined and a new statistical prescission point model is developed. The conditional equilibrium of the collective angular bearing modes at the prescission point, which is guided mainly by their relaxation times and population probabilities, is taken into account in the present model. The present model gives a consistent description of the fragment angular and spin distributions for a wide variety of heavy and light ion induced fission reactions.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Models for disaster relief shelter location and supply routing.
DOT National Transportation Integrated Search
2013-01-01
This project focuses on the development of a natural disaster response planning model that determines where to locate points of distribution for relief supplies after a disaster occurs. Advance planning (selecting locations for points of distribution...
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
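As a toy illustration of the statistics the fragmentation model targets, the sketch below builds the 1D Voronoi cells of uniformly placed points and tabulates their size distribution; sample size and binning are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = np.sort(rng.uniform(size=100_000))

# in 1D, each Voronoi cell is bounded by the midpoints to its two neighbours
mids = 0.5 * (pts[1:] + pts[:-1])
edges = np.concatenate(([0.0], mids, [1.0]))
sizes = np.diff(edges)                   # one cell size per point

# empirical density of cell sizes, rescaled by the mean cell size 1/N
density, bins = np.histogram(sizes * pts.size, bins=80, density=True)
```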
Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.
2009-04-01
Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experiences with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed Plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length; thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameter and were chosen for each cylinder so that four points were distributed evenly over the curved portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. The iterative optimization routine was used for all optimizations. The effects of the various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. Doses at the apex point and the three dome points were higher for the apex model than for the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex and dome points; its mean dose at the optimization points was 6.01 Gy, much higher than the 5.74 Gy of the geometric and equal-times routines. A step size of 1 cm gave the highest dose to the apex point and was superior in terms of mean dose to the optimization points. The selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the resulting dose distributions.
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
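A minimal sketch of the Poisson-regression side of the stated equivalence, on synthetic data; the grid, covariates, and coefficients are invented, and only the slopes (not the scale-dependent intercept) would be expected to match a MAXENT fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# synthetic quadrature grid with two environmental covariates per cell
Z = rng.normal(size=(2000, 2))
eta = -3.0 + Z @ np.array([1.0, -0.5])   # assumed true log-intensity
y = rng.poisson(np.exp(eta))             # cell counts from the point process

fit = sm.GLM(y, sm.add_constant(Z), family=sm.families.Poisson()).fit()
print(fit.params)                        # slopes estimate the intensity gradients
```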
A second generation distributed point polarizable water model.
Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D
2010-01-07
A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
NASA Astrophysics Data System (ADS)
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions improves precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
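A sketch of the semantically informed variant, assuming per-point features and labels have already been extracted; the file names and feature set are hypothetical, and RandomForestClassifier stands in for whatever configuration the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical inputs: per-point geometric features, inlier labels (1/0),
# and one semantic class id per 3D point (facade, roof, ground, ...)
X = np.load("point_features.npy")
y = np.load("inlier_labels.npy")
sem = np.load("semantic_labels.npy")

# train one inlier-outlier classifier per semantic class
models = {}
for cls in np.unique(sem):
    m = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    m.fit(X[sem == cls], y[sem == cls])
    models[cls] = m

# filter: keep the points predicted to be inliers by their class model
keep = np.zeros(len(y), dtype=bool)
for cls, m in models.items():
    idx = sem == cls
    keep[idx] = m.predict(X[idx]) == 1
```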
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, namely the left and right central points. These three points have different Gaussian distribution parameters, namely the means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with σL and σR changed randomly.
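The iteration described reads like a two-centre reassignment loop; the sketch below is one possible interpretation, with particle number, initial centres, and widths invented for illustration (the stopping rule is the stated constancy of (NL, NR)).

```python
import numpy as np

rng = np.random.default_rng(42)

particles = rng.normal(0.0, 1.0, size=236)  # positions around the CN centre

mu_l, mu_r = -0.5, 0.5                      # smashed left/right central points
sigma_l = sigma_r = 0.8
n_left = -1
while True:
    # each particle is trapped by the centre with the larger Gaussian weight
    w_l = np.exp(-0.5 * ((particles - mu_l) / sigma_l) ** 2) / sigma_l
    w_r = np.exp(-0.5 * ((particles - mu_r) / sigma_r) ** 2) / sigma_r
    left, right = particles[w_l >= w_r], particles[w_l < w_r]
    if left.size == n_left:                 # (N_L, N_R) became constant
        break
    n_left = left.size
    mu_l, mu_r = left.mean(), right.mean()  # recentre on the trapped mass

print(f"fragment sizes: N_L = {left.size}, N_R = {right.size}")
```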
An interpretation model of GPR point data in tunnel geological prediction
NASA Astrophysics Data System (ADS)
He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya
2017-02-01
GPR (Ground Penetrating Radar) point data plays an indispensable role in tunnel geological prediction. However, research on GPR point data is scarce, and existing results do not meet the actual requirements of projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. Firstly, the GPR point data are transformed by the WD to obtain maps of the joint time-frequency distribution; secondly, the joint distribution maps are classified by the deep CNN, while the approximate location of the geological target is determined by inspecting the time-frequency maps in parallel; finally, the GPR point data are interpreted according to the classification results and the position information from the maps. Simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data) is 91.83% at 200 iterations. The model has the advantages of high accuracy and fast training, and can provide a scientific basis for developing tunnel construction and excavation plans.
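For the first step, a discrete Wigner-Ville transform can be computed directly; the sketch below is standard signal processing rather than the authors' implementation, and maps a single real-valued GPR trace to the time-frequency plane that the CNN would then classify.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(trace):
    """Discrete Wigner-Ville distribution of a real 1D trace."""
    x = hilbert(trace)                   # analytic signal
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        m = min(n, N - 1 - n)            # largest symmetric lag at this time
        lags = np.arange(-m, m + 1)
        r = x[n + lags] * np.conj(x[n - lags])  # instantaneous autocorrelation
        kern = np.zeros(N, dtype=complex)
        kern[lags % N] = r               # wrap negative lags for the DFT
        W[n] = np.fft.fft(kern).real     # Hermitian in lag, so spectrum is real
    return W                             # rows: time samples, cols: frequency bins
```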
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion has been widely applied in groundwater simulation. Compared with traditional forward modelling, inverse modelling offers more room for study. Zonation and cell-by-cell inversion are the conventional methods; the pilot point method lies between them. Traditional inverse modelling often divides the model into several zones with only a few parameters to be inverted; however, such a distribution is usually too simple, and the simulation deviates. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it greatly increases the computational burden and requires large quantities of survey data for geostatistical simulation of the area. Compared with those methods, the pilot point method distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by Kriging to preserve parameter heterogeneity within geological units. This reduces the geostatistical data requirements of the simulation area and bridges the gap between the two methods above. Pilot points can save calculation time, increase the goodness of fit, and reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field in which the structural formation is heterogeneous and the hydraulic parameters are unknown, and we compare the inversion results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modelling. First, the modeller generates an initial spatially correlated field for a given geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, Kriging is defined to obtain the values of the field functions over the model domain on the basis of their values at the measurement and pilot point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into four zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity values. Third, through inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modelling, the following major conclusions can be drawn: (1) In a field with heterogeneous structural formation, the results of the pilot point method are more realistic, with better fitting of parameters and more stable numerical simulation (stable residual distribution); compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
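A minimal sketch of the interpolation step, spreading pilot-point values onto model cells by ordinary Kriging with a Gaussian covariance; the covariance model, sill, and correlation length are placeholders for whatever geostatistical model is fitted in practice (e.g. within Groundwater Vistas/PEST).

```python
import numpy as np

def ordinary_kriging(xp, zp, xg, corr_len=200.0, sill=1.0):
    """Interpolate pilot-point values zp at locations xp (n, 2)
    onto model cells xg (m, 2), e.g. for log hydraulic conductivity."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-((d / corr_len) ** 2))   # Gaussian covariance

    n = len(xp)
    A = np.zeros((n + 1, n + 1))       # kriging system bordered by the
    A[:n, :n] = cov(xp, xp)            # unbiasedness (Lagrange) row/column
    A[:n, n] = A[n, :n] = 1.0
    rhs = np.vstack([cov(xp, xg), np.ones((1, len(xg)))])
    w = np.linalg.solve(A, rhs)        # one weight column per model cell
    return w[:n].T @ zp
```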
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Dall'Anese, Emiliano
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management system settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
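A single-phase, two-bus toy of the fixed-point view of the power-flow equations that underlies the linearization; all numbers are invented, and the paper's multiphase treatment is far more general.

```python
import numpy as np

z = 0.01 + 0.05j    # line impedance (pu), illustrative
S = 0.8 + 0.3j      # complex power drawn by a constant-power load (pu)
V = 1.0 + 0.0j      # initial guess = slack voltage

for it in range(100):
    V_next = 1.0 - z * np.conj(S / V)   # V = V_slack - z * I,  I = (S/V)*
    if abs(V_next - V) < 1e-12:
        break
    V = V_next

print(f"load voltage after {it} iterations: {V:.6f} pu")
# a linear model of this flavour amounts to freezing such an iteration
# after one step around a fixed nominal voltage profile
```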
Geometry-dependent distributed polarizability models for the water molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loboda, Oleksandr; Ingrosso, Francesca; Ruiz-López, Manuel F.
2016-01-21
Geometry-dependent distributed polarizability models have been constructed by fits to ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations, in an augmented triple-zeta quality basis set, for the water molecule in the field of a point charge. The investigated models include (i) charge-flow polarizabilities between chemically bonded atoms, (ii) isotropic or anisotropic dipolar polarizabilities on the oxygen atom or on all atoms, and (iii) combinations of models (i) and (ii). For each model, the polarizability parameters have been optimized to reproduce the induction energy of a water molecule polarized by a point charge successively occupying a grid of points surrounding the molecule. The quality of the models is ascertained by examining their ability to reproduce these induction energies as well as the molecular dipolar and quadrupolar polarizabilities. The geometry dependence of the distributed polarizability models has been explored by changing bond lengths and the HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For each considered model, the distributed polarizability components have been fitted as a function of the geometry by a Taylor expansion in monomer coordinate displacements up to the sum of powers equal to 4.
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. © 2006 Wiley-Liss, Inc.
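A sketch of the described equilibrium-start trick on a one-dimensional stand-in target (the paper's target is a likelihood over genetic models): an exact rejection-sampling draw initializes the chain, and detailed balance then keeps every subsequent sample in the target distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):   # stand-in unnormalized density, bounded by 1.5 * exp(-x^2/2)
    return np.exp(-0.5 * x * x) * (1.0 + 0.5 * np.sin(3.0 * x))

def draw_from_target():
    """Exact draw via rejection sampling under the 1.5*exp(-x^2/2) envelope."""
    while True:
        x = rng.normal()
        if rng.uniform() < target(x) / (1.5 * np.exp(-0.5 * x * x)):
            return x

x = draw_from_target()        # the chain starts in the equilibrium distribution
chain = [x]
for _ in range(10_000):
    prop = x + rng.normal(scale=0.8)              # symmetric proposal
    if rng.uniform() < target(prop) / target(x):  # Metropolis acceptance:
        x = prop                                  # detailed balance w.r.t. target
    chain.append(x)
```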
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and the proportion of urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
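A minimal sketch of a logistic threshold (change-point) response of the kind being estimated, here for a single landscape proportion x in [0, 1]; the functional form and parameter values are illustrative assumptions, and the actual model is hierarchical with spatially varying parameters.

```python
import numpy as np

def occurrence_prob(x, cp=0.3, beta0=2.0, beta1=-15.0):
    """P(occurrence): flat below the change point cp, logistic decline above."""
    z = beta0 + beta1 * np.maximum(x - cp, 0.0)   # hinge at the change point
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0.0, 1.0, 101)     # e.g. proportion of urban land use
p = occurrence_prob(x)             # abrupt drop in occupancy beyond cp
```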
Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner
2015-01-01
Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...
NASA Astrophysics Data System (ADS)
Ali Saif, M.; Gade, Prashant M.
2009-03-01
Pareto law, which states that wealth distribution in societies has a power-law tail, has been the subject of intensive investigations in the statistical physics community. Several models have been employed to explain this behavior. However, most of the agent based models assume the conservation of number of agents and wealth. Both these assumptions are unrealistic. In this paper, we study the limiting wealth distribution when one or both of these assumptions are not valid. Given the universality of the law, we have tried to study the wealth distribution from the asset exchange models point of view. We consider models in which (a) new agents enter the market at a constant rate (b) richer agents fragment with higher probability introducing newer agents in the system (c) both fragmentation and entry of new agents is taking place. While models (a) and (c) do not conserve total wealth or number of agents, model (b) conserves total wealth. All these models lead to a power-law tail in the wealth distribution pointing to the possibility that more generalized asset exchange models could help us to explain the emergence of a power-law tail in wealth distribution.
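A bookkeeping scaffold for experiments of this kind, assuming a placeholder exchange rule (random repartition of the pooled wealth) and a constant entry rate as in model (a); whether a power-law tail emerges depends on exactly these choices, which is what the paper varies.

```python
import numpy as np

rng = np.random.default_rng(1)
wealth = list(rng.exponential(1.0, size=500))

for step in range(200_000):
    i, j = rng.integers(len(wealth), size=2)
    if i == j:
        continue
    pool = wealth[i] + wealth[j]
    f = rng.uniform()                                  # placeholder rule:
    wealth[i], wealth[j] = f * pool, (1 - f) * pool    # random repartition
    if step % 100 == 0:                                # constant entry rate
        wealth.append(rng.exponential(1.0))

tail = np.sort(wealth)[-100:]   # inspect the richest agents for Pareto behaviour
```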
A random wave model for the Aharonov-Bohm effect
NASA Astrophysics Data System (ADS)
Houston, Alexander J. H.; Gradhand, Martin; Dennis, Mark R.
2017-05-01
We study an ensemble of random waves subject to the Aharonov-Bohm effect. The introduction of a point with a magnetic flux of arbitrary strength into a random wave ensemble gives a family of wavefunctions whose distribution of vortices (complex zeros) is responsible for the topological phase associated with the Aharonov-Bohm effect. Analytical expressions are found for the vortex number and topological charge densities as functions of distance from the flux point. Comparison is made with the distribution of vortices in the isotropic random wave model. The results indicate that as the flux approaches half-integer values, a vortex with the same sign as the fractional part of the flux is attracted to the flux point, merging with it in the limit of half-integer flux. We construct a statistical model of the neighbourhood of the flux point to study how this vortex-flux merger occurs in more detail. Other features of the Aharonov-Bohm vortex distribution are also explored.
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. flassig@mpi-magdeburg.mpg.de Supplementary data are available at Bioinformatics online.
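A generic sketch of the sigma-point (unscented) propagation step, computing the mean and covariance of a nonlinear model response from a Gaussian parameter PDF; this is the textbook unscented transform, not the authors' MATLAB/AMPL code, and the scaling constants are conventional defaults.

```python
import numpy as np

def unscented_moments(mean, cov, f, alpha=0.5, beta=2.0, kappa=0.0):
    """Mean/covariance of f(theta) for theta ~ N(mean, cov) via sigma points.
    f must map a parameter vector to a 1D response vector."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(p) for p in pts])                 # propagated responses
    m = wm @ Y
    D = Y - m
    return m, (wc[:, None] * D).T @ D

# the overlap of two response PDFs (the discrimination criterion) can then be
# approximated from these Gaussian moments
```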
NASA Astrophysics Data System (ADS)
Klaas, Dua K. S. Y.; Imteaz, Monzur Alam
2017-09-01
A robust configuration of pilot points in the parameterisation step of a model is crucial to accurately obtain a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and the head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio of 0.05, a distance-x-grid length ratio (d/Xgrid) of 0.10, and a distance-y-grid length ratio (d/Ygrid) of 0.20.
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1989-01-01
The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas immediately follows this general observation.
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while meeting customer demands. Every point should be visited exactly once, by one vehicle on one route, and the total demand on a route should not exceed the capacity of the vehicle assigned to it. VRPs vary according to real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs; there are two types of vehicles in our problem. This study uses real-world data obtained from a company that operates in the LPG sector in Turkey. An optimization model is established for planning daily routes and vehicle assignments. The model is solved with GAMS, and the optimal solution is found in reasonable time.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure for model selection that was optimized for minimum prediction error and used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.
Chalghaf, Bilel; Chlif, Sadok; Mayala, Benjamin; Ghawar, Wissem; Bettaieb, Jihène; Harrabi, Myriam; Benie, Goze Bertin; Michael, Edwin; Salah, Afif Ben
2016-04-01
Cutaneous leishmaniasis is a very complex disease involving multiple factors that limit its emergence and spatial distribution. Prediction of cutaneous leishmaniasis epidemics in Tunisia remains difficult because most of the epidemiological tools used so far are descriptive in nature and mainly focus on a time dimension. The purpose of this work is to predict the potential geographic distribution of Phlebotomus papatasi and zoonotic cutaneous leishmaniasis caused by Leishmania major in Tunisia using Grinnellian ecological niche modeling. We attempted to assess the importance of environmental factors influencing the potential distribution of P. papatasi and cutaneous leishmaniasis caused by L. major. Vectors were trapped in central Tunisia during the transmission season using CDC light traps (John W. Hock Co., Gainesville, FL). A global positioning system was used to record the geographical coordinates of vector occurrence points and of households that tested positive for cutaneous leishmaniasis caused by L. major. Nine environmental layers were used as predictor variables to model the P. papatasi geographical distribution, and five variables were used to model the L. major potential distribution. Ecological niche modeling was used to relate known species' occurrence points to values of environmental factors at these same points, to predict the presence of the species in unsampled regions based on the value of the predictor variables. Rainfall and temperature contributed the most as predictors of sand fly and human case distributions. Ecological niche modeling anticipated the current distribution of P. papatasi, with the highest suitability for species occurrence in the central and southeastern part of Tunisia. Furthermore, our study demonstrated that the governorates of Gafsa, Sidi Bouzid, and Kairouan are at the highest epidemic risk. © The American Society of Tropical Medicine and Hygiene.
Queueing analysis of a canonical model of real-time multiprocessors
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.
1983-01-01
A logical classification of multiprocessor structures from the point of view of control applications is presented. The response-time distribution is computed for a canonical model of a real-time multiprocessor. The multiprocessor is approximated by a blocking model, and two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.
Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto
2005-08-01
Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S^{(N)}_{μ,d}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S^{(∞)}_{1,d}(t,p) = [Γ(1+I_d⁻¹) Γ(t+I_d⁻¹)/Γ(t+p+I_d⁻¹)] δ_{p,2}, where t = 0, 1, 2, …, ∞, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model, which, even for μ = 0, presents a nontrivial cycle distribution [S^{(N)}_{0,rm}(p) ∝ p⁻¹]: S^{(N)}_{0,rm}(t,p) = Γ(N)/{Γ[N+1−(t+p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t+p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
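A simulation sketch for estimating the joint (t, p) statistics empirically; the transient/period bookkeeping follows one natural convention (the walk state is the last μ+1 sites), which may differ from the paper's by a constant.

```python
import numpy as np

def tourist_walk(points, start, mu):
    """One deterministic tourist walk; returns (transient t, period p)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    traj = [start]
    first_seen = {}
    while True:
        cur = traj[-1]
        taboo = set(traj[-mu:]) if mu > 0 else set()   # memory window
        order = np.argsort(d[cur])
        nxt = int(next(i for i in order if i not in taboo and i != cur))
        traj.append(nxt)
        state = tuple(traj[-(mu + 1):])
        if state in first_seen:            # repeated state: attractor closed
            t = first_seen[state]
            return t, len(traj) - 1 - t
        first_seen[state] = len(traj) - 1

rng = np.random.default_rng(5)
pts = rng.uniform(size=(2000, 2))          # d = 2, unitary square
samples = [tourist_walk(pts, s, mu=1) for s in range(200)]  # mostly p = 2
```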
Renyi Entropies in Particle Cascades
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Ostruszka, A.
2003-01-01
Renyi entropies for particle distributions following from the general cascade models are discussed. The p-model and the β distribution introduced in earlier studies of cascades are discussed in some detail. Some phenomenological consequences are pointed out.
Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation
NASA Astrophysics Data System (ADS)
Stephenson, Jerry L.; Kapraun, Chris
Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
Distribution of model-based multipoint heterogeneity lod scores.
Xing, Chao; Morris, Nathan; Xing, Guan
2010-12-01
The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, there has been no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
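The practical consequence of the ½χ²₀ + ½χ²₁ limit is a simple p-value rule: the χ²₀ component is a point mass at zero, so only the χ²₁ half contributes for a positive likelihood-ratio statistic. A sketch:

```python
from scipy.stats import chi2

def multipoint_hlod_pvalue(stat):
    """P-value under the 1/2*chi2(0) + 1/2*chi2(1) limiting mixture."""
    return 0.5 * chi2.sf(stat, df=1) if stat > 0 else 1.0

print(multipoint_hlod_pvalue(3.84))   # ~0.025, half the usual chi2(1) p-value
```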
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and their distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified, and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area; the calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results, and the differences between hydrologic years are dramatic: the loading of non-point source pollution is relatively large in the wet year but small in the dry year, since non-point source pollutants are mainly transported by runoff, and the pollution loading within a year is mainly produced in the flood season. Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so the critical areas and reaches in the study area can be identified. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented based on the analysis of the model results.
Rotational isomerism of molecules in condensed phases
NASA Astrophysics Data System (ADS)
Sakka, Tetsuo; Iwasaki, Matae; Ogata, Yukio
1991-08-01
A statistical mechanical model is developed for the description of the conformational distribution of organic molecules in the liquid and solid phases. In the model, the molecules are assumed to have one internal rotational degree of freedom. The molecules are fixed to lattice sites and have two types of ordering, conformational and distributional; the latter is supposed to represent an ordering typical of the solid state. The model is compared with experimental results for the rotational-isomeric ratio of 1,2-dichloro-1,1-difluoroethane in the temperature range from 77 to 300 K. It successfully explains the experimental results, especially the behavior near the melting point. From the point of view of melting, the present model is an extension of the Lennard-Jones and Devonshire model, because, when the distinctions between the two conformers are neglected, the parameter representing the distributional ordering of the molecules obeys the same equation as that derived from the Lennard-Jones and Devonshire model.
Information pricing based on trusted system
NASA Astrophysics Data System (ADS)
Liu, Zehua; Zhang, Nan; Han, Hongfeng
2018-05-01
Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system. First, we improve the existing BLP system to prevent cascading incidents and design a 7-layer model; from the cost of encryption in each layer, we develop PI price points. Next, we use association rule mining, a data mining algorithm, to calculate the importance of information in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the encryption level distribution for users in different classes, and then calculate information prices through a linear programming model based on that encryption level distribution.
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.
2015-04-01
Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, or to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? And how can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil was placed on a tray and areas with different roughness structures were formed. For three moisture states - dry, medium, saturated - and two lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions were taken. From the six image sets, 3D point clouds were produced using VisualSfM. Visual inspection of the 3D models showed that all models have areas where holes of different sizes occur, but determining model quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures; along vertical structures, many points are projected onto the same grid cell, so the point density depends on the shape of the surface rather than on the quality of the model. Another approach uses the points resulting from Poisson surface reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed. First, for every Poisson point, the distance to the closest original point cloud member has been calculated, and histograms of these distances show their distribution. Second, as the Poisson points also form a connected mesh, the size and distribution of individual holes can be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number, and the area of the mesh formed by each set of Poisson hole points can then be calculated. The result is a set of distinct holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence on the reflectivity: the distance distribution of the saturated-soil model shows the smallest number of large distances, the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than those from direct lighting for all moisture states.
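A sketch of the first analysis, assuming both point sets are available as plain XYZ arrays (file names hypothetical): a kd-tree gives each Poisson-reconstructed point its distance to the nearest original cloud member, and the histogram of those distances summarizes the hole structure.

```python
import numpy as np
from scipy.spatial import cKDTree

orig = np.loadtxt("sfm_cloud.xyz")          # (N, 3) original SfM points
poisson = np.loadtxt("poisson_points.xyz")  # (M, 3) hole-filling surface points

tree = cKDTree(orig)
dist, _ = tree.query(poisson, k=1)          # nearest-neighbour distances

counts, edges = np.histogram(dist, bins=50)
# a heavy tail of large distances flags big interpolated holes, i.e. a poorer
# model, as reported here for the dry, more reflective soil state
```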
Modeling a distribution of point defects as misfitting inclusions in stressed solids
NASA Astrophysics Data System (ADS)
Cai, W.; Sills, R. B.; Barnett, D. M.; Nix, W. D.
2014-05-01
The chemical equilibrium distribution of point defects modeled as non-overlapping, spherical inclusions with purely positive dilatational eigenstrain in an isotropically elastic solid is derived. The compressive self-stress inside existing inclusions must be excluded from the stress dependence of the equilibrium concentration of the point defects, because it does no work when a new inclusion is introduced. On the other hand, a tensile image stress field must be included to satisfy the boundary conditions in a finite solid. Through the image stress, existing inclusions promote the introduction of additional inclusions. This is contrary to the prevailing approach in the literature in which the equilibrium point defect concentration depends on a homogenized stress field that includes the compressive self-stress. The shear stress field generated by the equilibrium distribution of such inclusions is proved to be proportional to the pre-existing stress field in the solid, provided that the magnitude of the latter is small, so that a solid containing an equilibrium concentration of point defects can be described by a set of effective elastic constants in the small-stress limit.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
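A hedged illustration of the subset-selection idea for a rank-deficient least-squares problem; the column-pivoted QR heuristic below stands in for the report's (unspecified here) selection method rather than reproducing it.

```python
import numpy as np
from scipy.linalg import qr

def select_subset_and_solve(A, y, k):
    """Pick the k best-conditioned columns of the design matrix A via
    column-pivoted QR, then solve least squares on that subset only."""
    _, _, piv = qr(A, pivoting=True)        # columns ordered by pivoting
    keep = np.sort(piv[:k])                 # indices of retained parameters
    x_subset, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[keep] = x_subset                      # unselected parameters held at 0
    return x, keep
```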
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency response, and even for estimating pressure on the environment and human exposure and health risks. However, most studies have used census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped using high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7, and the relationship typically showed more significant variation over the region than in the traditional OLS model. The result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light values; the result is shown in Fig. 3. Results showed: (1) population density is not linearly correlated with light brightness in the global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level; (3) GWR is a robust model for mapping population distribution: the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming that the GWR models demonstrate better prediction accuracy. This method therefore provides detailed population density information for microscale citizen studies.
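A minimal sketch of the GWR step: at each sampling point, a locally weighted least-squares regression of population density on light DN with a Gaussian distance kernel. The bandwidth, variable names, and data layout are illustrative assumptions, not taken from the study.

```python
import numpy as np

def gwr_coefficients(coords, dn, pop, bandwidth):
    """Local (intercept, slope) at every sampling point.
    coords: (n, 2) x,y positions; dn: (n,) light DN; pop: (n,) density."""
    X = np.column_stack([np.ones_like(dn), dn])     # intercept + light DN
    betas = np.empty((coords.shape[0], 2))
    for i, c in enumerate(coords):
        d2 = np.sum((coords - c) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))    # Gaussian kernel weights
        WX = X * w[:, None]
        betas[i] = np.linalg.solve(X.T @ WX, WX.T @ pop)   # weighted LS
    return betas
```

Spatial variation in the returned slopes is exactly the nonstationarity the abstract contrasts with the single global OLS coefficient.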
Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.
1993-01-01
The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models by cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from the respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of the correlation effects in the peptide molecular charge distribution is discussed.
Temperature distribution model for the semiconductor dew point detector
NASA Astrophysics Data System (ADS)
Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.
2001-08-01
The simulation results of the temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were done with the use of the SMACEF simulation program. The fabricated structures, apart from the impedance detector used for dew point detection, contained a resistive four-terminal thermometer and two heaters. Two detector structures, the first located on a silicon membrane and the second placed on the bulk material, were compared in this paper.
Wavefronts for a global reaction-diffusion population model with infinite distributed delay
NASA Astrophysics Data System (ADS)
Weng, Peixuan; Xu, Zhiting
2008-09-01
We consider a global reaction-diffusion population model with infinite distributed delay which includes models of Nicholson's blowflies and hematopoiesis derived by Gurney, Mackey and Glass, respectively. The existence of monotone wavefronts is derived by using the abstract settings of functional differential equations and Schauder fixed point theory.
Integration of Heterogenous Digital Surface Models
NASA Astrophysics Data System (ADS)
Boesch, R.; Ginzler, C.
2011-08-01
The application of extended digital surface models often reveals that despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images at 25cm and 50cm resolution is used to generate a surface model (ADS-DSM) with 1m resolution covering the whole of Switzerland (approx. 41000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2m resolution (LiDAR-DSM). Although both surface models seem to offer a comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the surface model used; therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data was available. Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM-map and local triangulation to derive a quality weight for each of the interpolation points.
The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
Dong, Ren G; Dong, Jennie H; Wu, John Z; Rakheja, Subhash
2007-01-01
The objective of this study is to develop analytical models for simulating driving-point biodynamic responses distributed at the fingers and palm of the hand under vibration along the forearm direction (z(h)-axis). Two different clamp-like model structures are formulated to analyze the distributed responses at the fingers-handle and palm-handle interfaces, as opposed to the single driving point invariably considered in the reported models. The parameters of the proposed four- and five-degrees-of-freedom models are identified through minimization of an rms error function between the model and measured responses under different hand actions, namely, fingers pull, push only, grip only, and combined push and grip. The results show that the responses predicted from both models agree reasonably well with the measured data in terms of distributed as well as total impedance magnitude and phase. The variations in the identified model parameters under different hand actions are further discussed in view of the biological system behavior. The proposed models are considered to serve as useful tools for design and assessment of vibration isolation methods, and for developing a hand-arm simulator for vibration analysis of power tools.
Evaluation of Rock Surface Characterization by Means of Temperature Distribution
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.
2017-12-01
Rocks occur in many different types formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which in turn is the main data source for a 3D model, giving analysts the possibility of automation. Due to the irregular and complex structures of rocks, representation of their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using the temperature distribution to interpret the roughness of a rock surface, which is one of the parameters representing the surface, was investigated. A small rock of 3 m x 1 m, located at the ITU Ayazaga Campus, was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation using temperature values at control points marked on the object, which were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 coordinated control points marked on the object. Temperature values at the control points were measured with an infrared thermometer and used as the basic data source to create the choropleth map by interpolation. The temperature values range from 32 to 37.2 degrees. In the second method, a 3D object model was produced by means of terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in both applications are almost identical in position. The areas on the rock surface whose roughness values are higher than their surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
NASA Astrophysics Data System (ADS)
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
A problem often faced by industries managing and distributing vegetables is how to distribute the vegetables so that their quality is properly maintained. The problems encountered include optimal route selection and minimal travel time, the so-called TSP (Traveling Salesman Problem). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi compartment) and 5 vehicles. It is determined that for one distribution run, a vehicle can deliver to only 4 market points from 1 particular warehouse, and each vehicle can only accommodate a capacity of 100 kg.
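An illustrative sketch of the order-based crossover named in the abstract, operating on permutations of the 20 market points; the exact operator used in the study may differ in detail.

```python
import random

def order_crossover(parent1, parent2):
    """OX: copy a random slice from parent1, fill the remaining positions
    with the missing genes in the order they appear in parent2."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]
    fill = [g for g in parent2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

markets = list(range(20))                     # 20 market points, as in the study
p1 = random.sample(markets, 20)               # two parent routes (permutations)
p2 = random.sample(markets, 20)
print(order_crossover(p1, p2))
```

The operator preserves permutation validity, so every offspring is still a feasible visiting order before the capacity and warehouse constraints are checked.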
Models of primary runaway electron distribution in the runaway vortex regime
Guo, Zehua; Tang, Xian-Zhu; McDevitt, Christopher J.
2017-11-01
Generation of runaway electron (RE) beams is potentially the most deleterious effect of tokamak disruptions. A number of recent numerical calculations have confirmed the formation of an RE bump in the energy distribution when the synchrotron radiation damping force due to the REs' gyromotion is taken into account. Here, we present a detailed examination of how the bump location changes with pitch angle, and of the characteristics of the RE pitch-angle distribution. Although REs moving along the magnetic field are preferentially accelerated and then populate the phase space of larger pitch angle mainly through diffusion, an off-axis peak can still form due to the presence of the vortex structure, which causes accumulation of REs at low pitch angle. A simplified Fokker-Planck model and its semi-analytical solutions based on local expansions around the O point are used to illustrate the characteristics of the RE distribution around the O point of the runaway vortex in phase space. The calculated energy location of the O point, together with the local energy and pitch-angle distributions, agrees with the full numerical solution.
Large Scale Ice Water Path and 3-D Ice Water Content
Liu, Guosheng
2008-01-15
Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.
Anomalous polymer collapse winding angle distributions
NASA Astrophysics Data System (ADS)
Narros, A.; Owczarek, A. L.; Prellberg, T.
2018-03-01
In two dimensions polymer collapse has been shown to be complex with multiple low temperature states and multi-critical points. Recently, strong numerical evidence has been provided for a long-standing prediction of universal scaling of winding angle distributions, where simulations of interacting self-avoiding walks show that the winding angle distribution for N-step walks is compatible with the theoretical prediction of a Gaussian with a variance growing asymptotically as C log N. Here we extend this work by considering interacting self-avoiding trails which are believed to be a model representative of some of the more complex behaviour. We provide robust evidence that, while the high temperature swollen state of this model has a winding angle distribution that is also Gaussian, this breaks down at the polymer collapse point and at low temperatures. Moreover, we provide some evidence that the distributions are well modelled by stretched/compressed exponentials, in contradistinction to the behaviour found in interacting self-avoiding walks. Dedicated to Professor Stu Whittington on the occasion of his 75th birthday.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
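A minimal simulation sketch of the ergodicity-breaking effect, assuming a simplified two-site CTRW with Pareto-distributed sojourn times of diverging mean; all parameters are illustrative, and the two-site reduction is a stand-in for the lattice model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, p_right, T = 0.5, 0.6, 1e5     # tail exponent (<1), bias, time horizon

def occupation_fraction():
    """Fraction of time spent on the 'right' site in one realization."""
    t = t_right = 0.0
    site = 0
    while t < T:
        wait = min(rng.pareto(alpha) + 1.0, T - t)   # heavy-tailed sojourn
        if site == 1:
            t_right += wait
        t += wait
        site = 1 if rng.random() < p_right else 0    # biased jump
    return t_right / T

fracs = np.array([occupation_fraction() for _ in range(1000)])
hist, edges = np.histogram(fracs, bins=40, density=True)
```

For alpha < 1 the histogram piles up near 0 and 1, the U shape related to the arcsine law; for finite-mean waiting times it would instead concentrate at the ensemble-average occupation probability.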
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
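A hedged sketch of the model-comparison step for the points: fit Poisson and negative binomial distributions to flock counts by maximum likelihood and compare AIC. The published analysis also handled zero inflation and fitted the discretized lognormal and power law distributions for the marks, which are omitted here; the count data below are invented.

```python
import numpy as np
from scipy import stats, optimize

counts = np.array([0, 0, 1, 3, 0, 7, 2, 0, 15, 1, 0, 4])   # toy flock counts

def aic_poisson(x):
    lam = x.mean()                                  # MLE of the Poisson rate
    return 2 * 1 - 2 * stats.poisson.logpmf(x, lam).sum()

def aic_negbin(x):
    def nll(theta):                                 # unconstrained parameters
        r = np.exp(theta[0])                        # size r > 0
        p = 1.0 / (1.0 + np.exp(-theta[1]))         # probability 0 < p < 1
        return -stats.nbinom.logpmf(x, r, p).sum()
    res = optimize.minimize(nll, [0.0, 0.0], method="Nelder-Mead")
    return 2 * 2 + 2 * res.fun

print(aic_poisson(counts), aic_negbin(counts))      # smaller AIC is preferred
```

For overdispersed counts like these, the negative binomial typically wins, consistent with the paper's finding for the number of flocks per transect.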
SAMICS marketing and distribution model
NASA Technical Reports Server (NTRS)
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2016-10-03
A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in-depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
One-dimensional gravity in infinite point distributions.
Gabrielli, A; Joyce, M; Sicard, F
2009-10-01
The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition requires explicitly the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by "Jeans swindle" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from "shuffled lattice" initial conditions. These show qualitative properties of the evolution (notably its "self-similarity") like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.
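A minimal sketch of the screening prescription, assuming unit-mass particles on a shuffled lattice and coupling g = 1 (in 1D gravity the pair attraction has constant magnitude). The force at a point is summed with the factor exp(-mu r) and mu is then taken small; the truncation of the lattice and all values are illustrative.

```python
import numpy as np

def screened_force(x, positions, mu, g=1.0):
    """Force at x from unit-mass particles at `positions` (x itself excluded);
    1D gravity: constant-magnitude attraction, screened by exp(-mu |r|)."""
    r = positions - x
    r = r[r != 0.0]
    return g * np.sum(np.sign(r) * np.exp(-mu * np.abs(r)))

rng = np.random.default_rng(1)
lattice = np.arange(-5000, 5001, 1.0)
shuffled = lattice + rng.uniform(-0.4, 0.4, lattice.size)   # shuffled lattice
for mu in (1e-1, 1e-2, 1e-3):
    print(mu, screened_force(0.1, shuffled, mu))   # converges as mu -> 0
```

Without the screening factor the sum depends on the order of summation (the surface-fluctuation problem the paper describes); with it, the mu -> 0 limit is well defined.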
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
NASA Astrophysics Data System (ADS)
WANG, J.
2017-12-01
In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) used in TMDL are difficult to establish because sufficient observed flow and pollutant data are not available in data-scarce watersheds, where no hydrological stations or long-term consecutive hydrological records exist. Although point and non-point sources of pollutants can be distinguished easily with the aid of an LDC, where a pollutant comes from and where it will be transported in the watershed cannot be traced by the LDC. To find the best management practices (BMPs) for pollutants in a watershed, and to overcome this limitation of the LDC, we propose developing LDCs based on the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, firstly, the distributed hydrological model SWAT was established with the scarce hydrological data. Then, long-term daily flows were generated with the established SWAT model and rainfall data from the adjacent weather station. A flow duration curve (FDC) was then developed from the daily flows generated by the SWAT model. Considering the goal of water quality management, LDCs for different pollutants can be obtained from the FDC. With the monitored water quality data and the LDCs, the water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the distributed hydrological model SWAT was employed again to trace the spatial distribution and the origin of the pollutants, i.e., which agricultural practices and/or other human activities they come from. A case study was conducted in the Jian-jiang river, a tributary of the Yangtze river, in Duyun city, Guizhou province. Results indicate that this method can support water quality management based on TMDL and identify suitable BMPs for reducing pollutants in a watershed.
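A minimal sketch of turning SWAT-generated daily flows into a flow duration curve and a load duration curve; the 10 mg/L concentration target and the unit choices are illustrative assumptions, not values from the study.

```python
import numpy as np

def duration_curves(daily_flow_m3s, target_mg_per_L=10.0):
    """FDC (exceedance vs flow) and the allowable-load curve (kg/day)."""
    q = np.sort(np.asarray(daily_flow_m3s))[::-1]        # descending flows
    exceedance = np.arange(1, q.size + 1) / (q.size + 1) * 100.0
    # allowable load = flow (m3/s) * target (mg/L) * 86400 s/day / 1000 -> kg/day
    load = q * target_mg_per_L * 86400.0 / 1000.0
    return exceedance, q, load
```

Monitored loads plotted against the same exceedance axis that exceed the curve mainly at high flows point to non-point (wash-off) sources, while exceedances at low flows point to point sources, which is the diagnostic the abstract relies on.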
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
A more accurate modeling of the effects of actuators in large space structures
NASA Technical Reports Server (NTRS)
Hablani, H. B.
1981-01-01
The paper deals with finite actuators. A nonspinning three-axis stabilized space vehicle having a two-dimensional large structure and a rigid body at the center is chosen for analysis. The torquers acting on the vehicle are modeled as antisymmetric forces distributed in a small but finite area. In the limit they represent point torquers which also are treated as a special case of surface distribution of dipoles. Ordinary and partial differential equations governing the forced vibrations of the vehicle are derived by using Hamilton's principle. Associated modal inputs are obtained for both the distributed moments and the distributed forces. It is shown that the finite torquers excite the higher modes less than the point torquers. Modal cost analysis proves to be a suitable methodology to this end.
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Satake, K.; Goto, T.; Takahashi, T.
2016-12-01
Estimating tsunami amplitude from tsunami sand deposits has been a challenge. The grain size distribution of a tsunami sand deposit may be correlated with the tsunami inundation process, and further with its source characteristics. In order to test this hypothesis, we need a tsunami sediment transport model that can accurately estimate the grain size distribution of tsunami deposits. Here, we build and validate a tsunami sediment transport model that can simulate grain size distributions. Our numerical model has three layers: a suspended load layer, an active bed layer, and a parent bed layer. The two bed layers contain information about the grain size distribution. The numerical model can handle a wide range of grain sizes from 0.063 (4 ϕ) to 5.657 mm (-2.5 ϕ). We apply the numerical model to simulate the sedimentation process during the 2011 Tohoku earthquake in Numanohama, Iwate prefecture, Japan. The grain size distributions at 15 sample points along a 900 m transect from the beach are used to validate the tsunami sediment transport model. The tsunami deposits are dominated by coarse sand with diameters of 0.5 - 1 mm, and their thickness is up to 25 cm. Our tsunami model reproduces well the observed tsunami run-ups, which range from 16 to 34 m along the steep valley in Numanohama. The shapes of the simulated grain size distributions at many sample points located within 300 m of the shoreline are similar to the observations. The differences between the observed and simulated peaks of the grain size distributions are less than 1 ϕ. Our results also show that the simulated sand thickness distribution along the transect is consistent with the observation.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
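A hedged sketch of the inverse step: fitting a single point dipole (position and moment) to electrode potentials by least squares, using the free-space dipole potential. The paper's bounded spherical volume conductor adds boundary terms not modeled here, and the misfit below is an unweighted sum of squares rather than a chi2 per degree of freedom; the conductivity value is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def dipole_potential(pos, mom, electrodes, sigma=0.33):
    """Free-space potential of a point dipole at `pos` with moment `mom`."""
    r = electrodes - pos                     # (n, 3) vectors to electrodes
    rn = np.linalg.norm(r, axis=1)
    return (r @ mom) / (4.0 * np.pi * sigma * rn ** 3)

def fit_equivalent_dipole(electrodes, v_measured):
    def misfit(x):                           # x = [position (3), moment (3)]
        v = dipole_potential(x[:3], x[3:], electrodes)
        return np.sum((v - v_measured) ** 2)
    res = minimize(misfit, np.full(6, 0.01), method="Nelder-Mead")
    return res.x[:3], res.x[3:]              # equivalent location and moment
```

Running this fit against potentials summed from many belt dipoles, as in the paper, makes the systematic displacement of the equivalent location directly visible.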
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K
2015-10-15
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
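An illustrative comparison of the two sampling schemes on a simple 2-D integral; a rank-1 Fibonacci lattice stands in here for Conroy's systematically distributed point set, which is constructed differently in detail, and the integrand is a toy example rather than a radiative transfer kernel.

```python
import numpy as np

f = lambda x, y: np.exp(-(x ** 2 + y ** 2))   # test integrand on [0, 1]^2
N = 987                                       # a Fibonacci number

rng = np.random.default_rng(2)
xr, yr = rng.random(N), rng.random(N)         # Monte Carlo: random points
mc = f(xr, yr).mean()

k = np.arange(N)                              # systematic: rank-1 lattice
xs, ys = k / N, (610.0 * k / N) % 1.0         # generator 610/987
lat = f(xs, ys).mean()

exact = (np.pi / 4.0) * 0.8427007929 ** 2     # (sqrt(pi)/2 * erf(1))^2
print(mc - exact, lat - exact)                # lattice error is typically smaller
```

For smooth integrands the systematic points fill the region far more evenly than random points at the same N, which is the advantage the abstract attributes to the Conroy scheme.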
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
NASA Astrophysics Data System (ADS)
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form $p(S|L) \sim L^{-\psi(k)}$, where $k \equiv S/\ln[L/L_0]$, the large deviation function $\psi(k)$ is found explicitly, and $L_0$ is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
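A minimal sketch of the per-grid-cell Monte Carlo step, assuming the kriged means, kriging SDs, and the correlation matrix of interpolation errors for the three inputs are in hand; pet() is a placeholder for whichever PET formula is used, not a function from the study.

```python
import numpy as np

def cell_uncertainty(mean_vec, sd_vec, corr, pet, n=100, seed=0):
    """Monte Carlo PET uncertainty for one grid cell.
    mean_vec, sd_vec: kriged (T, RH, wind) values and their kriging SDs;
    corr: 3x3 correlation matrix of the interpolation errors."""
    rng = np.random.default_rng(seed)
    cov = np.outer(sd_vec, sd_vec) * corr        # correlated input errors
    draws = rng.multivariate_normal(mean_vec, cov, size=n)
    pets = np.array([pet(t, rh, w) for t, rh, w in draws])
    return pets.mean(), pets.std() / pets.mean() # mean PET and its CV
```

Repeating this over all grid cells yields exactly the maps of PET means and CVs the abstract describes.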
NASA Astrophysics Data System (ADS)
Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey
2017-04-01
Peatlands occupy a significant share of the cryolithozone area. They are currently experiencing intense pressure from oil and gas field development, as well as from the construction of infrastructure. This makes peatland studies important, including those dealing with the forecast of peatland evolution. Earlier we conducted a similar probabilistic modelling for areas of thermokarst development. Its principal points were: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area $\Delta s$. For small sites the probability of one thermokarst depression appearing is much greater than that of several appearing, i.e. $p_1 = \gamma\,\Delta s + o(\Delta s)$ and $p_k = o(\Delta s)$ for $k = 2, 3, \dots$ 2. The growth of a new thermokarst depression is a random variable independent of the growth of the other depressions. It proceeds by thermoabrasion and, hence, is directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. Using this model, we can derive analytically the two main laws of the morphological pattern of lake thermokarst plains. First, the distribution of the number of thermokarst depressions (centers) on a random plot obeys the Poisson law $P(k,s) = \frac{(\gamma s)^k}{k!}\, e^{-\gamma s}$, where $\gamma$ is the average number of depressions per unit area and $s$ is the area of a trial site. Second, the lognormal distribution of the diameters of thermokarst lakes holds at any time, i.e. the density is given by $f_d(x,t) = \frac{1}{\sqrt{2\pi}\,\sigma x \sqrt{t}}\, e^{-(\ln x - at)^2/(2\sigma^2 t)}$, with $a$ and $\sigma$ the drift and spread parameters of the logarithmic lake diameter.
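An illustrative check of the model's two laws by direct sampling: Poisson-distributed depression counts per trial site and lognormally distributed lake diameters. All parameter values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, s = 0.02, 500.0            # depressions per unit area, site area
a, sigma, t = 0.1, 0.3, 25.0      # log drift, log spread, elapsed time

counts = rng.poisson(gamma * s, size=10000)           # depressions per site
diam = np.exp(rng.normal(a * t, sigma * np.sqrt(t),   # lognormal diameters
                         size=int(counts.sum())))

print(counts.mean(), counts.var())   # both ~ gamma*s under the Poisson law
print(np.median(diam))               # median = exp(a*t) for the lognormal
```

Comparing such synthetic count and diameter histograms against mapped thermokarst lakes is the natural way to test the two laws on real peatland plains.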
ERIC Educational Resources Information Center
Litchfield, Carolyn G.
A project was conducted to develop a model for evaluating specialized and traditional programs in marketing and distributive education. The project included a review of literature containing information regarding the points of view expressed by advocates of the specialized, traditional, and middle-of-the-road approaches to program planning in…
Microphysical Processes Affecting the Pinatubo Volcanic Plume
NASA Technical Reports Server (NTRS)
Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia
1996-01-01
In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.
Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.
Decker, Gifford Z; Thomson, Scott L
2007-05-01
The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.
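A minimal sketch of a fixed-ratio ME (Bernoulli) flow model, assuming steady flow through a prescribed glottal area profile with separation where the area first exceeds 1.2 times the minimum (one of the fixed-ratio criteria the abstract mentions); all numbers are illustrative.

```python
import numpy as np

rho, p_sub, p_sup = 1.14, 800.0, 0.0      # air density (kg/m^3), pressures (Pa)
area = np.array([40., 20., 8., 6., 8., 20., 40.]) * 1e-6   # glottal areas (m^2)

i_min = int(area.argmin())
i_sep = i_min + int(np.argmax(area[i_min:] > 1.2 * area[i_min]))  # separation
Q = area[i_sep] * np.sqrt(2.0 * (p_sub - p_sup) / rho)     # flow rate (m^3/s)

p = p_sub - 0.5 * rho * (Q / area) ** 2   # Bernoulli up to separation
p[i_sep:] = p_sup                         # free jet downstream of separation
print(Q, p)
```

The negative pressure at the minimum area and the jump to supraglottal pressure at separation reproduce, in miniature, the intraglottal pressure distribution the ME models predict.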
Leherte, Laurence; Vercauteren, Daniel P
2014-02-01
Reduced point charge models of amino acids are designed (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. The corresponding charge values are fitted against all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W) under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with a null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; special attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models makes it possible to probe local potential hyper-surface minima that are similar to the all-atom ones but are characterized by lower energy barriers. It enables various conformations of the protein complex to be generated more rapidly than with the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.
Information Interaction Study for DER and DMS Interoperability
NASA Astrophysics Data System (ADS)
Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui
The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management in advanced DMS applications. DER modeling was studied from a system point of view, and an extended CIM information model is initially proposed. By analyzing the basic structure of the message interaction between DMS and DER, a bidirectional message mapping method based on data exchange is proposed.
Transition Effects on Heating in the Wake of a Blunt Body
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Perkins, John N.
1997-01-01
A series of aerodynamic heating tests was conducted on a 70-deg sphere-cone planetary entry vehicle model in a Mach 10 perfect-gas wind tunnel at freestream Reynolds numbers based on diameter of 8.23x10^4 to 3.15x10^5. Surface heating distributions were determined from temperature time-histories measured on the model and on its support sting using thin-film resistance gages. The experimental heating data were compared to computations made using an axisymmetric/2D, laminar, perfect-gas Navier-Stokes solver. Agreement between computational and experimental heating distributions to within, or slightly greater than, the experimental uncertainty was obtained on the forebody and afterbody of the entry vehicle as well as on the sting upstream of the free-shear-layer reattachment point. However, the distributions began to diverge near the reattachment point, with the experimental heating becoming increasingly greater than the computed heating with distance downstream from the reattachment point. It was concluded that this divergence was due to transition of the wake free shear layer just upstream of the reattachment point on the sting.
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
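A hedged sketch of the GTED construction: a mixture of TEDs that share the exponential parameter and the lower bound magnitude but differ in the cutoff (upper bound) magnitude, weighted by a probability distribution over cutoffs. The uniform cutoff distribution below is only an example, not the paper's estimate.

```python
import numpy as np

def ted_pdf(m, beta, m0, mx):
    """Truncated exponential density on [m0, mx]."""
    z = 1.0 - np.exp(-beta * (mx - m0))            # truncation normalizer
    f = beta * np.exp(-beta * (m - m0)) / z
    return np.where((m >= m0) & (m <= mx), f, 0.0)

def gted_pdf(m, beta, m0, cutoffs, weights):
    """Weighted mixture of TEDs with different upper bound magnitudes."""
    fs = np.array([w * ted_pdf(m, beta, m0, mx)
                   for mx, w in zip(cutoffs, weights)])
    return fs.sum(axis=0)

m = np.linspace(4.0, 8.0, 401)
cutoffs = np.linspace(7.0, 8.0, 11)                # possible upper bounds
weights = np.full(cutoffs.size, 1.0 / cutoffs.size)
pdf = gted_pdf(m, beta=2.3, m0=4.0, cutoffs=cutoffs, weights=weights)
```

Well below the smallest cutoff the mixture behaves like a plain exponential, and near the upper bounds it tapers smoothly, which is the flexibility in the vicinity of the upper bound magnitude that the abstract emphasizes.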
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method which improve the results of one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective to improve the spatial resolution of a two-dimensional reconstruction of topographic image obtained with larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of absorption change is less than 10 mm.
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often effect the scalability of applications. Examples for these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test with benchmark codes and a cloud modeling code.
Ribosome flow model with positive feedback
Margaliot, Michael; Tuller, Tamir
2013-01-01
Eukaryotic mRNAs usually form a circular structure; thus, ribosomes that terminate translation at the 3′ end can diffuse with increased probability to the 5′ end of the transcript, initiating another cycle of translation. This phenomenon describes ribosomal flow with positive feedback—an increase in the flow of ribosomes terminating translation of the open reading frame increases the ribosomal initiation rate. The aim of this paper is to model and rigorously analyse translation with feedback. We suggest a modified version of the ribosome flow model, called the ribosome flow model with input and output. In this model, the input is the initiation rate and the output is the translation rate. We analyse this model after closing the loop with a positive linear feedback. We show that the closed-loop system admits a unique globally asymptotically stable equilibrium point. From a biophysical point of view, this means that there exists a unique steady state of ribosome distributions along the mRNA, and thus a unique steady-state translation rate. The solution from any initial distribution will converge to this steady state. The steady-state distribution demonstrates a decrease in ribosome density along the coding sequence. For the case of constant elongation rates, we obtain expressions relating the model parameters to the equilibrium point. These results may perhaps be used to re-engineer the biological system in order to obtain a desired translation rate. PMID:23720534
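A minimal numerical sketch of the closed-loop behaviour, assuming a 5-site ribosome flow model whose initiation rate is set by positive linear feedback on the output (translation) rate; the site count, rates, and feedback coefficients are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1.0, 0.8, 1.2, 0.9, 1.1])   # elongation rates for 5 sites
a, b = 0.2, 0.5                             # initiation rate = a + b * output

def rfm(t, x):
    y = lam[-1] * x[-1]                     # output: translation rate
    u = a + b * y                           # positive linear feedback
    dx = np.empty_like(x)
    dx[0] = u * (1 - x[0]) - lam[0] * x[0] * (1 - x[1])
    for i in range(1, len(x) - 1):
        dx[i] = lam[i-1] * x[i-1] * (1 - x[i]) - lam[i] * x[i] * (1 - x[i+1])
    dx[-1] = lam[-2] * x[-2] * (1 - x[-1]) - lam[-1] * x[-1]
    return dx

sol = solve_ivp(rfm, (0.0, 200.0), np.full(5, 0.1), rtol=1e-8)
print(sol.y[:, -1])   # converges to a unique equilibrium occupancy profile
```

Restarting from different initial occupancies reproduces numerically the global convergence to a single steady state that the paper proves analytically, with densities decreasing along the chain.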
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Hanqing; Fu Zhiguo; Lu Xiaoguang
Guided by sedimentation theory and knowledge of modern and ancient fluvial deposition, and utilizing the abundant information on sedimentary series, microfacies types and petrophysical parameters from the well logging curves of thousands of closely spaced wells over a large area, a new method for establishing detailed sedimentation and permeability distribution models for fluvial reservoirs has been developed. This study addresses the geometry and internal architecture of sandbodies according to their hierarchical levels of heterogeneity, building up sedimentation and permeability distribution models of fluvial reservoirs and describing reservoir heterogeneity in the light of fluvial sedimentary rules. The results and methods obtained in outcrop and modern sedimentation studies have successfully supported the study. Taking advantage of this method, the major producing layers (PI1-2), which have been considered heterogeneous and thick fluvial reservoirs extending widely laterally, are investigated in detail. These layers are subdivided into single sedimentary units vertically and the microfacies are identified horizontally. Furthermore, a complex system is recognized according to its hierarchical levels from large to small: meander belt, single channel sandbody, meander scroll, point bar, and lateral accretion bodies of point bar. The results improve the description of the areal distribution of point-bar sandbodies and provide an accurate and detailed framework model for establishing a high-resolution predictive model. By using geostatistical techniques, it also plays an important role in searching for enriched zones of residual oil.
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogeneous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases (tissue anisotropy, a long active electrode and bipolar stimulation) was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
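The point-source approximation evaluated in this study has a simple closed form in a homogeneous isotropic volume conductor, V(r) = I/(4πσr). The sketch below samples this potential along a hypothetical straight axon; the current amplitude, conductivity and geometry are illustrative assumptions, not values from the paper.

```python
# Point-source approximation used in DBS forward modeling: extracellular
# potential of a monopolar current source in a homogeneous isotropic
# volume conductor, V(r) = I / (4*pi*sigma*r). Values are illustrative.
import numpy as np

sigma = 0.2          # tissue conductivity, S/m (typical gray-matter value)
I = -1e-3            # cathodic stimulus current, A

def point_source_potential(points, src=np.zeros(3)):
    """Extracellular potential (V) at Nx3 points around a point source."""
    r = np.linalg.norm(points - src, axis=1)
    return I / (4 * np.pi * sigma * r)

# Sample the potential along a straight model axon passing 2 mm from the
# electrode; this potential would then drive a cable model of the axon.
z = np.linspace(-10e-3, 10e-3, 201)                 # axial positions, m
axon = np.column_stack([np.full_like(z, 2e-3),      # 2 mm lateral offset
                        np.zeros_like(z), z])
V = point_source_potential(axon)
print("peak |V| along axon: %.2f mV" % (1e3 * np.abs(V).max()))
```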
Development and evaluation of spatial point process models for epidermal nerve fibers.
Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila
2013-06-01
We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs.
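The structure of the second (cluster-process) model can be illustrated with a few lines of simulation: Poisson base points, each spawning a Poisson number of end points displaced around it. Gaussian offsets are used here (a Thomas-type cluster process) as a hedged stand-in for the paper's exact construction, and all parameters are illustrative.

```python
# Sketch of the paper's second model class: base points Phi_b as a
# homogeneous Poisson process, end points Phi_e clustered around their
# parent bases (here with Poisson cluster sizes and Gaussian offsets,
# i.e. a Thomas-type cluster process). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = 1.0                      # side of the square observation window (mm)
lam_b = 100.0                # intensity of base points per unit area
mean_fibers = 3.0            # mean number of end points (fibers) per base
disp = 0.02                  # std of end-point displacement from its base

n_b = rng.poisson(lam_b * W * W)
bases = rng.uniform(0, W, size=(n_b, 2))             # realization of Phi_b
fibers_per_base = rng.poisson(mean_fibers, size=n_b) # cluster sizes
ends = np.vstack([
    b + rng.normal(scale=disp, size=(k, 2))
    for b, k in zip(bases, fibers_per_base) if k > 0
])                                                   # realization of Phi_e
print(n_b, "bases,", len(ends), "end points,",
      "mean fibers/base = %.2f" % fibers_per_base.mean())
```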
Empirical comparison of heuristic load distribution in point-to-point multicomputer networks
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.
1990-01-01
The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
Alkhaldy, Ibrahim
2017-04-01
The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are probably the best predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed the best performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters at certain break-points.
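A break-point GLM of the kind described can be sketched as follows: the covariate enters with an extra hinge term max(0, x - bp), candidate break-points are scanned, and AIC selects among them, mirroring the model-assessment criteria named above. The humidity data and coefficients below are synthetic; statsmodels supplies the Poisson GLM.

```python
# Sketch of a break-point GLM for weekly dengue counts: the effect of a
# meteorological covariate is allowed to change slope at a break-point,
# implemented as a hinge term max(0, x - bp); the break-point is chosen
# by scanning candidates and minimizing AIC. Data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
rh = rng.uniform(40, 90, 200)                     # mean relative humidity
true_bp = 65.0
eta = 0.5 + 0.02 * rh + 0.08 * np.maximum(0, rh - true_bp)
cases = rng.poisson(np.exp(eta))                  # weekly case counts

def fit_breakpoint(bp):
    X = sm.add_constant(np.column_stack([rh, np.maximum(0, rh - bp)]))
    return sm.GLM(cases, X, family=sm.families.Poisson()).fit()

candidates = np.arange(50, 81, 1.0)
fits = [fit_breakpoint(bp) for bp in candidates]
best = int(np.argmin([f.aic for f in fits]))
print("estimated break-point: %.0f%% RH, AIC = %.1f"
      % (candidates[best], fits[best].aic))
```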
Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.
Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski's schemes with five and six data points, respectively. The adjusted coefficient of determination r², Akaike's information criterion (AIC), and geometric mean error ratio (GMER) were used to evaluate model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and by the fraction scheme of the classification. The performance of the PSD models also varied with the clay content of soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best.
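As an illustration of fitting one of the evaluated model families, the sketch below fits a Weibull-type cumulative PSD to four (size, fraction) points mimicking an ISSS-style scheme. The exact parameterization and the data are illustrative assumptions, not the paper's formulation.

```python
# Fitting one of the evaluated PSD models (Weibull) to cumulative
# particle-size data with scipy; the four (size, fraction) points mimic
# an ISSS-style scheme and are illustrative, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

d = np.array([0.002, 0.02, 0.2, 2.0])      # particle diameters, mm
P = np.array([0.18, 0.42, 0.80, 1.00])     # cumulative mass fraction

def weibull_psd(d, a, b, c):
    """Cumulative Weibull PSD: P(d) = c + (1 - c)*(1 - exp(-(d/a)**b))."""
    return c + (1 - c) * (1 - np.exp(-(d / a) ** b))

popt, _ = curve_fit(weibull_psd, d, P, p0=[0.05, 0.5, 0.0],
                    bounds=([1e-6, 0.01, 0.0], [10.0, 5.0, 0.5]))
print("a=%.4f b=%.3f c=%.3f" % tuple(popt))
print("fitted:", weibull_psd(d, *popt).round(3))
```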
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
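The equivalent-point-source idea can be sketched for a simple case: the vertical gravity effect of a rectangular prism evaluated by Gauss-Legendre quadrature, i.e. a weighted sum over point masses placed at the quadrature nodes. The geometry, density contrast and node count below are illustrative, and the full method's variable integration limits for arbitrary bodies are not reproduced.

```python
# Sketch of the equivalent-point-source idea: the vertical gravity effect
# of a rectangular prism is computed by Gauss-Legendre quadrature, i.e.
# by replacing the body with weighted point masses at the quadrature
# nodes. Geometry, density, and node counts are illustrative.
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11                       # gravitational constant, SI
rho = 300.0                         # density contrast, kg/m^3
(x1, x2), (y1, y2), (z1, z2) = (-500, 500), (-500, 500), (200, 1200)

def gz(obs, n=8):
    """Vertical gravity (m/s^2) at obs = (x, y, z), z positive down."""
    t, w = leggauss(n)              # nodes/weights on [-1, 1]
    def scale(a, b):                # map nodes/weights to [a, b]
        return 0.5 * (b - a) * t + 0.5 * (a + b), 0.5 * (b - a) * w
    xs, wx = scale(x1, x2)
    ys, wy = scale(y1, y2)
    zs, wz = scale(z1, z2)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    dx, dy, dz = X - obs[0], Y - obs[1], Z - obs[2]
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    return G * rho * np.sum(W * dz / r3)   # point-mass kernel, summed

print("gz on axis: %.3e m/s^2" % gz(np.array([0.0, 0.0, 0.0])))
```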
The effect of wing dihedral and section suction distribution on vortex bursting
NASA Technical Reports Server (NTRS)
Washburn, K. E.; Gloss, B. B.
1975-01-01
Eleven semi-span wing models were tested in the 1/8-scale model of the Langley V/STOL tunnel to qualitatively study vortex bursting. Flow visualization was achieved by using helium filled soap bubbles introduced upstream of the model. The angle of attack range was from 0 deg to 45 deg. The results show that the vortex is unstable, that is, the bursting point location is not fixed at a given angle of attack but moves within certain bounds. Upstream of the trailing edge, the bursting point location has a range of two inches; downstream, the range is about six inches. Anhedral and dihedral appear to have an insignificant effect on the vortex and its bursting point location. Altering the section suction distribution by improving the triangularity generally increases the angle of attack at which vortex bursting occurs at the trailing edge.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual distribution also being normal, but for the tests on real data the residual distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more reliable and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by a Campus Hungary Internship TÁMOP-424B1.
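The core plane-fitting step of such a segmentation can be sketched with an orthogonal least-squares (total least squares) fit via SVD, which also yields the residuals that the statistical tests above are applied to. The synthetic facet and thresholds are illustrative; the TU Vienna implementation is considerably more elaborate.

```python
# Core step of segmentation by plane fitting: an orthogonal least-squares
# plane through a 3D sub-cloud via SVD, plus the residual statistics the
# abstract analyzes. Synthetic data; thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic planar facet with normally distributed error (case 1 above).
n_pts, sigma = 2000, 0.1
pts = rng.uniform(-5, 5, (n_pts, 2))
cloud = np.column_stack([pts, 0.3 * pts[:, 0] - 0.2 * pts[:, 1]
                         + rng.normal(0, sigma, n_pts)])

def fit_plane(P):
    """Return centroid, unit normal, and signed orthogonal residuals."""
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
    normal = Vt[-1]                       # direction of least variance
    return c, normal, (P - c) @ normal

c, normal, res = fit_plane(cloud)
# The residual distribution is what the statistical tests are applied to;
# a max point-plane distance threshold decides segment membership.
max_dist = 3 * sigma
inliers = np.abs(res) < max_dist
print("normal:", normal.round(3), " std(res)=%.3f" % res.std(),
      " inliers: %.1f%%" % (100 * inliers.mean()))
```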
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
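The heart of the algorithm is an E-step in which each datum is evaluated under its own convolved covariance V_k + S_i. A minimal two-dimensional sketch of that step (synthetic data, two components, no M-step or missing-data projection) might look like this:

```python
# One E-step of the extreme-deconvolution EM described above: each datum
# x_i has its own uncertainty covariance S_i, so component k is evaluated
# under the convolved covariance V_k + S_i. Minimal 2-D sketch with a
# 2-component mixture; all numbers are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
mu = np.array([[0.0, 0.0], [4.0, 4.0]])          # component means
V = np.array([np.eye(2), 0.5 * np.eye(2)])       # component covariances
pi = np.array([0.6, 0.4])                        # mixing amplitudes

# Heteroskedastic data: draws from the underlying mixture, convolved
# with a different measurement covariance S_i for every point.
n = 500
z = rng.choice(2, n, p=pi)
S = np.array([rng.uniform(0.1, 1.5) * np.eye(2) for _ in range(n)])
x = np.array([rng.multivariate_normal(mu[k], V[k] + S[i])
              for i, k in enumerate(z)])

# E-step: responsibilities r_ik proportional to pi_k * N(x_i | mu_k, V_k + S_i)
r = np.empty((n, 2))
for k in range(2):
    r[:, k] = [pi[k] * multivariate_normal.pdf(x[i], mu[k], V[k] + S[i])
               for i in range(n)]
r /= r.sum(axis=1, keepdims=True)
print("mean responsibility of component 0: %.3f" % r[:, 0].mean())
```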
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
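The flavor of the approach can be conveyed by plain moment matching (not the paper's full OPCA optimization): for a net-neutral distribution, two charges ±q along the dipole axis reproduce the monopole and dipole moments exactly, with the separation left as a free choice. The TIP3P-like water charges below are a standard illustrative example, and the charge-weighted center used for placement is an assumption of this sketch.

```python
# Moment-matching flavor of the point-charge idea above (not the paper's
# OPCA optimization): for a net-neutral charge distribution, place two
# charges +q/-q along the dipole axis so the monopole (zero) and dipole
# moments are reproduced exactly. The separation d is a free parameter
# here; OPCA instead optimizes placement against the mid-field potential.
import numpy as np

# Toy distribution: TIP3P-like water partial charges (units: e, Angstrom)
q = np.array([-0.834, 0.417, 0.417])
r = np.array([[0.0, 0.0, 0.0],            # O
              [0.9572, 0.0, 0.0],         # H1
              [-0.2399, 0.9266, 0.0]])    # H2

Q = q.sum()                               # monopole (zero here)
p = (q[:, None] * r).sum(axis=0)          # dipole moment vector
phat = p / np.linalg.norm(p)

d = 0.5                                   # chosen separation, Angstrom
qmag = np.linalg.norm(p) / d              # q*d must equal |p|
center = (np.abs(q)[:, None] * r).sum(axis=0) / np.abs(q).sum()
plus, minus = center + 0.5 * d * phat, center - 0.5 * d * phat
print("Q=%.3f |p|=%.3f e*A -> charges +/-%.3f e, separation %.2f A" %
      (Q, np.linalg.norm(p), qmag, d))
```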
Ecotoxicology and spatial modeling in population dynamics: an illustration with brown trout.
Chaumot, Arnaud; Charles, Sandrine; Flammarion, Patrick; Auger, Pierre
2003-05-01
We developed a multiregion matrix population model to explore how the demography of a hypothetical brown trout population living in a river network varies in response to different spatial scenarios of cadmium contamination. Age structure, spatial distribution, and demographic and migration processes are taken into account in the model. Chronic or acute cadmium concentrations affect the demographic parameters at the scale of the river range. The outputs of the model constitute population-level end points (the asymptotic population growth rate, the stable age structure, and the asymptotic spatial distribution) that allow the different spatial scenarios of contamination to be compared in terms of the demographic response at the scale of the whole river network. An analysis of the sensitivity of these end points to lower-order parameters enables us to link the local effects of cadmium to the global demographic behavior of the brown trout population. Such a link is of broad interest from the point of view of ecotoxicological management.
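The population-level end points named above fall out of standard matrix-model linear algebra: the asymptotic growth rate is the dominant eigenvalue and the stable structure is its right eigenvector. A minimal sketch with an illustrative 3x3 age-structured matrix (not the multiregion trout model) follows.

```python
# Population-level end points from a matrix population model: the
# asymptotic growth rate is the dominant eigenvalue and the stable
# age/spatial structure is the associated right eigenvector. The 3x3
# age-structured matrix below is illustrative, not the trout model.
import numpy as np

A = np.array([[0.0, 1.2, 3.0],     # fecundities by age class
              [0.4, 0.0, 0.0],     # survival age 1 -> 2
              [0.0, 0.6, 0.0]])    # survival age 2 -> 3

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals[i].real                          # asymptotic growth rate
w = np.abs(vecs[:, i].real)
w /= w.sum()                                # stable age structure
print("lambda = %.4f" % lam, " stable structure:", w.round(3))
# A cadmium scenario would lower some entries of A; comparing the new
# dominant eigenvalue quantifies the population-level effect.
```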
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2⁴ fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. Results of the comparisons and a recommended strategy are given.
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potentials of gravity and magnetic fields and their spatial derivatives were calculated on a spherical Earth for an arbitrary body represented by an equivalent point-source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2⁴ fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns.
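The scan statistic underlying such a test can be sketched directly: for each candidate change time tau, compare the maximized two-rate Poisson log-likelihood with the stationary one. The paper calibrates this kind of statistic against Brownian-bridge functionals; the sketch below stops at computing the likelihood-ratio scan on synthetic events with a built-in quiescent period.

```python
# Sketch of a rate-change (quiescence) scan for a Poisson process: for
# each candidate change time tau, compare the two-rate log-likelihood
# with the stationary one and keep the maximum log-likelihood ratio.
import numpy as np

rng = np.random.default_rng(4)
T, r1, r2, tau_true = 100.0, 2.0, 0.5, 60.0     # quiescence after t=60
t1 = np.sort(rng.uniform(0, tau_true, rng.poisson(r1 * tau_true)))
t2 = np.sort(rng.uniform(tau_true, T, rng.poisson(r2 * (T - tau_true))))
events = np.concatenate([t1, t2])
n = len(events)

def loglik(k, tau):
    """Max log-likelihood with rates n1/tau on [0,tau], n2/(T-tau) after."""
    n1, n2 = k, n - k
    ll = 0.0
    if n1: ll += n1 * np.log(n1 / tau) - n1
    if n2: ll += n2 * np.log(n2 / (T - tau)) - n2
    return ll

ll0 = n * np.log(n / T) - n                     # stationary model
taus = np.linspace(1, T - 1, 199)
ks = np.searchsorted(events, taus)              # events before each tau
lr = np.array([loglik(k, tau) - ll0 for k, tau in zip(ks, taus)])
best = lr.argmax()
print("max LR = %.2f at tau = %.1f" % (lr[best], taus[best]))
```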
NASA Astrophysics Data System (ADS)
Baidar, T.; Shrestha, A. B.; Ranjit, R.; Adhikari, R.; Ghimire, S.; Shrestha, N.
2017-05-01
Mikania micrantha is one of the major invasive alien plant species in tropical moist forest regions of Asia including Nepal. Recently, this weed has been spreading at an alarming rate in Chitwan National Park (CNP), threatening biodiversity. This paper aims to assess the impacts of Mikania micrantha on different land covers and to predict potential invasion sites in CNP using the Maxent model. Primary data were presence point coordinates and perceived Mikania micrantha cover collected through a systematic random sampling technique. A RapidEye image, Shuttle Radar Topography Mission data and bioclimatic variables were acquired as secondary data. Mikania micrantha distribution maps were prepared by overlaying the presence points on imagery classified by object-based image analysis. The overall accuracy of the classification was 90% with a Kappa coefficient of 0.848. A table giving the number of sample points in each land cover with the respective Mikania micrantha coverage was extracted from the distribution maps to show the impact. Riverine forest was found to be the most affected land cover, with 85.98% of presence points, and sal forest was found to be much less affected, with only 17.02% of presence points. Maxent modeling predicted the areas near the river valley as the potential invasion sites, with a statistically significant area under the receiver operating characteristic curve (AUC) value of 0.969. Maximum temperature of the warmest month and annual precipitation were identified as the predictor variables contributing most to Mikania micrantha's potential distribution.
A 3D gravity and magnetic model for the Entenschnabel area (German North Sea)
NASA Astrophysics Data System (ADS)
Dressel, Ingo; Barckhausen, Udo; Heyde, Ingo
2018-01-01
In this study, we focus on the structural configuration of the Entenschnabel area, a part of the German exclusive economic zone within the North Sea, by means of gravity and magnetic modelling. The starting point of the 3D modelling approach is published information on subseafloor structures for shallow depths, acquired by wells and seismic surveys. Subsequent gravity and magnetic modelling of the structures of the deeper subsurface builds on this geophysical and geological information and on gravity and magnetic data acquired during a research cruise to the Entenschnabel area. On the one hand, our 3D model shows the density and susceptibility distribution of the sediments and the crust. In addition, the potential field modelling provides evidence for a differentiation between lower and upper crust. The thickness distribution of the crust is also discussed with respect to the tectonic framework. Furthermore, gravity as well as magnetic modelling points to an intrusive complex beneath the Central Graben within the Entenschnabel area. On the other hand, this work provides a geologically and geophysically consistent 3D gravity and magnetic model that can be used as a starting point for further investigation of this part of the German North Sea.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x,y), which represents the probability density of nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
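Capture-zone distributions of this kind are commonly compared with a generalized Wigner surmise P(s) ∝ s^β exp(-b s²) for the unit-mean spacing s. The sketch below builds that density with the normalization constants computed numerically, treating β purely as a shape parameter rather than quoting closed-form constants; the paper's own joint-density model is not reproduced here.

```python
# Generalized Wigner surmise P(s) ~ s**beta * exp(-b*s**2), normalized
# numerically so that the density has unit area and unit mean; beta is
# treated purely as a fit/shape parameter in this sketch.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def gws(beta):
    """Return P(s) with unit area and unit mean for the given beta."""
    def make(b):
        a = 1.0 / quad(lambda s: s**beta * np.exp(-b * s * s), 0, np.inf)[0]
        return lambda s: a * s**beta * np.exp(-b * s * s)
    def mean_minus_one(b):
        p = make(b)
        return quad(lambda s: s * p(s), 0, np.inf)[0] - 1.0
    b = brentq(mean_minus_one, 1e-3, 100.0)   # solve for unit mean
    return make(b)

p = gws(beta=2.0)
s = np.linspace(0.1, 3.0, 6)
print(np.round([p(v) for v in s], 4))
```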
Danish Passage Graves, "Spring/Summer/Fall full Moons" and Lunar Standstills
NASA Astrophysics Data System (ADS)
Clausen, Claus Jørgen
2015-05-01
The author proposes and discusses a model for azimuth distribution which involves the criterion of a 'spring full moon' (or a 'fall full moon') proposed by Marciano Da Silva (Da Silva 2004). The model is based on elements of the rising pattern of the summer full moon combined with directions pointing towards full moonrises which occur immediately prior to lunar standstill eclipses and directions aimed at the points at which these eclipses begin. An observed sample of 153 directions has been compared with the proposed model, which has been named the lunar 'season pointer'. Statistical tests show that the model fits the observed sample well within the azimuth interval of 54.5° to 156.5°. The conclusion is that at least the 'season pointer' component of the model could well explain the observed distribution.
NASA Astrophysics Data System (ADS)
Gaona Garcia, J.; Lewandowski, J.; Bellin, A.
2017-12-01
Groundwater-stream water interactions in rivers determine water balances, but also chemical and biological processes in the streambed at different spatial and temporal scales. Because gaining, neutral and losing conditions are difficult to identify and quantify, it is necessary to combine techniques with complementary capabilities and scale ranges. We applied this concept to a study site at the River Schlaube, East Brandenburg, Germany, a sand-bed stream with intense sediment heterogeneity and complex environmental conditions. In our approach, point techniques such as temperature profiles of the streambed together with vertical hydraulic gradients provide data for the estimation of fluxes between groundwater and surface water with the numerical model 1DTempPro. Among distributed techniques, fiber-optic distributed temperature sensing identifies the spatial patterns of neutral, down- and up-welling areas through analysis of the changes in the thermal patterns at the streambed interface under given flow conditions. The study finally links point and surface temperatures to provide a method for upscaling of fluxes. Point techniques provide point flux estimates with the depth detail essential to infer streambed structures, while the results hardly represent the spatial distribution of fluxes caused by the heterogeneity of streambed properties. Fiber optics proved capable of providing spatial thermal patterns with enough resolution to observe distinct hyporheic thermal footprints at multiple scales. Relating the thermal footprint patterns and their temporal behavior to the flux results from point techniques enabled the use of methods for spatial flux estimation. The lack of detailed information on the spatial distribution of the physical drivers restricts the spatial flux estimation to the application of the T-proxy method, whose highly uncertain results mainly provide coarse spatial flux estimates. The study concludes that upscaling groundwater-stream water interactions using thermal measurements with combined point and distributed techniques requires the integration of physical drivers because of the heterogeneity of the flux patterns. Combined experimental and modeling approaches may help to obtain a more reliable understanding of groundwater-surface water interactions at multiple scales.
NASA Astrophysics Data System (ADS)
Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain
2016-04-01
The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insight into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. While different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A phase of result checking is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure-from-motion photogrammetry.
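A hedged stand-in for the ellipsoidal fitting step is principal component analysis of each segmented sub-cloud: centroid as center, scaled principal standard deviations as semi-axes. For simplicity the synthetic grain below is sampled uniformly inside a solid ellipsoid, where PCA recovers the axis lengths exactly; the paper's fitting algorithm for surface point clouds is more sophisticated.

```python
# Grain-scale shape from a segmented sub-cloud via PCA: centroid as
# ellipsoid center, scaled principal standard deviations as semi-axes.
# A stand-in for the paper's 3D geometrical fitting, not its algorithm.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic "grain": points uniformly filling an ellipsoid with semi-axes
# 8 > 5 > 3 (cm), randomly rotated, plus small measurement noise.
n = 5000
g = rng.normal(size=(n, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)       # uniform directions
ball = g * rng.uniform(0, 1, (n, 1)) ** (1 / 3)     # uniform in unit ball
E = ball * np.array([8.0, 5.0, 3.0])                # stretch to ellipsoid
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))        # random rotation
grain = E @ Q.T + rng.normal(0, 0.05, E.shape)

center = grain.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov((grain - center).T))
semi_axes = np.sqrt(5 * evals)[::-1]   # variance along semi-axis a is a^2/5
print("recovered semi-axes:", semi_axes.round(2))   # expect ~ [8, 5, 3]
```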
Dorazio, Robert M.
2012-01-01
Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.
NASA Astrophysics Data System (ADS)
Abe, T.; Takahashi, T.; Shirai, K.
2017-02-01
In order to reveal the steady distribution structure of point defects at the solid-liquid interface of non-growing Si, crystals were grown at a high pulling rate, at which vacancies (Vs) become predominant, and the pulling was suddenly stopped. After the variations induced in the crystal by the pulling stop had settled, the crystals were left in prolonged contact with the melt. Finally, the crystals were detached and rapidly cooled to freeze the point defects, and the distribution of point defects in the as-grown crystals was then observed. As a result, a dislocation loop (DL) region, which is formed by the aggregation of interstitials (Is), was formed over the solid-liquid interface and was surrounded by a Vs-and-Is-free recombination region (Rc-region), although the entire crystals had been Vs-rich in the beginning. It was also revealed that the crystal at the solid-liquid interface after prolonged contact with the melt can partially have an Rc-region directly in contact with the melt, unlike the defect distribution of a growing solid-liquid interface. This experimental result contradicts a hypothesis of Voronkov's diffusion model, which always assumes the equilibrium concentrations of Vs and Is as the boundary condition for the distribution of point defects at the growth interface. The results were discussed qualitatively in terms of the temperature distribution and the thermal stress caused by the pulling stop.
Wakie, Tewodros; Evangelista, Paul H.; Jarnevich, Catherine S.; Laituri, Melinda
2014-01-01
We used correlative models with species occurrence points, Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indices, and topo-climatic predictors to map the current distribution and potential habitat of invasive Prosopis juliflora in Afar, Ethiopia. Time-series of MODIS Enhanced Vegetation Indices (EVI) and Normalized Difference Vegetation Indices (NDVI) with 250 m spatial resolution were selected as remote sensing predictors for mapping distributions, while WorldClim bioclimatic products and topographic variables generated from the Shuttle Radar Topography Mission product (SRTM) were used to predict potential infestations. We ran Maxent models using non-correlated variables and the 143 species-occurrence points. Maxent-generated probability surfaces were converted into binary maps using the 10th-percentile logistic threshold values. Performances of the models were evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results indicate that the extent of P. juliflora invasion is approximately 3,605 km2 in the Afar region (AUC = 0.94), while the potential habitat for future infestations is 5,024 km2 (AUC = 0.95). Our analyses demonstrate that time-series of MODIS vegetation indices and species occurrence points can be used with Maxent modeling software to map the current distribution of P. juliflora, while topo-climatic variables are good predictors of potential habitat in Ethiopia. Our results can quantify current and future infestations, and inform management and policy decisions for containing P. juliflora. Our methods can also be replicated for managing invasive species in other East African countries.
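Maxent's presence/background setup can be approximated with a regularized logistic regression on presence points versus random background points, scored by AUC, which is how several of the studies above evaluate their maps. The sketch below uses synthetic covariates and scikit-learn; it is a stand-in for the workflow, not the Maxent algorithm itself.

```python
# Hedged stand-in for a Maxent-style workflow: presence points plus
# random background (pseudo-absence) points, environmental covariates as
# features, a regularized logistic regression as the suitability model,
# and AUC as the evaluation metric. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n_pres, n_bg = 143, 10000                 # presence/background counts

# Two synthetic covariates (e.g. temperature, precipitation), with
# presences biased toward high values of both.
pres = rng.normal([0.8, 0.6], 0.3, (n_pres, 2))
bg = rng.uniform(-1.5, 1.5, (n_bg, 2))
X = np.vstack([pres, bg])
y = np.r_[np.ones(n_pres), np.zeros(n_bg)]

model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
score = model.predict_proba(X)[:, 1]      # relative habitat suitability
print("AUC = %.3f" % roc_auc_score(y, score))
# A binary suitability map would follow by thresholding 'score', e.g. at
# the 10th percentile of scores at presence points, as in the abstracts.
```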
Evaluating the spatial distribution of water balance in a small watershed, Pennsylvania
NASA Astrophysics Data System (ADS)
Yu, Zhongbo; Gburek, W. J.; Schwartz, F. W.
2000-04-01
A conceptual water-balance model was extended from a point application to a distributed one for evaluating the spatial distribution of the watershed water balance based on daily precipitation, temperature and other hydrological parameters. The model was calibrated by comparing simulated daily variations in soil moisture with field-observed data and with the results of another model that simulates vertical soil moisture flow by numerically solving Richards' equation. The impacts of soil and land use on the hydrological components of the water balance, such as evapotranspiration, soil moisture deficit, runoff and subsurface drainage, were evaluated with the calibrated model. Given the same meteorological conditions and land use, the soil moisture deficit, evapotranspiration and surface runoff increase, and subsurface drainage decreases, as the available water capacity of the soil increases. Among the various land uses, alfalfa produced a higher soil moisture deficit and evapotranspiration and lower surface runoff and subsurface drainage, whereas soybeans produced the opposite trend. The simulated distribution of the various hydrological components shows the combined effect of soil and land use. Simulated hydrological components compare well with observed data. The study demonstrated that the distributed water-balance approach is efficient and has advantages over the traditional practice of using a single average value of hydrological variables applied at a single point.
NASA Astrophysics Data System (ADS)
Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.
2017-06-01
Forecasting soil pollution is a considerable field of study in the light of the general concern for environmental protection. Due to the variation of content and the spatial heterogeneity of pollutant distributions in urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide adequate interpolation accuracy. Moreover, predicting the distribution of an element whose concentration is highly variable across the study site is particularly difficult. This work presents two neural network models forecasting the spatial content of an abnormally distributed soil pollutant (Cr) at a particular location in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques were built, implemented and tested using ArcGIS and MATLAB. To verify the models' performances, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km2 area and then split into an independent training data set (105 points) and a validation data set (45 points). The training data set was generated for the interpolation using ordinary kriging, while the validation data set was used to test the accuracies. The network structures were chosen during a computer simulation based on minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that the MLP could achieve better accuracy than both kriging and even the GRNN for interpolating surfaces.
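A GRNN is, in essence, Nadaraya-Watson kernel regression (Specht 1991): the prediction is a Gaussian-weighted average of the training values. Below is a minimal spatial sketch with a synthetic Cr field and the study's 105/45 train/validation split; in practice the bandwidth sigma would be tuned by minimizing the RMSE, as the abstract describes for the network structures.

```python
# Minimal GRNN (equivalent to Nadaraya-Watson kernel regression): the
# prediction at a query point is a Gaussian-kernel weighted average of
# the training concentrations. Synthetic Cr field; sigma is illustrative.
import numpy as np

rng = np.random.default_rng(7)
train_xy = rng.uniform(0, 1, (105, 2))              # training points
test_xy = rng.uniform(0, 1, (45, 2))                # validation points

def field(p):                                       # synthetic Cr surface
    return np.exp(-8 * ((p[:, 0] - 0.3) ** 2 + (p[:, 1] - 0.7) ** 2))

y_train = field(train_xy) + rng.normal(0, 0.05, len(train_xy))

def grnn_predict(query, sigma=0.1):
    d2 = ((query[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))              # pattern-layer kernels
    return (w @ y_train) / w.sum(axis=1)            # weighted average

pred = grnn_predict(test_xy)
rmse = np.sqrt(np.mean((pred - field(test_xy)) ** 2))
print("GRNN RMSE on held-out points: %.4f" % rmse)
```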
NASA Astrophysics Data System (ADS)
Okyay, U.; Glennie, C. L.; Khan, S.
2017-12-01
Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data has become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in geosciences. Several studies have developed various processing algorithms for extracting only planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution, has not been investigated as frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness could identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for distribution maps are not straightforward and require user intervention and interpretation.
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
SU-F-P-21: Study of Dosimetry Accuracy of Small Passively Scattered Proton Beam Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Gautam, A; Kerr, M
2016-06-15
Purpose: To study the accuracy of the dose distribution of very small irregular fields of passively scattered proton beams calculated by the analytical pencil beam model of the Eclipse treatment planning system (TPS). Methods: An irregular field with a narrow region (width < 1 cm), used for the treatment of a small volume adjacent to a previously treated area, was chosen for this investigation. Point doses at different locations inside the field were measured with a small-volume ion chamber (A26, Standard Imaging). 2-D dose distributions were measured using a 2-D ion chamber array (MatriXX, IBA). All the measurements were done in a plastic water phantom. The measured dose distributions were compared with the verification plan dose calculated in a water-like phantom for the patient treatment field without the use of the compensator. Results: Point doses measured with the ion chamber in the narrowest section of the field were found to differ by as much as 10% from the Eclipse-calculated dose at some of the points. The 2-D dose distribution measured with the MatriXX, which was validated by comparison with limited film measurements, at the proximal 95%, center of the spread-out Bragg peak and distal 90% depths agreed reasonably well with the TPS-calculated dose distribution, with more than 92% of the pixels passing the 2%/2 mm dose/distance agreement. Conclusion: The dose calculated by the pencil beam model of the Eclipse TPS for narrow irregular fields may not be accurate to within 5% at some locations of the field, especially at points close to the field edge, due to the limitation of the dose calculation model. The overall accuracy of the calculated 2-D dose distribution was found to be acceptable at the 2%/2 mm dose/distance agreement with the measurement.
NASA Astrophysics Data System (ADS)
Dmochowski, Jacek P.; Bikson, Marom; Parra, Lucas C.
2012-10-01
Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
In view of the fact that current point cloud registration software has high hardware requirements, a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal-vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
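The fine-registration stage can be sketched as a textbook point-to-point ICP loop: nearest-neighbour correspondences from a k-d tree, then the optimal rigid transform by the SVD (Kabsch) solution. The sketch uses synthetic clouds and no outlier rejection; the paper's optimized variant starts from the FPFH-based coarse alignment described above.

```python
# Minimal point-to-point ICP (the fine-registration step after coarse
# alignment): nearest neighbours via a k-d tree, then the optimal rigid
# transform by the SVD (Kabsch) method. Synthetic data; no outlier
# rejection, which a real pipeline would add.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # reflection-safe rotation
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)           # closest points in dst
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(8)
dst = rng.uniform(-1, 1, (1000, 3))
theta = 0.2                                # ground-truth misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
src = dst @ Rz.T + [0.1, -0.05, 0.02] + rng.normal(0, 0.005, dst.shape)
aligned = icp(src, dst)
print("RMS after ICP: %.4f" % np.sqrt(((aligned - dst) ** 2).mean()))
```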
Radial Distribution of X-Ray Point Sources Near the Galactic Center
NASA Astrophysics Data System (ADS)
Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas
2009-11-01
We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° of the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more strongly than stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder the closer the sources are to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosities and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV-to-star density in the GB is not smaller than the value in the local solar neighborhood.
On joint subtree distributions under two evolutionary models.
Wu, Taoyang; Choi, Kwok Pui
2016-04-01
In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models.
Optimal topology to minimizing congestion in connected communication complex network
NASA Astrophysics Data System (ADS)
Benyoussef, M.; Ez-Zahraouy, H.; Benyoussef, A.
In this paper, a new model of interdependent complex networks is proposed, based on two assumptions: (i) the capacity of a node depends on its degree, and (ii) the traffic load depends on the distribution of the links in the network. Based on these assumptions, the presented model proposes a method of connection based not on the node having the highest degree but on the region containing hubs. It is found that the final network exhibits two kinds of degree-distribution behavior, depending on the type and manner of connection. This study reveals a direct relation between network structure and traffic flow. It is found that p_c, the transition point between free flow and the congested phase, depends on the network structure and the degree distribution. Moreover, this new model provides an improvement in traffic compared to the results found in a single network. The same degree-distribution behavior found in a Barabási-Albert (BA) network and observed in the real world is obtained, except that for this model the transition point between the free and congested phases is much higher than the one observed in a BA network, for both static and dynamic protocols.
Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran
2012-07-01
1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistically homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks.
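The building block of any BBMM is the Brownian bridge between two consecutive fixes: at time t within an interval of length T, the position is Gaussian around the linear interpolation with variance σ_m² t(T-t)/T (location error omitted here). The sketch below computes the time-averaged density of one bridge on a grid; summed over all consecutive fixes this gives the utilization distribution, and the paper's dynamic extension lets σ_m change at estimated behavioural change points. All numbers are illustrative.

```python
# Time-averaged 2-D density of a single Brownian bridge between fixes a
# and b taken T minutes apart; summing over all consecutive fix pairs
# yields the BBMM utilization distribution. Parameters are illustrative.
import numpy as np

def bridge_density(grid, a, b, T, sigma_m, steps=20):
    """Density of one bridge, averaged over time, on grid (Nx2)."""
    dens = np.zeros(len(grid))
    for t in np.linspace(0.05 * T, 0.95 * T, steps):
        mu = a + (t / T) * (b - a)                 # linear interpolation
        var = sigma_m ** 2 * t * (T - t) / T       # bridge variance
        d2 = ((grid - mu) ** 2).sum(axis=1)
        dens += np.exp(-d2 / (2 * var)) / (2 * np.pi * var)
    return dens / steps

xs = np.linspace(-1, 3, 80)
grid = np.column_stack([g.ravel() for g in np.meshgrid(xs, xs)])
ud = bridge_density(grid, a=np.array([0.0, 0.0]),
                    b=np.array([2.0, 1.0]), T=30.0, sigma_m=0.3)
ud /= ud.sum()                        # normalize over the grid
print("UD mass at the straight-line midpoint cell: %.4f"
      % ud[np.argmin(((grid - [1.0, 0.5]) ** 2).sum(1))])
```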
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan
2016-01-01
Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
Assessing the Application of a Geographic Presence-Only Model for Land Suitability Mapping
Heumann, Benjamin W.; Walsh, Stephen J.; McDaniel, Phillip M.
2011-01-01
Recent advances in ecological modeling have focused on novel methods for characterizing the environment that use presence-only data and machine-learning algorithms to predict the likelihood of species occurrence. These novel methods may have great potential for land suitability applications in the developing world, where detailed land cover information is often unavailable or incomplete. This paper assesses the adaptation and application of the presence-only geographic species distribution model, MaxEnt, for agricultural crop suitability mapping in rural Thailand, where lowland paddy rice and upland field crops predominate. To assess this modeling approach, three independent crop presence datasets were used, including a social-demographic survey of farm households, a remote sensing classification of land use/land cover, and ground control points used for geodetic and thematic reference; these datasets vary in their geographic distribution and sample size. Disparate environmental data were integrated to characterize environmental settings across Nang Rong District, a region of approximately 1,300 sq. km in size. Results indicate that the MaxEnt model is capable of modeling crop suitability for upland and lowland crops, including rice varieties, although model results varied between datasets due to the high sensitivity of the model to the distribution of observed crop locations in geographic and environmental space. Accuracy assessments indicate that model outcomes were influenced by the sample size and the distribution of sample points in geographic and environmental space. The need for further research into accuracy assessments of presence-only models lacking true absence data is discussed. We conclude that the MaxEnt model can provide good estimates of crop suitability, but several aspects, including the geographic distribution of input data and the assessment methods, need to be carefully scrutinized to ensure realistic modeling results. PMID:21860606
Non-hoop winding effect on bonding temperature of laser assisted tape winding process
NASA Astrophysics Data System (ADS)
Zaami, Amin; Baran, Ismet; Akkerman, Remko
2018-05-01
One of the advanced methods for producing thermoplastic composites is laser assisted tape winding (LATW). Predicting the temperature in the LATW process is very important, since the temperature at the nip point (the bonding line across the tape width) plays a pivotal role in proper bonding and hence in mechanical performance. Unlike hoop winding, where the nip point is a straight line, non-hoop winding involves a curved nip-point line. Hence, non-hoop winding produces a somewhat different power input through laser rays and their reflections and consequently generates a complex, previously uncharacterized temperature profile along the curved nip-point line. Investigating the temperature along the nip-point line is the focus of this study. In order to understand this effect, a numerical model is proposed to capture the effect of laser rays and their reflections on the nip-point temperature. To this end, a 3D optical model incorporating the objects in the LATW process is considered. The power distribution (absorption and reflection) from the optical analysis is then used as an input (heat flux distribution) for the thermal analysis. The thermal analysis employs a fully implicit advection-diffusion model to calculate the temperature on the surfaces. The results are examined to demonstrate the effect of winding direction on the curved nip-point line (across the tape width), which has not previously been considered in the literature. Furthermore, the results can be used for designing a better and more efficient setup for the LATW process.
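To make the thermal half of such a model concrete, the sketch below solves a 1D advection-diffusion balance for material moving at the winding speed past a fixed laser heat-flux distribution; for brevity it solves directly for the steady temperature rise, whereas the paper marches a fully implicit scheme in time. All parameters are illustrative, not the paper's.

```python
# 1D sketch: v*T_x - kappa*T_xx = q(x), first-order upwind advection and
# central diffusion, Dirichlet (ambient) ends. Illustrative values only.
import numpy as np

L, N = 0.1, 201                          # domain length (m), grid points
dx = L / (N - 1)
v, kappa = 0.1, 1e-5                     # winding speed (m/s), diffusivity (m^2/s)
x = np.linspace(0, L, N)
q = 500.0 * np.exp(-((x - 0.05) / 0.005) ** 2)   # Gaussian laser flux (K/s)

A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1] = -v / dx - kappa / dx**2
    A[i, i] = v / dx + 2 * kappa / dx**2
    A[i, i + 1] = -kappa / dx**2
A[0, 0] = A[-1, -1] = 1.0                # Dirichlet: ambient at both ends
rhs = q.copy()
rhs[0] = rhs[-1] = 0.0

T = np.linalg.solve(A, rhs)
print(f"peak steady temperature rise: {T.max():.1f} K")
```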
Hecker, Suzanne; Abrahamson, N.A.; Wooddell, Kathryn
2013-01-01
To investigate the nature of earthquake-magnitude distributions on faults, we compare the interevent variability of surface displacement at a point on a fault from a composite global data set of paleoseismic observations with the variability expected from two prevailing magnitude-frequency distributions: the truncated-exponential model and the characteristic-earthquake model. We use forward modeling to predict the coefficient of variation (CV) for the alternative earthquake distributions, incorporating factors that would affect observations of displacement at a site. The characteristic-earthquake model (with a characteristic-magnitude range of ±0.25) produces CV values consistent with the data (CV∼0.5) only if the variability for a given earthquake magnitude is small. This condition implies that rupture patterns on a fault are stable, in keeping with the concept behind the model. This constraint also bears upon fault-rupture hazard analysis, which, for lack of point-specific information, has used global scaling relations to infer variability in average displacement for a given-size earthquake. Exponential distributions of earthquakes (from M 5 to the maximum magnitude) give rise to CV values that are significantly larger than the empirical constraint. A version of the model truncated at M 7, however, yields values consistent with a larger CV (∼0.6) determined for small-displacement sites. Although this result allows for a difference in the magnitude distribution of smaller surface-rupturing earthquakes, it may reflect, in part, less stability in the displacement profile of smaller ruptures and/or the tails of larger ruptures.
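The forward-modeling logic can be sketched in a few lines: draw magnitudes from each magnitude-frequency model, map them to average displacement with a generic scaling relation, and compare CVs. The scaling constants below are illustrative placeholders, not the calibrated relations used in the paper.

```python
# Monte Carlo sketch of the CV comparison. The displacement scaling
# log10(D) = 0.69*M - 4.8 is a Wells & Coppersmith-like placeholder.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

def cv(mags, sigma_logD=0.0):
    # optional sigma_logD adds per-event displacement variability
    logD = 0.69 * mags - 4.8 + rng.normal(0, sigma_logD, mags.size)
    d = 10.0 ** logD
    return d.std() / d.mean()

# Characteristic model: magnitudes within +/- 0.25 of M = 7.0.
m_char = rng.uniform(6.75, 7.25, n)

# Truncated exponential between M 5 and 7.5 (density ~ 10**(-b*M), b = 1).
b, lo, hi = 1.0, 5.0, 7.5
u = rng.random(n)
m_exp = -np.log10((1 - u) * 10**(-b * lo) + u * 10**(-b * hi)) / b

print(f"CV characteristic: {cv(m_char):.2f}   CV trunc-exponential: {cv(m_exp):.2f}")
```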
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
Application of change-point problem to the detection of plant patches.
López, I; Gámez, M; Garay, J; Standovár, T; Varga, Z
2010-03-01
In ecology, if the considered area or space is large, the spatial distribution of individuals of a given plant species is never homogeneous; plants form distinct patches. The change of homogeneity in space or in time (in particular, the related change-point problem) is an important research subject in mathematical statistics. In this paper, for a given data system along a straight line, two areas are considered, where the data of each area come from different discrete distributions with unknown parameters. A method is presented for estimating the distribution change-point between the two areas, and an estimate is given for the distributions separated by the obtained change-point. The solution of this problem is based on the maximum likelihood method. Furthermore, based on an adaptation of the well-known bootstrap resampling, a method for estimating the so-called change-interval is also given. The latter approach is very general, since it not only applies to the maximum-likelihood estimate of the change-point, but can also be used starting from any other change-point estimator known in the ecological literature. The proposed model is validated against typical ecological situations, providing at the same time a verification of the applied algorithms.
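A minimal sketch of the two estimation steps, assuming Poisson-distributed counts (the paper treats general discrete distributions): scan all candidate change points for the maximum-likelihood split, then bootstrap within each segment to get a change interval. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.concatenate([rng.poisson(2.0, 60), rng.poisson(6.0, 40)])  # true CP at 60

def ml_changepoint(x):
    # maximize the two-segment Poisson log-likelihood over split positions
    n, best, best_ll = len(x), None, -np.inf
    for k in range(1, n):
        l1, l2 = x[:k].mean(), x[k:].mean()
        ll = (x[:k].sum() * np.log(l1 + 1e-12) - k * l1
              + x[k:].sum() * np.log(l2 + 1e-12) - (n - k) * l2)
        if ll > best_ll:
            best, best_ll = k, ll
    return best

k_hat = ml_changepoint(x)
boot = [ml_changepoint(np.concatenate([rng.choice(x[:k_hat], k_hat),
                                       rng.choice(x[k_hat:], len(x) - k_hat)]))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"change point ~ {k_hat}, 95% change interval ~ [{lo:.0f}, {hi:.0f}]")
```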
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
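Application (1) reduces to a smooth concave optimization; the sketch below decodes a MAP stimulus for a Poisson encoding model with exponential nonlinearity and a standard-Normal prior. The dimensions, the filter K and all constants are toy choices, not the paper's fitted models.

```python
# MAP stimulus decoding for rate_t = exp(K @ s): the log posterior is concave
# in s, so generic ascent finds the global optimum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, D = 80, 20
K = rng.normal(0, 0.3, (T, D))          # known linear filter of the encoder
s_true = rng.normal(0, 1, D)            # stimulus drawn from the Gaussian prior
y = rng.poisson(np.exp(K @ s_true))     # observed spike counts

def neg_log_post(s):
    rate = np.exp(K @ s)
    return rate.sum() - y @ (K @ s) + 0.5 * s @ s   # Poisson NLL + N(0,I) prior

def grad(s):
    return K.T @ np.exp(K @ s) - K.T @ y + s

s_map = minimize(neg_log_post, np.zeros(D), jac=grad, method="L-BFGS-B").x
print(f"correlation(s_map, s_true) = {np.corrcoef(s_map, s_true)[0, 1]:.2f}")
```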
Ground deposition of liquid droplets released from a point source in the atmospheric surface layer
NASA Astrophysics Data System (ADS)
Panneton, Bernard
1989-01-01
A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions, and crosswind integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill technique. Models of the crosswind integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.
Modeling financial markets by the multiplicative sequence of trades
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-12-01
We introduce a stochastic multiplicative point process modeling the trading activity of financial markets. The model system exhibits a power-law spectral density S(f) ∝ 1/f^β, scaling as a power of frequency for values of β between 0.5 and 2. Furthermore, we analyze the relation between the power-law autocorrelations and the origin of the power-law probability distribution of the trading activity. The model reproduces the spectral properties of trading activity and explains the mechanism of power-law distribution in real markets.
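Below is a minimal sketch of the kind of multiplicative interevent-time recurrence such a model is built on, followed by a periodogram of the resulting activity; the exponents, constants and reflecting bounds are illustrative stand-ins, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, gamma, sigma = 0.5, 0.0002, 0.02     # illustrative drift/noise parameters
tau_min, tau_max = 1e-3, 1.0             # crude reflecting bounds

n = 200_000
tau = np.empty(n)
tau[0] = 0.1
for k in range(n - 1):
    # multiplicative recurrence for the interevent time
    t = tau[k] + gamma * tau[k]**(2 * mu - 1) + sigma * tau[k]**mu * rng.normal()
    tau[k + 1] = np.clip(abs(t), tau_min, tau_max)

events = np.cumsum(tau)                                 # event times
counts, _ = np.histogram(events, bins=2**18)            # trading activity signal
spec = np.abs(np.fft.rfft(counts - counts.mean()))**2   # power spectrum
# the log-log slope of spec vs frequency approximates -beta
```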
NASA Astrophysics Data System (ADS)
Rotondi, Renata; Varini, Elisa
2016-04-01
The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is the simplest special case of the Weibull family. In this paper we introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the description of the posterior function and the estimation of the point, interval, hazard function, and reliability quantities. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
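Under independent exponential risks with a non-informative prior, the posterior of each cause-specific rate is available in closed form, which makes the point, interval and crude-probability estimates one-liners. The sketch below assumes a Gamma(r_j, 1/T) posterior (r_j failures from cause j, total exposure T); all numbers are illustrative.

```python
import math
from scipy import stats

T = 1200.0                                   # total exposure time over all units
r = {"cause_1": 14, "cause_2": 5}            # failures attributed to each cause
lam_tot = sum(r.values()) / T                # point estimate of the total rate

for cause, rj in r.items():
    post = stats.gamma(a=rj, scale=1.0 / T)  # posterior of the cause-specific rate
    lam = post.mean()
    lo, hi = post.interval(0.95)
    # crude probability: failure from this cause by t=100 with other risks acting
    crude = lam / lam_tot * (1 - math.exp(-lam_tot * 100))
    print(f"{cause}: rate {lam:.4f} (95% CI {lo:.4f}-{hi:.4f}), crude P {crude:.3f}")
```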
Optimal Ventilation Control in Complex Urban Tunnels with Multi-Point Pollutant Discharge
DOT National Transportation Integrated Search
2017-10-01
Zhen Tan (ORCID ID 0000-0003-1711-3557) H. Oliver Gao (ORCID ID 0000-0002-7861-9634) We propose an optimal ventilation control model for complex urban vehicular tunnels with distributed pollutant discharge points. The control problem is formulated as...
Robust group-wise rigid registration of point sets using t-mixture model
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.
2016-03-01
A probabilistic framework is proposed for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, aimed especially at cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from them are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework than state-of-the-art algorithms. A significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise GMM-based methods, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
USDA-ARS?s Scientific Manuscript database
Thirty one years of spatially distributed air temperature, relative humidity, dew point temperature, precipitation amount, and precipitation phase data are presented for the Reynolds Creek Experimental Watershed. The data are spatially distributed over a 10m Lidar-derived digital elevation model at ...
Features of the development of the earth's surface displacement process during coal mining in the Eastern Donbas
NASA Astrophysics Data System (ADS)
Posylniy, Yu V.; Versilov, S. O.; Shurygin, D. N.; Kalinchenko, V. M.
2017-10-01
The results of studies of the process of the earth's surface displacement under the influence of adjacent longwalls are presented. It is established that the actual distributions of surface subsidence along the dip and rise of the seam, under the same boundary settlement criteria, differ both from each other and from the subsidence distribution recommended by the rules for the protection of structures. Applying a new boundary criterion - a relative subsidence of 0.03 - allows one to pass from two distributions to a single one, which still differs from the subsidence distribution of the protection rules. The use of a new geometric element - a virtual point of the subsidence trough - allows one to transform the actual subsidence distribution into the model distribution of the protection rules. When the subsidence curves are transformed, the boundary points shift and, consequently, so do the boundary angles.
NASA Astrophysics Data System (ADS)
Seif, Dariush; Ghoniem, Nasr M.
2014-12-01
A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô's calculus, rate equations for the first five moments of the size distribution in helium-vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma-facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations in the evolution of helium-vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point-defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. We discuss how the model can be extended to include full spatial resolution, and how a path-integral approach may be implemented if the distribution is known experimentally to stray significantly from a Gaussian description.
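Stripped of the helium-vacancy specifics, the moment-tracking idea can be illustrated with a scalar SDE: simulate an ensemble by Euler-Maruyama and follow the low-order moments that the rate equations evolve. The drift and noise terms below are arbitrary stand-ins for the absorption and emission rates, not the paper's physics.

```python
import numpy as np

rng = np.random.default_rng(11)
a = lambda x: 0.5 * (1.0 - 0.01 * x)            # illustrative drift (net absorption)
b = lambda x: 0.3 * np.sqrt(np.maximum(x, 0))   # illustrative fluctuation amplitude

n_paths, n_steps, dt = 20_000, 400, 0.05
x = np.full(n_paths, 5.0)                       # initial cluster size
for _ in range(n_steps):
    # Euler-Maruyama step for dx = a(x) dt + b(x) dW
    x += a(x) * dt + b(x) * np.sqrt(dt) * rng.normal(size=n_paths)
    x = np.maximum(x, 0.0)                      # sizes cannot go negative

print(f"mean size {x.mean():.2f}, variance {x.var():.2f}, "
      f"relative spread {x.std() / x.mean():.2f}")
```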
A Model for Selection of Eyespots on Butterfly Wings.
Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida
2015-01-01
The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions observed in nature. We therefore conclude that changes in the proximal boundary conditions are sufficient to explain the empirically observed distribution of eyespot focus points on the entire wing surface. The model predicts, subject to experimental verification, that the source strength of the activator at the proximal boundary should be lower in wing cells in which focus points form than in those that lack focus points. The model suggests that the number and locations of eyespot foci on the wing disc could be largely controlled by two kinds of gradients along two different directions: the first is the gradient in spatially varying parameters, such as the reaction rate, along the anterior-posterior direction on the proximal boundary of the wing cells, and the second is the gradient in source values of the activator along the veins in the proximal-distal direction of the wing cell.
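The boundary-driven selection mechanism can be sketched in one dimension with standard Schnakenberg kinetics standing in for the paper's system: only the Dirichlet value imposed at the "proximal vein" end is varied between runs. All parameters are generic Turing-unstable choices, not the paper's.

```python
import numpy as np

def simulate(u_left, n=101, steps=40_000, dt=0.005, dx=1.0):
    # Schnakenberg kinetics: u_t = du*u_xx + a - u + u^2 v, v_t = dv*v_xx + b - u^2 v
    du, dv, a, b = 1.0, 40.0, 0.1, 0.9
    rng = np.random.default_rng(0)
    u = (a + b) + 0.01 * rng.normal(size=n)   # noisy homogeneous steady state
    v = b / (a + b)**2 * np.ones(n)
    for _ in range(steps):
        lap_u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        lap_v = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2
        u += dt * (du * lap_u + a - u + u**2 * v)
        v += dt * (dv * lap_v + b - u**2 * v)
        u[0], u[-1] = u_left, a + b           # Dirichlet ends; left value is varied
        v[0], v[-1] = v[1], v[-2]             # no-flux ends for the inhibitor
    return u

for u_left in (0.5, 1.0, 1.5):                # sweep the "vein" boundary level
    print(u_left, simulate(u_left)[1:-1].max().round(2))
```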
The Trend Odds Model for Ordinal Data
Capuano, Ana W.; Dawson, Jeffrey D.
2013-01-01
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520
A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations
NASA Astrophysics Data System (ADS)
Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.
2017-04-01
Multi-dimensional reconstruction techniques for Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first three-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being only weakly dependent on planetary field models. It also does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources, or for any point source seen from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet, and how it can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and to assess the background noise from the radiation environment in the atmospheric measurements.
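The longitudinal constraint amounts to a linear least-squares fit at each (z, r); here is a minimal sketch with synthetic brightness samples (the truncation order and data are illustrative).

```python
# Fit B(lambda) = a0 + sum_m [a_m cos(m*lambda) + b_m sin(m*lambda)] by least squares.
import numpy as np

rng = np.random.default_rng(4)
lam = np.sort(rng.uniform(0, 2 * np.pi, 60))        # observed longitudes (rad)
bright = 1.0 + 0.3 * np.cos(lam) + 0.1 * np.sin(2 * lam) + 0.02 * rng.normal(size=60)

M = 3                                               # truncation order
cols = [np.ones_like(lam)]
for m in range(1, M + 1):
    cols += [np.cos(m * lam), np.sin(m * lam)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, bright, rcond=None)
print("Fourier coefficients:", np.round(coef, 3))
```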
Magnetic and gravity anomalies in the Americas
NASA Technical Reports Server (NTRS)
Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)
1981-01-01
The cleaning and magnetic tape storage of spherical Earth processing programs are reported. These programs include: NVERTSM which inverts total or vector magnetic anomaly data on a distribution of point dipoles in spherical coordinates; SMFLD which utilizes output from NVERTSM to compute total or vector magnetic anomaly fields for a distribution of point dipoles in spherical coordinates; NVERTG; and GFLD. Abstracts are presented for papers dealing with the mapping and modeling of magnetic and gravity anomalies, and with the verification of crustal components in satellite data.
Numerical modeling of laser assisted tape winding process
NASA Astrophysics Data System (ADS)
Zaami, Amin; Baran, Ismet; Akkerman, Remko
2017-10-01
Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-sea risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been of great interest, since the temperature at the nip point plays a key role in the mechanical performance of the interface. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, the optical behavior of laser radiation on circular surfaces is modeled numerically based on ray tracing and a non-specular reflection model. The non-specular reflection is implemented by considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape through a bidirectional reflectance distribution function (BRDF). The proposed model comprises a three-dimensional circular geometry, in which the effects of reflection from different parts of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which had not previously been considered in the literature.
NASA Astrophysics Data System (ADS)
You, Xu; Zhi-jian, Zong; Qun, Gao
2018-07-01
This paper describes a methodology for determining the position uncertainty distribution of an articulated arm coordinate measuring machine (AACMM). First, a model of the structural parameter uncertainties is established by statistical methods. Second, the position uncertainty volume of the AACMM in a given configuration is expressed using a simplified definite-integration method based on the structural parameter uncertainties; it is then used to evaluate the position accuracy of the AACMM in that configuration. Third, the configurations reaching a given working point are calculated by an inverse solution, and the position uncertainty distribution at that working point is determined; the working-point uncertainty can be evaluated by a weighting method. Lastly, the position uncertainty distribution over the workspace of the AACMM is described by a map. A single-point contrast test of a 6-joint AACMM was carried out to verify the effectiveness of the proposed method; the results show that the method can describe the position uncertainty of the AACMM and can be used to guide its calibration and the choice of the AACMM's accuracy area.
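The statistical propagation step can be illustrated with a Monte Carlo stand-in: perturb the structural parameters of a simple planar arm and summarize the end-point scatter in one configuration. The link lengths and uncertainty magnitudes are illustrative, and a real AACMM would use full 3D kinematics.

```python
import numpy as np

rng = np.random.default_rng(9)
L = np.array([0.4, 0.35, 0.2])            # nominal link lengths (m)
sigma_L, sigma_q = 5e-5, 2e-4             # length (m) and joint-angle (rad) SDs
q = np.radians([30.0, -45.0, 60.0])       # one configuration of interest

n = 50_000
Ls = L + sigma_L * rng.normal(size=(n, 3))
qs = q + sigma_q * rng.normal(size=(n, 3))
phi = np.cumsum(qs, axis=1)               # absolute link angles
pts = np.stack([(Ls * np.cos(phi)).sum(1), (Ls * np.sin(phi)).sum(1)], axis=1)

cov = np.cov(pts.T)                       # end-point covariance in this pose
print("position SD (mm):", 1e3 * np.sqrt(np.diag(cov)))
```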
Modeling elephant-mediated cascading effects of water point closure.
Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F
2015-03-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in the availability of artificial WPs, the levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows proved to be less affected by the closure of WPs than those of most of the other herbivore species. Our study contributes to ecologically informed decisions in wildlife management. The results from this modeling exercise imply that the long-term effects of this intervention strategy should always be investigated at an ecosystem scale.
Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.
Stone, J T; Self, R H; Howls, C J
2017-05-01
The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.
Large-scale modelling of permafrost distribution in Ötztal, Pitztal and Kaunertal (Tyrol)
NASA Astrophysics Data System (ADS)
Hoinkes, S.; Sailer, R.; Lehning, M.; Steinkogler, W.
2012-04-01
Permafrost is an important element of the global cryosphere and is seriously affected by climate change. Because permafrost is a mostly invisible phenomenon, its area-wide distribution is not properly known. Point measurements are conducted to determine whether permafrost is present at certain places. For area-wide distribution mapping, models have to be built and applied. Different kinds of permafrost distribution models already exist, based on different approaches and complexities. Differences in model approaches are mainly due to scaling issues, the availability of input data and the type of output parameters. In the presented work, we map and model the distribution of permafrost in the most elevated parts of the Ötztal, Pitztal and Kaunertal, which are situated in the Eastern European Alps and cover an area of approximately 750 km2. As air temperature is believed to be the best and simplest proxy for the energy balance in mountainous regions, we took only the mean annual air temperature from the interpolated ÖKLIM dataset of the Central Institute of Meteorology and Geodynamics to calculate areas with possible presence of permafrost. In a second approach we took a high-resolution digital elevation model (DEM) derived by airborne laser scanning and calculated possible permafrost areas based on elevation and aspect only, an approach established in the permafrost community for years. These two simple approaches are compared with each other, and to validate the models we compare the outputs with point measurements such as temperatures recorded at the snow-soil interface (BTS), continuous temperature data, rock glacier inventories and geophysical measurements. We show that the model based on the mean annual air temperature (≤ -2°C) alone predicts less permafrost on northerly exposed slopes and at lower elevations than the model based on elevation and aspect. In southern aspects, more permafrost area is predicted, but the overall pattern of permafrost distribution is similar. Given the input parameters, their different spatial resolutions and the complex topography of high alpine terrain, these differences in the results are to be expected. In a next step these two very simple approaches will be compared to a more complex hydro-meteorological three-dimensional simulation (ALPINE3D). First a one-dimensional model will be used to model permafrost presence at certain points and to calibrate the model parameters; the model will then be applied to the whole investigation area. The model output will be a map of probable permafrost distribution, in which energy balance, topography, snow cover, (sub)surface material and land cover play a major role.
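The two simple approaches compare naturally on a grid; the sketch below applies an MAAT threshold (downscaled with a fixed lapse rate) and an elevation/aspect rule to a synthetic DEM. All thresholds, the lapse rate and the aspect convention are illustrative stand-ins for the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
dem = 2000 + 1200 * rng.random((400, 400))          # synthetic elevations (m)
dy, dx = np.gradient(dem)
aspect = np.degrees(np.arctan2(-dx, dy)) % 360      # illustrative convention, 0 ~ north

# Approach 1: permafrost where MAAT <= -2 C, MAAT from a fixed lapse rate.
maat = 6.5 - 0.0065 * (dem - 500)                   # reference 6.5 C at 500 m
pf_temperature = maat <= -2.0

# Approach 2: elevation/aspect rule with a lower threshold on north faces.
north = (aspect < 45) | (aspect > 315)
pf_topo = np.where(north, dem > 2600, dem > 2900)

both = pf_temperature & pf_topo
print(f"fraction of temperature-based cells confirmed by the topo rule: "
      f"{both.sum() / max(pf_temperature.sum(), 1):.2f}")
```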
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
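The "horizontal" construction makes CPP simulation almost trivial: draw i.i.d. node depths until one exceeds the stem age. A minimal sketch with an arbitrary depth distribution (the choice of distribution is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 10.0                                  # time of origin (stem age)

depths = []
while True:
    h = rng.exponential(4.0)              # i.i.d. node depth; distribution arbitrary
    if h > T:
        break                             # first too-deep draw ends the tree
    depths.append(h)

n_tips = len(depths) + 1
print(f"{n_tips} extant tips; coalescence times: {np.round(sorted(depths), 2)}")
```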
Rates of profit as correlated sums of random variables
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
Hessian eigenvalue distribution in a random Gaussian landscape
NASA Astrophysics Data System (ADS)
Yamada, Masaki; Vilenkin, Alexander
2018-03-01
The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, in the leading order of 1/N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient for the small eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.
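As a quick numerical reference point for the small-eigenvalue end, one can sample GOE-like random Hessians directly and histogram the smallest eigenvalue; the normalization below is one common convention, and the Hessian of a Gaussian landscape differs from a pure GOE in detail.

```python
import numpy as np

rng = np.random.default_rng(8)
N, trials = 100, 2000
smallest = np.empty(trials)
for t in range(trials):
    A = rng.normal(size=(N, N))
    H = (A + A.T) / np.sqrt(2 * N)        # GOE normalization, bulk spectrum ~ [-2, 2]
    smallest[t] = np.linalg.eigvalsh(H).min()

print(f"mean smallest eigenvalue: {smallest.mean():.3f} (bulk edge near -2)")
```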
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; and scale-invariance, renormalization, fractality, and resilience to random perturbations.
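The defining property is easy to verify numerically: for intensity lambda(x) = alpha * x^(-alpha-1), counts above a level u are Poisson with mean u^(-alpha) and the population maximum is Fréchet distributed. A sketch with illustrative alpha and cutoff:

```python
import numpy as np

rng = np.random.default_rng(12)
alpha, eps, trials = 1.5, 0.05, 5000      # tail exponent, lower cutoff

maxima = np.empty(trials)
for t in range(trials):
    n = rng.poisson(eps ** (-alpha))      # Poisson count of points above eps
    # given the count, points are i.i.d. Pareto(alpha) with scale eps
    maxima[t] = (eps * (1 + rng.pareto(alpha, n))).max() if n else eps

u = 2.0
print(f"empirical P(max <= {u}): {(maxima <= u).mean():.3f}  "
      f"Frechet prediction: {np.exp(-u ** (-alpha)):.3f}")
```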
Major challenges for correlational ecological niche model projections to future climate conditions.
Peterson, A Townsend; Cobos, Marlon E; Jiménez-García, Daniel
2018-06-20
Species-level forecasts of distributional potential and likely distributional shifts, in the face of changing climates, have become popular in the literature in the past 20 years. Many refinements have been made to the methodology over the years, and the result has been an approach that considers multiple sources of variation in geographic predictions, and how that variation translates into both specific predictions and uncertainty in those predictions. Although numerous previous reviews and overviews of this field have pointed out a series of assumptions and caveats associated with the methodology, three aspects of the methodology have important impacts but have not been treated previously in detail. Here, we assess those three aspects: (1) effects of niche truncation on model transfers to future climate conditions, (2) effects of model selection procedures on future-climate transfers of ecological niche models, and (3) relative contributions of several factors (replicate samples of point data, general circulation models, representative concentration pathways, and alternative model parameterizations) to overall variance in model outcomes. Overall, the view is one of caution: although resulting predictions are fascinating and attractive, this paradigm has pitfalls that may bias and limit confidence in niche model outputs as regards the implications of climate change for species' geographic distributions. © 2018 New York Academy of Sciences.
Extravascular transport in normal and tumor tissues.
Jain, R K; Gerlowski, L E
1986-01-01
The transport characteristics of the normal and tumor tissue extravascular space provide the basis for the determination of the optimal dosage and schedule regimes of various pharmacological agents in the detection and treatment of cancer. In order for a drug to reach the cellular space where most therapeutic action takes place, several transport steps must first occur: (1) tissue perfusion; (2) permeation across the capillary wall; (3) transport through interstitial space; and (4) transport across the cell membrane. Any of these steps, including intracellular events such as metabolism, can be the rate-limiting step in the uptake of the drug, and these rate-limiting steps may be different in normal and tumor tissues. This review examines these transport limitations, first from an experimental point of view and then from a modeling point of view. Various types of experimental tumor models which have been used in animals to represent human tumors are discussed. Then, mathematical models of extravascular transport are discussed from the perspective of two approaches: compartmental and distributed. Compartmental models lump one or more sections of a tissue or body into a "compartment" to describe the time course of disposition of a substance. These models contain "effective" parameters which represent the entire compartment. Distributed models consider the structural and morphological aspects of the tissue to determine the transport properties of that tissue. These distributed models describe both the temporal and spatial distribution of a substance in tissues. Each of these modeling techniques is described in detail with applications for cancer detection and treatment in mind.
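A minimal compartmental sketch of the extravascular uptake picture: two "effective" compartments exchanging across the capillary wall, with plasma clearance. The rate constants are illustrative, not tissue-specific values.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_pe, k_ep, k_cl = 0.8, 0.3, 0.2        # plasma->ECS, ECS->plasma, clearance (1/h)

def rhs(t, c):
    cp, ce = c                          # plasma and extravascular concentrations
    return [-(k_pe + k_cl) * cp + k_ep * ce,
            k_pe * cp - k_ep * ce]

sol = solve_ivp(rhs, (0, 24), [1.0, 0.0], t_eval=np.linspace(0, 24, 49))
peak = sol.t[np.argmax(sol.y[1])]
print(f"extravascular concentration peaks at ~{peak:.1f} h")
```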
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
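The "simultaneous solution of fixed point equations" step can be illustrated with a toy pair of conditional-mean maps solved by simple iteration (the functions below are arbitrary contractions, not from the paper):

```python
# Solve m1 = f(m2), m2 = g(m1) by fixed-point iteration; converges because the
# composite map is a contraction (|0.4 * 0.3| < 1).
f = lambda m2: 1.0 + 0.4 * m2
g = lambda m1: 0.5 + 0.3 * m1

m1, m2 = 0.0, 0.0
for _ in range(100):
    m1, m2 = f(m2), g(m1)
print(m1, m2)   # analytical solution: m1 = 1.2/0.88, m2 = 0.5 + 0.3*m1
```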
Binder model system to be used for determination of prepolymer functionality
NASA Technical Reports Server (NTRS)
Martinelli, F. J.; Hodgkin, J. H.
1971-01-01
Development of a method for determining the functionality distribution of prepolymers used for rocket binders is discussed. Research has been concerned with accurately determining the gel point of a model polyester system containing a single trifunctional crosslinker, and the application of these methods to more complicated model systems containing a second trifunctional crosslinker, monofunctional ingredients, or a higher functionality crosslinker. Correlations of observed with theoretical gel points for these systems would allow the methods to be applied directly to prepolymers.
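A hedged sketch of the classical Flory-Stockmayer gel-point estimate that such gel-point measurements are typically compared against; the branching-coefficient form below is the textbook one for a difunctional prepolymer cured with an f-functional crosslinker, and should be checked against the specific chemistry before use.

```python
# Flory-Stockmayer: gelation when the branching coefficient alpha = 1/(f - 1),
# with alpha = pA*pB*rho / (1 - pA*pB*(1 - rho)) and rho the fraction of B
# groups carried by the crosslinker. Textbook form; parameters illustrative.
def critical_conversion(f, rho, pB=1.0):
    # solving pA*pB*rho / (1 - pA*pB*(1 - rho)) = 1/(f - 1) for pA gives:
    return 1.0 / (pB * (rho * (f - 1) + (1 - rho)))

print(critical_conversion(f=3, rho=0.3))   # e.g. 30% of B groups on the triol
```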
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
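The two approaches can be sketched on a Weibull time-to-event fit; for brevity the multivariate Normal below takes its covariance from the bootstrap replicates, which is only one of several ways to obtain it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
times = stats.weibull_min.rvs(1.4, scale=10.0, size=100, random_state=rng)

# Approach 1: non-parametric bootstrap refits of (shape, scale).
boot = np.array([
    stats.weibull_min.fit(rng.choice(times, times.size), floc=0)[::2]
    for _ in range(500)
])

# Approach 2: multivariate Normal over the parameter estimates.
mean, cov = boot.mean(axis=0), np.cov(boot.T)
mvn_draws = rng.multivariate_normal(mean, cov, 500)

for name, draws in [("bootstrap", boot), ("MVN", mvn_draws)]:
    med = draws[:, 1] * np.log(2.0) ** (1.0 / draws[:, 0])   # median survival time
    print(f"{name}: median time {np.percentile(med, [2.5, 50, 97.5]).round(2)}")
```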
Computational analysis on plug-in hybrid electric motorcycle chassis
NASA Astrophysics Data System (ADS)
Teoh, S. J.; Bakar, R. A.; Gan, L. M.
2013-12-01
The plug-in hybrid electric motorcycle (PHEM) is an alternative for promoting sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis based on the Chopper concept is analysed for application in a PHEM. The chassis three-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus indicating whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points are the locations of highest stress, which might cause the chassis to fail; for a motorcycle chassis they occur at the joints at the triple tree and the rear absorber bracket. In conclusion, the computational analysis predicts the stress distribution and provides a guideline for developing a safe prototype chassis.
A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.
Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro
2017-08-01
This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and clustering of gaze points, registered with eye-tracking technology, was studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen to overcome any possible methodological problems related to the subjective expectations of the experimenters about the informative contents of the image in addition to a computational model to simulate group differences. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
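For intuition, a heavy-tailed (Lévy-flight-like) distribution of saccade amplitudes can be characterized by its tail exponent; a higher frequency of long saccades corresponds to a smaller exponent. The sketch below generates Pareto-tailed amplitudes and recovers the exponent with the maximum-likelihood (Hill) estimator; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative saccade amplitudes with a Pareto (power-law) tail.
alpha_true, x_min = 1.5, 1.0
amplitudes = x_min * (1 - rng.random(5000)) ** (-1 / alpha_true)

# Maximum-likelihood (Hill) estimate of the tail exponent above x_min.
tail = amplitudes[amplitudes >= x_min]
alpha_hat = len(tail) / np.log(tail / x_min).sum()
print(f"estimated tail exponent: {alpha_hat:.2f}")
```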
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included the calculation of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.
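A single point autocorrelation coefficient of a measured gust velocity record can be estimated directly; below is a minimal sketch using a synthetic AR(1) series as a stand-in for flight data:

```python
import numpy as np

def autocorr(u, max_lag):
    """Single point autocorrelation coefficients of a gust velocity record."""
    u = u - u.mean()
    var = np.dot(u, u) / len(u)
    return np.array([np.dot(u[:len(u) - k], u[k:]) / (len(u) * var)
                     for k in range(max_lag + 1)])

# Synthetic AR(1) series standing in for a measured gust velocity record.
rng = np.random.default_rng(2)
u = np.zeros(5000)
for k in range(1, len(u)):
    u[k] = 0.95 * u[k - 1] + rng.standard_normal()

print(autocorr(u, 5))   # decays roughly as 0.95**lag
```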
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that, despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief that a change point exists during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
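The testing scheme can be sketched in a few lines: fit the constant-slope null model and the bilinear alternative by least squares over candidate change points, then simulate the null distribution of the fit improvement by parametric bootstrap. The sketch below uses simulated scores and a known noise level rather than the paper's mixed-effects model:

```python
import numpy as np

rng = np.random.default_rng(3)

def rss_linear(t, y):
    X = np.column_stack([np.ones_like(t), t])
    res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return res @ res

def rss_bilinear(t, y):
    best = np.inf
    for tau in t[2:-2]:                 # candidate change point locations
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        best = min(best, res @ res)
    return best

# Simulated cognitive scores: decline accelerates at t = 12 (illustrative).
t = np.arange(20.0)
y = 28.0 - 0.3 * t - 1.2 * np.maximum(t - 12.0, 0.0) + rng.normal(0, 1, 20)

stat = rss_linear(t, y) - rss_bilinear(t, y)

# Parametric bootstrap of the statistic under the straight-line null model
# (noise SD assumed known here for brevity).
beta = np.linalg.lstsq(np.column_stack([np.ones_like(t), t]), y, rcond=None)[0]
null = [rss_linear(t, yb) - rss_bilinear(t, yb)
        for yb in beta[0] + beta[1] * t + rng.normal(0, 1, (500, 20))]
print("bootstrap p-value:", np.mean(np.array(null) >= stat))
```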
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or the electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies the contributions of the point sources in the shadow region, are compared. One application of this research can be found in the improved modelling of real time ultrasonic non-destructive evaluation experiments.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
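A stripped-down version of the Ising-type scheme is sketched below: spins at sampled grid nodes are held fixed, and unsampled spins are flipped greedily so that the nearest-neighbour correlation energy approaches a target value. The grid size, sampling fraction, and target energy are assumptions; the actual method uses multilevel discretization and a more elaborate cost function:

```python
import numpy as np

rng = np.random.default_rng(4)

L = 64
spins = rng.choice([-1, 1], size=(L, L))
observed = rng.random((L, L)) < 0.2      # 20% of nodes carry sample values
spins[observed] = 1                      # stand-in for discretized sample data

def total_energy(s):
    """Nearest-neighbour correlation energy on a periodic grid."""
    return -(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

def local_field(s, i, j):
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

target = -0.5 * 2 * L * L               # assumed sample-based target energy
E = total_energy(spins)
for _ in range(200_000):
    i, j = rng.integers(L, size=2)
    if observed[i, j]:
        continue                         # respect the sample values locally
    dE = 2 * spins[i, j] * local_field(spins, i, j)
    # greedy move: accept the flip only if it brings E closer to the target
    if abs(E + dE - target) <= abs(E - target):
        spins[i, j] *= -1
        E += dE
```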
Higher Moments of Net-Kaon Multiplicity Distributions at STAR
NASA Astrophysics Data System (ADS)
Xu, Ji
2017-01-01
Fluctuations of conserved quantities such as baryon number (B), electric charge number (Q), and strangeness number (S) are sensitive to the correlation length and can be used to probe non-Gaussian fluctuations near the critical point. Experimentally, higher moments of the multiplicity distributions have been used to search for the QCD critical point in heavy-ion collisions. In this paper, we report the efficiency-corrected cumulants and their ratios of mid-rapidity (|y| < 0.5) net-kaon multiplicity distributions in Au+Au collisions at √s_NN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV collected in 2010, 2011, and 2014 with STAR at RHIC. The centrality and energy dependence of the cumulants and their ratios are presented. Furthermore, comparisons with baseline calculations (Poisson) and non-critical-point models (UrQMD) are also discussed.
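The reported cumulant ratios are straightforward to compute from event-by-event data; the sketch below does so for a toy Skellam-distributed net-kaon sample, for which the Poisson baseline values are known:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy event-by-event net-kaon multiplicities: difference of two Poissons
# (a Skellam distribution, the usual Poisson baseline).
net_k = rng.poisson(5.0, 100_000) - rng.poisson(4.0, 100_000)

d = net_k - net_k.mean()
c1 = net_k.mean()
c2 = (d ** 2).mean()
c3 = (d ** 3).mean()
c4 = (d ** 4).mean() - 3 * c2 ** 2

# Commonly reported ratios C1/C2, C3/C2, C4/C2; the Skellam baseline for
# these parameters is 1/9, 1/9, and 1, respectively.
print(c1 / c2, c3 / c2, c4 / c2)
```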
Ji, Haoran; Wang, Chengshan; Li, Peng; ...
2017-09-20
The integration of distributed generators (DGs) exacerbates feeder power flow fluctuation and load unbalance in active distribution networks (ADNs). Unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. Flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. By regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. The original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve computational efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
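As a toy illustration of casting load balancing as a convex program, the sketch below balances three feeder loadings with SOP-like power injections that sum to zero, using the cvxpy package. The network data and capacity limit are hypothetical; this is a stand-in for, not a reproduction of, the paper's SOCP model:

```python
import numpy as np
import cvxpy as cp

# Hypothetical per-unit loadings of three feeders joined by a three-terminal SOP.
load = np.array([0.9, 0.4, 0.6])

p = cp.Variable(3)                       # active power injected per terminal
balanced = load + p
objective = cp.Minimize(cp.sum_squares(balanced - cp.sum(balanced) / 3))
constraints = [cp.sum(p) == 0,           # terminals only exchange power
               cp.norm(p, "inf") <= 0.3] # converter capacity limit (assumed)
cp.Problem(objective, constraints).solve()
print(np.round(p.value, 3))  # shifts load from the heaviest to lighter feeders
```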
Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approach...
An asymptotic Reissner-Mindlin plate model
NASA Astrophysics Data System (ADS)
Licht, Christian; Weller, Thibaut
2018-06-01
A mathematical study via variational convergence of a periodic distribution of classical linearly elastic thin plates softly abutted together shows that it is not necessary to use a different continuum model or to make constitutive symmetry hypotheses as starting points to deduce the Reissner-Mindlin plate model.
Direct statistical modeling and its implications for predictive mapping in mining exploration
NASA Astrophysics Data System (ADS)
Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila
2010-05-01
Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can reflect specificities of the local setting. Moreover, though crucial for statistical computations, the "minimum requirements" for input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly stated. As a result, it is often difficult to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach focuses on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as granites - and faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to be dimensionless with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points to a given type of object (polygons or polylines) is described by a probability distribution. The locations of points are computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface, the mean polyline length, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.
Modeling vibration response and damping of cables and cabled structures
NASA Astrophysics Data System (ADS)
Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.
2015-02-01
In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.
Barrett, Bruce; Brown, Roger; Mundt, Marlon
2008-02-01
Evaluative health-related quality-of-life instruments used in clinical trials should be able to detect small but important changes in health status. Several approaches to minimal important difference (MID) and responsiveness have been developed. To compare anchor-based and distributional approaches to important difference and responsiveness for the Wisconsin Upper Respiratory Symptom Survey (WURSS), an illness-specific quality-of-life outcomes instrument. Participants with community-acquired colds self-reported daily using the WURSS-44. Distribution-based methods calculated standardized effect size (ES) and standard error of measurement (SEM). Anchor-based methods compared daily interval changes to global ratings of change, using: (1) standard MID methods based on correspondence to ratings of "a little better" or "somewhat better," and (2) two-level multivariate regression models. About 150 adults were monitored throughout their colds (1,681 sick days): 88% were white, 69% were women, and 50% had completed college. The mean age was 35.5 years (SD = 14.7). WURSS scores increased 2.2 points from the first to second day, and then dropped by an average of 8.2 points per day from days 2 to 7. The SEM averaged 9.1 during these 7 days. Standard methods yielded a between-day MID of 22 points. Regression models of MID projected 11.3-point daily changes. Dividing these estimates of small-but-important difference by pooled SDs yielded coefficients of .425 for standard MID, .218 for the regression model, .177 for SEM, and .157 for ES. These imply per-group sample sizes of 870 using ES, 616 for SEM, 302 for the regression model, and 89 for standard MID, assuming alpha = .05, beta = .20 (80% power), and two-tailed testing. Distribution- and anchor-based approaches provide somewhat different estimates of small but important difference, which in turn can have substantial impact on trial design.
SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL
2014-06-01
Purpose: To automatically validate megavoltage beams modeled in the XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ treatment planning systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fit of the dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analyses show gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for clinical use was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.
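The core of the gamma analysis can be sketched compactly; the 1D version below computes, for each reference point, the minimum combined dose-difference/distance-to-agreement metric with the 3%/3 mm tolerances, with the pass criterion gamma <= 1. Array layout and normalisation choices are illustrative:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, coords, dose_tol=0.03, dist_tol=3.0):
    """1D gamma analysis: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over the evaluated
    profile. Global normalisation to the maximum reference dose."""
    gam = np.empty(len(dose_ref))
    dose_norm = dose_tol * dose_ref.max()
    for i, (x, d) in enumerate(zip(coords, dose_ref)):
        dd = (dose_eval - d) / dose_norm
        dx = (coords - x) / dist_tol
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

# Synthetic profiles: a Gaussian beam profile and a slightly shifted copy.
x = np.linspace(-50.0, 50.0, 201)                    # positions in mm
ref = np.exp(-x ** 2 / 800.0)
ev = np.exp(-(x - 1.0) ** 2 / 820.0)
print("pass rate:", (gamma_index(ref, ev, x) <= 1).mean())
```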
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-08-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for aquifer thermal energy storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In a study site in Bierbeek, Belgium, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6 %) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2 %) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to get a better estimate on ATES efficiency and distribution of thermal energy.
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-04-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for Aquifer Thermal Energy Storage (ATES) systems and wells. Recent model studies indicate that meter scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In this paper, the influence of centimeter scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3 - 3.6%) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6 - 10.2%) on the energy output of the ATES system. It is concluded that it is important to incorporate small scale heterogeneities in heat transport models to get a better estimate on ATES efficiency and distribution of thermal energy.
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
Analytical model of a corona discharge from a conical electrode under saturation
NASA Astrophysics Data System (ADS)
Boltachev, G. Sh.; Zubarev, N. M.
2012-11-01
Exact partial solutions are found for the electric field distribution in the outer region of a stationary unipolar corona discharge from an ideal conical needle in the space-charge-limited current mode, with allowance for the electric field dependence of the ion mobility. It is assumed that only the very tip of the cone is responsible for the discharge, i.e., that the ionization zone is a point. The solutions are obtained by joining the spherically symmetric potential distribution in the drift space and the self-similar potential distribution in the space-charge-free region. Such solutions are outside the framework of the conventional Deutsch approximation, according to which the space charge insignificantly influences the shape of equipotential surfaces and electric lines of force. The dependence of the corona discharge saturation current on the apex angle of the conical electrode and the applied potential difference is derived. A simple analytical model is suggested that describes drift in the point-plane electrode geometry under saturation as a superposition of two exact solutions for the field potential. In terms of this model, the angular distribution of the current density over the massive plane electrode is derived, which agrees well with Warburg's empirical law.
Theory and Test of Stress Resistance
1992-04-01
[Report documentation page; only fragments of the abstract are recoverable: "...the word death has its effect on the ink colour blue and not green. The main assumption made by this model is that the two effects can be..."; "In this report, we developed a laboratory model to test [stress resistance]... The research also points to two (Continued)". Subject terms: stress resistance; attention. Approved for public release; distribution is unlimited.]
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
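The two-sample design rule is easy to express in code; the sketch below applies it to an assumed one-compartment plasma profile (all constants illustrative): the early sample is taken where the concentration first reaches the LOQ, the late sample where the declining limb falls back to that same concentration:

```python
import numpy as np

# Assumed one-compartment oral plasma profile (all constants illustrative).
A, ka, ke, loq = 10.0, 1.5, 0.2, 0.5
t = np.linspace(0.01, 24.0, 2000)
c = A * (np.exp(-ke * t) - np.exp(-ka * t))

# Early BAL sample: as early as possible with concentration >= LOQ.
i_early = np.argmax(c >= loq)
t_early, c_early = t[i_early], c[i_early]

# Late BAL sample: on the declining limb, where C(t) equals the early value.
i_peak = np.argmax(c)
i_late = i_peak + np.argmin(np.abs(c[i_peak:] - c_early))
print(f"early sample at t = {t_early:.2f} h, late at t = {t[i_late]:.2f} h")
```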
NASA Astrophysics Data System (ADS)
Pankratov, E. L.
2018-05-01
We introduce a model of the redistribution of point radiation defects, their mutual interaction, and the redistribution of their simplest complexes (divacancies and diinterstitials) in a multilayer structure. The model makes it possible to describe qualitatively the nonmonotonicity of the concentration distributions of radiation defects at the interfaces between layers of the multilayer structure. This nonmonotonicity was recently found experimentally. To take it into account, we modify a model recently used in the literature for the analysis of the distribution of radiation defect concentrations. To analyze the model we used an approach for solving boundary-value problems that does not require matching solutions at the interfaces between layers of the considered multilayer structures.
Regression analysis using dependent Polya trees.
Schörgendorfer, Angela; Branscum, Adam J
2013-11-30
Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.
Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics
NASA Astrophysics Data System (ADS)
Bostelmann, J.; Breitkopf, U.; Heipke, C.
2017-07-01
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: Since the beginning of 2004 the High Resolution Stereo Camera (HRSC) regularly acquires long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with different viewing direction, and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined to build DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because size, position, resolution and image quality of the strips in these blocks are heterogeneous, also the quality and distribution of the tie points vary. In absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of these tie points is preferable. Besides, their total number should be limited because of computational reasons. In this paper, we present an algorithm, which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points, filtered by the algorithm, shows a more homogenous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality, with significantly shorter computation time. In this work, we present experiments with MC-30 half-tile blocks, which confirm our idea for reaching a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data.
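The thinning idea can be sketched as a per-cell quota over an object-space grid: points are ranked by quality, and a point is kept only while its cell still lacks a connection between its pair of strips. The record layout, strip names, and quota below are assumptions for illustration; the actual algorithm handles multi-ray points and further constraints:

```python
# Tie point records: (x, y, strip_a, strip_b, quality); layout assumed.
def thin_tie_points(points, cell_size, per_cell_quota=1):
    """Keep, per object-space grid cell, at most 'per_cell_quota' points for
    each pair of connected strips, preferring higher-quality points."""
    kept, counts = [], {}
    for p in sorted(points, key=lambda r: -r[4]):    # best quality first
        key = (int(p[0] // cell_size), int(p[1] // cell_size), p[2], p[3])
        if counts.get(key, 0) < per_cell_quota:
            counts[key] = counts.get(key, 0) + 1
            kept.append(p)
    return kept

pts = [(10.2, 4.1, "h0988", "h1004", 0.9),
       (10.7, 4.3, "h0988", "h1004", 0.6),   # same cell and strip pair: dropped
       (10.4, 4.2, "h1004", "h1022", 0.7)]   # different strip pair: kept
print(thin_tie_points(pts, cell_size=5.0))
```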
NASA Astrophysics Data System (ADS)
Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar
2017-06-01
In this paper, we investigate the performance of dual-hop free space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that channel state information is available at both transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the K-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. Further, BER results are compared for different modulation schemes.
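A Monte-Carlo sketch of the channel model is shown below: a K-distributed irradiance is generated as the product of an exponential and a gamma variate, and the average OOK bit error rate is obtained by averaging a conditional Gaussian-noise error probability over the fading. This uses one common convention for the conditional BER; parameter values are illustrative and the pointing error term is omitted:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(6)

# K-distributed irradiance: product of an exponential and a gamma variate,
# normalised so that E[I] = 1; alpha is the turbulence channel parameter.
alpha, snr, n = 2.0, 10.0, 200_000
irradiance = rng.exponential(1.0, n) * rng.gamma(alpha, 1.0 / alpha, n)

# Conditional OOK bit error rate Q(sqrt(snr) * I), averaged over the fading.
ber = np.mean(0.5 * erfc(np.sqrt(snr) * irradiance / np.sqrt(2.0)))
print(f"average BER at electrical SNR {snr:.0f}: {ber:.3e}")
```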
NASA Astrophysics Data System (ADS)
Cao, M.-H.; Jiang, H.-K.; Chin, J.-S.
1982-04-01
An improved flat-fan spray model is used for the semi-empirical analysis of liquid fuel distribution downstream of a plain orifice injector under cross-stream air flow. The model assumes that, due to the aerodynamic force of the high-velocity cross air flow, the injected fuel immediately forms a flat-fan liquid sheet perpendicular to the cross flow. Once the droplets have been formed, the trajectories of individual droplets determine fuel distribution downstream. Comparison with test data shows that the proposed model accurately predicts liquid fuel distribution at any point downstream of a plain orifice injector under high-velocity, low-temperature uniform cross-stream air flow over a wide range of conditions.
NASA Astrophysics Data System (ADS)
Wang, Jingmei; Gong, Adu; Li, Jing; Chen, Yanling
2017-04-01
A typhoon is a kind of strong weather system formed over tropical or subtropical oceans. China, located on the west side of the Pacific Ocean, is the country affected by typhoons most frequently and seriously. To provide theoretical support for effectively reducing the damage caused by typhoons, the variation law of typhoon frequency is explored by analyzing the distribution of typhoon paths and landing sites, the sphere of influence, and the statistical characteristics of typhoons for every 5 years. In this study, the typhoon point data set was formed using the best track data set (0.1° × 0.1°) compiled by the China Meteorological Administration from 1950 to 2014. Using the Point to Line tool in the ArcGIS software, the typhoon paths were produced from the point data set. The influence sphere of a typhoon is calculated from its Euclidean distance, whose threshold is set to 1°. The typhoon landing sites were extracted using the Chinese vector layer provided by the research group. By counting the frequency of typhoons, the landing sites, and the sphere of influence, some conclusions can be drawn as follows. In recent years, the number of typhoons generated has decreased and typhoon intensity has been relatively stable, but the area affected by typhoons has increased. This can be seen in the statistical and spatial distribution characteristics of typhoons in China. In terms of the frequency of typhoon landing, the number of typhoons landing in China has increased while the total number of typhoons has decreased. In terms of the distribution of landing sites, the range of typhoon landing fluctuates; however, during the process of fluctuation, the range is gradually expanding. For example, in the south of China, Hainan Island is affected by typhoons more frequently, while China's northeast region is also gradually becoming affected, which was extremely unusual before. Key words: spatial point model, distribution of typhoon, frequency of typhoon
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
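A minimal sketch of the heartbeat-monitoring building block, with hypothetical names and timeout, is given below: a node is flagged faulty when its last heartbeat is older than the timeout.

```python
import time

class HeartbeatMonitor:
    """Declare a node faulty when its heartbeat is older than the timeout."""

    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_beat = {}

    def beat(self, node_id):
        self.last_beat[node_id] = time.monotonic()

    def faulty_nodes(self):
        now = time.monotonic()
        return [n for n, t in self.last_beat.items()
                if now - t > self.timeout_s]

# Usage: nodes call beat() periodically; a supervisor polls faulty_nodes()
# and can shut down the corresponding actuators before the system is unsafe.
monitor = HeartbeatMonitor(timeout_s=0.05)
monitor.beat("brake_node")
time.sleep(0.1)
print(monitor.faulty_nodes())            # ['brake_node']
```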
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
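A simplified screening cut point computation along these lines is sketched below, assuming log-normal signals: outliers are trimmed with Tukey fences as a crude stand-in for the mixture-model handling of pre-existing positives, and the cut point targets a 5% false-positive rate using a prediction-interval-style quantile:

```python
import numpy as np
from scipy import stats

def screening_cut_point(signals, fpr=0.05):
    """Parametric cut point from drug-naive specimens, log-normal assumption."""
    x = np.log(np.asarray(signals, dtype=float))
    # Tukey-fence trim: a crude stand-in for mixture-model handling of
    # pre-existing ADA-positive specimens.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    x = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]
    # One-sided prediction-interval-style bound targeting the given
    # false-positive rate for a future specimen.
    n = len(x)
    t = stats.t.ppf(1 - fpr, df=n - 1)
    return float(np.exp(x.mean() + t * x.std(ddof=1) * np.sqrt(1 + 1 / n)))

rng = np.random.default_rng(7)
naive = rng.lognormal(0.0, 0.25, 120)    # illustrative drug-naive signals
print(screening_cut_point(naive))
```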
Models for the hotspot distribution
NASA Technical Reports Server (NTRS)
Jurdy, Donna M.; Stefanick, Michael
1990-01-01
Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
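The power-law (fractal) characterization is easy to reproduce: estimate the slope of log N(<r) against log r for distances about the concentration centre. The sketch below checks the estimator on uniform points on a disc, which should give a dimension near 2:

```python
import numpy as np

rng = np.random.default_rng(8)

def fractal_dimension(dists, r_min, r_max, n_bins=20):
    """Slope of log N(<r) versus log r for distances about a centre point."""
    r = np.logspace(np.log10(r_min), np.log10(r_max), n_bins)
    counts = np.array([(dists < ri).sum() for ri in r])
    slope, _ = np.polyfit(np.log(r), np.log(counts), 1)
    return slope

# Check: uniform points on a disc (N(<r) ~ r^2) should give a value near 2;
# a set concentrated along a line would give a value closer to 1.
disc_dists = np.sqrt(rng.random(2000))
print(fractal_dimension(disc_dists, 0.05, 1.0))
```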
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled and that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX. It is concluded that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Evaluation of a multi-point method for determining acoustic impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Parrott, Tony L.
1988-01-01
An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.
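The essence of the multi-point method is a least-squares fit of a two-wave standing-wave model to pressures measured at several axial positions. The sketch below recovers the reflection factor and normalized impedance from synthetic complex pressure data; sign conventions vary and all values are illustrative:

```python
import numpy as np

def impedance_from_points(x, p, k):
    """Least-squares fit of p(x) = A exp(-ikx) + B exp(+ikx) to complex
    pressures p measured at positions x (sample face at x = 0); returns
    the normalised specific impedance. Sign conventions vary."""
    basis = np.column_stack([np.exp(-1j * k * x), np.exp(1j * k * x)])
    (A, B), *_ = np.linalg.lstsq(basis, p, rcond=None)
    R = B / A                      # reflection factor at x = 0
    return (1 + R) / (1 - R)

# Synthetic check at 1 kHz in air with a known reflection factor.
k = 2 * np.pi * 1000.0 / 343.0
x = np.linspace(0.05, 0.40, 8)
R_true = 0.5 * np.exp(1j * 0.3)
p = np.exp(-1j * k * x) + R_true * np.exp(1j * k * x)
print(impedance_from_points(x, p, k), (1 + R_true) / (1 - R_true))
```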
Atmospheric Teleconnections From Cumulants
NASA Astrophysics Data System (ADS)
Sabou, F.; Kaspi, Y.; Marston, B.; Schneider, T.
2011-12-01
Multi-point cumulants of fields such as vorticity provide a way to visualize atmospheric teleconnections, complementing other approaches such as the method of empirical orthogonal functions (EOFs). We calculate equal-time two-point cumulants of the vorticity from NCEP reanalysis data during the period 1980 -- 2010 and from direct numerical simulation (DNS) using an idealized dry general circulation model (GCM) (Schneider and Walker, 2006). Extratropical correlations seen in the NCEP data are qualitatively reproduced by the model. Three- and four-point cumulants accumulated from DNS quantify departures of the probability distribution function from a normal distribution, shedding light on the efficacy of direct statistical simulation (DSS) of atmosphere dynamics by cumulant expansions (Marston, Conover, and Schneider, 2008; Marston 2011). Lagged-time two-point cumulants between temperature gradients and eddy kinetic energy (EKE), accumulated by DNS of an idealized moist aquaplanet GCM (O'Gorman and Schneider, 2008), reveal dynamics of storm tracks. Regions of enhanced baroclinicity (as found along the eastern boundary of continents) lead to a local enhancement of EKE and a suppression of EKE further downstream as the storm track self-destructs (Kaspi and Schneider, 2011).
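The basic object here, the equal-time two-point cumulant, is a short computation over time snapshots of a gridded field; a minimal sketch:

```python
import numpy as np

def two_point_cumulant(z):
    """Equal-time two-point cumulant c_ij = <z_i z_j> - <z_i><z_j> from
    time snapshots z of shape (n_times, n_points)."""
    mean = z.mean(axis=0)
    return z.T @ z / z.shape[0] - np.outer(mean, mean)

# Toy check: two correlated 'grid points' show up as off-diagonal structure.
rng = np.random.default_rng(9)
z = rng.standard_normal((1000, 4))
z[:, 1] += 0.8 * z[:, 0]
print(np.round(two_point_cumulant(z), 2))
```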
A Gibbs point field model for the spatial pattern of coronary capillaries
NASA Astrophysics Data System (ADS)
Karch, R.; Neumann, M.; Neumann, F.; Ullrich, R.; Neumüller, J.; Schreiner, W.
2006-09-01
We propose a Gibbs point field model for the pattern of coronary capillaries in transverse histologic sections from human hearts, based on the physiology of oxygen supply from capillaries to tissue. To specify the potential energy function of the Gibbs point field, we draw on an analogy between the equation of steady-state oxygen diffusion from an array of parallel capillaries to the surrounding tissue and Poisson's equation for the electrostatic potential of a two-dimensional distribution of identical point charges. The influence of factors other than diffusion is treated as a thermal disturbance. On this basis, we arrive at the well-known two-dimensional one-component plasma, a system of identical point charges exhibiting a weak (logarithmic) repulsive interaction that is completely characterized by a single dimensionless parameter. By variation of this parameter, the model is able to reproduce many characteristics of real capillary patterns.
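A Metropolis sketch of the two-dimensional one-component plasma is given below: identical points in a periodic box with logarithmic pair repulsion, governed by a single coupling parameter. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(10)

# 2D one-component plasma: identical points in a periodic box with
# logarithmic repulsion; 'gamma' is the single coupling parameter.
n, gamma, box, steps = 64, 2.0, 1.0, 20_000
pts = rng.random((n, 2)) * box

def pair_energy(p, q):
    d = np.abs(p - q)
    d = np.minimum(d, box - d)           # minimum-image distance
    return -np.log(np.hypot(d[0], d[1]) + 1e-9)

for _ in range(steps):
    i = rng.integers(n)
    trial = (pts[i] + 0.05 * rng.standard_normal(2)) % box
    dE = sum(pair_energy(trial, pts[j]) - pair_energy(pts[i], pts[j])
             for j in range(n) if j != i)
    if dE <= 0 or rng.random() < np.exp(-gamma * dE):
        pts[i] = trial                   # Metropolis acceptance
```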
ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION
SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.
2015-01-01
Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
NASA Astrophysics Data System (ADS)
Azezan, Nur Arif; Ramli, Mohammad Fadzli; Masran, Hafiz
2017-11-01
In this paper, we discuss the literature on blood collection and distribution based on the vehicle routing problem. This problem emerges when the process from collection to stocking must be completed in a timely manner. We also modify the mathematical model so that it suits the general collection of blood. Its algorithm and solution methods are also discussed briefly in this paper.
2017-09-01
[Report documentation page; only fragments of the abstract are recoverable: "Test and... ambiguities and identify high-value decision points? This thesis explores how formalization of these experience-based decisions as a process model... representing a T&E event may reveal high-value decision nodes where certain decisions carry more weight or potential for impacts to a successful test." Approved for public release; distribution is unlimited.]
Orthodontic intrusion of maxillary incisors: a 3D finite element method study
Saga, Armando Yukio; Maruo, Hiroshi; Argenta, Marco André; Maruo, Ivan Toshio; Tanaka, Orlando Motohiro
2016-01-01
Objective: In orthodontic treatment, intrusion movement of maxillary incisors is often necessary. Therefore, the objective of this investigation is to evaluate the initial distribution patterns and magnitude of compressive stress in the periodontal ligament (PDL) in a simulation of orthodontic intrusion of maxillary incisors, considering the points of force application. Methods: Anatomic 3D models reconstructed from cone-beam computed tomography scans were used to simulate maxillary incisors intrusion loading. The points of force application selected were: centered between central incisors brackets (LOAD 1); bilaterally between the brackets of central and lateral incisors (LOAD 2); bilaterally distal to the brackets of lateral incisors (LOAD 3); bilaterally 7 mm distal to the center of brackets of lateral incisors (LOAD 4). Results and Conclusions: Stress concentrated at the PDL apex region, irrespective of the point of orthodontic force application. The four load models showed distinct contour plots and compressive stress values over the midsagittal reference line. The contour plots of central and lateral incisors were not similar in the same load model. LOAD 3 resulted in more balanced compressive stress distribution. PMID:27007765
Description of waves in inhomogeneous domains using Heun's equation
NASA Astrophysics Data System (ADS)
Bednarik, M.; Cervenka, M.
2018-04-01
There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of the inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. As the interval for which the solution is sought includes two regular singular points, a method is proposed which resolves this problem using only the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.
Human variability in mercury toxicokinetics and steady state biomarker ratios.
Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M
2000-10-01
Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
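The probabilistic alternative the authors propose can be sketched simply: replace the point-estimate ratio with a distribution and propagate it. Below, a lognormal hair-to-blood ratio (geometric mean and spread chosen for illustration within the range of reported study means) converts an assumed blood concentration into a distribution of hair concentrations:

```python
import numpy as np

rng = np.random.default_rng(11)

# Lognormal hair-to-blood ratio: geometric mean 250 ppm per mg/L with an
# illustrative geometric SD spanning the reported range of study means.
ratio = rng.lognormal(np.log(250.0), np.log(1.3), 10_000)

blood_hg = 4.0e-3                        # assumed maternal blood Hg, mg/L
hair_hg = ratio * blood_hg               # implied hair concentration, ppm
print(np.percentile(hair_hg, [2.5, 50, 97.5]))
```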
Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass
NASA Astrophysics Data System (ADS)
Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard
2013-06-01
The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ ∝ |Vzz| and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
NASA Technical Reports Server (NTRS)
VandeVen, C.; Weiss, S. B.
2001-01-01
Our challenge is to model plant species distributions in complex montane environments using disparate sources of data, including topography, geology, and hyperspectral data. From an ecologist's point of view, species distributions are determined by local environment and disturbance history, while spectral data are 'ancillary.' From a remote sensor's perspective, however, spectral data provide a picture of what vegetation is there, while topographic and geologic data are ancillary. In order to bridge the gap, all available data should be used to get the best possible prediction of species distributions using complex multivariate techniques implemented on a GIS. Vegetation reflects local climatic and nutrient conditions, both of which can be modeled, allowing predictive mapping of vegetation distributions. Geologic substrate strongly affects chemical, thermal, and physical properties of soils, while climatic conditions are determined by local topography. As elevation increases, precipitation increases and temperature decreases. Aspect, slope, and surrounding topography determine potential insolation, so that south-facing slopes are warmer and north-facing slopes cooler at a given elevation. Topographic position (ridge, slope, canyon, or meadow) and slope angle affect sediment accumulation and soil depth. These factors combine as complex environmental gradients, and underlie many features of plant distributions. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data, digital elevation models, digitized geologic maps, and 378 ground control points were used to predictively map species distributions in the central and southern White Mountains, along the western boundary of the Basin and Range province. Minimum Noise Fraction (MNF) bands were calculated from the visible and near-infrared AVIRIS bands, and combined with digitized geologic maps and topographic variables using Canonical Correspondence Analysis (CCA). CCA allows for modeling species 'envelopes' in multidimensional environmental space, which can then be projected across entire landscapes.
Dynamics of Nearest-Neighbour Competitions on Graphs
NASA Astrophysics Data System (ADS)
Rador, Tonguç
2017-10-01
Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge from the graph, which unambiguously defines two agents; we give a point to the agent with the larger point count with probability p and to the laggard with probability q, such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations—as that of a conservation law—graph colouring and hyperplane arrangements. We discuss the properties of the model for general graphs but we confine the in-depth study to d-dimensional tori. We present a detailed study for the ring graph, which includes a chemical potential approximation to calculate all its statistics that gives rather accurate results. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves in two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large d limit the d-dimensional torus exhibits an inverse sine law for the distribution of asymptotic speeds.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that the initial treatment response of a small group or a single subject is reflected in the long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which was selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
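The step-ahead construction described above — a prior outcome multiplied by a ratio of two-parameter Weibull terms evaluated at successive times — can be sketched as follows. This is a minimal, plausible reading of the SPRE recursion, assuming the shape and scale parameters have already been estimated at the change point; the function name and time-stepping details are illustrative, not the author's implementation.

```python
import numpy as np
from scipy.stats import weibull_min

def spre_forecast(y_cp, t_cp, t_future, shape, scale):
    """Step outcomes forward from the change point: each new estimate
    is the prior outcome times a ratio of Weibull pdf values at
    successive times (one plausible reading of the SPRE recursion)."""
    y, t_prev = y_cp, t_cp
    preds = []
    for t in t_future:
        ratio = (weibull_min.pdf(t, shape, scale=scale)
                 / weibull_min.pdf(t_prev, shape, scale=scale))
        y = y * ratio
        preds.append(y)
        t_prev = t
    return np.array(preds)

# Hypothetical usage: change point at week 6, outcome 92 kg
print(spre_forecast(92.0, 6.0, np.arange(7, 27), shape=1.4, scale=30.0))
```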
NASA Astrophysics Data System (ADS)
Adam, Khaled F.; Long, Zhengdong; Field, David P.
2017-04-01
In 7xxx series aluminum alloys, constituent large and small second-phase particles are present during the deformation process. The fraction and spatial distribution of these second-phase particles significantly influence the recrystallized structure, kinetics, and texture in the subsequent treatment. In the present work, the Monte Carlo Potts model was used to model particle-stimulated nucleation (PSN)-dominated recrystallization and grain growth in high-strength aluminum alloy 7050. The driving force for recrystallization is deformation-induced stored energy, which is also strongly affected by the coarse particle distribution. The actual microstructure and particle distribution of hot-rolled plate were used as an initial point for modeling of recrystallization during the subsequent solution heat treatment. Measurements from bright-field TEM images were performed to enhance qualitative interpretations of the developed microstructure. The influence of texture inhomogeneity has been demonstrated from a theoretical point of view using pole figures. Additionally, in situ annealing measurements in SEM were performed to track the orientational and microstructural changes and to provide experimental support for the recrystallization mechanism of PSN in AA7050.
Experimental design for dynamics identification of cellular processes.
Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T
2014-03-01
We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data are used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance under this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value, under this distribution, of the output as a function of time. We prove the consistency of this estimator (uniform convergence to the true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
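A minimal sketch of the MINE selection rule described above, assuming a generic model function and a finite set of candidate measurement times; the names here are hypothetical, and a real implementation would also handle measurement noise and experimental constraints.

```python
import numpy as np

def mine_next_point(model, param_samples, candidate_times):
    """Return the candidate time whose model output has the largest
    variance across parameter samples from the current distribution."""
    variances = [np.var([model(t, p) for p in param_samples])
                 for t in candidate_times]
    return candidate_times[int(np.argmax(variances))]

# Toy usage: exponential decay with an uncertain rate parameter
rng = np.random.default_rng(0)
samples = rng.normal(1.0, 0.3, size=200)     # posterior-like draws
t_next = mine_next_point(lambda t, k: np.exp(-k * t),
                         samples, np.linspace(0.1, 5.0, 50))
print(t_next)
```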
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Sodium Atoms in the Lunar Exotail: Observed Velocity and Spatial Distributions
NASA Technical Reports Server (NTRS)
Line, Michael R.; Mierkiewicz, E. J.; Oliversen, R. J.; Wilson, J. K.; Haffner, L. M.; Roesler, F. L.
2011-01-01
The lunar sodium tail extends long distances due to radiation pressure on sodium atoms in the lunar exosphere. Our earlier observations determined the average radial velocity of sodium atoms moving down the lunar tail beyond Earth along the Sun-Moon-Earth line (i.e., the anti-lunar point) to be 12.4 km/s. Here we use the Wisconsin H-alpha Mapper to obtain the first kinematically resolved maps of the intensity and velocity distribution of this emission over a 15 × 15 deg region on the sky near the anti-lunar point. We present both spatially and spectrally resolved observations obtained over four nights around new moon in October 2007. The spatial distribution of the sodium atoms is elongated along the ecliptic with the location of the peak intensity drifting 3 degrees east along the ecliptic per night. Preliminary modeling results suggest that the spatial and velocity distributions in the sodium exotail are sensitive to the near-surface lunar sodium velocity distribution and that observations of this sort, along with detailed modeling, offer new opportunities to describe the time history of lunar surface sputtering over several days.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
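For the special case the abstract highlights — control-point covariance matrices that are scalar multiples of the identity — the estimation step reduces to a weighted least-squares fit of the affine transform. The sketch below shows only that reduced step, with hypothetical variable names; the paper's full errors-in-variables treatment is more involved.

```python
import numpy as np

def affine_gls(X, Y, weights):
    """Weighted least-squares affine fit for control points whose
    localization covariances are scalar multiples of the identity;
    weights ~ 1 / sigma_i**2. X, Y are (n, 2) arrays of CPs."""
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])      # homogeneous coordinates
    W = np.diag(weights)
    # Normal equations: (Xh^T W Xh) M = Xh^T W Y, with M a 3x2 matrix
    M = np.linalg.solve(Xh.T @ W @ Xh, Xh.T @ W @ Y)
    return M[:2].T, M[2]        # 2x2 linear part and translation vector
```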
Change point detection of the Persian Gulf sea surface temperature
NASA Astrophysics Data System (ADS)
Shirvani, A.
2017-01-01
In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated pre-whitened time series. The pre-whitened PGSSTA time series were utilized as the input to the change point models. Both the applied parametric and nonparametric CPMs estimated the change point in the PGSSTA in 1992. The PGSSTA follow the normal distribution both up to 1992 and thereafter, but with a different mean value after 1992. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative; however, it was positive after the detected change point. Unlike for the PGSSTA, the applied CPMs suggested no change point in the Niño3.4SSTA time series.
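As a rough illustration of the pipeline described — pre-whitening a serially correlated series, then applying a Student's-t change point scan — here is a simplified sketch. The actual CPM framework controls false-alarm rates sequentially; this version simply locates the split that maximizes the two-sample t statistic, and the lag-1 pre-whitening form is an assumption.

```python
import numpy as np
from scipy import stats

def prewhiten(x):
    """Remove lag-1 autocorrelation: z_t = x_t - r1 * x_{t-1}."""
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return x[1:] - r1 * x[:-1]

def t_change_point(z):
    """Scan all admissible split points; return the index and t value
    that maximize the absolute two-sample t statistic."""
    best_t, best_k = 0.0, None
    for k in range(2, len(z) - 2):
        t, _ = stats.ttest_ind(z[:k], z[k:], equal_var=True)
        if abs(t) > abs(best_t):
            best_t, best_k = t, k
    return best_k, best_t
```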
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of the template structures, and the Cα atoms of the superposed templates form a point cloud for each position of the target protein, which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects the new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
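The core sampling loop — drawing Cα positions from a per-residue 3D normal distribution and accepting or rejecting them with a simulated-annealing rule — might look like the following sketch. The clash_energy function and cooling schedule are placeholders, not MTMG's actual scoring.

```python
import numpy as np

def anneal_position(mean, cov, clash_energy, n_steps=1000, T0=1.0):
    """Resample one Calpha position from its 3D normal point cloud,
    accepting or rejecting by a Metropolis simulated-annealing rule."""
    rng = np.random.default_rng(0)
    x = rng.multivariate_normal(mean, cov)
    E = clash_energy(x)
    for step in range(n_steps):
        T = T0 * (1.0 - step / n_steps) + 1e-6     # linear cooling
        x_new = rng.multivariate_normal(mean, cov)
        E_new = clash_energy(x_new)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            x, E = x_new, E_new
    return x
```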
Effect of electromagnetic field on Kordylewski clouds formation
NASA Astrophysics Data System (ADS)
Salnikova, Tatiana; Stepanov, Sergey
2018-05-01
In previous papers the authors suggested a clarification of the phenomenon of appearance-disappearance of the Kordylewski clouds - accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbation of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of the dust cloud formation. These clouds move along periodic orbits in a small vicinity of these points. To continue this research we suggest a mathematical model to investigate also the electromagnetic influences that arise when charged dust particles are considered in the vicinity of the triangular libration points of the Earth-Moon system. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.
NASA Astrophysics Data System (ADS)
Ertaş, Mehmet; Keskin, Mustafa
2015-03-01
By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes and exhibit a dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss the dynamic phase diagrams with the dynamic phase diagrams that were obtained within Glauber-type stochastic dynamics based on the mean-field theory.
Spatial Modeling for Resources Framework (SMRF)
USDA-ARS's Scientific Manuscript database
Spatial Modeling for Resources Framework (SMRF) was developed by Dr. Scott Havens at the USDA Agricultural Research Service (ARS) in Boise, ID. SMRF was designed to increase the flexibility of taking measured weather data and distributing the point measurements across a watershed. SMRF was developed...
Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate → validate → predict} approach. The applicability of the method is limited due to modeling approximations. In ...
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin; Krueger, Ronald
2001-01-01
Finite element (FE) analysis was performed on 3-point and 4-point bending test configurations of ninety-degree-oriented glass-epoxy and graphite-epoxy composite beams to identify deviations from beam theory predictions. Both linear and geometric non-linear analyses were performed using the ABAQUS finite element code. The 3-point and 4-point bending specimens were first modeled with two-dimensional elements. Three-dimensional finite element analyses were then performed for selected 4-point bending configurations to study the stress distribution across the width of the specimens and to compare the results to the stresses computed from two-dimensional plane strain and plane stress analyses and the stresses from beam theory. Stresses for all configurations were analyzed at load levels corresponding to the measured transverse tensile strength of the material.
Coulomb Mechanics And Landscape Geometry Explain Landslide Size Distribution
NASA Astrophysics Data System (ADS)
Jeandet, L.; Steer, P.; Lague, D.; Davy, P.
2017-12-01
It is generally observed that the dimensions of large bedrock landslides follow power-law scaling relationships. In particular, the non-cumulative frequency distribution (PDF) of bedrock landslide area is well characterized by a negative power-law above a critical size, with an exponent of 2.4. However, the respective roles of bedrock mechanical properties, landscape shape and triggering mechanisms in the scaling properties of landslide dimensions are still poorly understood. Yet, unravelling the factors that control this distribution is required to better estimate the total volume of landslides triggered by large earthquakes or storms. To tackle this issue, we develop a simple probabilistic 1D approach to compute the PDF of rupture depths in a given landscape. The model is applied to randomly sampled points along hillslopes of studied digital elevation models. At each point location, the model determines the range of depth and angle leading to unstable rupture planes, by applying a simple Mohr-Coulomb rupture criterion only to the rupture planes that intersect the downhill surface topography. This model therefore accounts for both rock mechanical properties, friction and cohesion, and landscape shape. We show that this model leads to realistic landslide depth distributions, with a power-law arising when the number of samples is high enough. The modeled PDF of landslide size obtained for several landscapes matches those from earthquake-driven landslide catalogues for the same landscapes. In turn, this allows us to invert the landslide effective mechanical parameters, friction and cohesion, associated with those specific events, including the Chi-Chi, Wenchuan, Niigata and Gorkha earthquakes. The friction and cohesion ranges (25-35 degrees and 5-20 kPa) are in good agreement with previously inverted values. Our results demonstrate that reduced-complexity mechanics is efficient in modelling the distribution of unstable depths, and show the role of landscape variability in the landslide size distribution.
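A reduced sketch of the per-point stability test, using the classical infinite-slope Mohr-Coulomb criterion as a stand-in; the paper's 1D model additionally requires candidate rupture planes to intersect the downhill topography, which is omitted here, and the parameter values are illustrative.

```python
import numpy as np

def unstable_depths(slope_deg, c=10e3, phi_deg=30.0, gamma=26e3):
    """Return depths (m) that fail an infinite-slope Mohr-Coulomb
    criterion (factor of safety < 1). c: cohesion (Pa), phi: friction
    angle (deg), gamma: rock unit weight (N/m^3)."""
    depths = np.linspace(0.5, 100.0, 400)
    th, phi = np.radians(slope_deg), np.radians(phi_deg)
    fs = (c + gamma * depths * np.cos(th)**2 * np.tan(phi)) / \
         (gamma * depths * np.sin(th) * np.cos(th))
    return depths[fs < 1.0]

# Steeper slopes destabilize a wider range of depths
print(unstable_depths(35.0).min(), unstable_depths(45.0).min())
```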
Long, Jean-Alexandre; Daanen, Vincent; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc
2007-11-01
The objective of this study was to determine the added value of real-time three-dimensional (4D) ultrasound guidance of prostatic biopsies on a prostate phantom in terms of the precision of guidance and distribution. A prostate phantom was constructed. A real-time 3D ultrasonograph connected to a transrectal 5.9 MHz volumic transducer was used. Fourteen operators performed 336 biopsies with 2D guidance then 4D guidance according to a 12-biopsy protocol. Biopsy tracts were modelled by segmentation in a 3D ultrasound volume. Specific software allowed visualization of biopsy tracts in the reference prostate and evaluated the zone biopsied. A comparative study was performed to determine the added value of 4D guidance compared to 2D guidance by evaluating the precision of entry points and target points. The distribution was evaluated by measuring the volume investigated and by a redundancy ratio of the biopsy points. The precision of the biopsy protocol was significantly improved by 4D guidance (p = 0.037). No increase of the biopsy volume and no improvement of the distribution of biopsies were observed with 4D compared to 2D guidance. The real-time 3D ultrasound-guided prostate biopsy technique on a phantom model appears to improve the precision and reproducibility of a biopsy protocol, but the distribution of biopsies does not appear to be improved.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with linear dimensions L=4,6,8,10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations do not provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
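Because the Beta model is fully determined by the first two concentration moments, the probability of exceeding a threshold follows directly from moment matching, as in this sketch; the normalization by a maximum concentration c_max is an assumption of the illustration.

```python
from scipy.stats import beta

def beta_exceedance(mean, var, threshold, c_max=1.0):
    """Beta pdf for normalized concentration C/c_max, matched to the
    first two moments; returns P(C > threshold)."""
    m, v = mean / c_max, var / c_max**2
    k = m * (1.0 - m) / v - 1.0      # from mean/variance of Beta(a, b)
    a, b = m * k, (1.0 - m) * k
    return beta.sf(threshold / c_max, a, b)

# Mean 0.2, variance 0.01 (normalized): probability of exceeding 0.5
print(beta_exceedance(0.2, 0.01, 0.5))
```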
2010-01-01
Needle exchange programs chase political as well as epidemiological dragons, carrying within them both implicit moral and political goals. In the exchange model of syringe distribution, injection drug users (IDUs) must provide used needles in order to receive new needles. Distribution and retrieval are co-existent in the exchange model. Likewise, limitations on how many needles can be received at a time compel addicts to have multiple points of contact with professionals where the virtues of treatment and detox are impressed upon them. The centre of gravity for syringe distribution programs needs to shift from needle exchange to needle distribution, which provides unlimited access to syringes. This paper provides a case study of the Washington Needle Depot, a program operating under the syringe distribution model, showing that the distribution and retrieval of syringes can be separated with effective results. Further, the experience of IDUs is utilized, through paid employment, to provide a vulnerable population of people with clean syringes to prevent HIV and HCV. PMID:20047690
Learning stochastic reward distributions in a speeded pointing task.
Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C
2008-04-23
Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
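The optimal-strategy benchmark in such tasks is typically computed by maximizing expected gain over candidate aim points, combining the subject's motor noise with the penalty-region statistics. The following Monte Carlo sketch uses a hypothetical scoring geometry (circular target at the origin, a single defender) purely to illustrate the computation.

```python
import numpy as np

def expected_gain(aim, defender_sampler, hit_radius=1.0,
                  reward=10, penalty=-20, motor_sd=0.3, n=20000):
    """Monte Carlo expected score for one aim point: motor noise on the
    endpoint plus random defender jumps drawn from their distribution."""
    rng = np.random.default_rng(1)
    endpoints = aim + rng.normal(0.0, motor_sd, size=(n, 2))
    defenders = defender_sampler(rng, n)                  # (n, 2) jumps
    blocked = np.linalg.norm(endpoints - defenders, axis=1) < hit_radius
    in_target = np.linalg.norm(endpoints, axis=1) < 2.0   # target at origin
    return np.where(in_target & ~blocked, reward, penalty).mean()

# Toy defender jumping about a point left of the target; scan aim points
# on a grid and pick the argmax of expected_gain to get the benchmark.
g = expected_gain(np.array([0.5, 0.0]),
                  lambda rng, n: rng.normal([-1.0, 0.0], 0.5, size=(n, 2)))
print(g)
```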
Design and modelling of a link monitoring mechanism for the Common Data Link (CDL)
NASA Astrophysics Data System (ADS)
Eichelberger, John W., III
1994-09-01
The Common Data Link (CDL) is a full duplex, point-to-point microwave communications system used in imagery and signals intelligence collection systems. It provides a link between two remote Local Area Networks (LANs) aboard collection and surface platforms. In a hostile environment, there is an overwhelming need to dynamically monitor the link and thus limit the impact of jamming. This work describes steps taken to design, model, and evaluate a link monitoring system suitable for the CDL. The monitoring system is based on features and monitoring constructs of the Link Control Protocol (LCP) in the Point-to-Point Protocol (PPP) suite. The CDL model is based on a system of two remote Fiber Distributed Data Interface (FDDI) LANs. In particular, the policies and mechanisms associated with monitoring are described in detail. An implementation of the required mechanisms using the OPNET network engineering tool is described. Performance data related to monitoring parameters are reported. Finally, integration of the FDDI-CDL model with the OPNET Internet model is described.
Dearden, John C
2003-08-01
Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available, and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships, and this review examines and discusses the work published in this area, and concentrates particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.
NASA Astrophysics Data System (ADS)
Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.
2013-01-01
Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by the design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole-probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained. Factors influencing the sensitivity distribution were studied. Comparisons between CI probes with point-to-point triangular electrode pairs and back-to-back triangular electrode pairs were made based on the analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.
NASA Astrophysics Data System (ADS)
Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre
2006-12-01
The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.
NASA Astrophysics Data System (ADS)
Rehmer, Donald E.
Results from a mathematical programming model were analyzed to 1) determine the least-cost options for infrastructure development of geologic storage of CO2 in the Illinois Basin, and 2) perform an analysis of a number of CO2 emission tax and oil price scenarios in order to implement development of the least-cost pipeline networks for distribution of CO2. The model, using mixed integer programming, tested the hypothesis of whether viable EOR sequestration sites can serve as nodal points or hubs to expand the CO2 delivery infrastructure to locations more distal from the emissions sources. This is in contrast to previous model results based on a point-to-point model having direct pipeline segments from each CO2 capture site to each storage sink. There is literature on the spoke and hub problem that relates to airline scheduling as well as maritime shipping. A large-scale ship assignment problem that utilized integer linear programming was run on Excel Solver and described by Mourao et al. (2001). Other literature indicates that aircraft assignment on spoke and hub routes can also be achieved using integer linear programming (Daskin and Panayotopoulos, 1989; Hane et al., 1995). The distribution concept is basically the reverse of the "tree and branch" type (Rothfarb et al., 1970) gathering systems for oil and natural gas that industry has been developing for decades. Model results indicate that the inclusion of hubs as variables in the model yields lower transportation costs for geologic carbon dioxide storage than previous models of point-to-point infrastructure geometries. Tabular results and GIS maps of the selected scenarios illustrate that EOR sites can serve as nodal points or hubs for distribution of CO2 to distal oil field locations as well as deeper saline reservoirs. Revenue amounts and capture percentages both show an improvement over solutions in which the hubs are not allowed to enter the solution. Other results indicate that geologic storage of CO2 in saline aquifers does not enter the solutions selected by the model until the CO2 emissions tax approaches $50/tonne. CO2 capture and storage begins to occur when the oil price is above $24.42 a barrel based on the constraints of the model. The annual storage capacity of the basin is nearly maximized when the net price of oil is as low as $40 per barrel and the CO2 emission tax is $60/tonne. The results from every subsequent scenario that was examined by this study demonstrate that EOR utilizing anthropogenically captured CO2 will earn net revenue, and thus represents an economically viable option for CO2 storage in the Illinois Basin.
Modeling of projection electron lithography
NASA Astrophysics Data System (ADS)
Mack, Chris A.
2000-07-01
Projection Electron Lithography (PEL) has recently become a leading candidate for the next generation of lithography systems after the successful demonstration of SCAPEL by Lucent Technologies and PREVAIL by IBM. These systems use a scattering membrane mask followed by a lens with limited angular acceptance range to form an image of the mask when illuminated by high energy electrons. This paper presents an initial modeling system for such types of projection electron lithography systems. Monte Carlo modeling of electron scattering within the mask structure creates an effective mask 'diffraction' pattern, to borrow the standard optical terminology. A cutoff of this scattered pattern by the imaging 'lens' provides an electron energy distribution striking the wafer. This distribution is then convolved with a 'point spread function,' the results of a Monte Carlo scattering calculation of a point beam of electrons striking the resist coated substrate and including the effects of beam blur. Resist exposure and development models from standard electron beam lithography simulation are used to simulate the final three-dimensional resist profile.
Steady state numerical solutions for determining the location of MEMS on projectile
NASA Astrophysics Data System (ADS)
Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.
2018-03-01
This paper is devoted to comparing the numerical solutions of the steady and unsteady state heat distribution models on a projectile. Here, the best location for installing the MEMS on the projectile, based on the surface temperature, is investigated. The numerical iteration methods Jacobi and Gauss-Seidel have been elaborated to solve the steady state heat distribution model on the projectile. The results using Jacobi and Gauss-Seidel are identical, but the iteration costs of the two methods differ: Jacobi's method requires 350 iterations, while Gauss-Seidel requires 188 iterations, faster than Jacobi's method. The comparison of the simulation by the steady state model with the unsteady state model from a reference is satisfactory. Moreover, the best candidate location for installing the MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature of the points considered. The temperatures using Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
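The difference between the two iterations is only whether freshly updated neighbors are read within the same sweep, which is why Gauss-Seidel typically converges in fewer iterations. A minimal sketch on a generic rectangular grid with fixed boundary temperatures (not the paper's projectile geometry):

```python
import numpy as np

def solve_laplace(u, tol=1e-4, gauss_seidel=True):
    """Five-point-stencil iteration with Dirichlet boundaries.
    Gauss-Seidel reads freshly updated neighbors; Jacobi reads only
    the previous sweep, so it usually needs more iterations."""
    it = 0
    while True:
        it += 1
        src = u if gauss_seidel else u.copy()   # Jacobi reads old values
        diff = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = 0.25 * (src[i-1, j] + src[i+1, j] +
                              src[i, j-1] + src[i, j+1])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            return u, it

grid = np.zeros((20, 20))
grid[0, :] = 310.0                 # hypothetical hot boundary (Kelvin)
print(solve_laplace(grid.copy(), gauss_seidel=False)[1],   # Jacobi sweeps
      solve_laplace(grid.copy(), gauss_seidel=True)[1])    # Gauss-Seidel
```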
Production of black holes and their angular momentum distribution in models with split fermions
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Starkman, Glenn D.; Stojkovic, Dejan
2006-05-01
In models with TeV-scale gravity it is expected that mini black holes will be produced in near-future accelerators. On the other hand, TeV-scale gravity is plagued with many problems like fast proton decay, unacceptably large n-n¯ oscillations, flavor changing neutral currents, large mixing between leptons, etc. Most of these problems can be solved if different fermions are localized at different points in the extra dimensions. We study the cross section for the production of black holes and their angular momentum distribution in these models with “split” fermions. We find that, for a fixed value of the fundamental mass scale, the total production cross section is reduced compared with models where all the fermions are localized at the same point in the extra dimensions. Fermion splitting also implies that the bulk component of the black hole angular momentum must be taken into account in studies of the black hole decay via Hawking radiation.
Applications in bridge structure health monitoring using distributed fiber sensing
NASA Astrophysics Data System (ADS)
Feng, Yafei; Zheng, Huan; Ge, Huiliang
2017-10-01
In this paper, Brillouin Optical Time Domain Analysis (BOTDA) is proposed to solve the problem that traditional point sensors make comprehensive safety monitoring of bridges and similar structures difficult to realize. This technology not only breaks through the bottleneck of traditional point sensors by realizing distributed measurement of temperature and strain along a transmission path, but can also be used for damage identification, fracture positioning, and settlement monitoring of bridges and other structures. The effectiveness and novelty of the technology are demonstrated by comparing tests on an indoor model beam and a field bridge, and the significance of distributed optical fiber sensing for the monitoring of important bridge structures is fully explained.
Elucidation of Iron Gettering Mechanisms in Boron-Implanted Silicon Solar Cells
Laine, Hannu S.; Vahanissi, Ville; Liu, Zhengjun; ...
2017-12-15
To facilitate cost-effective manufacturing of boron-implanted silicon solar cells as an alternative to BBr3 diffusion, we performed a quantitative test of the gettering induced by solar-typical boron implants with the potential for low saturation current density emitters (< 50 fA/cm2). We show that depending on the contamination level and the gettering anneal chosen, such boron-implanted emitters can induce more than a 99.9% reduction in bulk iron point defect concentration. The iron point defect results as well as synchrotron-based nano-X-ray-fluorescence investigations of iron precipitates formed in the implanted layer imply that, with the chosen experimental parameters, iron precipitation is the dominant gettering mechanism, with segregation-based gettering playing a smaller role. We reproduce the measured iron point defect and precipitate distributions via kinetics modeling. First, we simulate the structural defect distribution created by the implantation process, and then we model these structural defects as heterogeneous precipitation sites for iron. Unlike previous theoretical work on gettering via boron or phosphorus implantation, our model is free of adjustable simulation parameters. The close agreement between the model and experimental results indicates that the model successfully captures the necessary physics to describe the iron gettering mechanisms operating in boron-implanted silicon. Furthermore, this modeling capability allows high-performance, cost-effective implanted silicon solar cells to be designed.
Mendoza, C.; Hartzell, S.H.
1988-01-01
We have inverted the teleseismic P waveforms recorded by stations of the Global Digital Seismograph Network for the 8 July 1986 North Palm Springs, California, the 28 October 1983 Borah Peak, Idaho, and the 19 September 1985 Michoacan, Mexico, earthquakes to recover the distribution of slip on each of the faults using a point-by-point inversion method with smoothing and positivity constraints. Results of the inversion indicate that the Global Digital Seismograph Network data are useful for deriving fault dislocation models for moderate to large events. However, a wide range of frequencies is necessary to infer the distribution of slip on the earthquake fault. Although the long-period waveforms define the size (dimensions and seismic moment) of the earthquake, data at shorter period provide additional constraints on the variation of slip on the fault. Dislocation models obtained for all three earthquakes are consistent with a heterogeneous rupture process where failure is controlled largely by the size and location of high-strength asperity regions. -from Authors
Rijal, Omar M; Abdullah, Norli A; Isa, Zakiah M; Noor, Norliza M; Tawfiq, Omar F
2013-01-01
The knowledge of teeth positions on the maxillary arch is useful in the rehabilitation of the edentulous patient. A combination of angular (θ) and linear (l) variables representing the positions of four teeth was initially proposed as the shape descriptor of the maxillary dental arch. Three categories of shape were established, each having a multivariate normal distribution. It may be argued that 4 selected teeth on the standardized digital images of the dental casts could be considered insufficient for representing shape. However, increasing the number of points would create problems with dimensionality, and proof of the existence of the multivariate normal distribution is extremely difficult. This study investigates the ability of Fourier descriptors (FD) using all maxillary teeth to find alternative shape models. Eight FD terms were sufficient to represent 21 points on the arch. Using these 8 FD terms as an alternative shape descriptor, three categories of shape were verified, each category having a complex normal distribution.
NASA Astrophysics Data System (ADS)
Liu, Ke; Wang, Chang; Liu, Guo-liang; Ding, Ning; Sun, Qi-song; Tian, Zhi-hong
2017-04-01
To investigate the formation of one kind of typical inter-dendritic crack around the triple point region in continuous casting (CC) slabs during the operation of soft reduction, fully coupled 3D thermo-mechanical finite element models were developed, and plant trials were carried out in a domestic continuous casting machine. Three possible types of soft reduction amount distribution (SRAD) in the soft reduction region were analyzed. The relationship between the typical inter-dendritic cracks and soft reduction conditions is presented and demonstrated in production practice. Considering the critical strain of internal crack formation, a critical tolerance for the soft reduction amount distribution and related casting parameters has been proposed for a better contribution of soft reduction to the internal quality of slabs. The typical inter-dendritic crack around the triple point region was eliminated effectively through the application of the proposed suggestions for continuous casting of X70 pipeline steel in industrial practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zehua; Tang, Xian-Zhu; McDevitt, Christopher J.
Generation of runaway electron (RE) beams can possibly induce the most deleterious effects of tokamak disruptions. A number of recent numerical calculations have confirmed the formation of a RE bump in the energy distribution by taking into account the synchrotron radiation damping force due to the REs' gyromotion. Here, we present a detailed examination of how the bump location changes at different pitch angles and of the characteristics of the RE pitch-angle distribution. Although REs moving along the magnetic field are preferentially accelerated and then populate the phase space of larger pitch angle mainly through diffusion, an off-axis peak can still form due to the presence of the vortex structure, which causes accumulation of REs at low pitch angle. A simplified Fokker-Planck model and its semi-analytical solutions based on local expansions around the O point are used to illustrate the characteristics of the RE distribution around the O point of the runaway vortex in phase space. The calculated energy location of the O point together with the local energy and pitch-angle distributions agree with the full numerical solution.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1993-01-01
Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
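The defining property of a PCM — that a small set of point charges exactly reproduces reference multipole moments — is easy to check numerically. The charges and positions below are placeholders for illustration, not the fitted values from the paper.

```python
import numpy as np

def multipoles(charges, positions):
    """Monopole (total charge) and dipole moment of a point-charge set;
    a PCM-style charge set should match the reference molecular values."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(positions, dtype=float)
    monopole = q.sum()
    dipole = (q[:, None] * r).sum(axis=0)   # units: charge * length
    return monopole, dipole

# Hypothetical 3-site collinear charge model (charges in e, positions in A)
q = [-0.35, 0.55, -0.20]
r = [[0.0, 0.0, 0.0], [0.65, 0.0, 0.0], [1.128, 0.0, 0.0]]
print(multipoles(q, r))
```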
Phase-plane analysis to an “anisotropic” higher-order traffic flow model
NASA Astrophysics Data System (ADS)
Wu, Chun-Xiu
2018-04-01
The qualitative theory of differential equations is applied to investigate the traveling wave solution of an "anisotropic" higher-order viscous traffic flow model under the Lagrange coordinate system. The types and stabilities of the equilibrium points are discussed in the phase plane. Through numerical simulation, the overall distribution structures of the trajectories are drawn to analyze the relation between the phase diagram and the selected conservative solution variables, and the influences of the parameters on the system are studied. The limit cycle, limit cycle-spiral point, saddle-spiral point and saddle-nodal point solutions are obtained. These steady-state solutions provide a good explanation for the phenomena of oscillatory and homogeneous congestion in real-world traffic.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
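The key convenience of the log-normal assumption is that a chance constraint has a closed-form deterministic equivalent via the normal quantile: if a random allowable load Q ~ LogNormal(μ, σ²), then P(x ≤ Q) ≥ α is equivalent to x ≤ exp(μ + σΦ⁻¹(1 − α)). A sketch, with hypothetical variable roles:

```python
import numpy as np
from scipy.stats import norm

def lognormal_capacity_bound(mu, sigma, alpha):
    """Deterministic equivalent of P(load <= Q) >= alpha when the
    allowable load Q ~ LogNormal(mu, sigma): the decision variable
    must not exceed the (1 - alpha) quantile of Q."""
    return np.exp(mu + sigma * norm.ppf(1.0 - alpha))

# At 95% reliability the usable capacity shrinks well below the median
print(lognormal_capacity_bound(np.log(100.0), 0.4, 0.95))
```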
NASA Technical Reports Server (NTRS)
Wrotniak, J. A.; Yodh, G. B.
1985-01-01
The x-y controversy is studied by introducing models with as many features (except for the x and y distributions) in common as possible, to avoid an extrapolation problem; only primary energies of 500 TeV are considered. To prove the point, Monte Carlo simulations are performed of EAS generated by 500 TeV vertical primary protons. Four different nuclear interaction models were used. Two of them are described elsewhere. The other two are: (1) Model M-Y00 - with inclusive x and y distributions behaving in a scaling way; and (2) Model M-F00 - at and below ISR energies (1 TeV in the lab) exactly equivalent to the above, then gradually changing to provide the distributions in rapidity at 155 TeV as given by SPS proton-antiproton data. This was achieved by a gradual decrease in the scale unit of the x distributions of produced secondaries as the interaction energy increases. Other modifications to the M-Y00 model were made.
Phase transition and information cascade in a voting model
NASA Astrophysics Data System (ADS)
Hisakado, M.; Mori, S.
2010-08-01
In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters—copycat and independent—and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of the votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the conclusion of a Keynesian beauty contest when there is an information cascade.
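A toy simulation of the two voter types — independents voting with a fixed probability, copycats voting in proportion to the current tally — illustrates the slow convergence of the voting rate when copycats dominate. This is a deliberate simplification of the paper's binomial/beta-binomial construction:

```python
import numpy as np

def vote_simulation(n_votes, frac_copycat, q=0.6, seed=0):
    """Sequential voting: an independent votes for candidate 1 with
    fixed probability q; a copycat votes 1 with probability equal to
    the current fraction of votes for 1 (herding feedback)."""
    rng = np.random.default_rng(seed)
    c1, total = 1, 2            # one seed vote per candidate avoids 0/0
    for _ in range(n_votes):
        p1 = c1 / total if rng.random() < frac_copycat else q
        c1 += rng.random() < p1
        total += 1
    return c1 / total

# With mostly copycats the voting rate drifts; with mostly
# independents it converges toward q as in a plain binomial.
print(vote_simulation(10000, 0.8), vote_simulation(10000, 0.2))
```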
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection it is a random variable, and probably the longest such incubation period. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS, and is also one of the key factors for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward-in-time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent, but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.
Dense Nonaqueous Phase Liquids
This issue paper is a literature evaluation focusing on DNAPLs and provides an overview from a conceptual fate and transport point of view of DNAPL phase distribution, monitoring, site characterization, remediation, and modeling.
NASA Astrophysics Data System (ADS)
Ulfah, S.; Awalludin, S. A.; Wahidin
2018-01-01
The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate the distribution of air pollution, in order to find out whether pollutants are more concentrated at ground level or near the emission source under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as model parameters. The model is solved using an explicit finite difference method, and the results are visualized by a computer program developed with the Lazarus programming software. The results show that atmospheric conditions alone do not conclusively determine the concentration level of pollutants, since each model parameter has its own effect under each atmospheric condition.
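An explicit finite-difference update for a 2D advection-diffusion equation of the common steady-wind form dC/dt = -u dC/dx + K d²C/dz² can be sketched in a few lines. The grid sizes, wind speed, and diffusivity below are illustrative assumptions, not the paper's values or scheme.

```python
import numpy as np

# Explicit scheme: upwind advection in x, central diffusion in z.
nx, nz = 200, 60
dx, dz, dt = 10.0, 2.0, 0.1          # m, m, s
u, K = 3.0, 1.5                      # wind speed (m/s), eddy diffusivity (m^2/s)
assert u * dt / dx <= 1.0 and 2 * K * dt / dz**2 <= 1.0   # stability limits

C = np.zeros((nx, nz))
src_i, src_k, Q = 5, 15, 1.0         # emission grid point and source strength

for _ in range(5000):
    C[src_i, src_k] += Q * dt        # continuous elevated point source
    adv = -u * (C[1:-1, 1:-1] - C[:-2, 1:-1]) / dx                  # upwind x
    dif = K * (C[1:-1, 2:] - 2 * C[1:-1, 1:-1] + C[1:-1, :-2]) / dz**2
    C[1:-1, 1:-1] += dt * (adv + dif)
    C[:, 0] = C[:, 1]                # reflective ground boundary

print("ground-level max:", C[:, 1].max(), "source-height max:", C[:, src_k].max())
```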
Multiple Weyl points and the sign change of their topological charges in woodpile photonic crystals
NASA Astrophysics Data System (ADS)
Chang, Ming-Li; Xiao, Meng; Chen, Wen-Jie; Chan, C. T.
2017-03-01
We show that Weyl points with topological charges of 1 and 2 can be found in very simple chiral woodpile photonic crystals, and that the distribution of the charges can be changed by changing the material parameters without altering the space-group symmetry. The underlying physics can be understood through a tight-binding model. Gapless surface states and their backscattering-immune properties are also demonstrated in these systems. Obtaining Weyl points in these easily fabricated woodpile photonic crystals should facilitate the realization of Weyl-point physics at optical and IR frequencies.
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
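As a toy illustration of the marked-point-process idea, the sketch below draws fracture centres from a homogeneous Poisson process in 2D and attaches marks (orientation, trace length) from their own distributions, then applies a scanline sample. None of this is the authors' code, and the chosen distributions and intensities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Homogeneous Poisson point process on a window [0, Lx] x [0, Ly]:
lam, Lx, Ly = 0.02, 100.0, 100.0          # intensity: fractures per m^2
n = rng.poisson(lam * Lx * Ly)
centres = rng.uniform([0, 0], [Lx, Ly], size=(n, 2))

# Marks: orientation (von Mises around 45 deg) and trace length (log-normal).
theta = rng.vonmises(np.deg2rad(45.0), 4.0, size=n)
length = rng.lognormal(mean=1.0, sigma=0.5, size=n)

# Fracture traces as segments; count intersections with the horizontal
# line y = Ly/2, mimicking a virtual scanline sampling tool.
half = np.column_stack((np.cos(theta), np.sin(theta))) * (length / 2)[:, None]
p0, p1 = centres - half, centres + half
crosses = (p0[:, 1] - Ly / 2) * (p1[:, 1] - Ly / 2) < 0
print(n, "fractures;", crosses.sum(), "cross the scanline")
```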
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, M.; Jayko, K.; Bowles, A.
1986-10-01
A numerical model system was developed to quantitatively assess the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. The system comprises bowhead and gray whale migration and diving-surfacing models and an oil-spill-trajectory model. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm that stochastically follows a migratory pathway.
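A biased random walk of the kind described, in which each point drifts along a migratory heading with stochastic scatter, can be sketched briefly; the drift speed, turning noise, and time step below are assumptions for illustration, not the calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

def migrate(n_steps=500, speed=5.0, heading=np.pi / 4, kappa=8.0, dt=1.0):
    """Whale point following a migratory pathway: each step's travel
    direction is drawn around the pathway heading (von Mises scatter),
    so the track drifts along the route with random wander."""
    pos = np.zeros((n_steps + 1, 2))
    for t in range(n_steps):
        ang = rng.vonmises(heading, kappa)
        pos[t + 1] = pos[t] + speed * dt * np.array([np.cos(ang), np.sin(ang)])
    return pos

track = migrate()
print("net displacement:", np.linalg.norm(track[-1] - track[0]))
```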
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux; inherent to its effectiveness is therefore the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both the areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Searching for minimum in dependence of squared speed-of-sound on collision energy
Liu, Fu-Hu; Gao, Li-Na; Lacey, Roy A.
2016-01-01
Experimental results on the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the Super Proton Synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. A local minimum (knee point), indicating a softest point in the equation of state (EoS), appears at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function, i.e. the dependence of c_s^2 on incident beam momentum or centre-of-mass energy. This knee point should be relevant to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.
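In the standard Landau hydrodynamic picture, the extraction of c_s^2 from the rapidity width typically rests on the Gaussian-width relation below; this is shown as background, and whether the revised three-source model uses exactly this form is an assumption.

```latex
\sigma_y^{2} \;=\; \frac{8}{3}\,\frac{c_s^{2}}{1-c_s^{4}}\,
\ln\!\left(\frac{\sqrt{s_{NN}}}{2m_p}\right)
```

Here sigma_y is the Gaussian width of the rapidity distribution, sqrt(s_NN) the centre-of-mass energy per nucleon pair, and m_p the proton mass; a narrower-than-expected width at some energy maps to a dip in c_s^2.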
Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models
Lappi, Otto; Pekkanen, Jami; Itkonen, Teemu H.
2013-01-01
For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as "tangent point orientation". Yet it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicate the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence, and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex. PMID:23894300
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientations of the vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be located preferentially nearer a set of mapped lineaments than randomly scattered damage would be, suggesting that range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered on the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
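The core trick, numerically approximating the null hypothesis by scattering random points inside the irregular study area and comparing pairwise-vector orientations, can be sketched as follows. The polygon, point counts, and bin width are placeholders, and this is not the report's GIS implementation.

```python
import numpy as np
from matplotlib.path import Path

rng = np.random.default_rng(7)

def random_points_in_polygon(poly, n):
    """Rejection-sample n points uniformly inside an irregular polygon."""
    path, (xmin, ymin), (xmax, ymax) = Path(poly), poly.min(0), poly.max(0)
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = rng.uniform([xmin, ymin], [xmax, ymax], size=(4 * n, 2))
        pts = np.vstack([pts, cand[path.contains_points(cand)]])
    return pts[:n]

def pair_orientations(pts):
    """Orientation (0-180 deg) of the vector defined by each point pair."""
    d = pts[:, None, :] - pts[None, :, :]
    iu = np.triu_indices(len(pts), k=1)
    return np.degrees(np.arctan2(d[..., 1], d[..., 0])[iu]) % 180.0

poly = np.array([[0, 0], [10, 0], [12, 6], [5, 9], [-2, 4]], float)  # study area
observed = random_points_in_polygon(poly, 50)       # stand-in for damage points
null = pair_orientations(random_points_in_polygon(poly, 50))
print(np.histogram(pair_orientations(observed), bins=18, range=(0, 180))[0])
print(np.histogram(null, bins=18, range=(0, 180))[0])
```

Repeating the null draw many times gives an envelope against which an excess of orientations in any bin (a linear trend) can be judged.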
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1975-01-01
An opacity sampling (OS) technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is presented. Tests show that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and that 500 frequency points are often adequate. The effects of atomic and molecular lines are studied separately. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters computed by the giant-line (opacity distribution function) method.
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an example is provided to illustrate the validity of our main results.
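For orientation, a representative form of an n-species Gilpin-Ayala competition system with discrete and distributed delays and impulsive effects is shown below; the exact functional form studied in the paper may differ, so this is a generic template from this literature rather than the author's equation.

```latex
\begin{aligned}
\dot{x}_i(t) &= x_i(t)\Big[r_i(t) - \sum_{j=1}^{n} a_{ij}(t)\,x_j^{\alpha_{ij}}(t)
 - \sum_{j=1}^{n} b_{ij}(t)\,x_j^{\beta_{ij}}\!\big(t-\tau_{ij}(t)\big) \\
&\qquad\quad - \sum_{j=1}^{n} c_{ij}(t)\int_{0}^{\infty} K_{ij}(s)\,
   x_j^{\gamma_{ij}}(t-s)\,\mathrm{d}s\Big], \qquad t \neq t_k,\\
x_i(t_k^{+}) &= (1+\theta_{ik})\,x_i(t_k), \qquad k = 1,2,\dots
\end{aligned}
```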
Mao, Zhun; Saint-André, Laurent; Bourrier, Franck; Stokes, Alexia; Cordonnier, Thomas
2015-01-01
Background and Aims In mountain ecosystems, predicting root density in three dimensions (3-D) is highly challenging due to the spatial heterogeneity of forest communities. This study presents a simple and semi-mechanistic model, named ChaMRoots, that predicts root interception density (RID, number of roots per m²). ChaMRoots hypothesizes that the RID at a given point is affected by the presence of roots from the surrounding trees forming a polygon shape. Methods The model comprises three sub-models for predicting: (1) the spatial heterogeneity - the RID of the finest roots in the top soil layer as a function of tree basal area at breast height and of the distance between the tree and a given point; (2) the diameter spectrum - the distribution of RID as a function of root diameter, for roots up to 50 mm thick; and (3) the vertical profile - the distribution of RID as a function of soil depth. The RID data used for fitting the model were measured in two uneven-aged mountain forest ecosystems in the French Alps that differ in tree density and species composition. Key Results In general, the validation of each sub-model indicated that all sub-models of ChaMRoots fitted well. The model achieved a highly satisfactory compromise between the number of aerial input parameters and the fit to the observed data. Conclusions The semi-mechanistic ChaMRoots model focuses on the spatial distribution of root density at the tree-cluster scale, in contrast to the majority of published root models, which operate at the level of the individual. Based on easy-to-measure characteristics, simple forest inventory protocols and three sub-models, it achieves a good compromise between the complexity of the case-study area and that of the global model structure. ChaMRoots can easily be coupled with spatially explicit individual-based forest dynamics models and thus provides a highly transferable approach for modelling 3-D root spatial distribution in complex forest ecosystems. PMID:26173892
Aryal, Madhava P; Nagaraja, Tavarekere N; Brown, Stephen L; Lu, Mei; Bagher-Ebadian, Hassan; Ding, Guangliang; Panda, Swayamprava; Keenan, Kelly; Cabral, Glauber; Mikkelsen, Tom; Ewing, James R
2014-10-01
The distribution of dynamic contrast-enhanced MRI (DCE-MRI) parametric estimates in a rat U251 glioma model was analyzed. Using Magnevist as contrast agent (CA), 17 nude rats implanted with U251 cerebral glioma were studied by DCE-MRI twice in a 24 h interval. A data-driven analysis selected one of three models to estimate either (1) plasma volume (vp), (2) vp and forward volume transfer constant (K(trans)) or (3) vp, K(trans) and interstitial volume fraction (ve), constituting Models 1, 2 and 3, respectively. CA distribution volume (VD) was estimated in Model 3 regions by Logan plots. Regions of interest (ROIs) were selected by model. In the Model 3 ROI, descriptors of parameter distributions--mean, median, variance and skewness--were calculated and compared between the two time points for repeatability. All distributions of parametric estimates in Model 3 ROIs were positively skewed. Test-retest differences between population summaries for any parameter were not significant (p ≥ 0.10; Wilcoxon signed-rank and paired t tests). These and similar measures of parametric distribution and test-retest variance from other tumor models can be used to inform the choice of biomarkers that best summarize tumor status and treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.
Wang, Shuang; Jiang, Xiaoqian; Wu, Yuan; Cui, Lijuan; Cheng, Samuel; Ohno-Machado, Lucila
2013-01-01
We developed an EXpectation Propagation LOgistic REgRession (EXPLORER) model for distributed privacy-preserving online learning. The proposed framework provides a high-level guarantee for protecting sensitive information, since the information exchanged between the server and the client is the encrypted posterior distribution of the coefficients. Through experimental results, EXPLORER shows the same performance (e.g., discrimination, calibration, feature selection, etc.) as the traditional frequentist logistic regression model, but provides more flexibility in model updating. That is, EXPLORER can be updated one point at a time rather than having to retrain on the entire data set when new observations are recorded. The proposed EXPLORER supports asynchronous communication, which relieves the participants from coordinating with one another, and prevents service breakdown due to the absence of participants or interrupted communications. PMID:23562651
Brewer, Shannon K.; Worthington, Thomas A.; Zhang, Tianjioa; Logue, Daniel R.; Mittelstet, Aaron R.
2016-01-01
Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the conservation status of pelagophils.
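One of the collinearity screens mentioned, the variance inflation factor, is straightforward to compute: regress each covariate on the others and take VIF = 1/(1 - R²). The sketch below is generic background with placeholder covariates; it does not reproduce the study's workflow or data.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n samples x p
    covariates): VIF_j = 1 / (1 - R^2_j), where R^2_j comes from an OLS
    regression of column j on the other columns plus an intercept."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

# Placeholder covariates: a flow metric, a correlated well count, and an
# independent discharge count; the first two should show inflated VIFs.
rng = np.random.default_rng(3)
f = rng.normal(size=200)
X = np.column_stack([f, 0.9 * f + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])
print(vif(X))
```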
A Planar Quasi-Static Constraint Mode Tire Model
2015-07-10
This paper describes a planar quasi-static constraint mode tire model that strikes a balance between heuristic tire models (such as a linear point-follower), which lack the fidelity to make accurate chassis load predictions, and computationally intensive models. (Rui Ma, John B. Ferris. UNCLASSIFIED: Distribution Statement A; cleared for public release.)
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
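For context, the classical Gaussian plume for a continuous elevated point source, which the fluctuating plume model perturbs by letting the plume centerline meander, has the familiar form below; this is standard background, not a formula quoted from the paper.

```latex
C(x,y,z) \;=\; \frac{Q}{2\pi u\,\sigma_y\sigma_z}
\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right)
\left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}}\right)
+\exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}}\right)\right]
```

Here Q is the source strength, u the mean wind speed, sigma_y and sigma_z the dispersion parameters, and H the effective source height; the fluctuating plume replaces y by y - y_c(t), with y_c(t) the meandering instantaneous centerline.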
Development of watershed and reference loads for a TMDL in Charleston Harbor System, SC.
Silong Lu; Devenra Amatya; Jamie Miller
2005-01-01
It is essential to determine point and non-point source loads and their distribution for development of a dissolved oxygen (DO) Total Maximum Daily Load (TMDL). A series of models were developed to assess sources of oxygen-demand loadings in Charleston Harbor, South Carolina. These oxygen-demand loadings included nutrients and BOD. Stream flow and nutrient...
Nathaniel E. Seavy; Suhel Quader; John D. Alexander; C. John Ralph
2005-01-01
The success of avian monitoring programs to effectively guide management decisions requires that studies be efficiently designed and data be properly analyzed. A complicating factor is that point count surveys often generate data with non-normal distributional properties. In this paper we review methods of dealing with deviations from normal assumptions, and we focus...
William J. Zielinski; Fredrick V. Schlexer; Jeffrey R. Dunk; Matthew J. Lau; James J. Graham
2015-01-01
The mountain beaver (Aplodontia rufa) is notably the most primitive North American rodent with a restricted distribution in the Pacific Northwest based on its physiological limits to heat stress and water needs. The Point Arena subspecies (A. r. nigra) is federally listed as endangered and is 1 of 2 subspecies that have extremely...
Consumers don’t play dice, influence of social networks and advertisements
NASA Astrophysics Data System (ADS)
Groot, Robert D.
2006-05-01
Empirical data of supermarket sales show stylised facts that are similar to stock markets, with a broad (truncated) Lévy distribution of weekly sales differences in the baseline sales [R.D. Groot, Physica A 353 (2005) 501]. To investigate the cause of this, the influence of social interactions and advertisements are studied in an agent-based model of consumers in a social network. The influence of network topology was varied by using a small-world network, a random network and a Barabási-Albert network. The degree to which consumers value the opinion of their peers was also varied. On a small-world and random network we find a phase transition between an open market and a locked-in market that is similar to condensation in liquids. At the critical point, fluctuations become large and buying behaviour is strongly correlated. However, on the small world network the noise distribution at the critical point is Gaussian, and critical slowing down occurs which is not observed in supermarket sales. On a scale-free network, the model shows a transition between a gas-like phase and a glassy state, but at the transition point the noise amplitude is much larger than what is seen in supermarket sales. To explore the role of advertisements, a model is studied where imprints are placed on the minds of consumers that ripen when a decision for a product is made. The correct distribution of weekly sales returns follows naturally from this model, as well as the noise amplitude, the correlation time and cross-correlation of sales fluctuations. For particular parameter values, simulated sales correlation shows power-law decay in time. The model predicts that social interaction helps to prevent aversion, and that products are viewed more positively when their consumption rate is higher.
Glassy Dynamics in the Adaptive Immune Response Prevents Autoimmune Disease
NASA Astrophysics Data System (ADS)
Sun, Jun; Deem, Michael
2006-03-01
The immune system normally protects the human host against death by infection. However, when an immune response is mistakenly directed at self antigens, autoimmune disease can occur. We describe a model of protein evolution to simulate the dynamics of the adaptive immune response to antigens. Computer simulations of the dynamics of antibody evolution show that different evolutionary mechanisms, namely gene segment swapping and point mutation, lead to different evolved antibody binding affinities. Although a combination of gene segment swapping and point mutation can yield a greater affinity to a specific antigen than point mutation alone, the antibodies so evolved are highly cross-reactive and would cause autoimmune disease, and this is not the chosen dynamics of the immune system. We suggest that in the immune system a balance has evolved between binding affinity and specificity in the mechanism for searching the amino acid sequence space of antibodies. Our model predicts that chronic infection may also lead to autoimmune disease due to cross-reactivity, and suggests a broad distribution for the time of onset of autoimmune disease under chronic exposure. The slow search of antibody sequence space by point mutation leads to this broad distribution of onset times.
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
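The distribution update at the heart of this scheme can be made concrete: standard AdaBoost reweighting makes the new distribution uncorrelated with the last model's mistake vector, and the averaged variant aims it away from all previous mistake vectors. The code below is a schematic reconstruction under that description, not Oza's published algorithm; the averaging rule is an assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, rounds=10, average=True):
    """Boosting with (optionally) averaged weight vectors; y in {-1,+1}.
    The standard AdaBoost update makes D_{t+1} uncorrelated with the
    last base model's mistakes; averaging the reweighted vectors leans
    D_{t+1} away from the mistakes of all previous base models."""
    n = len(y)
    D = np.full(n, 1.0 / n)
    past, models, alphas = [], [], []
    for _ in range(rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        err = D @ (pred != y)
        if err >= 0.5 or err == 0:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        models.append(h)
        alphas.append(alpha)
        w = D * np.exp(-alpha * y * pred)   # standard AdaBoost reweighting
        w /= w.sum()                        # uncorrelated with h's mistakes
        past.append(w)
        D = np.mean(past, axis=0) if average else w
        D /= D.sum()
    return models, alphas

def predict(models, alphas, X):
    """Ensemble output: sign of the alpha-weighted vote."""
    return np.sign(sum(a * m.predict(X) for m, a in zip(models, alphas)))
```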
Plasma Model V&V of Collisionless Electrostatic Shock
NASA Astrophysics Data System (ADS)
Martin, Robert; Le, Hai; Bilyeu, David; Gildea, Stephen
2014-10-01
A simple 1D electrostatic collisionless shock was selected as an initial validation and verification test case for a new plasma modeling framework under development at the Air Force Research Laboratory's In-Space Propulsion branch (AFRL/RQRS). Cross-verification between the PIC, Vlasov, and fluid plasma models within the framework, along with the expected theoretical results, will be shown. The non-equilibrium velocity distribution functions (VDFs) captured by PIC and Vlasov will be compared to each other and to the assumed VDF of the fluid model at selected points. Validation against experimental data from the University of California, Los Angeles double-plasma device will also be presented, along with work in progress at AFRL/RQRS towards reproducing the experimental results using higher fidelity diagnostics to help elucidate differences between model results and between the models and the original experiment. DISTRIBUTION A: Approved for public release; unlimited distribution; PA (Public Affairs) Clearance Number 14332.
Liu, Lianke; Ni, Fang; Zhang, Jianchao; Wang, Chunyu; Lu, Xiang; Guo, Zhirui; Yao, Shaowei; Shu, Yongqian; Xu, Ruizhi
2011-12-01
Hyperthermia incorporating magnetic nanoparticles (MNPs) is a promising cancer therapy that is currently entering clinical trials. However, clinical planning of MNP deposition in tumors, especially for direct multipoint injection hyperthermia (DMIH), and information on the resulting temperature rise in tumors have been little studied. In this paper, we discuss the thermal distributions induced by MNPs in rat brain tumors during DMIH. Because experimental measurement of the thermal dose in tumors is limited, and in order to obtain the clinically needed optimized temperature distributions, we designed a thermal model in which three types of MNP injection for hyperthermia treatment were simulated. The simulated results showed that the MNP injection plan, as well as the overall dose of MNPs injected, played an important role in determining the thermal distribution. We found that as the number of injection points increased, the temperature differences across the whole tumor volume decreased. Moreover, temperature data recorded by fiber optic temperature sensors (FOTSs) in glioma-bearing rats during MNP hyperthermia showed that the FOTS temperature errors also decreased as the number of injection points increased. Finally, the results showed that the simulations are reliable and that optimized plans for the number and spatial positions of MNP injection points are essential during direct injection hyperthermia.
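Thermal models of MNP hyperthermia are commonly built on the Pennes bioheat equation, with nanoparticle heating entering as a volumetric source term. Whether this paper uses exactly this formulation is an assumption, but it is the standard starting point:

```latex
\rho c\,\frac{\partial T}{\partial t}
= \nabla\!\cdot\!\left(k\,\nabla T\right)
+ \rho_b c_b\,\omega_b\left(T_a - T\right)
+ Q_m + Q_{\mathrm{MNP}}
```

Here omega_b is the blood perfusion rate, T_a the arterial temperature, Q_m the metabolic heat, and Q_MNP the power density deposited by the nanoparticles under the alternating magnetic field; multipoint injection changes the spatial support of Q_MNP.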
NASA Astrophysics Data System (ADS)
Henriquez, Miguel F.; Thompson, Derek S.; Kenily, Shane; Khaziev, Rinat; Good, Timothy N.; McIlvain, Julianne; Siddiqui, M. Umair; Curreli, Davide; Scime, Earl E.
2016-10-01
Understanding particle distributions in plasma boundary regions is critical to predicting plasma-surface interactions. Ions in the presheath exhibit complex behavior because of collisions and the presence of boundary-localized electric fields. A complete understanding of particle dynamics is necessary for addressing the critical problems of tokamak wall loading and Hall thruster channel wall erosion. We report measurements of 3D argon ion velocity distribution functions (IVDFs) in the vicinity of an absorbing boundary oriented obliquely to a background magnetic field. Measurements were obtained via argon ion laser-induced fluorescence throughout a spatial volume upstream of the boundary. These distribution functions reveal kinetic details that provide a point-to-point check on particle-in-cell and 1D3V Boltzmann simulations. We present the results of this comparison and discuss some implications for plasma-boundary interaction physics.
Counts-in-cylinders in the Sloan Digital Sky Survey with Comparisons to N-body Simulations
NASA Astrophysics Data System (ADS)
Berrier, Heather D.; Barton, Elizabeth J.; Berrier, Joel C.; Bullock, James S.; Zentner, Andrew R.; Wechsler, Risa H.
2011-01-01
Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments, and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering, that match observed two- and three-point clustering statistics well, fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h⁻¹ Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h⁻¹ Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h⁻¹ Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
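The statistic itself is simple to compute from a catalogue: count, for each galaxy, the companions within a projected radius and a line-of-sight velocity (redshift) window. The sketch below assumes a flat-sky toy catalogue with projected positions in h⁻¹ Mpc and recession velocities in km/s; it is not the survey pipeline.

```python
import numpy as np

def counts_in_cylinders(xy, v, r_proj=3.0, dv=1000.0):
    """Companion count per galaxy: neighbours within projected radius
    r_proj (h^-1 Mpc) and |velocity difference| < dv (km/s)."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    in_cyl = (d2 < r_proj**2) & (np.abs(v[:, None] - v[None, :]) < dv)
    np.fill_diagonal(in_cyl, False)     # a galaxy is not its own companion
    return in_cyl.sum(1)

rng = np.random.default_rng(11)
xy = rng.uniform(0, 100, size=(2000, 2))   # toy projected positions
v = rng.uniform(0, 30000, size=2000)       # toy recession velocities
counts = counts_in_cylinders(xy, v)
print("fraction with no companions:", (counts == 0).mean())
```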
Telemedicine and distributed medical intelligence.
Warner, D; Tichenor, J M; Balch, D C
1996-01-01
Recent trends in health care informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. The authors present a new model of health care information, distributed medical intelligence, which promotes the development of an integrative medical communication system addressing the process of providing expert medical knowledge to the point of need. The model incorporates audio, video, high-resolution still images, and virtual reality applications into an integrated medical communications network. Three components of the model (care portals, Docking Station, and the bridge) are described. The implementation of this model at the East Carolina University School of Medicine is also outlined.
USDA-ARS's Scientific Manuscript database
Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied manure on undevelope...
Spatial perspectives in state-and-transition models: A missing link to land management?
USDA-ARS's Scientific Manuscript database
Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...
Distribution of kriging errors, the implications and how to communicate them
NASA Astrophysics Data System (ADS)
Li, Hong Yi; Milne, Alice; Webster, Richard
2016-04-01
Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which is itself estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there, and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, so that the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution, in which case the squared deviation ratio (SDR) should be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites: the uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimation for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
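The cross-validation diagnostics described (ME, MSE, MSDR, and the median SDR compared with the χ²₁ median of 0.455) are straightforward to compute once each point has been kriged with itself left out. The sketch below takes the leave-one-out errors and kriging variances as given arrays and does not implement the kriging itself; the simulated leptokurtic errors are an illustration.

```python
import numpy as np
from scipy.stats import chi2

def cv_diagnostics(errors, kriging_vars):
    """Cross-validation summaries from leave-one-out kriging results.
    errors: observed minus kriged values; kriging_vars: the variance the
    model claimed for each left-out prediction."""
    sdr = errors**2 / kriging_vars          # squared deviation ratios
    return {
        "ME": errors.mean(),                # near 0 if unbiased
        "MSE": (errors**2).mean(),
        "MSDR": sdr.mean(),                 # near 1 for a well-chosen model
        "MedSDR": np.median(sdr),           # compare with chi2(1) median
        "chi2_median": chi2(1).median(),    # 0.455 to three decimals
    }

# Leptokurtic errors (Student t, df=3) scaled to the claimed variance 2:
rng = np.random.default_rng(5)
v = np.full(500, 2.0)
e = rng.standard_t(df=3, size=500) * np.sqrt(v / 3)
print(cv_diagnostics(e, v))   # MSDR near 1 but MedSDR well below 0.455
```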
Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
1994-03-01
We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically, we obtained the expressions for N-point correlation functions in redshift space with a given functional form for the velocity distribution f(v), and evaluated the two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^(-γ), the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^(1-γ). When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with a direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event in a birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.
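The point-mutation speciation step in the basic neutral model is simple enough to simulate directly, which is also how approximations of the kind proposed here are usually checked. A minimal zero-sum (Moran-type) community simulation, with placeholder parameters, might look like:

```python
import random
from collections import Counter

def neutral_community(J=1000, nu=0.001, steps=200_000, seed=2):
    """Basic neutral model with point-mutation speciation: at each death
    the replacement is a brand-new species with probability nu,
    otherwise a copy of a randomly chosen other individual."""
    rng = random.Random(seed)
    community = [0] * J            # start as a single species
    next_species = 1
    for _ in range(steps):
        i = rng.randrange(J)       # individual that dies
        if rng.random() < nu:
            community[i] = next_species   # point-mutation speciation
            next_species += 1
        else:
            j = rng.randrange(J)
            while j == i:          # parent drawn from the others
                j = rng.randrange(J)
            community[i] = community[j]
    return Counter(community)      # (approximately stationary) abundances

abund = sorted(neutral_community().values(), reverse=True)
print(len(abund), "species; top abundances:", abund[:5])
```

A random-fission variant would instead split an existing species in two at speciation, which is what makes the stationary distribution analytically intractable and motivates the self-consistent approximation.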
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu; Constantin, Dragos; Ganguly, Arundhuti
2015-08-15
Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom, and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against the IC experimental data. The planar dose distributions were then estimated from subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six point-number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performance of the methods was determined by comparing their results with those of the validated MC simulations, and was also evaluated in the presence of measurement uncertainties. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, a performance comparable to that of the cases with a relatively large number of points, i.e., the 14- and 23-point cases. With the 4-point case, however, the performance of both methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1.0% [±0.6%] difference) and the 6-point case (0.7% [±0.6%] difference) performed best for method 1 and method 2, respectively. Moreover, method 2 demonstrated high-fidelity surface reconstruction with as few as 5 points, showing pixelwise absolute differences of 3.80 mGy (±0.32 mGy). Although the performance was sensitive to phantom displacement from the isocenter, it changed by less than 2% for shifts of up to 2 cm along the x- and y-axes in the central phantom plane. Conclusions: With as few as five points, method 1 and method 2 were able to compute the mean dose with reasonable accuracy, demonstrating differences of 1.7% (±1.2%) and 1.3% (±1.0%), respectively. A larger number of points does not necessarily guarantee better performance; an optimal choice of point placement is necessary. The performance of the methods is sensitive to the alignment of the center of the body phantom relative to the isocenter. In body applications where dose distributions are important, method 2 is a better choice than method 1, as it reconstructs the dose surface with high fidelity using as few as five points.
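The two estimation strategies can be illustrated generically: a proximity-weighted interpolation (here inverse-distance weighting, which is an assumption about what "proximity-based" means) and a smooth-surface fit (here a quadratic polynomial in x and y). Neither sketch is the paper's exact algorithm, and all values are placeholders.

```python
import numpy as np

def idw_surface(pts, doses, grid, power=2.0):
    """Method-1-style estimate: inverse-distance weighting of the
    measured point doses onto a grid of (x, y) locations."""
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * doses).sum(1) / w.sum(1)

def quadratic_surface(pts, doses, grid):
    """Method-2-style estimate: least-squares fit of a quadratic dose
    surface D(x, y) to the point doses, evaluated on the grid."""
    def design(p):
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(design(pts), doses, rcond=None)
    return design(grid) @ coef

rng = np.random.default_rng(9)
pts = rng.uniform(-10, 10, size=(7, 2))                        # 7 holes (cm)
doses = 30 - 0.1 * (pts**2).sum(1) + rng.normal(0, 0.3, 7)     # toy field (mGy)
grid = rng.uniform(-10, 10, size=(200, 2))
print("IDW mean dose:", idw_surface(pts, doses, grid).mean())
print("fit mean dose:", quadratic_surface(pts, doses, grid).mean())
```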
A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits
NASA Astrophysics Data System (ADS)
Huber, E.; Huggenberger, P.; Caers, J.
2016-12-01
There is a critical need in hydrogeological modeling for geologically more realistic representations of the subsurface. Indeed, widely-used representations of subsurface heterogeneity based on smooth basis functions, such as cokriging or the pilot-point approach, fail to reproduce the connectivity of the highly permeable geological structures that control subsurface solute transport. To realistically model the connectivity of the highly permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, following a sedimentological model, mimics the dominant processes of floodplain dynamics. In contrast to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.), (2) it allows different sedimentological dynamics to be modeled, resulting in different subsurface heterogeneity patterns, (3) it is light in memory and computationally fast, and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow the horizontal position of the objects on the floodplain, as well as the net rate of sediment deposition, to be simulated. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution, which strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R package.
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, G.T.
1987-08-01
The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept prescribed initial conditions or obtain them by simulating a steady-state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to a pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for cases in which the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset the time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region at every time step. The model is verified against analytical solutions or other numerical models in three examples.
Pressure-Distribution Measurements on the Tail Surfaces of a Rotating Model of the Design BFW - M31
NASA Technical Reports Server (NTRS)
Kohler, M.; Mautz, W.
1949-01-01
In order to obtain insight into the flow conditions at the tail surfaces of airplanes during spins, pressure-distribution measurements were performed on a rotating model of the design BFW-M31. For the time being, the tests were made for only one angle of attack (alpha = 60 degrees) and various angles of yaw and rudder angles. The results of these measurements are given, and the construction of the model and the test arrangement used are described. Measurements to be performed later and planned alterations to the test arrangement are pointed out.
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
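The closed-form potential behind such a model is simple to evaluate: a homogeneous straight segment contributes the classical logarithmic term and the two tip masses contribute point-mass terms. The sketch below uses this textbook decomposition; it is not the authors' code, and the masses, segment length, and field point are placeholders.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def dipole_segment_potential(r, m1, m2, m_seg, ell):
    """Potential of a dipole segment: point masses m1, m2 at the tips
    (-ell/2, 0, 0) and (+ell/2, 0, 0) plus a homogeneous segment of mass
    m_seg, whose classical potential is
    (G m_seg / ell) * ln((r1 + r2 + ell) / (r1 + r2 - ell))."""
    p1 = np.array([-ell / 2, 0.0, 0.0])
    p2 = np.array([+ell / 2, 0.0, 0.0])
    r1, r2 = np.linalg.norm(r - p1), np.linalg.norm(r - p2)
    u_points = G * (m1 / r1 + m2 / r2)
    u_segment = (G * m_seg / ell) * np.log((r1 + r2 + ell) / (r1 + r2 - ell))
    return -(u_points + u_segment)          # potential energy per unit mass

def acceleration(r, *args, h=1.0):
    """Acceleration as the central-difference gradient of the potential."""
    g = np.zeros(3)
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        g[k] = -(dipole_segment_potential(r + e, *args)
                 - dipole_segment_potential(r - e, *args)) / (2 * h)
    return g

# Placeholder values loosely inspired by an elongated km-scale asteroid.
r = np.array([2000.0, 1500.0, 0.0])         # field point (m)
print(acceleration(r, 4e11, 3e11, 2e11, 1500.0))
```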
3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are captured with a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect of trees on the UAV images. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR view of the building model can be built. This study aims to provide technical and design skills related to urban planning, urban design, and building information retrieval using AR.
NASA Astrophysics Data System (ADS)
Tudora, Anabella; Hambsch, Franz-Josef; Tobosaru, Viorel
2017-09-01
Prompt neutron multiplicity distributions ν(A) are required for the prompt emission correction of double-energy (2E) measurements of fission fragments to determine pre-neutron fragment properties. The lack of experimental ν(A) data, especially at incident neutron energies (En) where multi-chance fission occurs, imposes the use of ν(A) predicted by models. The Point-by-Point (PbP) model of prompt emission is able to provide the individual ν(A) of the compound nuclei of the main and secondary nucleus chains undergoing fission at a given En. The total ν(A) is obtained by averaging these individual ν(A) over the probabilities of fission chances (expressed as ratios of partial to total fission cross-sections). An indirect validation of the total ν(A) results is proposed. At high En, above 70 MeV, the PbP results for the individual ν(A) of the first few nuclei of the main and secondary nucleus chains exhibit an almost linear increase. This shape is explained by the damping of the shell effects entering the super-fluid expression of the level density parameters, which tend to approach their asymptotic values for most of the fragments. This fact leads to a smooth and almost linear increase of fragment excitation energy with mass number, which is reflected in a smooth and almost linear behaviour of ν(A).
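The averaging over fission chances described above reduces to a weighted sum of the individual ν(A) curves. A minimal sketch, assuming the individual curves are given on a common fragment-mass grid and that the weights are the partial-to-total fission cross-section ratios (function and argument names are hypothetical):

```python
import numpy as np

def total_nu(nu_by_chance, chance_weights):
    """Average individual nu(A) curves of the nuclei in the fission chain
    over the fission-chance probabilities (partial/total cross-section ratios).

    nu_by_chance: array of shape (n_chances, n_masses)
    chance_weights: array of shape (n_chances,), need not be pre-normalized
    """
    w = np.asarray(chance_weights, dtype=float)
    w = w / w.sum()                      # enforce that probabilities sum to 1
    return np.einsum('c,ca->a', w, np.asarray(nu_by_chance, dtype=float))
```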
GROUND WATER ISSUE: DENSE NONAQUEOUS PHASE LIQUIDS
This issue paper is a literature evaluation focusing on DNAPLs and provides an overview from a conceptual fate and transport point of view of DNAPL phase distribution, monitoring, site characterization, remediation, and modeling.
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, have difficulty meeting the requirement of real-time voltage and VAR control (VVC) with high precision when DGs fluctuate frequently. However, a soft open point (SOP), a flexible power electronic device, can be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of SOP and multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. Firstly, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations of ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model which can be efficiently solved to meet the rapidity requirement of voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method is to be used in the same manner as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
Villalar, J L; Arredondo, M T; Meneu, T; Traver, V; Cabrera, M F; Guillen, S; Del Pozo, F
2002-01-01
Centralized testing demands costly laboratories, which are inefficient and may provide poor services. Recent advances make it feasible to move clinical testing nearer to patients and the requesting physicians, thus reducing the time to treatment. Internet technologies can be used to create a virtual laboratory information system in a distributed health-care environment. This allows clinical testing to be transferred to a cooperative scheme of several point-of-care testing (POCT) nodes. Two pilot virtual laboratories were established, one in Italy (AUSL Modena) and one in Greece (Athens Medical Centre). They were constructed on a three-layer model to allow both technical and clinical verification. Different POCT devices were connected. The pilot sites produced good preliminary results in relation to user acceptance, efficiency, convenience and costs. Decentralized laboratories can be expected to become cost-effective.
Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling
NASA Astrophysics Data System (ADS)
Kaur, Amandeep; Singh Mann, Kulwinder, Dr.
2018-01-01
Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict the final adult height and to diagnose growth disorders by monitoring skeletal development. For building an automatic bone age assessment system, the first routine step is image pre-processing of the bone X-rays so that a feature row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts as per the well-established procedure of bone age assessment; this is helpful in building the feature row and, later on, in the construction of an automatic bone age assessment system. Implementation of the segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.
Ultrasound beam transmission using a discretely orthogonal Gaussian aperture basis
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2018-04-01
Work is reported on development of a computational model for ultrasound beam transmission at an arbitrary geometry transmission interface for generally anisotropic materials. The work addresses problems encountered when the fundamental assumptions of ray theory do not hold, thereby introducing errors into ray-theory-based transmission models. Specifically, problems occur when the asymptotic integral analysis underlying ray theory encounters multiple stationary phase points in close proximity, due to focusing caused by concavity on either the entry surface or a material slowness surface. The approach presented here projects integrands over both the transducer aperture and the entry surface beam footprint onto a Gaussian-derived basis set, thereby distributing the integral over a summation of second-order phase integrals which are amenable to single stationary phase point analysis. Significantly, convergence is assured provided a sufficiently fine distribution of basis functions is used.
Prediction of the future asset price which is non-concordant with the historical distribution
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah
2015-12-01
This paper attempts to predict the major characteristics of the future asset price which is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price, the indicator which denotes that the non-concordant price is extremely small or large by its values -1 and 1 respectively, and the degree of non-concordance given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristics of the next non-concordant price. Meanwhile, the 100(α/2) % and 100(1 - α/2) % points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has a good potential of yielding a more realistic model.
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The generalized truncated exponential distribution (GTED) presented here overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
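As a concrete reference for the distributions discussed, here is a minimal sketch of the TED special case: its density and inverse-CDF sampling, with beta the Gutenberg-Richter decay parameter. The GTED generalization is defined in the paper itself; the function names and parameterization shown here are illustrative assumptions.

```python
import numpy as np

def ted_pdf(m, beta, m_min, m_max):
    """Density of the truncated exponential distribution of magnitudes:
    the Gutenberg-Richter exponential, renormalized on [m_min, m_max]."""
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    pdf = beta * np.exp(-beta * (m - m_min)) / norm
    return np.where((m >= m_min) & (m <= m_max), pdf, 0.0)

def ted_sample(n, beta, m_min, m_max, rng=None):
    """Draw n magnitudes from the TED by inverting its CDF."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * norm) / beta
```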
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
NASA Astrophysics Data System (ADS)
Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.
2011-12-01
Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume by over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
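A minimal stand-in for the local-aggregation step, assuming a simple cubic search cell rather than the error-based 3-D search geometry the authors describe: points are grouped into 3-D cells and each group is replaced by its centroid, which thins redundancy and averages out random measurement noise. The function name and the fixed cell size are assumptions.

```python
import numpy as np

def voxel_filter(points, cell):
    """Aggregate a raw TLS cloud: bin (N, 3) points into cubic cells of side
    'cell' and return one centroid per occupied cell."""
    keys = np.floor(points / cell).astype(np.int64)      # integer cell index
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n_cells).astype(float)
    sums = np.zeros((n_cells, 3))
    for dim in range(3):                                  # per-axis sums
        sums[:, dim] = np.bincount(inverse, weights=points[:, dim],
                                   minlength=n_cells)
    return sums / counts[:, None]                         # cell centroids
```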
Mapping risk of plague in Qinghai-Tibetan Plateau, China.
Qian, Quan; Zhao, Jian; Fang, Liqun; Zhou, Hang; Zhang, Wenyi; Wei, Lan; Yang, Hong; Yin, Wenwu; Cao, Wuchun; Li, Qun
2014-07-10
The Qinghai-Tibetan Plateau of China is known to be a plague-endemic region where the marmot (Marmota himalayana) is the primary host. Human plague cases have relatively low incidence but high mortality, which presents unique surveillance and public health challenges, because early detection through surveillance may not always be feasible and infrequent clinical cases may be misdiagnosed. Based on plague surveillance data and environmental variables, Maxent was applied to model the presence probability of the plague host. 75% of the occurrence points were randomly selected for training the model, and the remaining 25% were used for model testing and validation. Maxent model performance was measured as test gain and test AUC. The optimal probability cut-off value was chosen by maximizing training sensitivity and specificity simultaneously. We used field surveillance data in an ecological niche modeling (ENM) framework to depict the spatial distribution of the natural foci of plague in the Qinghai-Tibetan Plateau. Most human-inhabited areas at risk of exposure to enzootic plague are distributed in the east and south of the Plateau. Elevation, land surface temperature and the normalized difference vegetation index play a large part in determining the distribution of enzootic plague. This study provides a more detailed view of the spatial pattern of enzootic plague and of human-inhabited areas at risk of plague. The maps could help public health authorities decide where to perform plague surveillance and take preventive measures in the Qinghai-Tibetan Plateau.
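The cut-off criterion named above (maximizing training sensitivity and specificity simultaneously) is equivalent to maximizing Youden's J over candidate thresholds. A minimal sketch, assuming Maxent suitability scores are available for presence and background/absence points (names hypothetical):

```python
import numpy as np

def optimal_cutoff(scores_presence, scores_absence):
    """Scan candidate thresholds and return the one maximizing
    sensitivity + specificity (Youden's J)."""
    thresholds = np.unique(np.concatenate([scores_presence, scores_absence]))
    best_t, best_j = thresholds[0], -np.inf
    for t in thresholds:
        sens = np.mean(scores_presence >= t)   # true-positive rate
        spec = np.mean(scores_absence < t)     # true-negative rate
        if sens + spec > best_j:
            best_j, best_t = sens + spec, t
    return best_t
```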
Higdon, Jeff W; Ferguson, Steven H
2009-07-01
Killer whales (Orcinus orca) are major predators that may reshape marine ecosystems via top-down forcing. Climate change models predict major reductions in sea ice with the subsequent expectation for readjustments of species' distribution and abundance. Here, we measure changes in killer whale distribution in the Hudson Bay region with decreasing sea ice as an example of global readjustments occurring with climate change. We summarize records of killer whales in Hudson Bay, Hudson Strait, and Foxe Basin in the eastern Canadian Arctic and relate them to an historical sea ice data set while accounting for spatial and temporal autocorrelation in the data. We find evidence for "choke points," where sea ice inhibits killer whale movement, thereby creating restrictions to their Arctic distribution. We hypothesize that a threshold exists in seasonal sea ice concentration within these choke points that results in pulses in advancements in distribution of an ice-avoiding predator. Hudson Strait appears to have been a significant sea ice choke point that opened up approximately 50 years ago allowing for an initial punctuated appearance of killer whales followed by a gradual advancing distribution within the entire Hudson Bay region. Killer whale sightings have increased exponentially and are now reported in the Hudson Bay region every summer. We predict that other choke points will soon open up with continued sea ice melt producing punctuated predator-prey trophic cascades across the Arctic.
A Hadoop-Based Algorithm for Generating DEM Grids from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, and from it a Digital Elevation Model (DEM) of high quality can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms, so as to separate terrain points from disorganized points, followed by a procedure of interpolating the selected points to turn them into DEM data. Because of the high data density, the whole procedure takes a long time and huge computing resources, which is the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the Hadoop implementation on multiple nodes achieves a higher performance-cost ratio when the point set is of vast quantity.
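The Map/Reduce decomposition this approach relies on can be sketched in a few lines: the map phase keys each ground point by its DEM cell, the shuffle groups points per cell, and the reduce phase interpolates one elevation per cell. The plain-Python simulation below of those two phases is illustrative only; the inverse-distance-weighted mean is a stand-in for whatever interpolator the paper uses, and all names are hypothetical. A real Hadoop job would express the same two functions as mappers and reducers over HDFS.

```python
from collections import defaultdict
import numpy as np

def map_phase(points, cell):
    """Map: emit (grid-cell key, point) for each LiDAR ground point."""
    for x, y, z in points:
        yield (int(x // cell), int(y // cell)), (x, y, z)

def reduce_phase(grouped, cell):
    """Reduce: one elevation per cell via an inverse-distance-weighted
    mean towards the cell centre."""
    dem = {}
    for (i, j), pts in grouped.items():
        cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
        w = np.array([1.0 / (np.hypot(x - cx, y - cy) + 1e-6)
                      for x, y, _ in pts])
        z = np.array([z for _, _, z in pts])
        dem[(i, j)] = float(np.sum(w * z) / np.sum(w))
    return dem

def dem_from_points(points, cell=1.0):
    grouped = defaultdict(list)
    for key, val in map_phase(points, cell):   # shuffle/sort stage in Hadoop
        grouped[key].append(val)
    return reduce_phase(grouped, cell)
```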
Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L
2010-11-01
We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the context of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the points of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the locations of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not differ between recursive models. Nevertheless, these heritabilities were greater than those obtained for the SMM (0.05), with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.
A Direction Finding Method with A 3-D Array Based on Aperture Synthesis
NASA Astrophysics Data System (ADS)
Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng
2018-01-01
Direction finding for electronic warfare applications should provide as wide a field of view as possible, but the maximum unambiguous field of view of conventional direction finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals coming from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. In order to display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method. The results show that the method can correctly estimate an arbitrary direction of arrival of signals in 3-D space.
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
NASA Astrophysics Data System (ADS)
Zhang, Hongbo; Ao, Tianqi; Gusyev, Maksym; Ishidaira, Hiroshi; Magome, Jun; Takeuchi, Kuniyoshi
2018-06-01
Nitrogen and phosphorus concentrations in Chinese river catchments are contributed by agricultural non-point and industrial point sources, causing deterioration of river water quality and degradation of ecosystem functioning for a long distance downstream. To evaluate these impacts, a distributed pollutant transport module was developed on the basis of BTOPMC (Block-wise use of TOPMODEL with the Muskingum-Cunge method), a grid-based distributed hydrological model, using the water flow routing process of BTOPMC as the carrier of pollutant transport due to direct runoff. The pollutant flux at each grid cell is simulated based on the mass balance of pollutants within the cell, and surface water transport of these pollutants occurs between cells in the direction of the water flow at daily time steps. The model was tested in the Lu County study area situated in the Laixi River basin in the Sichuan province of southwest China. The simulated concentrations of nitrogen and phosphorus are compared with the available monthly data at several water quality stations. The results show greater pollutant concentrations at the beginning of the high-flow period, indicating the main mechanism of pollutant transport. From these preliminary results, we suggest that the distributed pollutant transport model can reflect the characteristics of pollutant transport and reach the expected target.
Monte Carlo simulation for light propagation in 3D tooth model
NASA Astrophysics Data System (ADS)
Fu, Yongji; Jacques, Steven L.
2011-03-01
Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy (PDT) and other laser therapies. The goal of this research is to estimate the light energy deposition in the target region of the tooth given the light source information, tooth optical properties and tooth structure. Two use cases are presented to demonstrate the practical application of this model: one compares the dosage distributions of an isotropic point source and a narrow beam, and the other compares different incident points for the same light source. This model will help clinicians design PDT treatments for the tooth.
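The core of any such MC light transport code is a random walk in which free path lengths are sampled from the exponential attenuation law and photon weight is deposited at each interaction. A minimal homogeneous-medium sketch, assuming isotropic scattering and illustrative absorption/scattering coefficients; a real tooth model would add the 3D geometry, refractive-index boundaries and an anisotropic phase function.

```python
import numpy as np

def mc_photons(n, mu_a=0.1, mu_s=10.0, rng=None):
    """Trace n photon packets in an infinite homogeneous medium.
    mu_a, mu_s: absorption and scattering coefficients [1/mm].
    Returns a list of (position, absorbed weight) deposition events."""
    rng = np.random.default_rng(rng)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    deposits = []
    for _ in range(n):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])   # launch along +z
        weight = 1.0
        while weight > 1e-4:                     # terminate small packets
            step = -np.log(rng.uniform()) / mu_t   # exponential free path
            pos = pos + step * direction
            absorbed = weight * (1.0 - albedo)     # deposit absorbed fraction
            deposits.append((pos.copy(), absorbed))
            weight -= absorbed
            v = rng.normal(size=3)                 # isotropic new direction
            direction = v / np.linalg.norm(v)
    return deposits
```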
Wang, Shuang; Jiang, Xiaoqian; Wu, Yuan; Cui, Lijuan; Cheng, Samuel; Ohno-Machado, Lucila
2013-06-01
We developed an EXpectation Propagation LOgistic REgRession (EXPLORER) model for distributed privacy-preserving online learning. The proposed framework provides a high-level guarantee for protecting sensitive information, since the information exchanged between the server and the client is the encrypted posterior distribution of the coefficients. Through experimental results, EXPLORER shows the same performance (e.g., discrimination, calibration, feature selection) as the traditional frequentist logistic regression model, but provides more flexibility in model updating. That is, EXPLORER can be updated one point at a time rather than having to retrain on the entire data set when new observations are recorded. The proposed EXPLORER supports asynchronous communication, which relieves the participants from coordinating with one another, and prevents service breakdown from the absence of participants or interrupted communications. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
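The speed-up in such point-cloud-gridding approaches comes from replacing per-point spherical-wave sums with one FFT-based propagation per depth layer. Below is a rough sketch of that gridding idea, assuming an angular-spectrum transfer function and illustrative sampling parameters; the paper's exact diffraction kernel and grid assignment may differ.

```python
import numpy as np

def cgh_by_depth_layers(points, amplitudes, n=512, pitch=8e-6,
                        wavelength=532e-9, n_layers=16):
    """Bin scene points (x, y, z) into depth layers, rasterize each layer,
    and propagate it to the hologram plane with one FFT per layer."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    layer_of = np.clip(np.digitize(z, edges) - 1, 0, n_layers - 1)
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    k = 2.0 * np.pi / wavelength
    # angular-spectrum kz; evanescent components clipped to zero
    kz = k * np.sqrt(np.maximum(1.0 - (wavelength * FX) ** 2
                                    - (wavelength * FY) ** 2, 0.0))
    hologram = np.zeros((n, n), dtype=complex)
    for layer in range(n_layers):
        sel = layer_of == layer
        if not np.any(sel):
            continue
        field = np.zeros((n, n), dtype=complex)
        ix = np.clip((x[sel] / pitch + n // 2).astype(int), 0, n - 1)
        iy = np.clip((y[sel] / pitch + n // 2).astype(int), 0, n - 1)
        np.add.at(field, (iy, ix), amplitudes[sel])    # rasterize the layer
        zc = 0.5 * (edges[layer] + edges[layer + 1])   # layer centre depth
        hologram += np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * zc))
    return hologram
```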
Comparison of two paradigms for distributed shared memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.
1990-08-01
The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe these two paradigms and their implementations. Then they compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.
An Improved Inventory Control Model for the Brazilian Navy Supply System
2001-12-01
Known in Portuguese as the Centro de Controle de Inventario da Marinha, the Brazilian Navy Inventory Control Point (ICP) developed an empirical model called SPAADA... (Naval Postgraduate School thesis, Monterey, California; approved for public release, distribution is unlimited; author: Moreira.)
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron, as a discrete volume element, can also be used to model the density distribution inside 3D geological structures. The calculation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs twice the runtime of the rectangular prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field at the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes the "less" can be equivalent to the "more" in a statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on the accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented. The efficiency of the static approaches may provide more than a 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.
Gravity fields of the solar system
NASA Technical Reports Server (NTRS)
Zendell, A.; Brown, R. D.; Vincent, S.
1975-01-01
The most frequently used formulations of the gravitational field are discussed and a standard set of models for the gravity fields of the earth, moon, sun, and other massive bodies in the solar system are defined. The formulas are presented in standard forms, some with instructions for conversion. A point-source or inverse-square model, which represents the external potential of a spherically symmetrical mass distribution by a mathematical point mass without physical dimensions, is considered. An oblate spheroid model is presented, accompanied by an introduction to zonal harmonics. This spheroid model is generalized and forms the basis for a number of the spherical harmonic models which were developed for the earth and moon. The triaxial ellipsoid model is also presented. These models and their application to space missions are discussed.
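The first two models mentioned, the inverse-square point source and the oblate spheroid with its leading zonal harmonic, can be written down directly. A minimal sketch with standard Earth constants (values quoted to typical textbook precision; the sign convention for the potential is an assumption):

```python
import numpy as np

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0              # equatorial radius, m
J2 = 1.08263e-3             # Earth's dominant zonal harmonic

def potential(r_vec, gm=GM_EARTH, re=RE, j2=J2):
    """Point-source potential plus the leading zonal (J2, oblateness) term
    of the spherical-harmonic expansion, at geocentric position r_vec."""
    r = np.linalg.norm(r_vec)
    sin_phi = r_vec[2] / r                      # sine of geocentric latitude
    u_point = -gm / r                           # inverse-square (point) model
    u_j2 = (gm / r) * j2 * (re / r) ** 2 * 0.5 * (3.0 * sin_phi ** 2 - 1.0)
    return u_point + u_j2
```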
The noisy voter model on complex networks.
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2016-04-20
We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
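For readers who want to reproduce the qualitative behaviour, a minimal simulation of the noisy voter model itself (not the authors' analytical method) is easy to write: with probability a the chosen node randomizes its state, otherwise it copies a random neighbour. Names and parameter values are illustrative.

```python
import numpy as np

def noisy_voter(adjacency, a=0.01, steps=100_000, rng=None):
    """Simulate the noisy voter model on a network given as a list of
    neighbour lists. Returns the magnetization (mean state) trajectory."""
    rng = np.random.default_rng(rng)
    n = len(adjacency)
    state = rng.integers(0, 2, size=n)           # random initial 0/1 states
    magnetization = []
    for _ in range(steps):
        i = rng.integers(n)                      # pick a node at random
        if rng.uniform() < a:
            state[i] = rng.integers(0, 2)        # noise: random new state
        elif adjacency[i]:
            j = adjacency[i][rng.integers(len(adjacency[i]))]
            state[i] = state[j]                  # imitation: voter update
        magnetization.append(state.mean())
    return np.array(magnetization)
```

For instance, `adjacency = [[(i - 1) % 100, (i + 1) % 100] for i in range(100)]` runs the model on a ring of 100 nodes; heterogeneous degree sequences can be plugged in to probe the dependence on degree variance discussed above.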
Perturbed-input-data ensemble modeling of magnetospheric dynamics
NASA Astrophysics Data System (ADS)
Morley, S.; Steinberg, J. T.; Haiducek, J. D.; Welling, D. T.; Hassan, E.; Weaver, B. P.
2017-12-01
Many models of Earth's magnetospheric dynamics, including global magnetohydrodynamic models, reduced-complexity models of substorms, and empirical models, are driven by solar wind parameters. To provide consistent coverage of the upstream solar wind, these measurements are generally taken near the first Lagrangian point (L1) and algorithmically propagated to the nose of Earth's bow shock. However, the plasma and magnetic field measured near L1 constitute a point measurement of an inhomogeneous medium, so an individual measurement may not be sufficiently representative of the broader region near L1. The measured plasma may not actually interact with the Earth, and the solar wind structure may evolve between L1 and the bow shock. To quantify uncertainties in simulations, as well as to provide probabilistic forecasts, it is desirable to use perturbed-input ensembles of magnetospheric and space weather forecasting models. By using concurrent measurements of the solar wind near L1 and near the Earth, we construct a statistical model of the distributions of solar wind parameters conditioned on their upstream values. So that we can draw random variates from our model, we specify the conditional probability distributions using Kernel Density Estimation. We demonstrate the utility of this approach using ensemble runs of selected models that can be used for space weather prediction.
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot point method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
A user-friendly modified pore-solid fractal model
Ding, Dian-yuan; Zhao, Ying; Feng, Hao; Si, Bing-cheng; Hill, Robert Lee
2016-01-01
The primary objective of this study was to evaluate a range of calculation points on water retention curves (WRC), instead of the singularity point at air-entry suction, in the pore-solid fractal (PSF) model, which additionally considered the hysteresis effect based on PSF theory. The modified pore-solid fractal (M-PSF) model was tested using 26 soil samples from Yangling on the Loess Plateau in China and 54 soil samples from the Unsaturated Soil Hydraulic Database. The derivation results showed that the M-PSF model is user-friendly and flexible for a wide range of calculation point options. The model theoretically describes the primary differences between the soil moisture desorption and adsorption processes by means of the fractal dimensions. The M-PSF model demonstrated good performance, particularly at calculation points corresponding to suctions from 100 cm to 1000 cm. Furthermore, the M-PSF model, using the fractal dimension of the particle size distribution, exhibited acceptable performance in WRC predictions for differently textured soils when the suction values were ≥100 cm. To fully understand the function of hysteresis in PSF theory, the role of allowable and accessible pores must be examined. PMID:27996013
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
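A toy version of the adaptive step-length idea, not the paper's algorithm: a one-dimensional derivative-free loop that builds a crude finite-difference model from noisy evaluations, steps within the trust radius, and grows or shrinks the radius according to how well the model's predicted decrease matches the realized one. All thresholds and names are illustrative.

```python
import numpy as np

def noisy_trust_region(f_noisy, x0, delta=1.0, iters=200):
    """Minimize a noisy scalar function of one variable with an adaptive
    trust radius: grow the radius on good model/function agreement,
    shrink it on poor agreement."""
    x = float(x0)
    fx = f_noisy(x)
    for _ in range(iters):
        h = max(1e-3, 0.1 * delta)
        g = (f_noisy(x + h) - f_noisy(x - h)) / (2.0 * h)  # model gradient
        step = -np.clip(g, -delta, delta)                  # stay in region
        x_new, f_new = x + step, f_noisy(x + step)
        predicted = -g * step                 # model's predicted decrease
        actual = fx - f_new                   # realized decrease
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.75:
            x, fx, delta = x_new, f_new, 2.0 * delta  # agree well: grow
        elif rho > 0.1:
            x, fx = x_new, f_new                      # acceptable: keep radius
        else:
            delta *= 0.5                              # poor agreement: shrink
    return x
```

For example, `noisy_trust_region(lambda x: (x - 3.0) ** 2 + 0.01 * np.random.randn(), x0=0.0)` drifts toward the minimum at 3 despite the noise.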
Modeling and comparative study of fluid velocities in heterogeneous rocks
NASA Astrophysics Data System (ADS)
Hingerl, Ferdinand F.; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally
2013-04-01
Detailed knowledge of the distribution of effective porosity and fluid velocities in heterogeneous rock samples is crucial for understanding and predicting spatially resolved fluid residence times and kinetic reaction rates of fluid-rock interactions. The applicability of conventional MRI techniques to sedimentary rocks is limited by internal magnetic field gradients and short spin relaxation times. The approach developed at the UNB MRI Centre combines the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme and three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE). These methods were designed to reduce the errors due to background gradients and fast transverse relaxation; SPRITE is largely immune to time-evolution effects resulting from background gradients, paramagnetic impurities and chemical shift. Using these techniques, we measured quantitative 3D porosity maps and single-phase fluid velocity fields in sandstone core samples. We then evaluated the applicability of the Kozeny-Carman relationship for modeling the measured fluid velocity distributions in sandstone samples showing meso-scale heterogeneities, using two different modeling approaches with the MRI maps as reference points. For the first approach, we applied the Kozeny-Carman relationship to the porosity distributions and computed the respective permeability maps, which in turn provided input for a CFD simulation (using the Stanford CFD code GPRS) to compute averaged velocity maps; these were then compared to the measured velocity maps. For the second approach, the measured velocity distributions were used as input for inversely computing permeabilities using the GPRS CFD code. The computed permeabilities were then correlated with those based on the porosity maps and the Kozeny-Carman relationship. The findings of the comparative modeling study are discussed, and its potential impact on the modeling of fluid residence times and kinetic reaction rates of fluid-rock interactions in rocks containing meso-scale heterogeneities is reviewed.
Snowmelt Runoff Model in Japan
NASA Technical Reports Server (NTRS)
Ishihara, K.; Nishimura, Y.; Takeda, K.
1985-01-01
The preliminary Japanese snowmelt runoff model was modified so that all the input variables are those of the antecedent days and the inflow of the previous day is taken into account. A few LANDSAT images obtained in the past were effectively used to verify and modify the depletion curve derived from the snow water equivalent distribution at maximum stage and the accumulated degree days at one representative point selected in the basin. Together with the depletion curve, the relationship between the basin-wide daily snowmelt amount and the air temperature at the point above is exhibited in nomograph form for the convenience of the model user. The runoff forecasting procedure is summarized.
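The degree-day bookkeeping underlying such snowmelt models is compact enough to sketch: daily melt is proportional to the air temperature excess over a base temperature, capped by the available snow water equivalent, while precipitation on cold days accumulates as snow. The factor values and function name below are illustrative assumptions, not the model's calibrated values.

```python
def snowmelt_runoff(temps, precip, ddf=4.0, t_base=0.0):
    """Degree-day snowmelt: melt = ddf * max(T - t_base, 0), limited by the
    snowpack. ddf is a degree-day factor in mm / (degC * day); temps in degC
    and precip in mm are daily series."""
    swe = 0.0                  # snow water equivalent of the pack, mm
    melt_series = []
    for t, p in zip(temps, precip):
        if t <= t_base:
            swe += p           # cold day: precipitation accumulates as snow
            melt = 0.0
        else:
            melt = min(ddf * (t - t_base), swe)   # melt capped by the pack
            swe -= melt
        melt_series.append(melt)
    return melt_series
```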
Comprehensive overview of the Point-by-Point model of prompt emission in fission
NASA Astrophysics Data System (ADS)
Tudora, A.; Hambsch, F.-J.
2017-08-01
The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n, f) recently measured at JRC-Geel (as well as various other prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of the IAEA. To provide average prompt emission quantities as a function of A, as a function of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability in predicting prompt emission data for fissioning nuclei and incident energies for which experimental information is completely missing. The PbP treatment can also provide input parameters for the improved Los Alamos model with non-equal residual temperature distributions recently reported by Madland and Kahler, especially for fissioning nuclei without any experimental information concerning prompt emission.
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
16 CFR 305.19 - Promotional material displayed or distributed at point of sale.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 305.19 Promotional material displayed or distributed at point of sale. (a)(1) Any manufacturer, distributor, retailer or private labeler who prepares printed material for display or distribution at point of sale concerning a covered product (except...
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through the model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10^-5° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results can help uncover the error distribution and aid in the measurement calibration of Risley-prism systems.
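To illustrate how a rotational error propagates to pointing, here is a first-order (thin-prism) sketch rather than the paper's full refraction-matrix model: each prism contributes a deviation of (n - 1)α directed along its rotation angle, and perturbing the rotation angles maps their effect on the summed pointing vector. The prism parameters and function name are illustrative assumptions.

```python
import numpy as np

def risley_pointing(theta1, theta2, alpha=np.deg2rad(1.0), n=1.5,
                    d_theta1=0.0, d_theta2=0.0):
    """First-order Risley pointing: each thin prism deviates the beam by
    delta = (n - 1) * alpha towards its rotation angle; the two deviations
    add vectorially. d_theta1/2 inject prism rotational errors."""
    delta = (n - 1.0) * alpha
    t1, t2 = theta1 + d_theta1, theta2 + d_theta2
    px = delta * (np.cos(t1) + np.cos(t2))
    py = delta * (np.sin(t1) + np.sin(t2))
    return px, py   # small-angle pointing components, rad

# pointing error induced by a 0.1 deg rotational error on prism 1:
nominal = np.array(risley_pointing(0.0, np.pi / 2))
perturbed = np.array(risley_pointing(0.0, np.pi / 2,
                                     d_theta1=np.deg2rad(0.1)))
pointing_error = np.linalg.norm(perturbed - nominal)
```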
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro
2014-08-01
The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of the attenuation and boundary layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of a viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
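At its core, DPSM superposes spherical waves radiated by point sources distributed over the transducer and boundary faces; in the viscous extension the attenuation enters through a complex wavenumber. A bare-bones sketch of that superposition (solving the boundary conditions for the source strengths is omitted; names are illustrative):

```python
import numpy as np

def dpsm_pressure(field_pts, src_pts, src_strengths, k):
    """Superpose spherical waves exp(i k r) / r from distributed point
    sources. A complex k (positive imaginary part) makes the waves decay
    with distance, mimicking viscous attenuation.

    field_pts: (M, 3) observation points; src_pts: (N, 3) source points;
    src_strengths: N complex amplitudes; k: (complex) wavenumber."""
    field_pts = np.asarray(field_pts, dtype=float)
    p = np.zeros(len(field_pts), dtype=complex)
    for src, a in zip(np.asarray(src_pts, dtype=float), src_strengths):
        r = np.linalg.norm(field_pts - src, axis=1)
        p += a * np.exp(1j * k * r) / r
    return p
```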
Mapping local and global variability in plant trait distributions
Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...
2017-12-01
Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA) and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolating to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.
2012-03-05
subsonic corona below the critical point, resulting in an increased scale height and mass flux, while keeping the kinetic energy of the flow fairly… tubes with small expansion factors the heating occurs in the supersonic corona, where the energy… goes into the kinetic energy of the solar wind, increasing the flow speed [Leer and Holzer, 1980; Pneuman, 1980]. Using this model and a simplified…
NASA Astrophysics Data System (ADS)
Werner, Micha; Westerhoff, Rogier; Moore, Catherine
2017-04-01
Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will not only be sensitive to input parameters such as soil type, texture, land use and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model forced with rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN), which is considered the reference dataset; the ERA-Interim/WATCH dataset at 0.25 degree and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model constructed using the same base data and forced with the VCSN precipitation dataset. Results of the comparison of the rainfall products show that there are significant differences in precipitation volume between the forcing products, on the order of 20% at most points. Even more significant differences can be seen, however, in the distribution of precipitation. For the VCSN data, wet days (defined as >0.1 mm precipitation) occur on some 20-30% of days (depending on location). This is reasonably reflected in the TRMM and CHIRPS data, while for the re-analysis based products some 60% to 80% of days are wet, albeit at lower intensities. These differences are amplified in the recharge estimates. At most points, volumetric differences are on the order of 40-60%, though differences may range over several orders of magnitude. The frequency distributions of recharge also differ significantly, with recharge over 0.1 mm occurring on 4-6% of days for the VCSN, CHIRPS, and TRMM datasets, but up to the order of 12% of days for the re-analysis data. Comparison against the lysimeter data shows estimates to be reasonable, in particular for the reference datasets. Surprisingly, some estimates from the lower resolution re-analysis datasets are reasonable, though this does seem to be due to lower recharge being compensated by recharge occurring more frequently.
These results underline the importance of correctly representing rainfall volumes, as well as their distribution, particularly when evaluating possible changes in, for example, precipitation intensity and volume. This holds for precipitation data derived from satellite-based and re-analysis products, but also for interpolated data from gauges, where the distribution of intensities is strongly influenced by the interpolation process.
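As a rough illustration of this sensitivity (not taken from the paper), the toy single-store water balance below partitions daily rainfall with an intensity threshold. The two synthetic forcings have equal expected volume but different wet-day frequencies and so yield different recharge totals; all parameter names and values (s_max, intensity_thresh, the 25% versus 70% wet-day fractions) are hypothetical.

```python
import numpy as np

def daily_recharge(precip, pet, s_max=100.0, intensity_thresh=20.0):
    """Toy single-store daily water balance (all values in mm).

    s_max and intensity_thresh are hypothetical parameters: storage
    capacity, and the daily intensity above which excess runs off.
    """
    s = 0.5 * s_max                              # initial soil moisture
    recharge = np.zeros_like(precip)
    for t in range(len(precip)):
        s += min(precip[t], intensity_thresh)    # thresholded infiltration
        s -= min(pet[t], s)                      # evaporation limited by storage
        if s > s_max:                            # overflow percolates downward
            recharge[t] = s - s_max
            s = s_max
    return recharge

rng = np.random.default_rng(1)
# Two forcings with (in expectation) equal volume but different intensity:
wet1 = rng.random(365) < 0.25                    # few, intense wet days
p1 = np.where(wet1, rng.random(365) * 8.0 / 0.25, 0.0)
wet2 = rng.random(365) < 0.70                    # many, weak wet days
p2 = np.where(wet2, rng.random(365) * 8.0 / 0.70, 0.0)
pet = np.full(365, 2.0)
print(daily_recharge(p1, pet).sum(), daily_recharge(p2, pet).sum())
```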
Hu, L; Zhao, Z; Song, J; Fan, Y; Jiang, W; Chen, J
2001-02-01
The distribution of stress on the surface of condylar cartilage was investigated. A three-dimensional model of the 'temporomandibular joint-mandible-Herbst appliance system' was set up with SUPER SAP software (version 9.3). On this model, various bite reconstructions were simulated according to specified advancement displacements and vertical bite openings. The distributions of maximum and minimum principal stress on the surface of condylar cartilage were computed and analyzed. When the Herbst appliance drove the mandible forward, the anterior condyle surface was compressed while the posterior surface was under tension. The trend of stress at a given point on the condyle surface was consistent across the various reconstruction conditions, but the trends of stress at different points differed under the same reconstruction conditions. All five groups of bite reconstruction (3-7 mm advancement, 4-2 mm vertical bite opening of the mandible) designed in this study can be selected in the clinic according to the patient's capability of adaptation, the extent of malocclusion, and the potential and direction of growth.
Idealized models of the joint probability distribution of wind speeds
NASA Astrophysics Data System (ADS)
Monahan, Adam H.
2018-05-01
The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
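A minimal Monte Carlo sketch of the second construction, assuming nothing beyond what the abstract states: correlated, isotropic, mean-zero Gaussian components at two sites yield bivariate Rayleigh speeds, and a power-law transform gives Weibull marginals. The sample size, correlation and shape values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, b = 100_000, 0.6, 1.8   # sample size, component correlation, Weibull shape

# Correlated, isotropic, mean-zero Gaussian velocity components at two points;
# each component is independently correlated across the two locations.
L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
u = L @ rng.standard_normal((2, n))    # zonal components at sites 1, 2
v = L @ rng.standard_normal((2, n))    # meridional components

speed = np.hypot(u, v)                 # Rayleigh-distributed speeds
w = speed ** (2.0 / b)                 # power transform -> Weibull(shape b) marginals

# The speed correlation is weaker than the component correlation r:
print(np.corrcoef(w[0], w[1])[0, 1])
```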
NASA Astrophysics Data System (ADS)
Ackerman, T. R.; Pizzuto, J. E.
2016-12-01
Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank), representing the youngest 50% of eroded sediment. Then, until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment, the distributions are well fit by heavy-tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light-tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
ERIC Educational Resources Information Center
Roberts, James S.; Donoghue, John R.; Laughlin, James E.
The generalized graded unfolding model (J. Roberts, J. Donoghue, and J. Laughlin, 1998, 1999) is an item response theory model designed to unfold polytomous responses. The model is based on a proximity relation that postulates higher levels of expected agreement with a given statement to the extent that a respondent is located close to the…
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
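For context, the variance dependence can be reproduced with the Pollaczek-Khinchine formula for two parallel M/G/1 queues under Bernoulli routing. This is a generic illustration, not the paper's exact model; the rates and service moments below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Poisson arrivals Bernoulli-routed to two M/G/1 queues; all numbers invented.
lam = 1.6                      # total arrival rate
m = np.array([1.0, 1.0])       # mean service times
s2 = np.array([1.0, 5.0])      # E[S^2]: queue 2 has much higher service variance

def mean_delay(p):
    probs = np.array([p, 1.0 - p])
    lam_i = lam * probs
    rho = lam_i * m
    # Pollaczek-Khinchine waiting time W_i = lam_i E[S_i^2] / (2 (1 - rho_i))
    T = lam_i * s2 / (2.0 * (1.0 - rho)) + m
    return probs @ T           # delay averaged over routed customers

# Search within the region where both queues are stable (rho_i < 1):
res = minimize_scalar(mean_delay, bounds=(0.38, 0.62), method="bounded")
print(f"optimal split p* = {res.x:.3f}, mean delay = {res.fun:.3f}")
# Raising lam toward capacity (2.0) drives p* to 0.5 regardless of s2,
# illustrating the variance-invariant point described in the abstract.
```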
Yamaoka, Kiyoshi; Takakura, Yoshinobu
2004-12-01
An attempt has been made to review the nonlinearities in the disposition in vitro, in situ, in loci and in vivo mainly from a theoretical point of view. Parallel Michaelis-Menten and linear (first-order) eliminations are often observed in the cellular uptake, metabolism and efflux of drugs. The well-stirred and parallel-tube models are mainly adopted under steady-state conditions in perfusion experiments, whereas distribution, tank-in-series and dispersion models are often used under nonsteady-state conditions with a pulse input. The analysis of the nonlinear local disposition in loci is reviewed from two points of view, namely an indirect method involving physiologically based pharmacokinetics (PBPK) and a direct (two or three samplings) method using live animals. The nonlinear global pharmacokinetics in vivo is reviewed with regard to absorption, elimination (metabolism and excretion) and distribution.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
Bilinear effect in complex systems
NASA Astrophysics Data System (ADS)
Lam, Lui; Bellavia, David C.; Han, Xiao-Pu; Alston Liu, Chih-Hui; Shu, Chang-Qing; Wei, Zhengjin; Zhou, Tao; Zhu, Jichen
2010-09-01
The distribution of the lifetimes of Chinese dynasties (as well as those of the British Isles and Japan) in a linear Zipf plot is found to consist of two straight lines intersecting at a transition point. This two-section piecewise-linear distribution is different from the power law or the stretched exponential distribution, and is called the Bilinear Effect for short. With assumptions mimicking the organization of ancient Chinese regimes, a 3-layer network model is constructed. Numerical results of this model show the bilinear effect, providing a plausible explanation of the historical data. The bilinear effect in two other social systems is presented, indicating that such a piecewise-linear effect is widespread in social systems.
NASA Astrophysics Data System (ADS)
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2017-03-01
This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI), considered as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD), a time-frequency analysis method, is used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave time of arrival, its mean frequency and local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.
Classification framework for partially observed dynamical systems
NASA Astrophysics Data System (ADS)
Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira
2017-04-01
We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Xiang, Jingen
X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray 'halo'. The halo properties are significantly affected by the energy of the radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Analyzing X-ray halo properties is therefore an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and that from the MRN model, but also between N_{H} derived from spectral fits and that derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N_{H,WD01} = (0.720±0.009) × N_{H,abs} + (0.051±0.013) and N_{H,MRN} = (1.156±0.016) × N_{H,abs} + (0.062±0.024) in units of 10^{22} cm^{-2}. The correlation between FHI and N_{H} is then obtained. Both the WD01 and MRN model fits show that the scattering dust density very close to these sources is much higher than in the normal interstellar medium, which we consider evidence of molecular clouds around these X-ray binaries. We also find a linear correlation between the effective distance through the galactic dust layer and the hydrogen scattering column density N_{H} excluding the component in x=0.99-1.0, but no such correlation between the effective distance and N_{H} in x=0.99-1.0. This shows that the dust near the X-ray sources is not dust from the galactic disk. We then estimate the structure and density of the stellar wind around the X-ray pulsars Vela X-1 and GX 301-2. Finally, we discuss the possibilities of probing the three-dimensional structure of the interstellar medium using the X-ray halos of transient sources, probing the spatial distributions of interstellar dust near point sources, and even the structure of stellar winds using higher angular resolution X-ray dust scattering halos, and of testing the model that a black hole can form from the direct collapse of a massive star without a supernova using the statistical distribution of dust density near X-ray binaries.
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models inspired by QCD, whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions, in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
NASA Astrophysics Data System (ADS)
Massip, Florian; Arndt, Peter F.
2013-04-01
Recently, an enrichment of identical matching sequences has been found in many eukaryotic genomes. Their length distribution exhibits a power law tail raising the question of what evolutionary mechanism or functional constraints would be able to shape this distribution. Here we introduce a simple and evolutionarily neutral model, which involves only point mutations and segmental duplications, and produces the same statistical features as observed for genomic data. Further, we extend a mathematical model for random stick breaking to analytically show that the exponent of the power law tail is -3 and universal as it does not depend on the microscopic details of the model.
NASA Astrophysics Data System (ADS)
Famiglietti, C.; Fisher, J.; Halverson, G. H.
2017-12-01
This study validates a method of remote sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions that are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of the MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. The extension of mean climate-wise percent errors to the entire remote sensing dataset allows for the determination of MODIS air temperature and dew point uncertainties on a global scale.
Dudov, S V
2016-01-01
On the basis of the maximum entropy method implemented in the MaxEnt software, cartographic models of the spatial distribution of 63 species of vascular plants inhabiting the low mountain belt of the Tukuringra Range were designed. The initial data for modeling were recorded points of species occurrence, remote sensing data (Landsat multispectral satellite imagery), and a digital terrain model. It was found that the structure of the factors contributing to a model is related to the ecological amplitude of the species. The distribution of stenotopic species is determined mainly by the topography, with which the thermal and moisture conditions of habitats are associated. Variables derived from remote sensing, which capture parameters of the soil and vegetation cover, contribute significantly to the models for eurytopic species. Analysis of the obtained models revealed three principal groups of species with similar distribution patterns. Species of the first group are restricted in their distribution to the slopes of the Zeya and Giluy river gorges. Species of the second group are associated with the southern macroslope of the range and with the southern slopes of large river valleys. The third group comprises species distributed over the whole territory under study.
Sensor Data Distribution With Robustness and Reliability: Toward Distributed Components Model
NASA Technical Reports Server (NTRS)
Alena, Richard L.; Lee, Charles
2005-01-01
In planetary surface exploration missions, sensor data distribution is required in many aspects, for example, in navigation, scheduling, planning, monitoring, diagnostics, and automation of field tasks. The challenge is to distribute such data in a robust and reliable way, so as to minimize the errors caused by miscalculations and misjudgments based on erroneous data input during the mission. The ad-hoc wireless network on a planetary surface is not constantly connected because of the rough terrain and the lack of permanent installations on the surface. There are disconnected intervals during which the computation nodes re-associate with different repeaters or access points until connections are re-established. This nature requires the sensor data distribution software to be robust and reliable, with the ability to tolerate disconnected intervals. This paper presents a distributed components model as a framework to accomplish such tasks. The software is written in Java and utilizes the available Java Message Service schema and the Boss implementation. The results of field experiments show that the model is very effective in completing the tasks.
Joint surface modeling with thin-plate splines.
Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D
1999-10-01
Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, and joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited by the complex organizational aspects of working with surface patches and by modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella, and from 100 to 300 microns on the femur. The TPS was found to be an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
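A minimal sketch of TPS surface fitting on scattered points, using SciPy's RBFInterpolator rather than the authors' own implementation; the synthetic analytical test surface, noise level and smoothing value are arbitrary.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered "experimental" surface points: synthetic analytical surface + noise.
rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, size=(500, 2))
z = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1]) + 0.01 * rng.standard_normal(500)

# Thin-plate spline fit; smoothing > 0 trades pure interpolation for smoothing,
# tempering the curvature artifacts at the data points noted in the abstract.
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

# Evaluate the single global surface function on a grid, e.g. for curvature
# or cartilage-thickness maps; no surface patches are involved.
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = tps(grid).reshape(gx.shape)
print(surface.shape)
```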
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting, with explicit formulas for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
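The "remarkably simple Monte Carlo realization" can be sketched as a branching process in a few lines. The fission probability and multiplicity distribution below are illustrative placeholders, not the paper's calibrated values, and the time dependence is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy single-chain Monte Carlo: each neutron either induces a fission
# (producing nu new neutrons) or leaks. Parameters are invented and chosen
# subcritical so chains terminate (p_fission * E[nu] < 1).
p_fission = 0.3
nu_pmf = {0: 0.03, 1: 0.16, 2: 0.33, 3: 0.30, 4: 0.14, 5: 0.04}
nu_vals = np.array(list(nu_pmf))
nu_p = np.array(list(nu_pmf.values()))

def chain_leakage():
    """Follow one chain started by a single neutron; return the leaked count."""
    neutrons, leaked = 1, 0
    while neutrons:
        neutrons -= 1
        if rng.random() < p_fission:
            neutrons += rng.choice(nu_vals, p=nu_p)   # fission multiplicity
        else:
            leaked += 1                               # leakage / detection
    return leaked

counts = np.array([chain_leakage() for _ in range(20_000)])
# Correlated moments of the leaked-multiplicity distribution:
print("mean:", counts.mean(),
      " second factorial moment:", (counts * (counts - 1)).mean())
```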
Crack problem in superconducting cylinder with exponential distribution of critical-current density
NASA Astrophysics Data System (ADS)
Zhao, Yufeng; Xu, Chi; Shi, Liang
2018-04-01
The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field cooling (ZFC) and field cooling (FC) magnetization processes, in which an inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variation of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that an exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, it is worth pointing out that the nonlinear field distribution is unique to the Bean model, as seen by comparing the curve shapes of the magnetization loops with homogeneous and inhomogeneous critical-current distributions.
Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function
NASA Astrophysics Data System (ADS)
Vijayan, Sarath; Xiong, Zhenyu; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.
2017-03-01
Beam shaping devices like ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSFs) are generated using EGS Monte Carlo software. For this study, PSFs were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and a 20 cm thick PMMA block phantom. The backscatter PSFs for the head phantom and PMMA phantom are fit with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique is developed for determining the backscatter dose at the nonuniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
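A schematic of the backscatter convolution step on a regular grid (the DTS itself applies it at nonuniform surface vertices); the Lorentzian PSFn parameters and the shaped-field geometry are invented for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

# Total skin dose = primary + (primary convolved with normalized backscatter PSF).
# The Lorentzian width and amplitude below are placeholders, not fitted values.
x = np.arange(-50, 51)                       # 1 mm grid
xx, yy = np.meshgrid(x, x)
psf_n = 0.3 / (1 + (xx**2 + yy**2) / 4.0**2)  # Lorentzian-like PSFn (assumed)

primary = np.zeros((101, 101))
primary[30:70, 30:70] = 1.0                  # attenuated field outside the ROI
primary[45:55, 45:55] = 3.0                  # unattenuated region of interest

scatter = fftconvolve(primary, psf_n, mode="same")   # grid cell area = 1 mm^2
total = primary + scatter
print(total.max() / primary.max())           # backscatter boost at the hot spot
```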
Mouly, Damien; Joulin, Eric; Rosin, Christophe; Beaudeau, Pascal; Zeghnoun, Abdelkrim; Olszewski-Ortar, Agnès; Munoz, Jean François; Welté, Bénédicte; Joyeux, Michel; Seux, René; Montiel, Antoine; Rodriguez, M J
2010-10-01
Epidemiological studies have demonstrated that chlorination by-products in drinking water may cause some types of cancer in humans. However, due to differences in methodology between the various studies, it is not possible to establish a dose-response relationship. This shortcoming is due primarily to uncertainties about how exposure is measured (made difficult by the great number of compounds present), the exposure routes involved, and the variation in concentrations in water distribution systems. This is especially true for trihalomethanes, whose concentrations can double between the water treatment plant and the consumer tap. The aim of this study is to describe the behaviour of trihalomethanes in three French water distribution systems and to develop a mathematical model to predict concentrations in the water distribution system using data collected from treated water at the plant (i.e. the entrance of the distribution system). In 2006 and 2007, samples were taken successively from treated water at the plant and at several points in the water distribution system in three French cities. In addition to the concentrations of the four trihalomethanes (chloroform, dichlorobromomethane, chlorodibromomethane, bromoform), many other parameters involved in their formation or affecting their concentration were also measured. The average trihalomethane concentration in the three water distribution systems ranged from 21.6 μg/L to 59.9 μg/L. The increase in trihalomethanes between the treated water at the plant and a given point in the water distribution system varied by a factor of 1.1-5.7 over all of the samples. A log-log linear regression model was constructed to predict THM concentrations in the water distribution system. The five variables used were the trihalomethane concentration and free residual chlorine of treated water at the plant, two variables that characterize the reactivity of organic matter (specific UV absorbance (SUVA), and an indicator δ developed for the free chlorine consumption in the treatment plant before distribution), and the water residence time in the distribution system. French regulations set a limit on trihalomethane levels in drinking water, and most tests are performed on treated water at the plant. Applied in this context, the model developed here helps to better understand trihalomethane exposure in the French population, which is particularly useful for epidemiological studies.
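A sketch of fitting such a five-variable log-log linear model by ordinary least squares; the synthetic data below stand in for the study's measurements, and the coefficient values are arbitrary.

```python
import numpy as np

# Log-log linear THM model:
#   log(THM_system) = b0 + b1 log(THM_plant) + b2 log(Cl2) + b3 log(SUVA)
#                     + b4 log(delta) + b5 log(residence_time)
# Variable names follow the abstract; the data are synthetic placeholders.
rng = np.random.default_rng(5)
n = 200
X = rng.uniform(0.2, 3.0, size=(n, 5))            # fake predictor values
true_b = np.array([0.9, -0.3, 0.4, 0.3, 0.35])    # invented coefficients
y = np.exp(0.5 + np.log(X) @ true_b + 0.1 * rng.standard_normal(n))

A = np.column_stack([np.ones(n), np.log(X)])      # design matrix in log space
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
print("fitted log-log coefficients:", np.round(coef, 3))
```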
External calibration of polarimetric radars using point and distributed targets
NASA Technical Reports Server (NTRS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-01-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions for the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increment distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non-universal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing 'microscopic' (i.e. trader-based) model that explains this observation is however not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.
Uribe-Sánchez, Andrés; Savachkin, Alex
2011-01-01
As recently pointed out by the Institute of Medicine, existing pandemic mitigation models lack dynamic decision support capability. We develop a large-scale simulation-driven optimization model for generating dynamic predictive distribution of vaccines and antivirals over a network of regional pandemic outbreaks. The model incorporates measures of morbidity, mortality, and social distancing, translated into the cost of lost productivity and medical expenses. The performance of the strategy is compared to that of a reactive myopic policy, using a sample outbreak in Florida, USA, with an affected population of over four million. The comparison is implemented at different levels of vaccine and antiviral availability and administration capacity. Sensitivity analysis is performed to assess the impact of variability of some critical factors on policy performance. The model is intended to support public health policy making for effective distribution of limited mitigation resources. PMID:23074658
The bingo model of survivorship: 1. probabilistic aspects.
Murphy, E A; Trojak, J E; Hou, W; Rohde, C A
1981-01-01
A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio is positive for all positive arguments, the distribution is considered positively asymmetrical and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.
NASA Astrophysics Data System (ADS)
Deng, Zhipeng; Lei, Lin; Zhou, Shilin
2015-10-01
Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and more common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the accurate positions of the points and obtain accurate corresponding point sets. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbour algorithm. Based on the accurate corresponding point sets, the two images are then registered using the TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is performed on non-rigidly deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
Landslide susceptibility analysis with logistic regression model based on FCM sampling strategy
NASA Astrophysics Data System (ADS)
Wang, Liang-Jie; Sawada, Kazuhide; Moriguchi, Shuji
2013-08-01
Several mathematical models are used to predict the spatial distribution characteristics of landslides to mitigate damage caused by landslide disasters. Although some studies have achieved excellent results around the world, few take the inter-relationship of the selected points (training points) into account. In this paper, we present the fuzzy c-means (FCM) algorithm as an optimal method for choosing appropriate input landslide points as training data. Based on different combinations of the fuzzy exponent (m) and the number of clusters (c), five groups of sampling points were derived from formal seed cell points and applied to analyze landslide susceptibility in Mizunami City, Gifu Prefecture, Japan. A logistic regression model is applied to model the relationships between landslide-conditioning factors and landslide occurrence. The pre-existing landslide bodies and the area under the relative operating characteristic (ROC) curve were used to evaluate the performance of all the models with different m and c. The results revealed that Model no. 4 (m=1.9, c=4) and Model no. 5 (m=1.9, c=5) have significantly high classification accuracies, i.e., 90.0%. Moreover, over 30% of the landslide bodies were grouped in the very high susceptibility zone. Model no. 4 and Model no. 5 also had higher areas under the ROC curve (AUC), 0.78 and 0.79, respectively. Therefore, Model no. 4 and Model no. 5 offer better results for landslide susceptibility mapping. Maps derived from these models would offer the local authorities crucial information for city planning and development.
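A compact numpy implementation of fuzzy c-means for deriving sampling groups, mirroring, e.g., Model no. 4 (m = 1.9, c = 4); the synthetic 2-D "seed cell" coordinates are placeholders for real landslide point data.

```python
import numpy as np

def fuzzy_c_means(X, c, m=1.9, n_iter=100, seed=0):
    """Minimal fuzzy c-means; returns cluster centers and membership matrix U.

    X: (n, d) data; c: number of clusters; m: fuzzy exponent (as in the paper).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)),
                                               axis=1, keepdims=True))
    return centers, U

X = np.random.default_rng(2).random((300, 2))         # stand-in seed cell points
centers, U = fuzzy_c_means(X, c=4, m=1.9)
labels = U.argmax(axis=1)                             # sampling group per point
print(centers.round(3), np.bincount(labels))
```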
Comparison of results from simple expressions for MOSFET parameter extraction
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Lin, Y.-S.
1988-01-01
In this paper, results are compared from a parameter extraction procedure applied to the linear, saturation, and subthreshold regions for enhancement-mode MOSFETs fabricated in a 3-micron CMOS process. The results indicate that the extracted parameters differ significantly depending on the extraction algorithm and the distribution of I-V data points. It was observed that KP values vary by 30 percent, VT values differ by 50 mV, and Delta L values differ by 1 micron. Thus, for acceptance of wafers from foundries and for modeling purposes, the extraction method and data point distribution must be specified. Measurement and extraction procedures that allow a consistent evaluation of measured parameters are discussed.
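To make the algorithm dependence concrete, here is one common linear-region extraction method (a straight-line fit of ID versus VGS at small VDS); this is a generic textbook approach, not necessarily one of the specific algorithms compared in the paper, and the device values are synthetic.

```python
import numpy as np

# Linear region at small VDS:
#   ID ~= KP * (W/L) * ((VGS - VT) * VDS - VDS**2 / 2),
# so a straight-line fit of ID vs VGS gives KP from the slope and VT from
# the x-intercept (minus a VDS/2 correction). Values below are synthetic.
W_L, VDS = 10.0, 0.1
KP_true, VT_true = 50e-6, 0.8
vgs = np.linspace(1.0, 3.0, 9)                     # chosen I-V data points
ids = KP_true * W_L * ((vgs - VT_true) * VDS - VDS**2 / 2)
ids += 2e-7 * np.random.default_rng(4).standard_normal(ids.size)  # noise

slope, intercept = np.polyfit(vgs, ids, 1)
KP = slope / (W_L * VDS)
VT = -intercept / slope - VDS / 2
print(f"KP = {KP*1e6:.1f} uA/V^2, VT = {VT:.3f} V")
# Changing the vgs range (the data point distribution) shifts KP and VT,
# which is the sensitivity the abstract describes.
```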
Qinghua, Zhao; Jipeng, Li; Yongxing, Zhang; He, Liang; Xuepeng, Wang; Peng, Yan; Xiaofeng, Wu
2015-04-07
Three-dimensional finite element modeling and biomechanical simulation were employed to evaluate the stability and stress conduction of two postoperative internal fixation models, multilevel posterior instrumentation (MPI) and MPI with anterior instrumentation (MPAI), after en bloc resection of a cervicothoracic vertebral tumor. Mimics software and computed tomography (CT) images were used to establish a three-dimensional (3D) model of vertebrae C5-T2 and to simulate en bloc resection of C7 for the MPI and MPAI models. The data and images were then transferred into the ANSYS finite element system, and a 20 N distributed load (simulating body weight) and a 1 N·m torque at the neutral point were applied to simulate vertebral displacement and stress conduction and distribution in different motion modes, i.e., flexion, extension, lateral bending and rotation. The displacement of the two adjacent vertebral bodies in the MPI and MPAI models was less than that in the intact vertebral model, indicating better stability, with no significant difference between the two. In reducing the stress shielding effect, however, MPI was slightly better than MPAI. From a biomechanical point of view, both internal instrumentations combined with en bloc resection of a cervicothoracic tumor can achieve excellent stability, with no significant differences between them; but with better stress conduction, MPI is more advantageous for postoperative reconstruction.
Short-ranged memory model with preferential growth
NASA Astrophysics Data System (ADS)
Schaigorodsky, Ana L.; Perotti, Juan I.; Almeira, Nahuel; Billoni, Orlando V.
2018-02-01
In this work we introduce a variant of the Yule-Simon model for preferential growth by incorporating a finite kernel to model the effects of bounded memory. We characterize the properties of the model combining analytical arguments with extensive numerical simulations. In particular, we analyze the lifetime and popularity distributions by mapping the model dynamics to corresponding Markov chains and branching processes, respectively. These distributions follow power laws with well-defined exponents that are within the range of the empirical data reported in ecologies. Interestingly, by varying the innovation rate, this simple out-of-equilibrium model exhibits many of the characteristics of a continuous phase transition and, around the critical point, it generates time series with power-law popularity, lifetime and interevent time distributions, and nontrivial temporal correlations, such as a bursty dynamics in analogy with the activity of solar flares. Our results suggest that an appropriate balance between innovation and oblivion rates could provide an explanatory framework for many of the properties commonly observed in many complex systems.
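A schematic simulation of preferential growth with a bounded memory window; the innovation rate, window length and the uniform within-window copying rule are assumptions for illustration, not necessarily the authors' exact kernel.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)

# Yule-Simon with bounded memory: with probability alpha introduce a new
# label; otherwise copy a label drawn uniformly from the last K items only,
# so copying is preferential but restricted to the memory window.
alpha, K, steps = 0.05, 500, 200_000
seq = [0]
next_label = 1
for _ in range(steps):
    if rng.random() < alpha:
        seq.append(next_label)       # innovation
        next_label += 1
    else:
        j = rng.integers(max(0, len(seq) - K), len(seq))
        seq.append(seq[j])           # copy within the memory window

popularity = np.array(sorted(Counter(seq).values(), reverse=True))
# Crude tail exponent from the log-log Zipf plot of popularities:
ranks = np.arange(1, len(popularity) + 1)
mask = popularity > 5
slope = np.polyfit(np.log(ranks[mask]), np.log(popularity[mask]), 1)[0]
print("Zipf-plot slope ~", round(slope, 2))
```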
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Xuehang; Chen, Xingyuan; Ye, Ming
2015-07-01
This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. The framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible to integrate with various types of observation data. The transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated with a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize the hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.
Fienen, Michael N.; Selbig, William R.
2012-01-01
A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
Verbruggen, Heroen; Tyberghein, Lennert; Belton, Gareth S.; Mineur, Frederic; Jueterbock, Alexander; Hoarau, Galice; Gurgel, C. Frederico D.; De Clerck, Olivier
2013-01-01
The utility of species distribution models for applications in invasion and global change biology is critically dependent on their transferability between regions or points in time, respectively. We introduce two methods that aim to improve the transferability of presence-only models: density-based occurrence thinning and performance-based predictor selection. We evaluate the effect of these methods along with the impact of the choice of model complexity and geographic background on the transferability of a species distribution model between geographic regions. Our multifactorial experiment focuses on the notorious invasive seaweed Caulerpa cylindracea (previously Caulerpa racemosa var. cylindracea) and uses Maxent, a commonly used presence-only modeling technique. We show that model transferability is markedly improved by appropriate predictor selection, with occurrence thinning, model complexity and background choice having relatively minor effects. The data show that, if available, occurrence records from the native and invaded regions should be combined, as this leads to models with high predictive power while reducing the sensitivity to choices made in the modeling process. The inferred distribution model of Caulerpa cylindracea shows the potential for this species to further spread along the coasts of Western Europe, western Africa and the south coast of Australia. PMID:23950789
2018-01-01
We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit both for the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ±2 points on a 50-point scale, while 10% of essays would receive a score differing by at least 5 points from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
Generic solar photovoltaic system dynamic simulation model specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Abraham; Behnke, Michael Robert; Elliott, Ryan Thomas
This document is intended to serve as a specification for generic solar photovoltaic (PV) system positive-sequence dynamic models to be implemented by software developers and approved by the WECC MVWG for use in bulk system dynamic simulations in accordance with NERC MOD standards. Two specific dynamic models are included in the scope of this document. The first, a Central Station PV System model, is intended to capture the most important dynamic characteristics of large scale (> 10 MW) PV systems with a central Point of Interconnection (POI) at the transmission level. The second, a Distributed PV System model, is intended to represent an aggregation of smaller, distribution-connected systems that comprise a portion of a composite load that might be modeled at a transmission load bus.
Acquisition, representation, and transfer of models of visuo-motor error
Zhang, Hang; Kulsa, Mila Kirstie C.; Maloney, Laurence T.
2015-01-01
We examined how human subjects acquire and represent models of visuo-motor error and how they transfer information about visuo-motor error from one task to a closely related one. The experiment consisted of three phases. In the training phase, subjects threw beanbags underhand towards targets displayed on a wall-mounted touch screen. The distribution of their endpoints was a vertically elongated bivariate Gaussian. In the subsequent choice phase, subjects repeatedly chose which of two targets varying in shape and size they would prefer to attempt to hit. Their choices allowed us to investigate their internal models of the visuo-motor error distribution, including the coordinate system in which they represented visuo-motor error. In the transfer phase, subjects repeated the choice phase from a different vantage point, the same distance from the screen but with the throwing direction shifted 45°. From the new vantage point, visuo-motor error was effectively expanded horizontally (by a factor of 1/cos 45° = √2). We found that subjects incorrectly assumed an isotropic distribution in the choice phase but that the anisotropy they assumed in the transfer phase agreed with an objectively correct transfer. We also found that the coordinate system used in coding two-dimensional visuo-motor error in the choice phase was effectively one-dimensional. PMID:26057549
Modeling of mineral dust in the atmosphere: Sources, transport, and optical thickness
NASA Technical Reports Server (NTRS)
Tegen, Ina; Fung, Inez
1994-01-01
A global three-dimensional model of the atmospheric mineral dust cycle is developed for the study of its impact on the radiative balance of the atmosphere. The model includes four size classes of mineral dust, whose source distributions are based on the distributions of vegetation, soil texture and soil moisture. Uplift and deposition are parameterized using analyzed winds and rainfall statistics that resolve high-frequency events. Dust transport in the atmosphere is simulated with the tracer transport model of the Goddard Institute for Space Studies. The simulated seasonal variations of dust concentrations show generally reasonable agreement with the observed distributions, as do the size distributions at several observing sites. The discrepancies between the simulated and the observed dust concentrations point to regions of significant land surface modification. Monthly distributions of aerosol optical depth are calculated from the distribution of dust particle sizes. The maximum optical depth due to dust is 0.4-0.5 in the seasonal mean. The main uncertainties, about a factor of 3-5, in calculating optical thicknesses arise from the crude resolution of soil particle sizes, from insufficient constraint by the total dust loading in the atmosphere, and from our ignorance about adhesion, agglomeration, uplift, and size distributions of fine dust particles (less than 1 micrometer).
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on a Lidar point cloud is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived. PMID:27879916
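A minimal sketch of the layer-detection step follows, under the assumption that canopy layers appear as local maxima of a smoothed height histogram of the normalized points; the bin width and smoothing window are illustrative.

```python
# Sketch of canopy-layer detection from a normalized point cloud, where z is
# height above ground; parameters are illustrative, not the paper's values.
import numpy as np

def canopy_layers(z, bin_width=0.5, smooth_bins=5):
    """Detect local maxima of the height distribution as candidate canopy
    layers; return the approximate height bin of each peak."""
    hist, edges = np.histogram(z, bins=np.arange(0, z.max() + bin_width, bin_width))
    kernel = np.ones(smooth_bins) / smooth_bins
    p = np.convolve(hist, kernel, mode="same")          # smoothed density
    peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] >= p[i + 1]]
    return [(edges[i], edges[i + 1]) for i in peaks]

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(18, 2.0, 4000),          # top canopy layer
                    rng.normal(8, 1.5, 1500),           # sub-canopy layer
                    rng.uniform(0, 2.0, 500)])          # ground vegetation
z = z[z > 0]
print("candidate layers (m):", canopy_layers(z))
```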
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
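For reference, the Phong term itself is compact. The sketch below is the textbook ambient-diffuse-specular formula, not the authors' hologram pipeline, and all material and light parameters are illustrative.

```python
# Textbook Phong illumination model; vectors are assumed unit length.
import numpy as np

def phong(normal, to_light, to_viewer, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    """Return the Phong intensity: ambient + diffuse + specular terms."""
    diffuse = max(np.dot(normal, to_light), 0.0)
    reflect = 2.0 * np.dot(normal, to_light) * normal - to_light
    specular = max(np.dot(reflect, to_viewer), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular

n = np.array([0.0, 0.0, 1.0])
l = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
v = np.array([0.0, 0.0, 1.0])
print("intensity:", phong(n, l, v))
```

In the hologram setting each point of the cloud would carry such an intensity (per colour channel) before the wavefield contributions are summed.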
Analysis of Product Distribution Strategy in Digital Publishing Industry Based on Game-Theory
NASA Astrophysics Data System (ADS)
Xu, Li-ping; Chen, Haiyan
2017-04-01
Digital publishing output has increased significantly year by year. It has become the most vigorous point of economic growth and is increasingly important to the press and publication industry. Its distribution channels have become diversified, which differs from the traditional industry. An in-depth study of the digital publishing industry has been carried out to clarify the constitution of the industry chain and to establish a model of the industry chain. The cooperative and competitive relationships between different distribution channels are analyzed based on game theory. By comparing the distribution quantity and the market size between the static distribution strategy and the dynamic distribution strategy, we obtain theoretical evidence about how to choose the distribution strategy that yields the optimal benefit.
Distributed Multiple Access Control for the Wireless Mesh Personal Area Networks
NASA Astrophysics Data System (ADS)
Park, Moo Sung; Lee, Byungjoo; Rhee, Seung Hyong
Mesh networking technologies for both high-rate and low-rate wireless personal area networks (WPANs) are under development by several standardization bodies. They are considering adopting distributed TDMA MAC protocols to provide seamless user mobility as well as good peer-to-peer QoS in WPAN mesh. It has, however, been pointed out that the absence of a central controller in a wireless TDMA MAC may cause severe performance degradation: e.g., fair allocation, service differentiation, and admission control may be hard to achieve or cannot be provided. In this paper, we suggest a new framework of resource allocation for distributed MAC protocols in WPANs. Simulation results show that our algorithm achieves both fair resource allocation and flexible service differentiation in a fully distributed way for mesh WPANs where the devices have high mobility and various requirements. We also provide an analytical model to discuss its unique equilibrium and to compute the lengths of reserved time slots at the stable point.
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Zapp, E. N.; Wilson, J. W.; Cucinotta, F. A.
2001-01-01
The US Lab module of the International Space Station (ISS) is a primary working area where the crewmembers are expected to spend the majority of their time. Because of the directionality of radiation fields caused by the Earth shadow, the trapped-radiation pitch angle distribution, and inherent variations in the ISS shielding, a model is needed to account for these local variations in the radiation distribution. We present the calculated radiation dose (rem/yr) values for over 3,000 different points in the working area of the Lab module and estimated radiation dose values for over 25,000 different points in the human body for a given ambient radiation environment. These estimated radiation dose values are presented in a three-dimensional animated interactive visualization format. Such interactive animated visualization of the radiation distribution can be generated in near real-time to track changes in the radiation environment during the orbit precession of the ISS.
Development of Automated Objective Meteorological Techniques.
1980-11-30
differences are due largely to the nature and spatial distribution of the atmospheric data chosen as input for the model. The data for initial values and...technique. This report focuses on results of theoretical investigations and data analyses performed by SASC during the period May 1979 to June 1980...the sampling period, at a given point in space, the various size particles composing the particle distribution exhibit different velocities from each
Ray tracing the Wigner distribution function for optical simulations
NASA Astrophysics Data System (ADS)
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.
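The flavor of the method can be conveyed in one spatial dimension: sample rays from the (Gaussian) Wigner distribution function of a beam at its waist, trace them with the usual geometric rules, and bin them into an intensity profile. For this aberration-free case the ray-based result matches the analytic Gaussian-beam width; the example and all parameters are our own, not the paper's test cases.

```python
# 1D paraxial sketch: rays sampled from the WDF of a Gaussian beam at its
# waist, propagated geometrically, then binned into an intensity profile.
import numpy as np

lam, w0, z = 0.5e-6, 1e-3, 1.0                  # wavelength, waist, distance (m)
rng = np.random.default_rng(0)
n_rays = 200_000

# At the waist, the WDF is Gaussian in position x (rms w0/2) and in
# direction u (rms lam / (2 * pi * w0)).
x = rng.normal(0.0, w0 / 2.0, n_rays)
u = rng.normal(0.0, lam / (2.0 * np.pi * w0), n_rays)

x_out = x + z * u                               # free-space ray transfer
print("ray-traced rms width:", x_out.std())

# Analytic comparison: w(z) = w0*sqrt(1 + (z*lam/(pi*w0^2))^2); the rms of
# the intensity profile is w(z)/2.
wz = w0 * np.sqrt(1.0 + (z * lam / (np.pi * w0**2)) ** 2)
print("analytic rms width  :", wz / 2.0)
```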
NASA Astrophysics Data System (ADS)
Krivoruchko, D. D.; Skrylev, A. V.
2018-01-01
The excited-state population distribution of a low-temperature xenon plasma in a thruster with closed electron drift, operating at 300 W, was investigated by laser-induced fluorescence (LIF) over the 350-1100 nm range. Seven xenon ion (Xe II) transitions were analyzed, while for neutral atoms (Xe I) only three transitions were explored, since the majority of Xe I emission falls into the ultraviolet or infrared part of the spectrum and is difficult to measure. The necessary spontaneous emission probabilities (Einstein coefficients) were calculated. Measurements of the excited-state distribution were made for points (probe volume of about 12 mm3) across planes perpendicular to the thruster axis at four positions along it (5, 10, 50 and 100 mm). The measured LIF signal intensity differs for each location of the researched point (due to the anisotropy of the thruster plume); however, the structure of the state population distribution persists in the plume and is violated at the thruster exit plane and cathode area. The measured distributions show that describing the plasma of a Hall thruster requires a multilevel kinetic model; the classic model can be used only for the far-plume region or for specific electron transitions.
Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O
2009-07-01
The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite-range interactions. Specifically, we assign to each site a quenched random field whose value is drawn from a generic distribution that takes platykurtic and leptokurtic forms depending on a single parameter tau < 3. For tau < 5/3, such distributions, which are essentially the Student-t and r-distributions extended to all plausible real degrees of freedom, present a finite standard deviation; otherwise, the distribution has the same asymptotic power-law behavior as an alpha-stable Lévy distribution with alpha = (3-tau)/(tau-1). For every value of tau, at a specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we report the emergence of an inflection point in the temperature-PDF width phase diagrams for distributions broader than the Cauchy-Lorentz (tau = 2), which is accompanied by a divergent free energy per spin (at zero temperature).
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point-source array for different test pieces before TWI measurements. For the purpose of forming a point-source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point-source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point-source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Magneto-optical visualization of three spatial components of inhomogeneous stray fields
NASA Astrophysics Data System (ADS)
Ivanov, V. E.
2012-08-01
The article deals with the physical principles of magneto-optical (MO) visualization of three spatial components of inhomogeneous stray fields with the help of FeCo metal indicator films in the longitudinal Kerr effect geometry. The inhomogeneous field is created by permanent magnets. Both p- and s-polarized light are used to obtain MO images, which are subsequently summed, subtracted and digitized. As a result, MO images and corresponding intensity coordinate dependences reflecting the distributions of the horizontal and vertical magnetization components in pure form have been obtained. Modeling of both the magnetization distribution in the indicator film and the corresponding MO images shows that, corresponding to the polar sensitivity, the intensity is proportional to the normal field component, which permits normal-field-component mapping. Corresponding to the longitudinal sensitivity, the intensity of the MO images reflects the angular distribution of the planar field component. MO images have singular points at which the planar component is zero, and their movement under an external homogeneous planar field permits obtaining additional information on the two planar components of the field under study. The intensity distribution character in the vicinity of sources and sinks (singular points) remains the same under different orientations of the light incidence plane. A change of the incidence plane orientation by π/2 alters the distribution pattern in the vicinity of the saddle points.
Habitat classification modeling with incomplete data: Pushing the habitat envelope
Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.
2007-01-01
Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D2adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
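A toy sketch of envelope-constrained pseudo-absence generation follows. We use a simple attribute-wise min/max envelope and draw pseudo-absences from cells outside it; this direction of the constraint is one plausible reading of the approach, not the paper's exact procedure, and all attributes and values are invented.

```python
# Sketch of habitat-envelope-based pseudo-absence sampling (assumed variant).
import numpy as np

rng = np.random.default_rng(0)
# Toy landscape: each row is a cell with (elevation_m, canopy_cover_pct).
cells = np.column_stack([rng.uniform(1000, 3500, 10_000),
                         rng.uniform(0, 100, 10_000)])
presence = cells[rng.choice(len(cells), 50, replace=False)]

# Habitat envelope: attribute-wise range observed at the presence points.
lo, hi = presence.min(axis=0), presence.max(axis=0)
outside = ~np.all((cells >= lo) & (cells <= hi), axis=1)

# Ecologically informed pseudo-absences come from outside the envelope; a
# traditional "null envelope" model would sample the whole landscape instead.
pseudo_absence = cells[rng.choice(np.flatnonzero(outside), 50, replace=False)]
print("cells outside envelope:", outside.sum())
print("first pseudo-absence cell:", pseudo_absence[0])
```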
On the Electron Diffusion Region in Asymmetric Reconnection with a Guide Magnetic Field
NASA Technical Reports Server (NTRS)
Hesse, Michael; Liu, Yi-Hsin; Chen, Li-Jen; Bessho, Naoki; Kuznetsova, Masha; Birn, Joachim; Burch, James L.
2016-01-01
Particle-in-cell simulations in a 2.5-D geometry and analytical theory are employed to study the electron diffusion region in asymmetric reconnection with a guide magnetic field. The analysis presented here demonstrates that, similar to the case without guide field, the in-plane flow stagnation point and the null of the in-plane magnetic field are well separated. In addition, it is shown that the electric field at the local magnetic X point is again dominated by inertial effects, whereas it remains dominated by nongyrotropic pressure effects at the in-plane flow stagnation point. A comparison between local electron Larmor radii and the magnetic gradient scale lengths predicts that distributions should become nongyrotropic in a region enveloping both the field reversal and flow stagnation points. This prediction is verified by an analysis of modeled electron distributions, which show clear evidence of mixing in the critical region.
Methods and limitations in radar target imagery
NASA Astrophysics Data System (ADS)
Bertrand, P.
An analytical examination of the reflectivity of radar targets is presented for the two-dimensional case of flat targets. A complex backscattering coefficient is defined for the amplitude and phase of the received field in comparison with the emitted field. The coefficient depends on the frequency of the emitted signal and the orientation of the target with respect to the transmitter. The target reflection is modeled in terms of a density of illuminated, colored points independent from one another. The target is therefore represented as an infinite family of densities indexed by the observation angle. Attention is given to the reflectivity parameters and their distribution function, and to the joint distribution function for the color, position, and directivity of bright points. It is shown that a fundamental ambiguity exists between the localization of the illuminated points and the determination of their directivity and color.
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
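For orientation, the Still interpolation at the heart of many GB models is easy to state in code. The sketch below computes the standard generalized-Born energy with Still's effective distance; it is background for the discussion above, not the BIBEE method itself, and the charges, positions, radii and dielectric constants are illustrative.

```python
# Generalized-Born energy with the Still effective distance
# f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))).
import numpy as np

def gb_energy(q, pos, R_born, eps_in=1.0, eps_out=80.0):
    """GB solvation energy; charges in e, lengths in Angstrom, so the
    result is in units of e^2/Angstrom (diagonal terms give Born self-energies)."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    r2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    RiRj = R_born[:, None] * R_born[None, :]
    f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
    return pref * (q[:, None] * q[None, :] / f_gb).sum()

q = np.array([0.5, -0.5])
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
R_born = np.array([1.5, 1.7])
print("GB solvation energy:", gb_energy(q, pos, R_born))
```

The CFA enters such models through the estimation of the effective Born radii; the BIBEE reformulation replaces this interpolation with a boundary-integral approximation.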
Halford, Keith J.; Plume, Russell W.
2011-01-01
Assessing hydrologic effects of developing groundwater supplies in Snake Valley required numerical, groundwater-flow models to estimate the timing and magnitude of capture from streams, springs, wetlands, and phreatophytes. Estimating general water-table decline also required groundwater simulation. The hydraulic conductivity of basin fill and transmissivity of basement-rock distributions in Spring and Snake Valleys were refined by calibrating a steady state, three-dimensional, MODFLOW model of the carbonate-rock province to predevelopment conditions. Hydraulic properties and boundary conditions were defined primarily from the Regional Aquifer-System Analysis (RASA) model except in Spring and Snake Valleys. This locally refined model was referred to as the Great Basin National Park calibration (GBNP-C) model. Groundwater discharges from phreatophyte areas and springs in Spring and Snake Valleys were simulated as specified discharges in the GBNP-C model. These discharges equaled mapped rates and measured discharges, respectively. Recharge, hydraulic conductivity, and transmissivity were distributed throughout Spring and Snake Valleys with pilot points and interpolated to model cells with kriging in geologically similar areas. Transmissivity of the basement rocks was estimated because thickness is correlated poorly with transmissivity. Transmissivity estimates were constrained by aquifer-test results in basin-fill and carbonate-rock aquifers. Recharge, hydraulic conductivity, and transmissivity distributions of the GBNP-C model were estimated by minimizing a weighted composite, sum-of-squares objective function that included measurement and Tikhonov regularization observations. Tikhonov regularization observations were equations that defined preferred relations between the pilot points. Measured water levels, water levels that were simulated with RASA, depth-to-water beneath distributed groundwater and spring discharges, land-surface altitudes, spring discharge at Fish Springs, and changes in discharge on selected creek reaches were measurement observations. The effects of uncertain distributed groundwater-discharge estimates in Spring and Snake Valleys on transmissivity estimates were bounded with alternative models. Annual distributed groundwater discharges from Spring and Snake Valleys in the alternative models totaled 151,000 and 227,000 acre-feet, respectively and represented 20 percent differences from the 187,000 acre-feet per year that discharges from the GBNP-C model. Transmissivity estimates in the basin fill between Baker and Big Springs changed less than 50 percent between the two alternative models. Potential effects of pumping from Snake Valley were estimated with the Great Basin National Park predictive (GBNP-P) model, which is a transient groundwater-flow model. The hydraulic conductivity of basin fill and transmissivity of basement rock were the GBNP-C model distributions. Specific yields were defined from aquifer tests. Captures of distributed groundwater and spring discharges were simulated in the GBNP-P model using a combination of well and drain packages in MODFLOW. Simulated groundwater captures could not exceed measured groundwater-discharge rates. Four groundwater-development scenarios were investigated where total annual withdrawals ranged from 10,000 to 50,000 acre-feet during a 200-year pumping period. Four additional scenarios also were simulated that added the effects of existing pumping in Snake Valley. 
Potential groundwater pumping locations were limited to nine proposed points of diversion. Results are presented as maps of groundwater capture and drawdown, time series of drawdowns and discharges from selected wells, and time series of discharge reductions from selected springs and control volumes. Simulated drawdown propagation was attenuated where groundwater discharge could be captured. General patterns of groundwater capture and water-table declines were similar for all scenarios. Simulated drawdowns greater than 1 ft propagated outside of Spring and Snake Valleys after 200 years of pumping in all scenarios.
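The pilot-point idea is simple to sketch: estimate log-conductivity at a few calibration locations and interpolate it to every model cell. The models described here use kriging with Tikhonov regularization; plain inverse-distance weighting stands in for kriging below, and all coordinates and values are invented.

```python
# Conceptual pilot-point interpolation (IDW stand-in for kriging).
import numpy as np

def interpolate_logK(pilot_xy, pilot_logK, cell_xy, power=2.0):
    """Inverse-distance interpolation of log10(K) from pilot points to cells."""
    d = np.linalg.norm(cell_xy[:, None, :] - pilot_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ pilot_logK

rng = np.random.default_rng(0)
pilot_xy = rng.uniform(0, 10_000, size=(12, 2))        # pilot points (m)
pilot_logK = rng.normal(-4.0, 0.5, size=12)            # log10 K (m/s)
gx, gy = np.meshgrid(np.arange(0, 10_000, 500), np.arange(0, 10_000, 500))
cells = np.column_stack([gx.ravel(), gy.ravel()])
logK = interpolate_logK(pilot_xy, pilot_logK, cells)
print("cell log10(K) range:", logK.min(), logK.max())
```

In calibration, the pilot-point values themselves become the adjustable parameters, with regularization equations expressing preferred relations between them.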
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
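A stripped-down realization of a single fission chain, in the spirit of the Monte Carlo described above but much simplified, can be written in a few lines. The multiplicity distribution, fission probability, and time constant below are invented (and chosen subcritical so chains terminate); they are not the report's values.

```python
# Toy Monte Carlo of one fission chain: each neutron either induces a
# fission (spawning more neutrons) or leaks after an exponential wait.
import numpy as np

rng = np.random.default_rng(0)
P_FISSION = 0.3                        # subcritical so chains terminate
MULTIPLICITY = np.arange(5)            # neutrons released per fission
MULT_PROB = [0.03, 0.15, 0.35, 0.32, 0.15]
RATE = 1.0                             # events per ns per neutron

def one_chain():
    """Follow one chain; return (number of fissions, leak times in ns)."""
    pending, fissions, leaks = [0.0], 0, []
    while pending:
        t = pending.pop() + rng.exponential(1.0 / RATE)
        if rng.random() < P_FISSION:   # neutron induces another fission
            fissions += 1
            for _ in range(rng.choice(MULTIPLICITY, p=MULT_PROB)):
                pending.append(t)
        else:                          # neutron leaks and may be detected
            leaks.append(t)
    return fissions, leaks

chains = [one_chain() for _ in range(10_000)]
n_fiss = np.array([c[0] for c in chains])
print("mean fissions per chain:", n_fiss.mean(), "| max:", n_fiss.max())
```

Binning the leak times into random or triggered time gates over many randomly initiated chains yields the counting distributions discussed above.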
Quantitative comparison of the application accuracy between NDI and IGT tracking systems
NASA Astrophysics Data System (ADS)
Li, Qinghang; Zamorano, Lucia J.; Jiang, Charlie Z. W.; Gong, JianXing; Diaz, Fernando
1999-07-01
Application accuracy is a crucial factor for a stereotactic surgical localization system, in which the space digitization system is one of the most important pieces of equipment. In this study we compared the application accuracy of the OPTOTRAK space digitization system (OPTOTRAK 3020, Northern Digital, Waterloo, CAN) and the FlashPoint Model 3000 and 5000 3-D digitizer systems (Image Guided Surgery Technology Inc., Boulder, CO 80301, USA) for interactive localization of intracranial lesions. A phantom was mounted with the implantable frameless marker system (Fischer-Leibinger, Freiburg, Germany), with markers randomly distributed on the surface of the phantom. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points were used as the deviation from the 'true point'. The mean square root was calculated to show the sum of vectors. A paired t-test was used to analyze results. The phantom results showed that the mean square roots were 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3-D digitizer system and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 3-D digitizer system in the 1 mm sections of CT scan. These preliminary results showed that there is no significant difference between the tracking systems. Both can be used for image-guided surgery procedures.
Launching a Tethered Balloon in the Arctic
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-08-14
Sandia atmospheric scientist Dari Dexheimer regularly flies tethered balloons out of Sandia’s dedicated Arctic airspace on Oliktok Point, the northernmost point of Alaska’s Prudhoe Bay. These 13-foot-tall balloons carry distributed temperature sensors to collect Arctic atmospheric temperature profiles, or the temperature of the air at different heights above the ground, among other atmospheric sensors. The data Sandia collects is critical for understanding Arctic clouds to inform global climate models.
Evaluating Air-Quality Models: Review and Outlook.
NASA Astrophysics Data System (ADS)
Weil, J. C.; Sykes, R. I.; Venkatram, A.
1992-10-01
Over the past decade, much attention has been devoted to the evaluation of air-quality models, with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, i.e., a given set of meteorological, source, etc. conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For σc comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics. In general, σc models and predictions of the probability distribution of the fluctuating concentration (c) are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that the distribution of c approximates a self-similar form along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and the distribution of c for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
Advanced model for the prediction of the neutron-rich fission product yields
NASA Astrophysics Data System (ADS)
Rubchenya, V. A.; Gorelov, D.; Jokinen, A.; Penttilä, H.; Äystö, J.
2013-12-01
A consistent model for the description of independent fission-product formation cross sections in spontaneous fission and in neutron- and proton-induced fission at energies up to 100 MeV is developed. This model is a combination of a new version of the two-component exciton model and a time-dependent statistical model for the fusion-fission process, with inclusion of dynamical effects for accurate calculation of the nucleon composition and excitation energy of the fissioning nucleus at the scission point. For each member of the compound-nucleus ensemble at the scission point, the primary fission fragment characteristics (kinetic and excitation energies and their yields) are calculated using the scission-point fission model with inclusion of nuclear shell and pairing effects, and a multimodal approach. The charge distribution of the primary fragment isobaric chains is treated as a result of frozen quantal fluctuations of the isovector nuclear matter density at the scission point with a finite neck radius. Model parameters were obtained from comparison of the predicted independent fission product yields with experimental results and with the neutron-rich fission product data measured with a Penning trap at the Accelerator Laboratory of the University of Jyväskylä (JYFLTRAP).
Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)
NASA Astrophysics Data System (ADS)
Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos
2017-04-01
Damage caused by meteorological and hydrological extreme events depends on many factors, not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern and underlying processes, the spatial dependency between values of damage recorded at sites of different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009), first steps of point pattern analysis have been carried out. The most severe events have been selected (severe, very severe and catastrophic, according to the GEES classification, a total number of 784 damage points) and Ripley's K-test and L-test have been performed, amongst others. For this purpose, R's library spatstat has been used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to the prevalence of damages near watercourses and also to the rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small-size and middle-size catchments and the influence of the spatial distribution of precipitation, temperature (snow melt and snow line) and other predisposing factors such as soil moisture, land cover and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from watercourse or land use. The final goal will be to fit a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above mentioned covariates.
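For readers who want the mechanics, here is a minimal Ripley's K estimator (no edge correction) for a rectangular window, a plain-Python stand-in for the spatstat calls used in the study; the data below are synthetic complete spatial randomness (CSR), for which K(r) ≈ πr².

```python
# Naive Ripley's K estimator; clustered patterns exceed the CSR value pi*r^2.
import numpy as np

def ripley_k(points, r_values, width, height):
    """K(r) = area * (ordered pairs closer than r) / n^2, no edge correction."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.array([(d < r).sum() * width * height / (n * n) for r in r_values])

rng = np.random.default_rng(0)
pts = rng.uniform([0, 0], [100, 100], size=(300, 2))   # CSR pattern
r = np.array([2.0, 5.0, 10.0])
print("K(r) estimate :", ripley_k(pts, r, 100, 100))
print("K(r) under CSR:", np.pi * r**2)
```

Significance is usually judged against envelopes from many simulated CSR patterns, which is essentially what the L-test formalizes.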
A COMPUTATIONAL FRAMEWORK FOR EVALUATION OF NPS MANAGEMENT SCENARIOS: ROLE OF PARAMETER UNCERTAINTY
Utility of complex distributed-parameter watershed models for evaluation of the effectiveness of non-point source sediment and nutrient abatement scenarios such as Best Management Practices (BMPs) often follows the traditional {calibrate ---> validate ---> predict} procedure. Des...
NASA Astrophysics Data System (ADS)
Woodget, A.; Fyffe, C. L.; Kirkbride, M. P.; Deline, P.; Westoby, M.; Brock, B. W.
2017-12-01
Dirty ice areas (where debris cover is discontinuous) are often found on debris-covered glaciers above the limit of continuous debris and are important because they are areas of high melt and have been recognized as the locus of the identified upglacier increase in debris cover. The modelling of glacial ablation in areas of dirty ice is in its infancy and is currently restricted to theoretical studies. Glacial ablation is traditionally determined at point locations using stakes drilled into the ice. However, in areas of dirty ice, ablation is highly spatially variable, since debris a few centimetres thick is near the threshold between enhancing and reducing ablation. As a result, it is very difficult to ascertain if point ablation measurements are representative of ablation of the area surrounding the stake - making these measurements unsuitable for the validation of models of dirty ice ablation. This paper aims to quantify distributed ablation and its relationship to essential dirty ice characteristics with a view to informing the construction of dirty ice melt models. A novel approach to determine distributed ablation is presented which uses repeat aerial imagery acquired from a UAV (Unmanned Aerial Vehicle), processed using SfM (Structure from Motion) techniques, on an area of dirty ice on Miage Glacier, Italian Alps. A spatially continuous ablation map is presented, along with a correlation to the local debris characteristics. Furthermore, methods are developed which link ground truth data on the percentage debris cover, albedo and clast depth to the UAV imagery, allowing these characteristics to be determined for the entire study area, and used as model inputs. For example, debris thickness is determined through a field relationship with clast size, which is then correlated with image texture and point cloud roughness metrics derived from the UAV imagery. Finally, we evaluate the potential of our novel approach to lead to improved modelling of dirty ice ablation.
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Qualls, Garry D.; Cucinotta, Francis A.
2008-01-01
Phantom torso experiments have been flown on the space shuttle and International Space Station (ISS), providing validation data for radiation transport models of organ doses and dose equivalents. We describe results for space radiation organ doses using a new human geometry model based on detailed voxel phantom models, denoted for males and females as MAX (Male Adult voXel) and FAX (Female Adult voXel), respectively. These models represent the human body with much higher fidelity than the CAMERA model currently used at NASA. The MAX and FAX models were implemented for the evaluation of directional body-shielding mass for over 1500 target points of major organs. Radiation exposures to solar particle events (SPE), trapped protons, and galactic cosmic rays (GCR) were assessed at each specific site in the human body by coupling space radiation transport models with the detailed body-shielding mass of the MAX/FAX phantoms. The development of multiple-point body-shielding distributions at each organ site made it possible to estimate the mean and variance of space dose equivalents at the specific organ. For the estimate of doses to the blood-forming organs (BFOs), active marrow distributions in adults were accounted for at bone marrow sites over the human body. We compared the current model results to space shuttle and ISS phantom torso experiments and to calculations using the CAMERA model.
Modeling diffuse phosphorus emissions to assist in best management practice designing
NASA Astrophysics Data System (ADS)
Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne
2010-05-01
A diffuse emission modeling tool has been developed which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and landuse categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. In the case of base flow and subsurface P loads, only the channel transport is taken into account due to the poorly known hydrogeological conditions. During the channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge, dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes and calculate their approximate costs. Both source- and transport-controlling measures have been included in the planning procedure. The model also allows examining the impacts of alterations in fertilizer application and point source emissions, as well as of climate change, on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of the source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs, and consequently the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
NASA Astrophysics Data System (ADS)
Lan, Hengxing; Derek Martin, C.; Lim, C. H.
2007-02-01
Geographic information system (GIS) modeling is used in combination with three-dimensional (3D) rockfall process modeling to assess rockfall hazards. A GIS extension, RockFall Analyst (RA), which is capable of effectively handling large amounts of geospatial information relative to rockfall behaviors, has been developed in ArcGIS using ArcObjects and C#. The 3D rockfall model considers dynamic processes on a cell plane basis. It uses inputs of distributed parameters in terms of raster and polygon features created in GIS. Two major components are included in RA: particle-based rockfall process modeling and geostatistics-based rockfall raster modeling. Rockfall process simulation results, 3D rockfall trajectories and their velocity features either for point seeders or polyline seeders are stored in 3D shape files. Distributed raster modeling, based on 3D rockfall trajectories and a spatial geostatistical technique, represents the distribution of spatial frequency, the flying and/or bouncing height, and the kinetic energy of falling rocks. A distribution of rockfall hazard can be created by taking these rockfall characteristics into account. A barrier analysis tool is also provided in RA to aid barrier design. An application of these modeling techniques to a case study is provided. The RA has been tested in ArcGIS 8.2, 8.3, 9.0 and 9.1.
NASA Astrophysics Data System (ADS)
Huintjes, Eva; Sauter, Tobias; Krenscher, Tobias; Maussion, Fabien; Kropacek, Jan; Yang, Wei; Zhang, Guoshuai; Kang, Shichang; Buchroithner, Manfred; Scherer, Dieter; Schneider, Christoph
2013-04-01
In the remote and high-altitude mountain areas of the Tibetan Plateau, climate observations as well as glacier-wide mass and energy balance determinations are scarce. Therefore, the application of models to determine reliable information on mass balance and runoff is important. Simultaneously, these circumstances make it difficult to evaluate the models. Since 2009, we have operated an automatic weather station (AWS) in the ablation zone of Zhadang Glacier (5,665 m a.s.l.). The glacier is easily accessible. It is situated in the southern-central part of the Tibetan Plateau (30.5°N) in the Nam Co drainage basin and ranges between 5,400 and 5,900 m a.s.l. Based on these measurements over 2009-2012, we run and evaluate a physically based, distributed energy and mass balance model. The applied model couples an energy balance to a multilayer snow model and therefore accounts for subsurface processes like refreezing, subsurface melt and densification of the snowpack. First, the model is evaluated at the point scale against measurements from the AWS. The results show that modelled accumulation and ablation patterns reproduce the observed changes in surface height very well. To evaluate the distributed model, we use daily images of a time-lapse camera system installed near the glacier over 2010-2012. For this, the non-calibrated slope images had to be orthorectified using ground control points measured during field campaigns. The temporally and spatially highly resolved time series allows a detailed evaluation of the distributed energy balance model by analyzing the spatial and temporal heterogeneity of the snow line during the ablation season. First results show that the model captures the observed spatial heterogeneity of melt on the glacier surface. Subsequent to the evaluation, the model will be applied to several glaciers and small ice caps in remote areas of the Tibetan Plateau to determine the linkages between climate fluctuations and glacier variability. The work is part of research projects funded by the DFG Priority Programme 1372: "Tibetan Plateau: Formation-Climate-Ecosystems" (TiP) and the BMBF research program "Central Asia and Tibet: Monsoon dynamics and geo-ecosystems" (CAME).
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
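The independent-source comparison can be reproduced in a few lines: drop N pointlike sources in a Gaussian transverse profile and compute the second eccentricity event by event. N and the profile width below are illustrative, and the analytic value quoted in the comment is the standard large-N Gaussian-profile estimate.

```python
# Event-by-event eccentricity eps_2 for an independent-source toy model.
import numpy as np

rng = np.random.default_rng(0)

def eps2(x, y):
    """eps_2 = |sum r^2 e^{2 i phi}| / sum r^2, centred on the event."""
    x, y = x - x.mean(), y - y.mean()
    q = ((x + 1j * y) ** 2).sum()      # (x + i y)^2 = r^2 e^{2 i phi}
    return np.abs(q) / (x**2 + y**2).sum()

N, n_events = 100, 5000
vals = np.array([eps2(rng.normal(0, 3, N), rng.normal(0, 3, N))
                 for _ in range(n_events)])
print("mean eps2:", vals.mean(), "| rms fluctuation:", vals.std())
# Large-N estimate for a Gaussian profile: mean eps2 ~ sqrt(pi / (2 N)).
print("analytic estimate:", np.sqrt(np.pi / (2 * N)))
```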
Liu, An; Wijesiri, Buddhi; Hong, Nian; Zhu, Panfeng; Egodawatta, Prasanna; Goonetilleke, Ashantha
2018-05-08
Road deposited pollutants (build-up) are continuously re-distributed by external factors such as traffic and wind turbulence, influencing stormwater runoff quality. However, current stormwater quality modelling approaches do not account for the re-distribution of pollutants. This undermines the accuracy of stormwater quality predictions, constraining the design of effective stormwater treatment measures. This study, using over 1000 data points, developed a Bayesian Network modelling approach to investigate the re-distribution of pollutant build-up on urban road surfaces. BTEX, which are a group of highly toxic pollutants, was the case study pollutants. Build-up sampling was undertaken in Shenzhen, China, using a dry and wet vacuuming method. The research outcomes confirmed that the vehicle type and particle size significantly influence the re-distribution of particle-bound BTEX. Compared to heavy-duty traffic in commercial areas, light-duty traffic dominates the re-distribution of particles of all size ranges. In industrial areas, heavy-duty traffic re-distributes particles >75 μm, and light-duty traffic re-distributes particles <75 μm. In residential areas, light-duty traffic re-distributes particles >300 μm and <75 μm and heavy-duty traffic re-distributes particles in the 300-150 μm range. The study results provide important insights to improve stormwater quality modelling and the interpretation of modelling outcomes, contributing to safeguard the urban water environment. Copyright © 2018 Elsevier B.V. All rights reserved.
Revisiting the Tale of Hercules: How Stars Orbiting the Lagrange Points Visit the Sun
NASA Astrophysics Data System (ADS)
Pérez-Villegas, Angeles; Portail, Matthieu; Wegg, Christopher; Gerhard, Ortwin
2017-05-01
We propose a novel explanation for the Hercules stream consistent with recent measurements of the extent and pattern speed of the Galactic bar. We have adapted a made-to-measure dynamical model tailored for the Milky Way to investigate the kinematics of the solar neighborhood (SNd). The model matches the 3D density of the red clump giant stars (RCGs) in the bulge and bar as well as stellar kinematics in the inner Galaxy, with a pattern speed of 39 km s-1 kpc-1. Cross-matching this model with the Gaia DR1 TGAS data combined with RAVE and LAMOST radial velocities, we find that the model naturally predicts a bimodality in the U-V-velocity distribution for nearby stars which is in good agreement with the Hercules stream. In the model, the Hercules stream is made of stars orbiting the Lagrange points of the bar which move outward from the bar’s corotation radius to visit the SNd. While the model is not yet a quantitative fit of the velocity distribution, the new picture naturally predicts that the Hercules stream is more prominent inward from the Sun and nearly absent only a few 100 pc outward of the Sun, and plausibly explains that Hercules is prominent in old and metal-rich stars.
Risky Group Decision-Making Method for Distribution Grid Planning
NASA Astrophysics Data System (ADS)
Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang
2015-12-01
With the rapid growth of electricity use and of renewable energy, more and more research is devoted to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. Considering the fuzziness of language evaluation, the cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a hypercuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly. The number of points is determined by testing the numbers of two-level orthogonal arrays, and these points compose a distribution point set that stands for the decision-making project. In order to eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, meaning that dynamic solutions are viewed as the reference. Secondly, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each program as well as their order. At last, the validity and reliability of this method are illustrated by examples which show that the method is more valuable and superior to the others.
NASA Astrophysics Data System (ADS)
Dudek, Mirosław R.; Mleczko, Józef
Surprisingly, still very little is known about the mathematical modeling of peaks in the binding-affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This result takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a point Poisson process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell-surface repletion. In this case the affinity of cells to bind the antibodies should be described by a more complex process than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, where the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
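A quick way to see the claimed multimodality is to sample a compound counting distribution: a Poisson number of binding clusters, each contributing a batch of counts. The paper replaces the points by a binomial point process; the sketch below uses the closely related Poisson-Poisson (Neyman type A) variant, which exhibits the same multi-peaked histograms, and the parameters are illustrative rather than fitted to cytometry data.

```python
# Sampling a Neyman-type-A (Poisson-Poisson) compound counting distribution.
import numpy as np

rng = np.random.default_rng(0)

def neyman_counts(lam, phi, size):
    """X = sum over N ~ Poisson(lam) clusters of Poisson(phi) counts each."""
    n_clusters = rng.poisson(lam, size)
    return np.array([rng.poisson(phi, n).sum() for n in n_clusters])

counts = neyman_counts(lam=2.0, phi=10.0, size=50_000)
hist = np.bincount(counts).astype(float)
smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
modes = [i for i in range(1, len(smooth) - 1)
         if smooth[i - 1] < smooth[i] >= smooth[i + 1]]
print("histogram modes near:", modes)   # expect peaks near multiples of phi
```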
NASA Astrophysics Data System (ADS)
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which a trial trajectory is iteratively transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points giving a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
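A minimal NEB iteration for this kind of problem might look as follows. The refractive-index field, spring constant, and step size are toy choices standing in for the IRI-derived medium, and the update is plain gradient descent; this is a generic sketch of the method, not the authors' implementation.

```python
import numpy as np

def n_index(p):
    # toy refractive-index field; hypothetical stand-in for an IRI-derived medium
    return 1.0 + 0.3 * np.exp(-((p[..., 1] - 1.0) ** 2))

def optical_path(path):
    seg = path[1:] - path[:-1]
    mid = 0.5 * (path[1:] + path[:-1])
    return np.sum(n_index(mid) * np.linalg.norm(seg, axis=1))  # Fermat functional

def neb_relax(path, k_spring=1.0, step=2e-3, iters=2000, eps=1e-6):
    path = path.copy()
    for _ in range(iters):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):        # numerical gradient, interior only
            for d in range(2):
                plus, minus = path.copy(), path.copy()
                plus[i, d] += eps
                minus[i, d] -= eps
                grad[i, d] = (optical_path(plus) - optical_path(minus)) / (2 * eps)
        tang = path[2:] - path[:-2]              # local tangents along the chain
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        g = grad[1:-1]
        g_perp = g - np.sum(g * tang, axis=1, keepdims=True) * tang
        # springs act only along the tangent, keeping the points well spread
        spring = k_spring * (np.linalg.norm(path[2:] - path[1:-1], axis=1)
                             - np.linalg.norm(path[1:-1] - path[:-2], axis=1))
        path[1:-1] += step * (-g_perp + spring[:, None] * tang)
    return path

endpoints = np.array([[0.0, 0.0], [4.0, 0.0]])   # fixed transmitter/receiver
ray = neb_relax(np.linspace(endpoints[0], endpoints[1], 21))
print(optical_path(ray))
```

This relaxation converges to a minimum of the functional (the low-ray analogue); locating the high ray, a saddle point, would need a climbing-image-type modification.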
NASA Technical Reports Server (NTRS)
Cen, Renyue
1994-01-01
The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Ω₀ = 1 and two Ω₀ = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Ω₀ and P(k) (λ ≤ 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P(k), and the density profiles of massive clusters; more power tends to imply a stronger correlation between α and M(r < 1.5/h Mpc), i.e., massive clusters being relatively extended and small-mass clusters being relatively concentrated. A lower Ω₀ universe tends to produce relatively concentrated massive clusters and relatively extended small-mass clusters compared to their counterparts in a higher Ω₀ model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce mass distributions more extended than the isothermal distribution for most of the clusters, whereas the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Ω₀ and/or the (initial) power of the density fluctuations on scales λ ≤ 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted considerable interest. Some studies have discussed ANN models for the survival of patients with gastric cancer without fully accounting for the censored data. This study proposes an ANN model for predicting gastric cancer survivability that takes the censored data into account. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
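The single time-point construction can be sketched as follows: for each horizon t, patients whose status at t is known (death observed before t, or follow-up extending beyond t) form the training set for a small neural classifier. The data below are synthetic and the network size is arbitrary; this is a sketch of the general idea, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 800
X = rng.normal(size=(n, 6))                   # hypothetical prognostic variables
risk = X[:, 0] + 0.5 * X[:, 1]                # toy linear predictor
time = rng.exponential(3.0 * np.exp(-risk))   # survival time in years
event = rng.random(n) < 0.7                   # False = censored

def auc_at_year(t):
    # status at year t is known if death occurred before t or follow-up passed t;
    # patients censored before t are excluded from this time-point model
    known = (time > t) | ((time <= t) & event)
    y = ((time <= t) & event)[known].astype(int)   # 1 = dead by year t
    Xtr, Xte, ytr, yte = train_test_split(X[known], y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

for t in (1, 2, 3, 4, 5):
    print(t, round(auc_at_year(t), 3))
```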
Kinematics, partitioning and the relationship between velocity and strain in shear zones
NASA Astrophysics Data System (ADS)
Murphy, Justin James
Granite Point, southeast Washington State, captures older distributed deformation deflected by younger localized deformation. This history agrees with mathematical modeling by Watkinson and Patton (2005; 2007 in prep), which suggests that distributed strain occurs at a lower energy threshold than localized strain and predicts deformation histories similar to Granite Point. Ductile shear zones at Granite Point define a zone of deformation where strain is partitioned and localized into at least ten subparallel shear zones with sinistral, west-side-down shear sense. Can the relative movement of the boundaries of this partitioned system be reconstructed? Can partitioning be resolved from a distributed style of deformation? The state of strain and the kinematics of actively deforming zones were studied by relating the velocity field to strain. The Aleutian Arc, Alaska and the central Walker Lane, Nevada were chosen because they have a wealth of geologic data and are recognized examples of obliquely deforming zones. The graphical construction developed by Declan De Paor is ideally suited for this application because it provides a spatially referenced visualization of the relationship between velocity and strain. The construction of De Paor reproduces the observed orientation of strain in the Aleutian Arc; however, the spatial distribution of GPS stations suggests a component of partitioning. Partitioning does not provide a unique solution and cannot be differentiated from a combination of partitioning and distributed strain. In the central Walker Lane, strain trajectories can be reproduced at the domain scale. Furthermore, the effect of anisotropy from Paleozoic through Cenozoic crustal structure, which breaks the regional strain field into pure-shear- and simple-shear-dominated transtension, can be detected. Without GPS velocities to document strictly coaxial strain, the strain orientation should not be taken as the velocity orientation. The strain recorded at Granite Point should not be used to reconstruct the relative movement of the boundaries because the strain direction may not be parallel to the velocity orientation. Kinematic reconstructions of obliquely deforming zones that assume a palaeo-velocity orientation equal to the measured orientation of finite strain may not accurately reflect the deviation between velocity and strain.
Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans
2007-02-01
Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable, although to different degrees, among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.
Unleashing spatially distributed ecohydrology modeling using Big Data tools
NASA Astrophysics Data System (ADS)
Miles, B.; Idaszak, R.
2015-12-01
Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100 km² or more at 30 m spatial resolution) or when run for small watersheds at high spatial resolutions (~1 km² at 3 m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However, even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
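The asynchronous write path can be illustrated with a stripped-down producer/consumer sketch. The record layout is hypothetical and the storage sink is stubbed out so the example stays self-contained; in a PatchDB-like system the consumer would instead issue inserts against the Cassandra cluster (e.g. via the cassandra-driver package).

```python
import queue
import threading

# Hypothetical record layout: (run_id, timestep, patch_id, variable, value).
q: "queue.Queue[tuple]" = queue.Queue(maxsize=10_000)

def sink(record):
    pass  # stub; a real consumer would write the record to the datastore

def writer():
    while True:
        record = q.get()
        if record is None:        # sentinel: shut down cleanly
            break
        sink(record)
        q.task_done()

t = threading.Thread(target=writer, daemon=True)
t.start()

# The model loop stays decoupled from storage latency: it only enqueues.
for ts in range(100):
    for patch in range(50):
        q.put(("run-1", ts, patch, "streamflow", 0.0))
q.put(None)
t.join()
```

The bounded queue applies backpressure if the datastore falls behind, which is the essential property when a simulation emits output faster than it can be persisted.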
Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar
2017-02-01
A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitani, Akira; Tsubota, Makoto
2006-07-01
The energy spectrum of decaying quantum turbulence at T=0 obeys Kolmogorov's law. In addition to this, recent studies revealed that the vortex-length distribution (VLD), meaning the size distribution of the vortices, in decaying Kolmogorov quantum turbulence also obeys a power law. This power-law VLD suggests that the decaying turbulence has scale-free structure in real space. Unfortunately, however, there has been no practical study that answers the following important question: why can quantum turbulence acquire a scale-free VLD? We propose here a model to study the origin of the power law of the VLD from a generic point of view. The nature of quantized vortices allows one to describe the decay of quantum turbulence with a simple model that is similar to the Barabasi-Albert model, which explains the scale-invariance structure of large networks. We show here that such a model can reproduce the power law of the VLD well.
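For readers unfamiliar with the network analogue, a minimal preferential-attachment simulation of the Barabasi-Albert type is sketched below. In the vortex model the "nodes" would be replaced by elements of vortex line, so this shows only the generic scale-free mechanism, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(7)

def preferential_attachment(n_nodes=3000, m=2):
    # Barabasi-Albert-style growth: each new element attaches to m existing
    # ones with probability proportional to their current degree
    repeated = []                   # edge endpoints; sampling it = degree-weighted
    targets = list(range(m))
    for new in range(m, n_nodes):
        for t in targets:
            repeated += [new, t]
        chosen = set()
        while len(chosen) < m:      # degree-proportional choice of next targets
            chosen.add(int(rng.choice(repeated)))
        targets = list(chosen)
    return np.bincount(repeated)    # node degrees

deg = preferential_attachment()
for k in (2, 4, 8, 16, 32, 64):
    print(k, float(np.mean(deg >= k)))   # tail decays ~ k**-2 (degree exponent 3)
```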
Estimating indices of range shifts in birds using dynamic models when detection is imperfect
Clement, Matthew J.; Hines, James E.; Nichols, James D.; Pardieck, Keith L.; Ziolkowski, David J.
2016-01-01
There is intense interest in basic and applied ecology about the effect of global change on current and future species distributions. Projections based on widely used static modeling methods implicitly assume that species are in equilibrium with the environment and that detection during surveys is perfect. We used multiseason correlated detection occupancy models, which avoid these assumptions, to relate climate data to distributional shifts of Louisiana Waterthrush in the North American Breeding Bird Survey (BBS) data. We summarized these shifts with indices of range size and position and compared them to the same indices obtained using more basic modeling approaches. Detection rates during point counts in BBS surveys were low, and models that ignored imperfect detection severely underestimated the proportion of area occupied and slightly overestimated mean latitude. Static models indicated Louisiana Waterthrush distribution was most closely associated with moderate temperatures, while dynamic occupancy models indicated that initial occupancy was associated with diurnal temperature ranges and colonization of sites was associated with moderate precipitation. Overall, the proportion of area occupied and mean latitude changed little during the 1997–2013 study period. Near-term forecasts of species distribution generated by dynamic models were more similar to subsequently observed distributions than forecasts from static models. Occupancy models incorporating a finite mixture model on detection – a new extension to correlated detection occupancy models – were better supported and may reduce bias associated with detection heterogeneity. We argue that replacing phenomenological static models with more mechanistic dynamic models can improve projections of future species distributions. In turn, better projections can improve biodiversity forecasts, management decisions, and understanding of global change biology.
Equilibrium charge distribution on a finite straight one-dimensional wire
NASA Astrophysics Data System (ADS)
Batle, Josep; Ciftja, Orion; Abdalla, Soliman; Elhoseny, Mohamed; Alkhambashi, Majid; Farouk, Ahmed
2017-09-01
The electrostatic properties of uniformly charged regular bodies are prominently discussed on college-level electromagnetism courses. However, one of the most basic problems of electrostatics that deals with how a continuous charge distribution reaches equilibrium is rarely mentioned at this level. In this work we revisit the problem of equilibrium charge distribution on a straight one-dimensional (1D) wire with finite length. The majority of existing treatments in the literature deal with the 1D wire as a limiting case of a higher-dimensional structure that can be treated analytically for a Coulomb interaction potential between point charges. Surprisingly, different models (for instance, an ellipsoid or a cylinder model) may lead to different results, thus there is even some ambiguity on whether the problem is well-posed. In this work we adopt a different approach where we do not start with any higher-dimensional body that reduces to a 1D wire in the appropriate limit. Instead, our starting point is the obvious one, a finite straight 1D wire that contains charge. However, the new tweak in the model is the assumption that point charges interact with each other via a non-Coulomb power-law interaction potential. This potential is well-behaved, allows exact analytical results and approaches the standard Coulomb interaction potential as a limit. The results originating from this approach suggest that the equilibrium charge distribution for a finite straight 1D wire is a uniform charge density when the power-law interaction potential approaches the Coulomb interaction potential as a suitable limit. We contrast such a finding to results obtained using a different regularised logarithmic interaction potential which allows exact treatment in 1D. The present self-contained material may be of interest to instructors teaching electromagnetism as well as students who will discover that simple-looking problems may sometimes pose important scientific challenges.
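The equilibrium distribution can be probed numerically along these lines: place N point charges on the wire, pin one at each end, and minimize the total power-law interaction energy. The exponent, charge count, and end-pinning below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from scipy.optimize import minimize

N, alpha = 40, 0.5     # alpha < 1: softened power law; alpha -> 1: Coulomb limit

def energy(x_free):
    # two charges pinned at the wire ends; the rest move freely on (0, 1)
    x = np.concatenate(([0.0], np.sort(x_free), [1.0]))
    iu = np.triu_indices(N, k=1)
    d = np.abs(x[:, None] - x[None, :])[iu]
    return np.sum(1.0 / d ** alpha)      # total pairwise interaction energy

x0 = np.linspace(0.0, 1.0, N)[1:-1]
res = minimize(energy, x0, method="L-BFGS-B",
               bounds=[(1e-4, 1.0 - 1e-4)] * (N - 2))
x_eq = np.concatenate(([0.0], np.sort(res.x), [1.0]))
# local charge density ~ inverse spacing; compare ends vs. middle,
# and rerun with alpha closer to 1 to probe the Coulomb limit
print(1.0 / np.diff(x_eq)[:3], 1.0 / np.diff(x_eq)[N // 2 - 1: N // 2 + 2])
```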
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
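Event times from such a frailty-modulated intensity can be simulated by Lewis thinning, as in the sketch below; the gamma frailty parameters and the toy covariate are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, T = 0.8, 10.0                  # covariate effect and observation window

def simulate_subject():
    z = rng.gamma(shape=2.0, scale=0.5)       # subject-level gamma frailty, mean 1
    x = np.sin                                 # time-varying covariate (toy)
    lam_max = z * np.exp(abs(beta))            # dominating rate for thinning
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)    # candidate event time
        if t > T:
            return events
        if rng.random() < z * np.exp(beta * x(t)) / lam_max:
            events.append(t)                   # accepted (true) event

times = simulate_subject()
print(len(times), times[:3])
```

Repeating this across subjects yields data with exactly the between-subject rate heterogeneity that the random frailty in the model is meant to capture.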
NASA Astrophysics Data System (ADS)
Yang, Yanqiu; Yu, Lin; Zhang, Yixin
2017-04-01
A model of the average capacity of an optical wireless communication link with pointing errors for the ground-to-train channel of a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average link capacity, the strength of atmospheric turbulence, the variance of the pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander accordingly. If the system adopts automatic tracking of the beam at the receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches a maximum value. The impact of variations in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.
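The ergodic-capacity average over gamma-gamma fading can be sketched by direct numerical integration, as below. Pointing-error and fog attenuation factors are omitted for brevity, the parameters are illustrative, and the instantaneous SNR is assumed to scale with the squared irradiance as in intensity-modulated links; this is a simplified reading, not the paper's full derivation.

```python
import numpy as np
from scipy.special import kv, gamma as G
from scipy.integrate import quad

a, b = 4.0, 2.0        # gamma-gamma shape parameters (turbulence strength)
snr0 = 100.0           # average SNR of the turbulence-free link (assumed)

def gg_pdf(I):
    # gamma-gamma irradiance pdf with unit mean irradiance
    return (2 * (a * b) ** ((a + b) / 2) / (G(a) * G(b))
            * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I)))

# ergodic capacity per unit bandwidth: E[log2(1 + snr0 * I^2)]
cap, _ = quad(lambda I: np.log2(1 + snr0 * I ** 2) * gg_pdf(I), 0, np.inf)
print(cap)
```

Sweeping a and b (stronger turbulence pushes both toward smaller values) reproduces the qualitative trend in the abstract: rougher channels lower the average capacity.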
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verheest, Frank; Hellberg, Manfred A.
The propagation of arbitrary amplitude electron-acoustic solitons and double layers is investigated in a plasma containing cold positive ions, cool adiabatic and hot isothermal electrons, with the retention of full inertial effects for all species. For analytical tractability, the resulting Sagdeev pseudopotential is expressed in terms of the hot electron density, rather than the electrostatic potential. The existence domains for Mach numbers and hot electron densities clearly show that both rarefactive and compressive solitons can exist. Soliton limitations come from the cool electron sonic point, followed by the hot electron sonic point, until a range of rarefactive double layers occurs. Increasing the relative cool electron density further yields a switch to compressive double layers, which ends when the model assumptions break down. These qualitative results are but little influenced by variations in compositional parameters. A comparison with a Boltzmann distribution for the hot electrons shows that only the cool electron sonic point limit remains, giving higher maximum Mach numbers but similar densities, and a restricted range in relative hot electron density before the model assumptions are exceeded. The Boltzmann distribution can reproduce neither the double layer solutions nor the switch in rarefactive/compressive character or negative/positive polarity.
Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response
NASA Astrophysics Data System (ADS)
Floerchinger, Stefan; Wiedemann, Urs Achim
2014-08-01
An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1/N^(n-1). This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
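The 1/N^(n-1) scaling of connected correlations is easy to check by Monte Carlo for the simplest observable, the normalized source count in a fixed bin; the binomial setup below is a toy stand-in for the independent-source model.

```python
import numpy as np

rng = np.random.default_rng(2)

def rescaled_cumulants(N, events=200_000, p=0.1):
    # each event: N independent sources; w = normalized count in one fixed bin
    w = rng.binomial(N, p, size=events) / N
    c2 = np.var(w)                        # connected 2-point (variance)
    c3 = np.mean((w - w.mean()) ** 3)     # connected 3-point (third cumulant)
    return c2 * N, c3 * N ** 2            # should be roughly constant in N

for N in (10, 100, 1000):
    print(N, rescaled_cumulants(N))
```

The rescaled second and third cumulants stay approximately constant as N grows, which is precisely the 1/N and 1/N² scaling quoted in the abstract.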
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
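One way to read the score-map construction is sketched below, combining a facies-probability entropy term with normalized sensitivity and data-mismatch maps. The three ingredients are those named in the abstract, but the specific entropy form, normalization, and equal weighting are illustrative assumptions.

```python
import numpy as np

def pilot_point_score(p_facies, sensitivity, mismatch, w=(1.0, 1.0, 1.0)):
    # p_facies: ensemble probability of (binary) facies at each cell;
    # sensitivity: model-response sensitivity map; mismatch: local data misfit
    p = np.clip(p_facies, 1e-12, 1.0 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # facies uncertainty
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    return (w[0] * norm(entropy) + w[1] * norm(sensitivity)
            + w[2] * norm(mismatch))

def place_pilot_points(score, k=10):
    # pilot points go to the k highest-scoring cells (away from hard data)
    idx = np.argsort(score.ravel())[-k:]
    return np.column_stack(np.unravel_index(idx, score.shape))

rng = np.random.default_rng(5)
score = pilot_point_score(rng.random((50, 50)),
                          rng.random((50, 50)),
                          rng.random((50, 50)))
print(place_pilot_points(score, k=5))
```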
Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi
2017-01-01
Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO-PB Model which uses repeated survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial scale while still maintaining the underlying relationship between abundance and area.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
Exact extreme-value statistics at mixed-order transitions.
Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David
2016-05-01
We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse distance squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable, and not fixed. In the paramagnetic phase, we show that the distribution of the largest domain length l_max converges, in the large L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_max are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
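The paramagnetic-phase statement can be illustrated with a quick renewal-type simulation: tile a system of size L with i.i.d. domain lengths, record the largest, and compare against a fitted Gumbel law. The geometric length distribution below is a stand-in, not the model's actual domain-length law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
L = 100_000

def largest_domain():
    # tile a system of length L with i.i.d. domain lengths (toy paramagnet)
    lengths = rng.geometric(p=0.01, size=2 * int(L * 0.01) + 100)
    k = np.searchsorted(np.cumsum(lengths), L)
    return lengths[: k + 1].max()

sample = np.array([largest_domain() for _ in range(2000)])
loc, scale = stats.gumbel_r.fit(sample)              # fit a Gumbel law to l_max
print(loc, scale, stats.kstest(sample, "gumbel_r", args=(loc, scale)))
```

With exponentially decaying domain lengths the fitted Gumbel passes the test comfortably; the heavy-tailed critical and ferromagnetic regimes described in the abstract are exactly where this convergence breaks down.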
Fan, Yuting; Li, Jianqiang; Xu, Kun; Chen, Hao; Lu, Xun; Dai, Yitang; Yin, Feifei; Ji, Yuefeng; Lin, Jintong
2013-09-09
In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function in simulcast radio-over-fiber-based distributed antenna systems (RoF-DASs), where multiple remote antenna units (RAUs) are connected to one wireless local-area network (WLAN) access point (AP) with different-length fiber links. We also present an analytical model to evaluate the throughput of the systems in the presence of both the inter-RAU hidden-node problem and the fiber-length difference effect. In the model, the unequal delays induced by the different fiber lengths are accounted for both in the backoff stage and in the calculation of Ts and Tc, the periods of time for which the channel is sensed busy due to a successful transmission or a collision, respectively. The throughput performance of WLAN-RoF-DAS in both basic access and request to send/clear to send (RTS/CTS) exchange modes is evaluated with the help of the derived model.
Exact Performance Analysis of Two Distributed Processes with Multiple Synchronization Points.
1987-05-01
…number of processes with straight-line sequences of semaphore operations. We use the geometric model for performance analysis, in contrast to proving…
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require… (Surviving figure captions: distributions of elevation errors for five lines of control points, comparing the DSM (original data) against the DTM (bare earth, processed by Intermap).)
HBT correlations and charge ratios in multiple production of pions
NASA Astrophysics Data System (ADS)
Bialas, A.; Zalewski, K.
1999-01-01
The influence of the HBT effect on the multiplicity distribution and charge ratios of independently produced pions is studied. It is shown that, for a wide class of models, there is a critical point, where the average number of pions becomes very large and the multiplicity distribution becomes very broad. In this regime unusual charge ratios (“centauros”, “anticentauros”) are strongly enhanced. The prospects for reaching this regime are discussed.
NASA Technical Reports Server (NTRS)
Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.
1999-01-01
Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. Copyright 1999 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.
2009-04-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately as well as the correlations in the cloud field because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimate the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, spectrum and the kriged field are the inputs to the algorithm. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith pointing measurement (which together with the wind produces a line measurement), five zenith pointing measurements, a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well and the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith pointing measurement.
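A compact 1D version of the nudged IAAFT loop might read as follows. The nudging schedule (linear decay, strength 0.5) is an illustrative choice, not the authors' exact scheme; the "kriged" input here is just a smoothed copy of the series.

```python
import numpy as np

def iaaft_nudged(measured, kriged=None, iters=200, seed=0):
    # standard IAAFT: reproduce the value distribution and power spectrum of
    # `measured`; optionally nudge toward a kriged field with decaying strength
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(measured)
    amplitudes = np.abs(np.fft.rfft(measured))
    x = rng.permutation(measured)
    for it in range(iters):
        # step 1: impose the target power spectrum, keeping current phases
        phases = np.angle(np.fft.rfft(x))
        x = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(measured))
        # step 2: impose the target value distribution by rank ordering
        x = sorted_vals[np.argsort(np.argsort(x))]
        # step 3 (the extension described above): relax toward the kriged
        # field, with the nudging strength reduced linearly to zero
        if kriged is not None:
            lam = 1.0 - it / iters
            x = (1 - 0.5 * lam) * x + 0.5 * lam * kriged
    return x

series = np.cumsum(np.random.default_rng(1).normal(size=512))  # toy "truth"
smooth = np.convolve(series, np.ones(9) / 9, mode="same")      # kriging stand-in
surrogate = iaaft_nudged(series, kriged=smooth)
```

Because the nudging strength vanishes in the final iterations, the spectrum and distribution constraints hold exactly at the end, while the extrema have been steered toward the positions the (pseudo-)measurement prescribes.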
Space Object Classification and Characterization Via Multiple Model Adaptive Estimation
2014-07-14
…bidirectional reflectance distribution function (BRDF), which models the light distribution scattered from the surface due to the incident light. The BRDF at any point on the surface is a function of two… (Fig. 2: Reflection Geometry and Space Object Shape Model.)
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.
2012-01-01
State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation, interior flow assimilation at any adjustment scale produces streamflow predictions with a spatial correlation structure more consistent with that of streamflow observations. We also describe diagnosing the complexity of the assimilation problem using the spatial correlation information associated with the streamflow process, and discuss the effect of timing errors in a simulated hydrograph on the performance of the data assimilation procedure.
Variable threshold algorithm for division of labor analyzed as a dynamical system.
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro
2014-12-01
Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
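A minimal discrete-time response-threshold system of this kind can be written in a few lines; the simple threshold-adaptation rule below stands in for the paper's variable-threshold algorithm, and the learning and forgetting rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
n, steps = 50, 1000
theta = rng.uniform(1.0, 10.0, n)     # individual response thresholds
s = 5.0                               # task stimulus (state of the system)
delta, alpha = 1.0, 0.06              # stimulus growth / per-worker efficiency
xi, phi = 0.05, 0.02                  # learning / forgetting rates (assumed)

for t in range(steps):
    p = s ** 2 / (s ** 2 + theta ** 2)          # threshold response function
    active = rng.random(n) < p                  # who engages the task this step
    s = max(0.0, s + delta - alpha * active.sum())
    # variable thresholds: workers specialize (theta drops) while active,
    # and desensitize while idle, within fixed bounds
    theta = np.clip(theta - xi * active + phi * ~active, 0.1, 10.0)

print(active.sum(), round(s, 2), theta.round(1)[:5])
```

Iterating the stimulus update as a map of s makes the fixed points and their stability accessible analytically, which is the dynamical-systems viewpoint the abstract emphasizes.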
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
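The distributed CN-VSA idea can be sketched in a few lines: compute runoff with the traditional CN equation, interpret its derivative with respect to effective rainfall as the saturated contributing fraction, and assign that fraction to the cells with the highest wetness index. The derivative-based fraction is one common VSA reading of the CN equation, used here as an illustrative assumption rather than the paper's exact formulation.

```python
import numpy as np

def scs_cn_runoff(P, CN):
    # traditional SCS-CN runoff depth (mm) for storm rainfall P (mm)
    S = 25400.0 / CN - 254.0          # potential retention
    Pe = P - 0.2 * S                  # effective rainfall after initial abstraction
    return 0.0 if Pe <= 0 else Pe * Pe / (Pe + S)

def saturated_source_area(P, CN, wetness_index):
    # VSA reading: contributing fraction = dQ/dPe = 1 - S^2/(Pe + S)^2,
    # distributed to the wettest cells via a topographic wetness index
    S = 25400.0 / CN - 254.0
    Pe = max(P - 0.2 * S, 0.0)
    frac = 1.0 - (S / (Pe + S)) ** 2 if Pe > 0 else 0.0
    cutoff = np.quantile(wetness_index, 1.0 - frac)
    return wetness_index >= cutoff    # boolean map of runoff source areas

wi = np.random.default_rng(3).gamma(2.0, 2.0, size=(100, 100))  # toy index map
src = saturated_source_area(P=50.0, CN=75.0, wetness_index=wi)
print(round(scs_cn_runoff(50.0, 75.0), 1), "mm;", round(src.mean(), 2), "saturated")
```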
Distribution patterns of mercury in Lakes and Rivers of northeastern North America
Dennis, Ian F.; Clair, Thomas A.; Driscoll, Charles T.; Kamman, Neil; Chalmers, Ann T.; Shanley, Jamie; Norton, Stephen A.; Kahl, Steve
2005-01-01
We assembled 831 data points for total mercury (Hgt) and 277 overlapping points for methyl mercury (CH3Hg+) in surface waters from Massachusetts, USA, to the Island of Newfoundland, Canada, from State, Provincial, and Federal government databases. These geographically indexed values were used to determine: (a) if large-scale spatial distribution patterns existed and (b) whether there were significant relationships between the two main forms of aquatic Hg as well as with total organic carbon (TOC), a well-known complexing agent for metals. We analyzed the catchments where samples were collected using a Geographical Information System (GIS) approach, calculating catchment sizes, mean slope, and mean wetness index. Our results show two main spatial distribution patterns. We detected loci of high Hgt values near urbanized regions of Boston MA and Portland ME. However, apart from one unexplained exception, the highest Hgt and CH3Hg+ concentrations were located in regions far from obvious point sources. These corresponded to topographically flat (and thus wet) areas that we relate to wetland abundances. We show that aquatic Hgt and CH3Hg+ concentrations are generally well correlated with TOC and with each other. Over the region, CH3Hg+ concentrations are typically approximately 15% of Hgt. There is an exception in the Boston region where CH3Hg+ is low compared to the high Hgt values. This is probably due to the proximity of point sources of inorganic Hg and a lack of wetlands. We also attempted to predict Hg concentrations in water with statistical models using catchment features as variables. We were only able to produce statistically significant predictive models in some parts of the region, owing to the lack of suitable digital information and because data ranges in some regions were too narrow for meaningful regression analyses.
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2009-10-01
The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15 1776-87; 2007 Phys. Fluids 19 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and intensity of microwave radiation as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.
Maslov, Mikhail Y.; Edelman, Elazer R.; Pezone, Matthew J.; Wei, Abraham E.; Wakim, Matthew G.; Murray, Michael R.; Tsukada, Hisashi; Gerogiannis, Iraklis S.; Groothuis, Adam; Lovich, Mark A.
2014-01-01
Prior studies in small mammals have shown that local epicardial application of inotropic compounds drives myocardial contractility without systemic side effects. Myocardial capillary blood flow, however, may be more significant in larger species than in small animals. We hypothesized that bulk perfusion in capillary beds of the large mammalian heart enhances drug distribution after local release, but also clears more drug from the tissue target than in small animals. Epicardial (EC) drug releasing systems were used to apply epinephrine to the anterior surface of the left heart of swine in either point-sourced or distributed configurations. Following local application or intravenous (IV) infusion at the same dose rates, hemodynamic responses, epinephrine levels in the coronary sinus and systemic circulation, and drug deposition across the ventricular wall, around the circumference and down the axis, were measured. EC delivery via point-source release generated transmural epinephrine gradients directly beneath the site of application extending into the middle third of the myocardial thickness. Gradients in drug deposition were also observed down the length of the heart and around the circumference toward the lateral wall, but not the interventricular septum. These gradients extended further than might be predicted from simple diffusion. The circumferential distribution following local epinephrine delivery from a distributed source to the entire anterior wall drove drug toward the inferior wall, further than with point-source release, but again, not to the septum. This augmented drug distribution away from the release source, down the axis of the left ventricle, and selectively towards the left heart follows the direction of capillary perfusion away from the anterior descending and circumflex arteries, suggesting a role for the coronary circulation in determining local drug deposition and clearance. The dominant role of the coronary vasculature is further suggested by the elevated drug levels in the coronary sinus effluent. Indeed, plasma levels, hemodynamic responses, and myocardial deposition remote from the point of release were similar following local EC or IV delivery. Therefore, the coronary vasculature shapes the pharmacokinetics of local myocardial delivery of small catecholamine drugs in large animal models. Optimal design of epicardial drug delivery systems must consider the underlying bulk capillary perfusion currents within the tissue to deliver drug to tissue targets and may favor therapeutic molecules with better potential retention in myocardial tissue. PMID:25234821
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Song, Xuehang; Ye, Ming
A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.
NASA Astrophysics Data System (ADS)
Abaimov, Sergey G.
The concept of self-organized criticality is associated with scale-invariant, fractal behavior; this concept is also applicable to earthquake systems. It is known that the interoccurrent frequency-size distribution of earthquakes in a region is scale-invariant and obeys the Gutenberg-Richter power-law dependence. Also, the interoccurrent time-interval distribution is known to obey Poissonian statistics when aftershocks are excluded. However, to estimate the hazard risk for a region it is necessary to know also the recurrent behavior of earthquakes at a given point on a fault. This behavior has been investigated in the literature; however, major questions remain unresolved. The reason is the small number of earthquakes in observed sequences. To overcome this difficulty, this research utilizes numerical simulations of a slider-block model and a sand-pile model. Also, experimental observations of creep events on the creeping section of the San Andreas fault are processed, and sequences of up to 100 events are studied. Then the recurrent behavior of earthquakes at a given point on a fault or at a given fault is investigated. It is shown that both the recurrent frequency-size and the time-interval behaviors of earthquakes obey the Weibull distribution.
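Fitting recurrence intervals to a Weibull law, as described, can be done directly with standard tools; the synthetic intervals below merely stand in for an observed creep-event or simulated-earthquake sequence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# synthetic recurrence intervals standing in for an observed sequence
intervals = rng.weibull(1.6, size=200) * 120.0

# fit a two-parameter Weibull (location fixed at zero) and test goodness of fit
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0.0)
print(shape, scale)            # shape > 1 indicates quasi-periodic recurrence
print(stats.kstest(intervals, "weibull_min", args=(shape, loc, scale)))
```

A fitted shape parameter above 1 means short intervals are suppressed relative to a Poisson process, which is the physically interesting signature for hazard estimation at a single fault point.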
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Ji, Haoran; Wang, Chengshan
Distributed generators (DGs), including photovoltaic panels (PVs), have been integrated dramatically into active distribution networks (ADNs). Due to its strong volatility and uncertainty, the high penetration of PV generation greatly exacerbates voltage violations in ADNs. However, the emerging flexible interconnection technology based on soft open points (SOPs) provides increased controllability and flexibility to system operation. To fully exploit the regulation ability of SOPs to address the problems caused by PV, this paper proposes a robust optimization method to achieve the robust optimal operation of SOPs in ADNs. A two-stage adjustable robust optimization model is built to tackle the uncertainties of PV outputs, in which robust operation strategies of SOPs are generated to eliminate the voltage violations and reduce the power losses of ADNs. A column-and-constraint generation (C&CG) algorithm is developed to solve the proposed robust optimization model, which is formulated as a second-order cone program (SOCP) to facilitate accuracy and computational efficiency. Case studies on the modified IEEE 33-node system and comparisons with a deterministic optimization approach verify the effectiveness and robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Petrie, Gordon; Pevtsov, Alexei; Schwarz, Andrew; DeRosa, Marc
2018-06-01
The solar photospheric magnetic flux distribution is key to structuring the global solar corona and heliosphere. Regular full-disk photospheric magnetogram data are therefore essential to our ability to model and forecast heliospheric phenomena such as space weather. However, our spatio-temporal coverage of the photospheric field is currently limited by our single vantage point at/near Earth. In particular, the polar fields play a leading role in structuring the large-scale corona and heliosphere, but each pole is unobservable for {>} 6 months per year. Here we model the possible effect of full-disk magnetogram data from the Lagrange points L4 and L5, each extending longitude coverage by 60°. Adding data also from the more distant point L3 extends the longitudinal coverage much further. The additional vantage points also improve the visibility of the globally influential polar fields. Using a flux-transport model for the solar photospheric field, we model full-disk observations from Earth/L1, L3, L4, and L5 over a solar cycle, construct synoptic maps using a novel weighting scheme adapted for merging magnetogram data from multiple viewpoints, and compute potential-field models for the global coronal field. Each additional viewpoint brings the maps and models into closer agreement with the reference field from the flux-transport simulation, with particular improvement at polar latitudes, the main source of the fast solar wind.
NASA Astrophysics Data System (ADS)
Trung, Ha Duyen
2017-12-01
In this paper, the end-to-end performance of a free-space optical (FSO) communication system combined with amplify-and-forward (AF) fixed-gain relaying and subcarrier quadrature amplitude modulation (SC-QAM) is studied over weak atmospheric turbulence channels modeled by the log-normal distribution with pointing error impairments. More specifically, unlike previous studies of AF relaying FSO communication systems that neglect pointing error effects, the pointing error is studied here by taking into account the influence of beamwidth, aperture size and jitter variance. In addition, a combination of these models is used to analyze the joint effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper values of aperture size and beamwidth can be used to improve the performance of such systems. The analytical results are confirmed by Monte-Carlo simulations.
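A minimal Monte-Carlo sketch of how such an ASER could be cross-checked: log-normal turbulence, Rayleigh-jitter pointing loss over a Gaussian beam, and the standard square-QAM symbol-error formula. All numeric values (QAM order, SNR, log-amplitude variance, jitter, beamwidth) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
N = 200_000
M = 16                      # QAM order (assumed)
snr_bar = 10**(25/10)       # average electrical SNR, 25 dB (assumed)

# Turbulence: log-normal irradiance, normalized so E[h_turb] = 1.
sigma_x = 0.25
h_turb = np.exp(2*sigma_x*rng.standard_normal(N) - 2*sigma_x**2)

# Pointing error: Rayleigh radial jitter over a Gaussian beam of radius w_z.
sigma_s, w_z, A0 = 0.3, 2.5, 1.0
r = sigma_s*np.sqrt(-2*np.log(1 - rng.random(N)))   # Rayleigh via inverse CDF
h_point = A0*np.exp(-2*r**2/w_z**2)

def Q(x):                   # Gaussian Q-function
    return 0.5*erfc(x/np.sqrt(2))

# Exact SER of square M-QAM over AWGN, averaged over the fading samples.
gamma = snr_bar*(h_turb*h_point)**2
q = Q(np.sqrt(3*gamma/(M - 1)))
c = 4*(1 - 1/np.sqrt(M))
ser = c*q - (c**2/4)*q**2
print(f"ASER ~ {ser.mean():.3e}")
```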
A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone
NASA Technical Reports Server (NTRS)
Norum, T. D.
1974-01-01
The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.
Coordinated XTE/EUVE Observations of Algol
NASA Technical Reports Server (NTRS)
Stern, Robert A.
1997-01-01
EUVE, ASCA, and XTE observed the eclipsing binary Algol (Beta Per) from 1-7 Feb. 96. The coordinated observation covered approximately 2 binary orbits of the system, with a net exposure of approximately 160 ksec for EUVE, 40 ksec for ASCA (in 4 pointings), and 90 ksec for XTE (in 45 pointings). We discuss results of modeling the combined EUVE, ASCA, and XTE data using continuous differential emission measure distributions, and provide constraints on the Fe abundance in the Algol system.
Pedersen, Ulrik B; Karagiannis-Voules, Dimitrios-Alexios; Midzi, Nicholas; Mduluza, Tkafira; Mukaratirwa, Samson; Fensholt, Rasmus; Vennervald, Birgitte J; Kristensen, Thomas K; Vounatsou, Penelope; Stensgaard, Anna-Sofie
2017-05-08
Temperature, precipitation and humidity are known to be important factors for the development of schistosome parasites as well as their intermediate snail hosts. Climate therefore plays an important role in determining the geographical distribution of schistosomiasis, and it is expected that climate change will alter distribution and transmission patterns. Reliable predictions of distribution changes and likely transmission scenarios are key to efficient schistosomiasis intervention planning. However, it is often difficult to assess the direction and magnitude of the impact on schistosomiasis induced by climate change, as well as the temporal transferability and predictive accuracy of the models, as prevalence data are often only available from one point in time. We evaluated potential climate-induced changes in the geographical distribution of schistosomiasis in Zimbabwe using prevalence data from two points in time, 29 years apart; to our knowledge, this is the first study investigating this over such a long time period. We applied historical weather data and matched prevalence data for two schistosome species (Schistosoma haematobium and S. mansoni). For each time period studied, a Bayesian geostatistical model was fitted to a range of climatic, environmental and other potential risk factors to identify significant predictors that could help us obtain spatially explicit schistosomiasis risk estimates for Zimbabwe. The observed general downward trend in schistosomiasis prevalence for Zimbabwe between 1981 and the period preceding a survey and control campaign in 2010 parallels a shift towards a drier and warmer climate. However, a statistically significant relationship between climate change and the change in prevalence could not be established.
One-point fluctuation analysis of the high-energy neutrino sky
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feyereisen, Michael R.; Ando, Shin'ichiro; Tamborra, Irene
2017-03-01
We perform the first one-point fluctuation analysis of the high-energy neutrino sky. This method proves especially suited to contemporary neutrino data, as it allows one to study the properties of the astrophysical components of the high-energy flux detected by the IceCube telescope, even with low statistics and in the absence of point source detection. Besides the veto-passing atmospheric foregrounds, we adopt a simple model of the high-energy neutrino background by assuming two main extra-galactic components: star-forming galaxies and blazars. By leveraging multi-wavelength data from Herschel and Fermi, we predict the spectral and anisotropic probability distributions for their expected neutrino counts in IceCube. We find that star-forming galaxies are likely to remain a diffuse background due to the poor angular resolution of IceCube, and we determine an upper limit on the number of shower events that can reasonably be associated with blazars. We also find that upper limits on the contribution of blazars to the measured flux are unfavourably affected by the skewness of the blazar flux distribution. One-point event clustering and likelihood analyses of the IceCube HESE data suggest that this method has the potential to improve dramatically over more conventional model-based analyses, especially for the next generation of neutrino telescopes.
Extensions to the visual predictive check to facilitate model performance evaluation.
Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert
2008-04-01
The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is addressed by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage, regardless of the density of the data, above and below the predicted median at each time point, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
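A minimal sketch of the QVPC bookkeeping described above, assuming observations arranged as a subjects-by-times matrix with NaN marking unavailable records:

```python
import numpy as np

def qvpc(obs, pred_median):
    """obs: (n_subjects, n_times), NaN = missing; pred_median: (n_times,)."""
    n = obs.shape[0]
    above = 100 * (obs > pred_median).sum(axis=0) / n   # NaN compares False
    below = 100 * (obs < pred_median).sum(axis=0) / n
    unavailable = 100 * np.isnan(obs).sum(axis=0) / n
    return above, below, unavailable

# Four subjects, three time points, two missing records (illustrative data).
obs = np.array([[1.2, 2.3, np.nan],
                [0.8, 1.9, 2.5],
                [1.0, np.nan, 2.9],
                [1.4, 2.8, 3.1]])
med = np.array([1.1, 2.2, 2.8])   # model-predicted median per time point
print(qvpc(obs, med))
```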
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.
Probing dim point sources in the inner Milky Way using PCAT
NASA Astrophysics Data System (ADS)
Daylan, Tansu; Portillo, Stephen K. N.; Finkbeiner, Douglas P.
2017-01-01
Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of dim point sources just below the 3FGL flux limit. In order to draw conclusions about the flux distribution of point sources at the dim end, we employ a Bayesian trans-dimensional MCMC framework, taking samples from the space of catalogs consistent with the observed gamma-ray emission in the inner Milky Way. The software implementation, PCAT (Probabilistic Cataloger), is designed to efficiently explore that catalog space in the crowded-field limit, such as in the galactic plane, where the model PSF, point source positions and fluxes are highly degenerate. We thus generate fair realizations of the underlying millisecond pulsar (MSP) population in the inner galaxy and constrain population characteristics such as the radial and flux distributions of such sources.
A novel method for the evaluation of uncertainty in dose-volume histogram computation.
Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas
2008-03-15
Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take these uncertainties into account: a probabilistic approach using a new kind of histogram, the dose-expected volume histogram. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than on organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
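A minimal sketch of the dose-expected volume histogram under the rectangular (uniform) point-dose distribution described above; the half-width w and the dose values are illustrative assumptions:

```python
import numpy as np

# Each point dose d_i is treated as uniform on [d_i - w, d_i + w]; the
# expected volume fraction with dose >= t is the mean exceedance probability
# over all points in the region of interest.
def devh(point_doses, thresholds, w):
    d = np.asarray(point_doses)[:, None]          # (n_points, 1)
    t = np.asarray(thresholds)[None, :]           # (1, n_thresholds)
    p_exceed = np.clip((d + w - t) / (2 * w), 0.0, 1.0)
    return p_exceed.mean(axis=0)                  # expected volume fraction

doses = np.random.default_rng(2).normal(60.0, 3.0, size=5000)  # Gy, synthetic
grid = np.linspace(40, 75, 8)
print(np.round(devh(doses, grid, w=1.5), 3))
```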
Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.
2010-11-30
The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions is necessary to reduce or eliminate human impacts on dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.
Nagy, A; Bodò, G; Dyson, S J; Compostella, F; Barr, A R S
2010-09-01
Evidence-based information is limited on the distribution of local anaesthetic solution following perineural analgesia of the palmar (Pa) and palmar metacarpal (PaM) nerves in the distal aspect of the metacarpal (Mc) region ('low 4-point nerve block'). The objective was to demonstrate the potential distribution of local anaesthetic solution after a low 4-point nerve block using a radiographic contrast model. A radiodense contrast medium was injected subcutaneously over the medial or the lateral Pa nerve at the junction of the proximal three-quarters and distal quarter of the Mc region (Pa injection) and over the ipsilateral PaM nerve immediately distal to the distal aspect of the second or fourth Mc bones (PaM injection) in both forelimbs of 10 mature horses free from lameness. Radiographs were obtained 0, 10 and 20 min after injection and analysed subjectively and objectively. Methylene blue and a radiodense contrast medium were injected in 20 cadaver limbs using the same techniques. Radiographs were obtained and the limbs dissected. After 31/40 (77.5%) Pa injections, the pattern of the contrast medium suggested distribution in the neurovascular bundle. There was significant proximal diffusion with time, but the main contrast medium patch never progressed proximal to the mid-Mc region. The radiological appearance of 2 limbs suggested that contrast medium was present in the digital flexor tendon sheath (DFTS). After PaM injections, the contrast medium was distributed diffusely around the injection site in the majority of the limbs. In cadaver limbs, after Pa injections, the contrast medium and the dye were distributed in the neurovascular bundle in 8/20 (40%) limbs and in the DFTS in 6/20 (30%) limbs. After PaM injections, the contrast and dye were distributed diffusely around the injection site in 9/20 (45%) limbs and showed diffuse and tubular distribution in 11/20 (55%) limbs. Proximal diffusion of local anaesthetic solution after a low 4-point nerve block is unlikely to be responsible for decreasing lameness caused by pain in the proximal Mc region. The DFTS may be penetrated inadvertently when performing a low 4-point nerve block.
2012-09-01
make end of life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [garbled span; recoverable flow-diagram labels: Degradation Modeling, Parameter Estimation, Prediction, Thermal/Electrical Stress, Experimental Data, State-Space Model, RUL, EOL] ...distribution at a given single time point k_P, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma
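The fragment above breaks off at what appears to be sigma-point selection. One common scheme in unscented filtering is the scaled symmetric sigma-point set; the sketch below is a generic illustration of that scheme, not necessarily the one used in the paper.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    """Scaled symmetric set of 2n+1 sigma points with mean/cov weights."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)        # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2*n + 1, 0.5/(n + lam)); wm[0] = lam/(n + lam)
    wc = wm.copy(); wc[0] += 1 - alpha**2 + beta   # covariance weights
    return pts, wm, wc

# Illustrative 2-state example (e.g., a degradation state and its rate).
pts, wm, wc = sigma_points(np.array([1.0, 0.0]), np.eye(2)*0.04)
print(pts.shape, wm.sum())   # (5, 2); mean weights sum to 1
```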
Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...
2018-01-01
This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
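A minimal single-phase illustration of the fixed-point view of the load-flow equations described above (the full method is multiphase with delta/wye connections); the 3-bus radial feeder and load values are assumptions for demonstration, not an IEEE feeder:

```python
import numpy as np

# Radial feeder: slack bus 0 -- bus 1 -- bus 2 -- bus 3, identical lines.
Zline = 0.01 + 0.03j
Y = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]]) / Zline                 # load buses, slack eliminated
w = np.linalg.solve(Y, np.array([1/Zline, 0, 0]))    # no-load voltages (V0 = 1 pu)
Z = np.linalg.inv(Y)                                 # Z-bus of the load buses
S = np.array([0.1+0.05j, 0.1+0.05j, 0.2+0.1j])       # consumed powers (pu)

V = w.copy()
for _ in range(30):
    V = w + Z @ np.conj(-S / V)                      # fixed-point update
print(np.abs(V))                                     # converged magnitudes
```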
Ecological change points: The strength of density dependence and the loss of history.
Ponciano, José M; Taper, Mark L; Dennis, Brian
2018-05-01
Change points in the dynamics of animal abundances have been recorded extensively in historical time series. Little attention has been paid to the theoretical dynamic consequences of such change points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters on the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change point, where it is heading and how long it will take to transition, and here this information is explicitly stated. Importantly, our results provide a direct connection between the strength of density dependence and theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model's robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery.
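A minimal sketch of the kind of change-point stochastic population model discussed above, here a Gompertz-type state equation on log abundance whose parameters shift at a breakpoint; all parameter values are illustrative:

```python
import numpy as np

# Log-abundance x follows x[t+1] = a + b*x[t] + noise; (a, b, s) shift at tau.
# The density-dependence strength b sets how fast the process "forgets" the
# pre-breakpoint regime while relaxing to the new stationary distribution.
rng = np.random.default_rng(3)
T, tau = 200, 100
a1, b1, s1 = 0.5, 0.8, 0.1      # pre-change:  stationary mean a1/(1-b1) = 2.5
a2, b2, s2 = 0.3, 0.9, 0.1      # post-change: stationary mean a2/(1-b2) = 3.0

x = np.empty(T); x[0] = a1/(1 - b1)
for t in range(T - 1):
    a, b, s = (a1, b1, s1) if t < tau else (a2, b2, s2)
    x[t+1] = a + b*x[t] + s*rng.standard_normal()

# Mean relaxes toward the new equilibrium at geometric rate b2 after tau.
print(x[tau], x[-1])
```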
An Empirical Point Error Model for TLS-Derived Point Clouds
NASA Astrophysics Data System (ADS)
Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin
2016-06-01
The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6cc and σ_α = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
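A sketch of the per-point error-ellipsoid computation described above: propagate the a priori precisions through the spherical-to-Cartesian mapping and diagonalize. The angle precisions use the paper's values (with 1 cc = 1e-4 gon); the 5 mm range precision is an assumed value within the stated ±2-12 mm span:

```python
import numpy as np

rho, theta, alpha = 20.0, np.radians(30.0), np.radians(10.0)   # one TLS point
cc = np.pi/200/10000                       # 1 cc = 1e-4 gon, in radians
s_rho, s_theta, s_alpha = 0.005, 36.6*cc, 17.8*cc

# Jacobian of (x,y,z) = rho*(cos a cos t, cos a sin t, sin a) wrt (rho, t, a).
ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
J = np.array([[ca*ct, -rho*ca*st, -rho*sa*ct],
              [ca*st,  rho*ca*ct, -rho*sa*st],
              [sa,     0.0,        rho*ca   ]])

# Law of variance-covariance propagation, then principal-axes transformation.
C = J @ np.diag([s_rho**2, s_theta**2, s_alpha**2]) @ J.T
evals, evecs = np.linalg.eigh(C)
print("ellipsoid semi-axes (mm):", 1000*np.sqrt(evals))
```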
Some analysis on the diurnal variation of rainfall over the Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gill, T.; Perng, S.; Hughes, A.
1981-01-01
Data collected from the GARP Atlantic Tropical Experiment (GATE) were examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid cell covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using C-band radars. Two types of analyses were performed on the data: analysis of diurnal variation was done at each of the grid points based on the rainfall averages at noon and at midnight, and time series analysis was done at selected grid points based on the hourly averages of rainfall. Since there is no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. The Kolmogorov-Smirnov test was used to test whether the rainfalls at noon and at midnight have the same statistical distribution. The Wilcoxon signed-rank test was used to test whether the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done at each of the 10,000 grid points at which data are available.
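The two nonparametric tests named above, applied at a single grid point; the rainfall values are synthetic placeholders for the GATE noon/midnight averages:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
noon = rng.gamma(shape=0.8, scale=2.0, size=60)       # noon rainfall averages
midnight = rng.gamma(shape=0.8, scale=1.6, size=60)   # midnight averages

ks = stats.ks_2samp(noon, midnight)          # same distribution?
w = stats.wilcoxon(noon, midnight)           # paired: noon heavier or lighter?
print(f"KS p = {ks.pvalue:.3f}, Wilcoxon signed-rank p = {w.pvalue:.3f}")
```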
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, a practical and educational geostatistical program (JeoStat) was developed, and an example analysis of porosity parameter distribution using oilfield data is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and uses cross-validation techniques for validation of the fitted theoretical model. All the results obtained by the analysis, as well as all the graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. In addition, the numerical values of any point in the map can be monitored using the mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
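Not JeoStat itself, but a minimal sketch of its core computation: a spherical variogram (the first of the seven models listed) plugged into the ordinary-kriging system; sample locations and porosity values are illustrative:

```python
import numpy as np

def spherical(h, nugget, sill, rng_):
    """Spherical variogram model gamma(h)."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5*h/rng_ - 0.5*(h/rng_)**3)
    return np.where(h < rng_, g, sill)

pts = np.array([[0., 0.], [3., 1.], [1., 4.], [5., 5.]])   # sample locations
vals = np.array([0.12, 0.15, 0.10, 0.18])                  # porosity values
x0 = np.array([2.0, 2.0])                                  # estimation point

nug, sill, a = 0.0, 1.0, 8.0
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
n = len(pts)
A = np.ones((n + 1, n + 1)); A[-1, -1] = 0.0               # Lagrange row/col
A[:n, :n] = spherical(D, nug, sill, a)
b = np.append(spherical(np.linalg.norm(pts - x0, axis=1), nug, sill, a), 1.0)
lam = np.linalg.solve(A, b)[:n]                            # kriging weights
print("OK estimate:", lam @ vals)                          # weights sum to 1
```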
Iterative updating of model error for Bayesian inversion
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew
2018-02-01
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only a limited number of full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data are finite-dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large-particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
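A linear-Gaussian sketch of the iteration described above, assuming an accurate model F and a cheap approximation f; the dimensions, noise level, and the discrepancy between F and f are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

rng = np.random.default_rng(13)
n, m = 20, 15
F = rng.normal(size=(m, n))                      # "accurate" forward model
f = F + 0.05*rng.normal(size=(m, n))             # cheap approximate model
u_true = rng.normal(size=n)
sig = 0.01
y = F @ u_true + sig*rng.normal(size=m)          # data from the full model
C0 = np.eye(n)                                   # Gaussian prior, zero mean

mu_eps, C_eps = np.zeros(m), np.zeros((m, m))    # model-error moments
for it in range(10):
    # Posterior for y = f u + eps + e, with eps ~ N(mu_eps, C_eps).
    Cn = sig**2*np.eye(m) + C_eps
    K = C0 @ f.T @ np.linalg.inv(f @ C0 @ f.T + Cn)
    u_mean = K @ (y - mu_eps)
    C_post = C0 - K @ f @ C0
    # Re-estimate model-error moments from samples of the current posterior.
    U = rng.multivariate_normal(u_mean, C_post, size=500)
    E = (F - f) @ U.T                            # modeling-error samples
    mu_eps, C_eps = E.mean(axis=1), np.cov(E)

print("relative error:", np.linalg.norm(u_mean - u_true)/np.linalg.norm(u_true))
```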
Probabilistic forecasting for extreme NO2 pollution episodes.
Aznarte, José L
2017-10-01
In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. In contrast to the usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows the prediction of the full probability distribution, which in turn allows building models specifically fitted to the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measurements, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable and sharp. Besides, we study the relative importance of the independent variables involved, and show how the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible way to present probabilistic forecasts that maximizes their usefulness.
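A minimal sketch of the approach, assuming statsmodels' QuantReg with illustrative predictors; the exceedance probability is read off the predicted quantile curve, and the 180 ug/m3 threshold is an assumption, not necessarily the paper's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 2))                         # e.g. wind speed, NO2 lag
y = 80 + 15*X[:, 0] - 10*X[:, 1] + rng.gumbel(0, 12, n)   # synthetic NO2

Xc = sm.add_constant(X)
quantiles = [0.5, 0.9, 0.95, 0.99]                  # median and upper tail
preds = {q: sm.QuantReg(y, Xc).fit(q=q).predict(Xc) for q in quantiles}

# P(y > 180) at one observation: interpolate along the predicted quantiles
# (assumes the predicted quantile values are monotone, i.e., no crossing).
q_levels = np.array(quantiles)
q_vals = np.array([preds[q][0] for q in quantiles])
p_exceed = 1 - np.interp(180.0, q_vals, q_levels)
print(f"P(NO2 > 180) ~ {p_exceed:.3f}")
```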
Three-dimensional eddy current solution of a polyphase machine test model (abstract)
NASA Astrophysics Data System (ADS)
Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado
1994-05-01
This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two-pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from that of the analytical solution. Also, the local distribution of the losses is, at any given point, less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.
Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis
Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm³ and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers.
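A minimal sketch of a random sequential adsorption (RSA) process of the kind invoked above: candidate points placed uniformly at random in a unit cube, accepted only if they do not overlap previously placed ones; the hard-core radius and target count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
r, target, max_tries = 0.03, 500, 200_000   # hard-core radius, goal, budget
accepted = []
for _ in range(max_tries):
    c = rng.random(3)                       # uniform candidate in unit cube
    # Accept only if centre-to-centre distance exceeds 2r for all points.
    if all(np.sum((c - p)**2) > (2*r)**2 for p in accepted):
        accepted.append(c)
        if len(accepted) == target:
            break
print(f"placed {len(accepted)} non-overlapping points")
```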
Evaluation of a spatially-distributed Thornthwaite water-balance model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lough, J.A.
1993-03-01
A small watershed of low relief in coastal New Hampshire was divided into hydrologic sub-areas in a geographic information system on the basis of soils, sub-basins and remotely-sensed landcover. Three variables were spatially modeled for input to 49 individual water-balances: available water content of the root zone, water input and potential evapotranspiration (PET). The individual balances were weight-summed to generate the aggregate watershed-balance, which saw 9% (48-50 mm) less annual actual-evapotranspiration (AET) compared to a lumped approach. Analysis of streamflow coefficients suggests that the spatially-distributed approach is more representative of the basin dynamics. Variation of PET by landcover accounted for the majority of the 9% AET reduction. Variation of soils played a near-negligible role. As a consequence of the above points, estimates of landcover proportions and annual PET by landcover are sufficient to correct a lumped water-balance in the Northeast. If remote sensing is used to estimate the landcover area, a sensor with a high spatial resolution is required. Finally, while the lower Thornthwaite model has conceptual limitations for distributed application, the upper Thornthwaite model is highly adaptable to distributed problems and may prove useful in many earth-system models.
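For reference, the standard monthly Thornthwaite PET formula underlying such a water balance (shown here without the day-length correction factor; T is mean monthly air temperature in deg C, and the example climate is illustrative):

```python
import numpy as np

def thornthwaite_pet(T_monthly):
    """Monthly PET (mm) by Thornthwaite; assumes at least one month with T > 0."""
    T = np.clip(np.asarray(T_monthly, float), 0.0, None)   # PET = 0 below 0 C
    I = np.sum((T / 5.0) ** 1.514)                         # annual heat index
    a = 6.75e-7*I**3 - 7.71e-5*I**2 + 1.792e-2*I + 0.49239
    return 16.0 * (10.0 * T / I) ** a

T = [-3, -1, 4, 10, 16, 20, 23, 22, 17, 11, 5, 0]   # coastal NH-like climate
print(np.round(thornthwaite_pet(T), 1))
```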
McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael
2010-04-01
Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool, which provides guidance for hardware design. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic or soft failures are modeled with a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients. The phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
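A minimal sketch of the failure model described above, for a uniform linear array: hard failures zero an element, soft failures perturb amplitudes (Gaussian) and phases (uniform); failure rates and tolerances are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
N, d = 64, 0.5                                   # elements, spacing (wavelengths)
p_hard, amp_sigma, ph_bound = 0.02, 0.1, np.radians(5)

amp = 1 + amp_sigma*rng.standard_normal(N)       # soft amplitude errors
amp[rng.random(N) < p_hard] = 0.0                # hard (catastrophic) failures
phase = rng.uniform(-ph_bound, ph_bound, N)      # uniform phase errors

theta = np.radians(np.linspace(-90, 90, 1801))   # observation angles
n = np.arange(N)
AF = (amp*np.exp(1j*phase)) @ np.exp(1j*2*np.pi*d*np.outer(n, np.sin(theta)))
pat = 20*np.log10(np.abs(AF)/np.abs(AF).max())   # normalized pattern (dB)
print("peak sidelobe (dB):", pat[np.abs(theta) > np.radians(3)].max())
```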
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of the contact time distribution for random walk (RW) models assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases imply a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-to-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.
Direct Measurements of Interplanetary Dust Particles in the Vicinity of Earth
NASA Technical Reports Server (NTRS)
McCracken, C. W.; Alexander, W. M.; Dubin, M.
1961-01-01
The direct measurements made by the Explorer VIII satellite provide the first sound basis for analyzing all available direct measurements of the distribution of interplanetary dust particles. The model average distribution curve established by such an analysis departs significantly from that predicted by the (uncertain) extrapolation of results from meteor observations. A consequence of this difference is that the daily accretion of interplanetary particulate matter by the earth is now considered to consist mainly of dust particles in the size range covered by the direct measurements. Almost all the available direct measurements obtained with microphone systems on rockets, satellites, and spacecraft fit directly on the distribution curve defined by Explorer VIII data. The lack of reliable datum points departing significantly from the model average distribution curve means that available direct measurements show no discernible evidence of an appreciable geocentric concentration of interplanetary dust particles.
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
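A minimal sketch of the N-mixture idea described above, assuming counts from repeated images at each site with N_i ~ Poisson(lambda) and detection probability p; the data are simulated, with p set near the paper's 0.76 estimate:

```python
import numpy as np
from scipy import stats, optimize

def nll(params, y, Nmax=200):
    """Negative log-likelihood: y[i,j] | N_i ~ Binomial(N_i, p), N_i ~ Poisson."""
    lam, p = np.exp(params[0]), 1/(1 + np.exp(-params[1]))
    Ns = np.arange(Nmax + 1)
    prior = stats.poisson.pmf(Ns, lam)
    ll = 0.0
    for yi in y:                                           # marginalize N per site
        lik_N = np.prod(stats.binom.pmf(yi[:, None], Ns[None, :], p), axis=0)
        ll += np.log(lik_N @ prior + 1e-300)
    return -ll

rng = np.random.default_rng(8)
N_true = rng.poisson(20, size=40)                          # 40 sites
y = rng.binomial(N_true[:, None], 0.76, size=(40, 3))      # 3 images per site
res = optimize.minimize(nll, x0=[np.log(10), 0.0], args=(y,))
print("lambda, p =", np.exp(res.x[0]), 1/(1 + np.exp(-res.x[1])))
```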
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effects of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
Data-Driven Residential Load Modeling and Validation in GridLAB-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotseff, Peter; Lundstrom, Blake
Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and, hence, impacts on the distribution system over a given time period. Unfortunately, the high-time-resolution DER source and load data required for model inputs are often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a load statistically similar to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition, as well as realistic occupancy schedules. House model calibration and validation were performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
Synoptic, Global MHD Model for the Solar Corona
NASA Astrophysics Data System (ADS)
Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.
2007-05-01
The common techniques for mimicking solar coronal heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can get the surface distribution of γ as a function of the final speed of the wind originating from each point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present model results for different Carrington Rotations.
NASA Astrophysics Data System (ADS)
Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il
2016-05-01
In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study of the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap having randomly distributed vortices and implement vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states by the decay time required for the vortex number to be reduced to a quarter of the initial number. We find the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, the decay times obtained in this work show good quantitative agreement with the experimental results, indicating that, in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.
NASA Astrophysics Data System (ADS)
Chen, Lei; Xu, Jiajia; Wang, Guobo; Liu, Hongbin; Zhai, Limei; Li, Shuang; Sun, Cheng; Shen, Zhenyao
2018-07-01
Hydrological and non-point source pollution (H/NPS) predictions in ungauged basins have become a key problem for watershed studies, especially for large-scale catchments. However, few studies have explored the comprehensive impacts of rainfall data scarcity on H/NPS predictions. This study focused on: 1) the effects of rainfall spatial scarcity (by removing 11%-67% of stations based on their locations) on the H/NPS results; 2) the impacts of rainfall temporal scarcity (10%-60% data scarcity in time series); and 3) the development of a new evaluation method that incorporates information entropy. A case study was undertaken using the Soil and Water Assessment Tool (SWAT) in a typical watershed in China. The results of this study highlighted the importance of critical-site rainfall stations, which often showed greater influence and cross-tributary impacts on the H/NPS simulations. Higher missing rates above a certain threshold, as well as data missing during the wet periods, resulted in poorer simulation results. Compared to traditional indicators, information entropy could serve as a good substitute because it reflects the distribution of spatial variability and the development of temporal heterogeneity. This paper reports important implications for the application of distributed and semi-distributed hydrological models, as well as for the optimal design of rainfall gauges in large basins.
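A minimal sketch of the entropy-based indicator idea: Shannon entropy of a station's rainfall distribution, which degrades as records go missing; the gamma-distributed rainfall and the 40% scarcity rate are illustrative assumptions:

```python
import numpy as np

def shannon_entropy(x, bins=20):
    """Shannon entropy (bits) of the empirical distribution, ignoring NaNs."""
    p, _ = np.histogram(x[~np.isnan(x)], bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(9)
rain = rng.gamma(0.5, 8.0, size=365)            # one station, daily totals (mm)
rain_missing = rain.copy()
rain_missing[rng.random(365) < 0.4] = np.nan    # 40% temporal scarcity
print(shannon_entropy(rain), shannon_entropy(rain_missing))
```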
Numerical analysis of tailored sheets to improve the quality of components made by SPIF
NASA Astrophysics Data System (ADS)
Gagliardi, Francesco; Ambrogio, Giuseppina; Cozza, Anna; Pulice, Diego; Filice, Luigino
2018-05-01
In this paper, the authors present a study on the profitable combination of forming techniques. In more detail, attention has been focused on combining single point incremental forming (SPIF) with an additional process that can produce a local material thickening of the initial blank, to counteract the local thinning which the sheets undergo. Focusing on the excessive thinning of parts made by SPIF, a hybrid approach can be seen as a viable solution to reduce the inhomogeneous thickness distribution of the sheet. The basic idea is to work on a blank previously modified by a deformation step performed, for instance, by forming, additive or subtractive processes. To evaluate the effectiveness of this hybrid solution, an FE numerical model has been defined to analyze the thickness variation in incrementally formed tailored sheets, optimizing the material distribution according to the shape to be manufactured. Simulations based on the explicit formulation have been set up for the model implementation. The mechanical properties of the sheet material have been taken from the literature, and a frustum of a cone has been considered as the benchmark profile for the analysis. The outcomes of the numerical model have been evaluated in terms of both maximum thinning and final thickness distribution. The feasibility of the proposed approach is discussed in detail in the paper.
Modeling a space-based quantum link that includes an adaptive optics system
NASA Astrophysics Data System (ADS)
Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.
2017-10-01
Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses consist of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings, which can then be converted to keys for encryption systems. When these keys are used along with symmetric encryption techniques such as a one-time pad, this method of key generation and encryption is resistant to future advances in quantum computing, which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments and, finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground, and the effects of turbulence on the transmitted optical pulse, are described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the complete low-earth-orbit-to-ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system, as well as the future work necessary to show this impact, are described.
NASA Astrophysics Data System (ADS)
Divine, D. V.; Godtliebsen, F.; Rue, H.
2012-01-01
The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
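The Beta-distribution result for the absolutely dated case lends itself to a compact illustration. In the sketch below, the concentration parameter kappa and its link to the underlying Gamma process are placeholder assumptions, not the paper's calibration; only the Beta form of the age density between two exactly known tie points follows the abstract.

```python
import numpy as np
from scipy import stats

t0, t1 = 1000.0, 2000.0      # tie-point ages (years BP), assumed known exactly
x = 0.4                      # relative depth between the two tie points

# Assumed Beta density on the normalised age at depth x; larger kappa
# (an illustrative stand-in for the Gamma-process shape) means tighter ages.
kappa = 50.0
a, b = kappa * x, kappa * (1 - x)
lo, hi = stats.beta.ppf([0.025, 0.975], a, b)

print("best-guess age:", t0 + x * (t1 - t0))
print("95% interval:", t0 + lo * (t1 - t0), "to", t0 + hi * (t1 - t0))
```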
How human drivers control their vehicle
NASA Astrophysics Data System (ADS)
Wagner, P.
2006-08-01
The data presented here show that human drivers apply a discrete noisy control mechanism to drive their vehicle. A car-following model built on these observations, together with some physical limitations (crash-freeness, acceleration), leads to non-Gaussian probability distributions of speed difference and distance that are in good agreement with empirical data. All model parameters have a clear physical meaning and can be measured. Despite its apparent complexity, the model is simple to understand and might serve as a starting point for developing even quantitatively correct models.
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of the fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components composing the material, i.e. asphalt, aggregate and air voids. The model directly captures the random nature of the material parameters and the aggregate distribution in specimens. Initial results of the analysis are presented here.
Evidence for dust transport in Viking IR thermal mapper opacity data
NASA Technical Reports Server (NTRS)
Martin, Terry Z.
1993-01-01
Global maps of 9-micron dust opacity derived from radiometric observations made by the Viking Orbiter IR Thermal Mapper instruments have revealed a wealth of new information about the distribution of airborne dust over 1.36 Mars years from 1976-1979. In particular, the changing dust distribution during major dust storms is of interest since the data provide a point of contact with both Earth-based observations of storm growth and with global circulation models.
NASA Astrophysics Data System (ADS)
Ushenko, Yu. A.; Wanchuliak, O. Y.
2013-06-01
An optical model of polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth at points of laser images of myocardium histological sections is suggested. Results are presented on the interrelation between the statistical parameters (statistical moments of the 1st-4th orders) that characterize the distributions of wavelet coefficients of polarization maps of myocardium layers and the causes of death.
Calibrating binary lumped parameter models
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Stewart, Mike
2017-04-01
Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within a water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and are therefore a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs and, recently, Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters of different ages at such wells than the simple lumped parameter models represent. Binary (or compound) mixing models are able to represent more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb tritium through the aquifer, and SF6 with its currently steep gradient in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into the wells, which poses a risk of bacteriological contamination of the drinking water from the surface.
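As a rough illustration of how a single LPM maps a tracer input history onto a measured concentration, the sketch below convolves a made-up tritium input with an exponential age distribution; the input curve, mean residence time, and truncation of the history at 1950 are all illustrative assumptions, not the authors' data or calibration. A binary mixing model would simply combine two such outputs with a mixing fraction, adding the extra parameters discussed above.

```python
import numpy as np

def lpm_output(c_in, tau, lam, dt=1.0):
    """Tracer concentration at the latest time: the input history convolved
    with an exponential age pdf (mean residence time tau), with radioactive
    decay exp(-lam * age) applied along each flowline."""
    ages = np.arange(len(c_in)) * dt
    g = (1.0 / tau) * np.exp(-ages / tau)      # exponential mixing model pdf
    weights = g * np.exp(-lam * ages) * dt
    # c_in reversed so index 0 is the most recent year; history is truncated,
    # so water older than the record is crudely ignored.
    return float(np.sum(c_in[::-1] * weights))

lam = np.log(2) / 12.32                        # tritium decay constant (1/yr)
years = np.arange(1950, 2017)
c_in = 2.0 + 60.0 * np.exp(-0.5 * ((years - 1965) / 5.0) ** 2)  # toy bomb peak (TU)
print(lpm_output(c_in, tau=25.0, lam=lam))     # modelled tritium in 2016
```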
Schoville, Benjamin J; Brown, Kyle S; Harris, Jacob A; Wilkins, Jayne
2016-01-01
The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies is the unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages: Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape, consistent with armature tips at KP1 and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed.
Kohut, Sviataslau V; Staroverov, Viktor N
2013-10-28
The exchange-correlation potential of Kohn-Sham density-functional theory, v_XC(r), can be thought of as an electrostatic potential produced by the static charge distribution q_XC(r) = -(1/4π)∇²v_XC(r). The total exchange-correlation charge, Q_XC = ∫ q_XC(r) dr, determines the rate of the asymptotic decay of v_XC(r). If Q_XC ≠ 0, the potential falls off as Q_XC/r; if Q_XC = 0, the decay is faster than coulombic. According to this rule, exchange-correlation potentials derived from standard generalized gradient approximations (GGAs) should have Q_XC = 0, but accurate numerical calculations give Q_XC ≠ 0. We resolve this paradox by showing that the charge density q_XC(r) associated with every GGA consists of two types of contributions: a continuous distribution and point charges arising from the singularities of v_XC(r) at each nucleus. Numerical integration of q_XC(r) accounts for the continuous charge but misses the point charges. When the point-charge contributions are included, one obtains the correct Q_XC value. These findings provide an important caveat for attempts to devise asymptotically correct Kohn-Sham potentials by modeling the distribution q_XC(r).
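The point about numerical integration missing nuclear point charges can be reproduced with a toy potential. In the sketch below, v(r) = -1/r stands in for the coulombic singularity at a nucleus (it is not a GGA potential); grid-based integration of the associated charge density returns approximately zero, missing the unit point charge carried by the delta function at the origin.

```python
import numpy as np

# Radial grid that avoids r = 0, where v(r) = -1/r is singular.
r = np.linspace(1e-3, 50.0, 200000)
v = -1.0 / r

# Spherical Laplacian of a radial function: (1/r^2) d/dr (r^2 dv/dr).
dv = np.gradient(v, r)
lap = np.gradient(r**2 * dv, r) / r**2

q = -lap / (4.0 * np.pi)                         # "charge density" q(r)
Q_numeric = np.trapz(4.0 * np.pi * r**2 * q, r)  # grid integration

# ~0: the grid sees only the continuous part and misses the delta-function
# point charge; the analytic total is Q = 1 (a unit charge at the nucleus).
print(Q_numeric)
```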
Point clouds segmentation as base for as-built BIM creation
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2015-08-01
In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved by considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. The results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
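A minimal sketch of the first step (floor segmentation from the Z histogram) is given below; the binning, peak threshold, and synthetic data are illustrative assumptions, not the values or test clouds used by the authors.

```python
import numpy as np

def segment_floors(points, bin_size=0.10, peak_frac=0.02):
    """Coarse floor segmentation of an indoor point cloud: dense bins in
    the Z histogram are treated as horizontal slabs (floors/ceilings),
    and the cloud is cut at each slab's centre."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    slab = hist > peak_frac * len(z)        # bins dense enough to be slabs

    # Group consecutive slab bins; each group's centre becomes a cut height.
    cuts, start = [], None
    for i, s in enumerate(slab):
        if s and start is None:
            start = i
        elif not s and start is not None:
            cuts.append(0.5 * (edges[start] + edges[i]))
            start = None
    if start is not None:
        cuts.append(0.5 * (edges[start] + edges[-1]))

    return np.digitize(z, np.asarray(cuts))  # storey label per point

# Example: two synthetic storeys with dense floor/ceiling slabs at 0, 3, 6 m.
rng = np.random.default_rng(0)
zs = np.concatenate([rng.normal(0.0, 0.02, 10000),
                     rng.uniform(0.1, 2.9, 10000),
                     rng.normal(3.0, 0.02, 20000),
                     rng.uniform(3.1, 5.9, 10000),
                     rng.normal(6.0, 0.02, 10000)])
pts = np.column_stack([rng.uniform(0, 10, zs.size),
                       rng.uniform(0, 10, zs.size), zs])
labels = segment_floors(pts)
```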
An atomistic geometrical model of the B-DNA configuration for DNA-radiation interaction simulations
NASA Astrophysics Data System (ADS)
Bernal, M. A.; Sikansi, D.; Cavalcante, F.; Incerti, S.; Champion, C.; Ivanchenko, V.; Francis, Z.
2013-12-01
In this paper, an atomistic geometrical model of the B-DNA configuration is explained. The model accounts for five organization levels of the DNA, up to the 30 nm chromatin fiber; fragments of this fiber can be used to construct the whole genome. The algorithm developed in this work is capable of determining the closest atom to an arbitrary point in space. It can be used in any application in which a DNA geometrical model is needed, for instance, in investigations related to the effects of ionizing radiation on the human genetic material. Successful consistency checks were carried out to test the proposed model.
Catalogue identifier: AEPZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1245
No. of bytes in distributed program, including test data, etc.: 6574
Distribution format: tar.gz
Programming language: FORTRAN
Computer: Any
Operating system: Multi-platform
RAM: 2 Gb
Classification: 3
Nature of problem: The Monte Carlo method is used to simulate the interaction of ionizing radiation with the human genetic material in order to determine DNA damage yields per unit absorbed dose. To accomplish this task, an algorithm is needed to determine whether a given energy deposition lies within a given target; this target can be an atom or any other structure of the genetic material.
Solution method: This is a stand-alone subroutine describing an atomic-resolution geometrical model of the B-DNA configuration. It is able to determine the closest atom to an arbitrary point in space. The model accounts for five organization levels of the human genetic material, from the nucleotide pair up to the 30 nm chromatin fiber. The subroutine carries out a series of coordinate transformations to find the closest atom containing an arbitrary point in space. Atom sizes follow the corresponding van der Waals radii.
Restrictions: The geometrical model presented here does not include the chromosome organization level, but it could easily be built up by using fragments of the 30 nm chromatin fiber.
Unusual features: To our knowledge, this is the first open-source atomic-resolution DNA geometrical model developed for DNA-radiation interaction Monte Carlo simulations. In our tests, the current model took into account the explicit positions of about 56×10⁶ atoms, although the user may increase this number as needed.
Running time: This subroutine can process about 2 million points within a few minutes on a typical current computer.
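The core query of the subroutine, finding the closest atom to an arbitrary point, can be prototyped in a few lines with a k-d tree. This generic sketch does not reproduce the coordinate-transformation scheme of the distributed FORTRAN code; the coordinates, atom count, and radius are stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 30.0, size=(56_000, 3))  # stand-in coordinates
vdw_radius = 0.15                                 # illustrative van der Waals radius

tree = cKDTree(atoms)
point = np.array([12.3, 4.5, 6.7])                # e.g. an energy-deposition site
dist, idx = tree.query(point)                     # closest atom and its distance
print(idx, dist, dist <= vdw_radius)              # hit test against the atom sphere
```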
NASA Technical Reports Server (NTRS)
Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.
1992-01-01
The current controversy regarding regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, and that pseudorandom-number-generated models bias the observed nearest-neighbor statistics towards regularity. The interpretation of these nearest-neighbor statistics is discussed for many cases of superposed clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
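The nearest-neighbor cumulative distribution test underlying this analysis can be sketched as follows. For complete spatial randomness (a Poisson process of intensity λ), the expected CDF of nearest-neighbor distances is G(r) = 1 - exp(-λπr²); clustering pushes the empirical CDF above this curve at small r, regularity below it. Edge effects, which a careful analysis must correct for, are ignored in this toy version.

```python
import numpy as np

rng = np.random.default_rng(1)
n, side = 500, 1.0
pts = rng.uniform(0, side, size=(n, 2))       # one CSR (Poisson-like) realisation

# Nearest-neighbour distance for every point.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = np.sort(d.min(axis=1))

G_emp = np.arange(1, n + 1) / n               # empirical nearest-neighbour CDF
G_csr = 1.0 - np.exp(-(n / side**2) * np.pi * nn**2)

print(float(np.max(np.abs(G_emp - G_csr))))   # KS-type discrepancy from CSR
```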
Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P
2015-03-01
Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to inform effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% mean predicted probability) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection.
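A schematic version of such a model comparison might look like the sketch below, with the GLM represented by logistic regression and the GAM omitted (it needs a dedicated package). The predictor names mirror the paper's four static variables, but the data and the presence signal are simulated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 200, n),   # depth (m)
    rng.uniform(0, 15, n),    # bottom slope (deg)
    rng.uniform(0, 20, n),    # distance from 100 m contour (km)
    rng.uniform(0, 10, n),    # distance from coast (km)
])
# Synthetic presence/absence signal, for illustration only.
p = 1.0 / (1.0 + np.exp(0.03 * X[:, 0] - 0.4 * X[:, 3] - 1.0))
y = rng.random(n) < p

for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(name, auc.mean().round(3))   # cross-validated discrimination skill
```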
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare its performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source, multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been applied to predict the spatial concentration distributions of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with monitoring data. Good agreement was obtained, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
NASA Astrophysics Data System (ADS)
Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo
2017-05-01
Alluvial aquifers are often characterized by the presence of braided high-permeability paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance for predicting flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information such as groundwater levels can constrain the channel positions through inverse modeling or data assimilation, but the method must be able to handle the non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and the NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an extension of approximately 3 km². A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform one based on a multi-Gaussian model, and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site. The synthetic results showed that only for a much larger number of conditioning piezometric heads could multiple point geostatistics and ERT improve aquifer characterization. This shows that state-of-the-art stochastic methods need to be supported by abundant and high-quality subsurface data.
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of a drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions, from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) than normal or log-normal methods, and more precise (smaller standard errors of the cut point estimators) than the nonparametric percentile method. Under a gamma regime, normal-theory methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2% to 8.3% (positive biases between +1.2% and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3%, the absolute bias decreasing with the shape parameter. These results are consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in screening immunogenicity assays will not meet the minimum 5% false positive target proposed in the immunogenicity assay guidance white papers.
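The gamma-based cut-point computation can be sketched in a few lines: fit a 3-parameter gamma (shape, location, scale) to drug-naïve screening responses and take its 95th percentile as the cut point targeting a 5% false positive rate. The simulated responses below are illustrative, not the authors' data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated drug-naive responses: unimodal, positively skewed.
naive = rng.gamma(shape=3.0, scale=0.05, size=200) + 0.1

a, loc, scale = stats.gamma.fit(naive)           # 3-parameter ML fit
cut_gamma = stats.gamma.ppf(0.95, a, loc, scale)  # 95th percentile cut point

# Normal-theory cut point (mean + 1.645 SD) for comparison; under a gamma
# regime this tends to miss the targeted 5% false positive rate.
cut_normal = naive.mean() + 1.645 * naive.std(ddof=1)
print(round(cut_gamma, 4), round(cut_normal, 4))
```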
Polarized radiance distribution measurement of skylight. II. Experiment and data.
Liu, Y; Voss, K
1997-11-20
Measurements of the skylight polarized radiance distribution were performed at different measurement sites, under various atmospheric conditions, and at three wavelengths with our newly developed Polarization Radiance Distribution Camera System (RADS-IIP), an analyzer-type Stokes polarimeter. Three Stokes parameters of skylight (I, Q, U), the degree of polarization, and the plane of polarization are presented in image format. The Arago point and neutral lines have been observed with RADS-IIP. Qualitatively, the dependence of the intensity and polarization data on wavelength, solar zenith angle, and surface albedo is in agreement with results from computations based on a plane-parallel Rayleigh atmospheric model.
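For reference, the derived quantities shown in such image products follow directly from the three measured Stokes parameters; the sample values below are made up.

```python
import numpy as np

I, Q, U = 1.00, 0.18, -0.07                 # illustrative Stokes parameters
dop = np.hypot(Q, U) / I                    # degree of (linear) polarization
chi = 0.5 * np.degrees(np.arctan2(U, Q))    # plane of polarization (deg)
print(dop, chi)
```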
NASA Technical Reports Server (NTRS)
Smith, G. L.; Green, R. N.; Young, G. R.
1974-01-01
The NIMBUS-G environmental monitoring satellite has an instrument onboard (a gas correlation spectrometer) for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients and the source location.
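A minimal ground-reflected Gaussian plume of the kind assumed for the point source can be written as follows; the dispersion parameters and example numbers are illustrative, not taken from the paper.

```python
import numpy as np

def plume_concentration(y, z, Q, u, sigma_y, sigma_z, H=0.0):
    """Steady-state Gaussian plume concentration (g/m^3) at crosswind offset
    y and height z for a point source of strength Q (g/s) in wind speed u
    (m/s); downwind distance enters through sigma_y and sigma_z."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    # Image-source term reflects the plume off the ground (z = 0).
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 kg/s source at 20 m, 5 m/s wind, plausible sigmas ~1 km downwind.
print(plume_concentration(y=0.0, z=0.0, Q=1000.0, u=5.0,
                          sigma_y=80.0, sigma_z=40.0, H=20.0))
```

Estimating the source strength, wind, diffusion coefficients and source location then amounts to fitting the parameters of this forward model to the spectrometer's mass measurements, weighted by their uncertainties.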
NASA Astrophysics Data System (ADS)
Vâjâiac, Sorin Nicolae; Filip, Valeriu; Štefan, Sabina; Boscornea, Andreea
2014-03-01
The paper describes a method of assessing the size distribution of fog droplets in a cloud chamber, based on measuring the time variation of the transmission of a light beam during the gravitational settling of droplets. Using a model of light extinction by floating spherical particles, the size distribution of droplets is retrieved, along with characteristic structural parameters of the fog (total droplet concentration, liquid water content and effective radius). Moreover, the time variation of the effective radius can be readily extracted from the model. The errors of the method are also estimated and fall within acceptable limits. The method proves sensitive enough to resolve various modes in the droplet distribution and to point out changes in the distribution due to diverse types of aerosol present in the chamber or to the thermal condition of the fog. It is speculated that the method can be further simplified to reach an in-situ version for real-time field measurements.
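The forward model implied by this method can be caricatured as follows: droplets settle at their Stokes velocity, so the largest drop out of the optical path first and the beam transmission recovers over time. All constants and the assumed size spectrum below are illustrative, not the paper's retrieval.

```python
import numpy as np

RHO_W, G, MU, Q_EXT = 1000.0, 9.81, 1.8e-5, 2.0   # SI units; Q_ext ~ 2 for large drops

r = np.linspace(1e-6, 20e-6, 200)                  # droplet radii (m)
n0 = np.exp(-((r - 6e-6) / 2e-6) ** 2)             # assumed size spectrum shape
n0 *= 1e9 / np.trapz(n0, r)                        # ~1e9 droplets per m^3

v = 2 * RHO_W * G * r**2 / (9 * MU)                # Stokes settling speed (m/s)
H, L = 1.0, 0.5                                    # chamber height, beam path (m)

def transmission(t):
    # Column-averaged caricature: droplets of radius r remain only in the
    # fraction of the column that has not yet settled past the beam.
    frac = np.clip(1 - v * t / H, 0.0, 1.0)
    tau = Q_EXT * np.pi * L * np.trapz(n0 * frac * r**2, r)  # optical depth
    return np.exp(-tau)

for t in (0.0, 30.0, 120.0):                       # seconds after fog formation
    print(t, transmission(t))
```

Inverting a measured transmission curve against this kind of model is what yields the size distribution and the derived structural parameters (total concentration, liquid water content, effective radius).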
Directional reflectance factor distributions of a cotton row crop
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Newcomb, W. W.; Schutt, J. B.; Pinter, P. J., Jr.; Jackson, R. D.
1984-01-01
The directional reflectance factor distribution spanning the entire exitance hemisphere was measured for a cotton row crop (Gossypium barbadense L.) with 39 percent ground cover. Spectral directional radiances were taken in NOAA-7 AVHRR bands 1 and 2 using a three-band radiometer with a restricted 12 deg full-angle field of view at half-peak-power points. Polar coordinate plots of the directional reflectance factor distributions and three-dimensional computer graphics plots of the scattered flux were used to study the dynamics of the directional reflectance factor distribution as a function of spectral band, geometric structure of the scene, solar zenith and azimuth angles, and optical properties of the leaves and soil. The reflectance factor distribution of the incomplete row crop was highly polymodal relative to that of complete vegetation canopies. Besides the enhanced reflectance at the antisolar point, a reflectance minimum was observed towards the forwardscatter direction in the principal plane of the sun. Knowledge of the mechanics behind the observed dynamics of the data may be used to provide rigorous validation for two- or three-dimensional radiative transfer models, and is important in interpreting aircraft and satellite data where the solar angle varies widely.