Sample records for point processes modeling

  1. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
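
    The second model above is a cluster process: a Poisson parent process of base points, each generating a random number of end points around it. Below is a minimal simulation sketch of that general construction (a Thomas-type cluster process with hypothetical rate, cluster-size and spread parameters; not the authors' fitted ENF model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_enf(lam_base=50, mean_ends=3.0, sigma=0.02, window=1.0):
        """Toy version of the cluster model: bases ~ Poisson, ends clustered on bases."""
        n_base = rng.poisson(lam_base * window**2)          # Poisson number of bases
        bases = rng.uniform(0, window, size=(n_base, 2))    # uniform base locations
        ends, parent = [], []
        for i, b in enumerate(bases):
            n_end = rng.poisson(mean_ends)                  # fibers per base
            offsets = rng.normal(0, sigma, size=(n_end, 2)) # isotropic Gaussian spread
            ends.append(b + offsets)
            parent += [i] * n_end
        ends = np.vstack(ends) if ends else np.empty((0, 2))
        return bases, ends, np.array(parent)

    bases, ends, parent = simulate_enf()
    print(len(bases), "bases,", len(ends), "end points")
    ```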

  2. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

Earthquakes are natural phenomena that are random and irregular in space and time. Forecasting the occurrence of an earthquake at a given location remains difficult, so earthquake forecast methodology continues to be developed from both the seismological and the stochastic points of view. Such random phenomena, in both space and time, can be described with a point process approach. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study aims to model a marked point process indexed by time on earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with magnitude threshold 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that seismic activity in Sumatra Island is greater than in Java Island.
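
    A common way to make the conditional intensity depend on the history and the marks (magnitudes) is a Hawkes/ETAS-type formulation. The sketch below fits such an intensity by maximum likelihood on a tiny hypothetical catalog; the parametric form and all numbers are illustrative assumptions, not the model fitted in the paper:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, t, m, T, m0=5.0):
        """Negative log-likelihood of an ETAS-like marked Hawkes model on [0, T]:
        lambda(t) = mu + sum_{t_i < t} alpha * exp(c*(m_i - m0)) * exp(-beta*(t - t_i))
        """
        mu, alpha, beta, c = np.exp(params)       # log-parametrize to stay positive
        w = np.exp(c * (m - m0))                  # mark-dependent productivity
        ll = 0.0
        for j in range(len(t)):
            past = t[:j]
            lam = mu + np.sum(alpha * w[:j] * np.exp(-beta * (t[j] - past)))
            ll += np.log(lam)
        # compensator: integral of lambda over [0, T], available in closed form
        ll -= mu * T + np.sum(alpha / beta * w * (1 - np.exp(-beta * (T - t))))
        return -ll

    # hypothetical catalog: event times (days) and magnitudes
    t = np.array([1.0, 3.2, 3.3, 7.9, 12.4])
    m = np.array([5.1, 6.0, 5.2, 5.4, 5.8])
    res = minimize(neg_loglik, np.log([0.3, 0.2, 1.0, 1.0]), args=(t, m, 15.0))
    print(np.exp(res.x))  # MLE of (mu, alpha, beta, c)
    ```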

  3. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields superior area-under-the-curve values, as high as 93%, for predicting future spikes. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding of how neural circuits are altered under various disease and/or experimental conditions by comparing the PRO parameters. PMID:26411923
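
    The abstract describes a GLM linking an input point process to an output one through response functions. Here is a generic discrete-time sketch of that idea, assuming Poisson bin counts, a log link, lagged input covariates, and simulated data (the actual PRO response functions differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # hypothetical binned data: light flashes (input) and spikes (output), 1 ms bins
    T, L = 5000, 10                     # number of bins, number of lags
    flash = rng.binomial(1, 0.02, T)
    # design matrix: column k is the flash train lagged by k+1 bins
    X = np.column_stack([np.r_[np.zeros(k + 1), flash[:T - k - 1]] for k in range(L)])
    X = np.column_stack([np.ones(T), X])            # intercept
    beta_true = np.r_[-4.0, 2.0 * np.exp(-np.arange(L) / 3)]
    y = rng.poisson(np.exp(X @ beta_true))          # simulated spike counts

    # Poisson GLM (log link) fitted by Newton-Raphson
    beta = np.zeros(L + 1)
    for _ in range(25):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                       # score
        hess = X.T @ (X * mu[:, None])              # observed information
        beta += np.linalg.solve(hess, grad)
    print(beta[:4])   # estimated response function, cf. beta_true
    ```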

  4. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
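
    The Laguerre expansion technique mentioned here projects the kernel onto a small orthonormal basis, so the kernel is estimated from far fewer coefficients than lags. A first-order-only sketch with a standard discrete Laguerre basis and a simulated point-process input (basis order, memory length and decay parameter are arbitrary choices, not the paper's):

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def laguerre_basis(n_funcs, n_lags, a=0.7):
        """Orthonormal discrete Laguerre functions built by cascaded filtering."""
        impulse = np.zeros(n_lags); impulse[0] = 1.0
        b = lfilter([np.sqrt(1 - a**2)], [1, -a], impulse)   # zeroth function
        basis = [b]
        for _ in range(n_funcs - 1):
            b = lfilter([-a, 1], [1, -a], b)                 # all-pass cascade
            basis.append(b)
        return np.array(basis)

    rng = np.random.default_rng(2)
    T, M = 4000, 60
    x = rng.binomial(1, 0.05, T).astype(float)   # Poisson-like point-process input
    k_true = np.exp(-np.arange(M) / 10.0)        # first-order kernel to recover
    y = np.convolve(x, k_true)[:T] + 0.05 * rng.standard_normal(T)

    B = laguerre_basis(5, M)                     # 5 basis functions, memory M
    V = np.column_stack([np.convolve(x, b)[:T] for b in B])  # filtered inputs
    c, *_ = np.linalg.lstsq(V, y, rcond=None)    # expansion coefficients
    k_hat = B.T @ c                              # estimated kernel
    print(np.round(k_hat[:5], 3), np.round(k_true[:5], 3))
    ```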

  5. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  6. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the process does not fully represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by randomly changing σL and σR.
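
    One loose reading of the described iteration, purely for illustration (the trapping rule, perturbation sizes and all parameters below are guesses, not the authors' algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def toy_fission(n_particles=2000, mu_cn=0.0, sig_cn=1.0,
                    mu_l=-1.0, mu_r=1.0, sig_l=0.8, sig_r=0.8, n_iter=100):
        """Toy scission: particles drawn around the compound-nucleus centre are
        'trapped' by whichever fragment centre has the higher Gaussian affinity."""
        x = rng.normal(mu_cn, sig_cn, n_particles)   # particle positions
        for _ in range(n_iter):
            # Gaussian affinity of each particle to the left/right centres
            p_l = np.exp(-0.5 * ((x - mu_l) / sig_l) ** 2) / sig_l
            p_r = np.exp(-0.5 * ((x - mu_r) / sig_r) ** 2) / sig_r
            n_l, n_r = np.sum(p_l > p_r), np.sum(p_r >= p_l)
            # randomly perturb the fragment widths, as in the smashing step
            sig_l = abs(sig_l + rng.normal(0, 0.01))
            sig_r = abs(sig_r + rng.normal(0, 0.01))
        return n_l, n_r

    print(toy_fission())   # fragment split (N_L, N_R)
    ```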

  7. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
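
    As a concrete instance of a discrete-time multivariate nonlinear Hawkes process with a log link, the following sketch simulates two coupled units with self- and cross-history filters; the filters and rates are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # discrete-time nonlinear Hawkes / PP-GLM: the conditional intensity is an
    # exponential of a baseline plus filtered spike history (here 2 neurons)
    T, K = 2000, 20
    b = np.array([-4.0, -4.5])                   # baseline log-rates
    W = np.zeros((2, 2, K))                      # coupling filters W[i, j, k]
    W[0, 0] = -1.0 * np.exp(-np.arange(K) / 3)   # self-inhibition (refractoriness)
    W[1, 0] = 0.8 * np.exp(-np.arange(K) / 5)    # neuron 0 excites neuron 1
    y = np.zeros((2, T), dtype=int)
    for t in range(T):
        drive = b.copy()
        for k in range(1, min(K, t) + 1):
            drive += W[:, :, k - 1] @ y[:, t - k]      # weighted spike history
        lam = np.exp(drive)                             # conditional intensity/bin
        y[:, t] = rng.binomial(1, np.minimum(lam, 1.0)) # Bernoulli approximation
    print(y.sum(axis=1), "spikes per neuron")
    ```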

  8. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  9. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
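
    The proposed correction amounts to simulating planar points with density proportional to the local surface area element, so that the pattern is uniform on the surface itself. A thinning (rejection) sketch under an assumed constant gradient; a real application would take the gradients from a DEM:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def surface_csr(n_target, zx, zy, a_max, window=1.0):
        """Uniform points *on the surface* z(x, y), simulated by thinning in the
        plane: accept a planar proposal with probability proportional to the
        local area element sqrt(1 + z_x^2 + z_y^2)."""
        pts = []
        while len(pts) < n_target:
            x, y = rng.uniform(0, window, 2)
            a = np.sqrt(1.0 + zx(x, y) ** 2 + zy(x, y) ** 2)  # area element
            if rng.uniform() < a / a_max:                     # rejection step
                pts.append((x, y))
        return np.array(pts)

    # hypothetical planar hillslope z = 2x (constant gradient); with a real DEM,
    # zx and zy would come from finite differences and a_max from their maximum
    zx = lambda x, y: 2.0
    zy = lambda x, y: 0.0
    pts = surface_csr(500, zx, zy, a_max=np.sqrt(5.0))
    print(len(pts), "points, uniform with respect to surface area")
    ```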

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
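
    The Poisson point process side of this equivalence can be fitted directly by maximizing the point-process log-likelihood, with the intensity integral approximated by quadrature on a grid. A self-contained sketch with a made-up covariate and true intensity (not the authors' implementation):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    # hypothetical covariate on the unit square (e.g. scaled elevation): x(s) = s_1
    cov = lambda s: s[:, 0]

    # simulate an inhomogeneous Poisson process with lambda(s) = exp(4 + 1.5*x(s))
    lam_max = np.exp(4 + 1.5)
    n = rng.poisson(lam_max)                       # dominating homogeneous process
    s = rng.uniform(0, 1, size=(n, 2))
    keep = rng.uniform(0, 1, n) < np.exp(4 + 1.5 * cov(s)) / lam_max
    s = s[keep]                                    # thinned = inhomogeneous points

    # Poisson point-process log-likelihood, integral approximated on a grid
    g = np.linspace(0.005, 0.995, 100)
    grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    w = 1.0 / len(grid)                            # quadrature weight per cell

    def negll(beta):
        return -(np.sum(beta[0] + beta[1] * cov(s))
                 - np.sum(w * np.exp(beta[0] + beta[1] * cov(grid))))

    print(minimize(negll, [0.0, 0.0]).x)           # roughly recovers (4, 1.5)
    ```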

  11. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Understanding scaling is therefore a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Unlike field experimentation, numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at point scale. The results show numerical stability issues under particular conditions, reveal the complex nature of the non-linear relationships between the models' parameters at the two scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
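
    For reference, the point-scale Green-Ampt model used here reduces to an implicit equation for cumulative infiltration under ponded conditions, F = Ks·t + ψΔθ·ln(1 + F/(ψΔθ)). A short sketch solving it numerically (parameter values are arbitrary):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def green_ampt_F(t, Ks=1.0, psi=10.0, dtheta=0.3):
        """Cumulative infiltration F(t) [cm] from the implicit Green-Ampt equation
        F = Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta)); Ks in cm/h, t in h."""
        c = psi * dtheta
        f = lambda F: F - Ks * t - c * np.log(1 + F / c)
        return brentq(f, 1e-9, Ks * t + 10 * c)    # bracket the unique root

    for t in [0.5, 1.0, 2.0]:
        F = green_ampt_F(t)
        rate = 1.0 * (1 + 10.0 * 0.3 / F)          # infiltration capacity f(t)
        print(f"t={t} h  F={F:.2f} cm  f={rate:.2f} cm/h")
    ```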

  12. Hierarchical species distribution models

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  13. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
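
    The homogeneous transformation underlying this modeling process is the standard 4x4 matrix combining rotation and translation. A minimal sketch of posing a synthetic point cloud with a known "truth" transform and projecting it onto a sensor image plane (the shapes and pose are invented; occlusion and shadowing effects are not modeled here):

    ```python
    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transformation from rotation R and translation t."""
        H = np.eye(4)
        H[:3, :3], H[:3, 3] = R, t
        return H

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # hypothetical target point cloud in its body frame (a flat plate)
    pts = np.array([[x, y, 0.0] for x in np.linspace(0, 1, 11)
                                for y in np.linspace(0, 0.3, 4)])
    pts_h = np.c_[pts, np.ones(len(pts))]          # homogeneous coordinates

    # "truth" pose of the target with respect to the sensor frame
    H_true = homogeneous(rot_z(np.deg2rad(30)), np.array([0.2, -0.1, 5.0]))
    cloud = (H_true @ pts_h.T).T[:, :3]            # what the lidar would sample

    # perspective projection onto the sensor image plane (focal length f)
    f = 1.0
    uv = f * cloud[:, :2] / cloud[:, 2:3]
    print(uv[:3])
    ```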

  14. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  15. Examining the Process of Responding to Circumplex Scales of Interpersonal Values Items: Should Ideal Point Scoring Methods Be Considered?

    PubMed

    Ling, Ying; Zhang, Minqiang; Locke, Kenneth D; Li, Guangming; Li, Zonglong

    2016-01-01

    The Circumplex Scales of Interpersonal Values (CSIV) is a 64-item self-report measure of goals from each octant of the interpersonal circumplex. We used item response theory methods to compare whether dominance models or ideal point models best described how people respond to CSIV items. Specifically, we fit a polytomous dominance model called the generalized partial credit model and an ideal point model of similar complexity called the generalized graded unfolding model to the responses of 1,893 college students. The results of both graphical comparisons of item characteristic curves and statistical comparisons of model fit suggested that an ideal point model best describes the process of responding to CSIV items. The different models produced different rank orderings of high-scoring respondents, but overall the models did not differ in their prediction of criterion variables (agentic and communal interpersonal traits and implicit motives).

  16. Dynamic Models of Insurgent Activity

    DTIC Science & Technology

    2014-05-19

Martin Short, P. Jeffrey Brantingham, Frederick Schoenberg, George Tita. Self-Exciting Point Process Modeling of Crime, Journal of the American...

    Mohler, P. J. Brantingham, G. E. Tita. Gang rivalry dynamics via coupled point process networks, Discrete and Continuous Dynamical Systems - Series...

    Laura Smith, Andrea Bertozzi, P. Jeffrey Brantingham, George Tita, Matthew Valasik. ADAPTATION OF AN ECOLOGICAL TERRITORIAL MODEL TO STREET

  17. Smooth random change point models.

    PubMed

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced, and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
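
    A broken-stick mean with a smooth transition can be written as a bent-cable function; the sketch below fits the change point by nonlinear least squares on simulated data (a generic fixed-effects illustration under assumed parameters, not the random-effects model of the paper):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def smooth_broken_stick(t, b0, b1, b2, tau, gamma=1.5):
        """Two linear phases joined smoothly near the change point tau; the
        quadratic 'bent-cable' transition has half-width gamma."""
        u = t - tau
        bend = np.where(np.abs(u) <= gamma, (u + gamma) ** 2 / (4 * gamma),
                        np.maximum(u, 0.0))
        return b0 + b1 * t + b2 * bend

    # hypothetical cognitive scores: decline accelerates ~4 years before death
    rng = np.random.default_rng(7)
    t = np.linspace(-10, 0, 60)                    # years before death
    y = smooth_broken_stick(t, 25, -0.2, -1.5, -4.0) + rng.normal(0, 0.5, 60)

    p, _ = curve_fit(smooth_broken_stick, t, y, p0=[20, 0, -1, -3])
    print("estimated change point:", p[3])
    ```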

  18. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. It is constructed from a number of data points that grows only linearly with the number of parameters n, and it is updated over a sequence of trust regions. The model thereby avoids the slow convergence of linear models, which are also built from O(n) points, while retaining features of quadratic models, which need O(n^2) interpolation data points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  19. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  20. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and the reference model is the Poisson model.
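
    Under a fair rule, betting r reputation points against a reference probability p0 must have zero expected gain under the reference model, which fixes the payout at r(1 − p0)/p0. A toy bookkeeping sketch for binary alarms (the continuous point-process version in the paper generalizes this; the numbers are invented):

    ```python
    import numpy as np

    def gambling_score(bets, outcomes, p_ref):
        """Reputation bookkeeping for binary forecasts under a fair rule set by
        the reference model: betting r points on 'event occurs' returns
        r*(1 - p0)/p0 if it occurs, and loses r otherwise (p0 = reference prob).
        Expected gain under the reference model is exactly zero."""
        r = np.asarray(bets, float)
        p0 = np.asarray(p_ref, float)
        gains = np.where(outcomes, r * (1 - p0) / p0, -r)
        return gains.sum()

    # three alarms, each betting 1 point; the reference (Poisson) says p0 = 0.1
    print(gambling_score([1, 1, 1], [True, False, True], [0.1, 0.1, 0.1]))
    # = 9 - 1 + 9 = 17 reputation points gained
    ```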

  1. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.

  2. Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation

    NASA Astrophysics Data System (ADS)

    Stephenson, Jerry L.; Kapraun, Chris

    Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.

  3. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  4. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

This work deals with the development of algorithms for physical replication of patient-specific human bones and construction of corresponding implant/insert RP models using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e., voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of objects having freeform surfaces, based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques applied to 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data are used for construction of a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept blend techniques. Alternatively, a triangular mesh can be generated directly from the 3D point cloud data, without developing any surface model, using commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process; the Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. A CT scan of a metacarpus (human bone) is used as the case study for generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.

  5. Two stage fluid bed-plasma gasification process for solid waste valorisation: Technical review and preliminary thermodynamic modelling of sulphur emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Morrin, Shane; Lettieri, Paola

    2012-04-15

Highlights: ► We investigate sulphur during MSW gasification within a fluid bed-plasma process. ► We review the literature on the feed, sulphur and process principles therein. ► The need for research in this area was identified. ► We perform thermodynamic modelling of the fluid bed stage. ► Initial findings indicate the prominence of solid phase sulphur. - Abstract: Gasification of solid waste for energy has significant potential given an abundant feed supply and strong policy drivers. Nonetheless, significant ambiguities in the knowledge base are apparent. Consequently this study investigates sulphur mechanisms within a novel two stage fluid bed-plasma gasification process. This paper includes a detailed review of gasification and plasma fundamentals in relation to the specific process, along with insight on MSW based feedstock properties and sulphur pollutant therein. As a first step to understanding sulphur partitioning and speciation within the process, thermodynamic modelling of the fluid bed stage has been performed. Preliminary findings, supported by plant experience, indicate the prominence of solid phase sulphur species (as opposed to H2S) - Na and K based species in particular. Work is underway to further investigate and validate this.

  6. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

With the launch of several planetary missions in the last decade, a large amount of planetary images is being acquired. Preferably, automatic and robust processing techniques need to be used for data analysis because of the huge amount of the acquired data. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: Marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with an excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.

  7. The Point of Creative Frustration and the Creative Process: A New Look at an Old Model.

    ERIC Educational Resources Information Center

    Sapp, D. David

    1992-01-01

    This paper offers an extension of Graham Wallas' model of the creative process. It identifies periods of problem solving, incubation, and growth with specific points of initial idea inception, creative frustration, and illumination. Responses to creative frustration are described including denial, rationalization, acceptance of stagnation, and new…

  8. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  9. Application of Bayesian techniques to model the burden of human salmonellosis attributable to U.S. food commodities at the point of processing: adaptation of a Danish model.

    PubMed

    Guo, Chuanfa; Hoekstra, Robert M; Schroeder, Carl M; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J; Cole, Dana

    2011-04-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. © Mary Ann Liebert, Inc.

  10. Application of Bayesian Techniques to Model the Burden of Human Salmonellosis Attributable to U.S. Food Commodities at the Point of Processing: Adaptation of a Danish Model

    PubMed Central

    Guo, Chuanfa; Hoekstra, Robert M.; Schroeder, Carl M.; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G.; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J.

    2011-01-01

Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. PMID:21235394

  11. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of ‘intermediate’ limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814

  12. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single layer cladding.

  13. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  14. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and the variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  15. Effects of transcutaneous electrical nerve stimulation on rats with the third lumbar vertebrae transverse process syndrome.

    PubMed

    Li, Huan; Shang, Xiao-Jun; Dong, Qi-Rong

    2015-10-01

To investigate the analgesic and anti-inflammatory effects of transcutaneous electrical nerve stimulation (TENS) at local or distant acupuncture points in a rat model of the third lumbar vertebrae transverse process syndrome. Forty Sprague-Dawley rats were randomly divided into control, model, model plus local acupuncture point stimulation at BL23 (model+LAS) and model plus distant acupuncture point stimulation at ST36 (model+DAS) groups. All rats except controls underwent surgery on day 2 to model the third lumbar vertebrae transverse process syndrome. Thereafter, rats in the model+LAS and model+DAS groups were treated daily with TENS for a total of six treatments (2/100 Hz, 30 min/day) from day 16 to day 29. Thermal pain thresholds were measured once a week during treatment and until day 57, when local muscle tissue was sampled for RT-PCR and histopathological examination after haematoxylin and eosin staining. mRNA expression of interleukin-1β (IL-1β), tumour necrosis factor-α (TNF-α) and inducible nitric oxide synthase (iNOS) was determined. Thermal pain thresholds of all model rats decreased relative to the control group. Both LAS and DAS significantly increased the thermal pain threshold at all but one point during the treatment period. Histopathological assessment revealed that the local muscle tissues around the third lumbar vertebrae transverse process recovered to some degree in both the model+LAS and model+DAS groups; however, LAS appeared to have a greater effect. mRNA expression of IL-1β, TNF-α and iNOS in the local muscle tissues was increased after modelling and attenuated in both the model+LAS and model+DAS groups, with a greater effect after LAS than after DAS. TENS at both local (BL23) and distant (ST36) acupuncture points had a pain-relieving effect in rats with the third lumbar vertebrae transverse process syndrome, and LAS appeared to have greater anti-inflammatory and analgesic effects than DAS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Non-hoop winding effect on bonding temperature of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2018-05-01

Laser assisted tape winding (LATW) is one of the advanced methods for the production of thermoplastic composites. Predicting the temperature in the LATW process is very important, since the temperature at the nip-point (the bonding line through the width) plays a pivotal role in proper bonding and hence in the mechanical performance. Unlike hoop winding, where the nip-point is a straight line, non-hoop winding involves a curved nip-point line. Hence, non-hoop winding produces a somewhat different power input through laser rays and reflections and consequently generates an unknown, complex temperature profile along the curved nip-point line. Investigating the temperature along the nip-point line is the point of interest in this study. To capture the effect of laser rays and their reflections on the nip-point temperature, a numerical model is proposed. First, a 3D optical model of the objects in the LATW process is constructed. Then, the power distribution (absorption and reflection) from the optical analysis is used as an input (heat flux distribution) for the thermal analysis. The thermal analysis employs a fully implicit advection-diffusion model to calculate the temperature on the surfaces. The results demonstrate the effect of winding direction on the curved nip-point line (tape width), which has not been considered in the literature up to now. Furthermore, the results can be used for designing a better and more efficient setup for the LATW process.

  17. The Use of Uas for Rapid 3d Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

With the development of technology, UAS has become an advanced tool to support rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with available freeware or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station. In addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Ground point selection and digital terrain model generation can then be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.

  18. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity has ~ n floating point operations (flops), where n is the number of spatial locations (per iteration). We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
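
    The NNGP idea can be illustrated with a Vecchia-type approximation: each observation is conditioned on at most m previously ordered nearest neighbours, so every linear solve is m x m and the total cost is linear in n. A rough sketch under an assumed exponential covariance (the ordering, covariance and parameters are illustrative; the k-d tree is rebuilt per point purely for simplicity):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def vecchia_loglik(y, coords, m=10, sigma2=1.0, phi=5.0, tau2=0.1):
        """NNGP/Vecchia-style Gaussian log-likelihood: condition each observation
        only on its m nearest previously-ordered neighbours."""
        n = len(y)
        cov = lambda d: sigma2 * np.exp(-phi * d)   # exponential covariance
        ll = 0.0
        for i in range(n):
            if i == 0:
                mu, var = 0.0, cov(0.0) + tau2
            else:
                nn = np.atleast_1d(
                    cKDTree(coords[:i]).query(coords[i], k=min(m, i))[1])
                d_nn = np.linalg.norm(coords[nn][:, None] - coords[nn][None],
                                      axis=-1)
                C = cov(d_nn) + tau2 * np.eye(len(nn))     # neighbour covariance
                c = cov(np.linalg.norm(coords[nn] - coords[i], axis=1))
                w = np.linalg.solve(C, c)                  # kriging weights
                mu, var = w @ y[nn], cov(0.0) + tau2 - w @ c
            ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
        return ll

    rng = np.random.default_rng(8)
    coords = rng.uniform(0, 1, (500, 2))
    y = rng.standard_normal(500)                   # placeholder data
    print(vecchia_loglik(y, coords))
    ```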

  20. Elementary model of severe plastic deformation by KoBo process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusak, A.; Storozhuk, N.; Danielewski, M., E-mail: daniel@agh.edu.pl

    2014-01-21

    A self-consistent model of the generation, interaction, and annihilation of point defects in the gradient of oscillating stresses is presented. This model describes the recently suggested method of severe plastic deformation by a combination of pressure and oscillating rotations of the die along the billet axis (the KoBo process). The model predicts the existence of a distinct zone of reduced viscosity with a sharply increased concentration of point defects. This zone enables the high extrusion velocity. The presented model confirms that severe plastic deformation (SPD) in KoBo may be treated as a non-equilibrium phase transition with an abrupt drop of viscosity in a rather well defined spatial zone. In this very zone, an intensive lateral rotational movement proceeds together with the generation of point defects, which in a self-organized manner make rotation possible by decreasing the viscosity. The special properties of material under the KoBo version of SPD can be described without using the concepts of nonequilibrium grain boundaries, ballistic jumps and amorphization. The model can be extended to include different SPD processes.

  1. The stability analysis of the nutrition restricted dynamic model of the microalgae biomass growth

    NASA Astrophysics Data System (ADS)

    Ratianingsih, R.; Fitriani, Nacong, N.; Resnawati, Mardlijah, Widodo, B.

    2018-03-01

    Biomass production is essential in microalgae farming, so the biomass growth rate is very important to determine. This paper proposes a dynamic model of microalgae biomass growth restricted by nutrient availability. The model is developed by considering several related processes: photosynthesis, respiration, nutrient absorption, stabilization, lipid synthesis and CO2 mobilization. The stability of the dynamical system that represents these processes is analyzed using the Jacobian matrix of the linearized system in the neighborhood of its critical point. A lipid formation threshold is required for the critical point to exist; in that case, the absorption rate of the respiration process has to be inversely proportional to the absorption rate of CO2 due to the photosynthesis process. The Pontryagin minimum principle also shows that some requirements are needed to obtain a stable critical point, such as the CO2 release rate due to the stabilization process being restricted to 50%, and a threshold on the shifted critical point. If the CO2 release rate due to the photosynthesis process is restricted to such an interval, the stability of the model at the critical point can no longer be satisfied. The simulation shows that external nutrients play a role in glucose formation sufficient for biomass growth and lipid production.
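
    The stability argument follows the standard recipe: linearize the system at a critical point and inspect the eigenvalues of the Jacobian. The sketch below applies that recipe to a hypothetical two-state nutrient/biomass system, not to the paper's full six-process model.

      import sympy as sp

      N, B = sp.symbols('N B', positive=True)       # nutrient, biomass
      a, d, s = sp.symbols('a d s', positive=True)  # uptake, decay, supply rates (assumed)
      f = [s - a * N * B,                           # dN/dt
           a * N * B - d * B]                       # dB/dt

      J = sp.Matrix(f).jacobian([N, B])             # Jacobian of the vector field
      crit = sp.solve(f, [N, B], dict=True)         # critical points
      for c in crit:
          eigs = J.subs(c).eigenvals()
          print(c, eigs)                            # Re(eig) < 0 for all => locally stable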

  2. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first-order moment, respectively. Here, these measures are evaluated on heartbeat series from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  3. Biostereometric Data Processing In ERGODATA: Choice Of Human Body Models

    NASA Astrophysics Data System (ADS)

    Pineau, J. C.; Mollard, R.; Sauvignon, M.; Amphoux, M.

    1983-07-01

    The definition of human body models was elaborated with anthropometric data from ERGODATA. The first model reduces the human body to a series of points and lines. The second model is well adapted to represent the volumes of each segmentary element. The third is an original model built from conventional anatomical points. Each segment is defined in space by a triangular plane located with its 3-D coordinates. This new model can answer all the processing possibilities in the field of computer-aided design (C.A.D.) in ergonomics, but also in biomechanics and orthopaedics.

  4. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
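
    The key computational convenience is that superposed Poisson processes remain Poisson, so the chord likelihood only needs the summed intensity. The sketch below scores hypothesized note sets against detected peak frequencies; the Gaussian harmonic bumps, widths, weights and peak list are illustrative assumptions, not the paper's exact model.

      import numpy as np

      def note_intensity(f, f0, n_harm=8, width=2.0, weight=5.0):
          """Expected-peak intensity (peaks per Hz) for one note at pitch f0."""
          h = np.arange(1, n_harm + 1)[:, None] * f0          # harmonic locations
          bumps = np.exp(-0.5 * ((f[None, :] - h) / width) ** 2)
          return weight * bumps.sum(0) / (width * np.sqrt(2 * np.pi)) + 1e-4  # + clutter

      def chord_loglik(peaks, f0s, fmax=4000.0, ngrid=8000):
          """Poisson log-likelihood: sum(log lam(f_i)) - integral(lam)."""
          grid = np.linspace(1.0, fmax, ngrid)
          lam = lambda f: sum(note_intensity(f, f0) for f0 in f0s)
          integral = lam(grid).sum() * (grid[1] - grid[0])    # rectangle rule
          return np.log(lam(np.asarray(peaks))).sum() - integral

      peaks = np.array([220.2, 440.9, 659.8, 881.1, 277.4, 554.6])  # hypothetical peaks
      print(chord_loglik(peaks, [220.0, 277.2]))   # A3 + C#4 hypothesis
      print(chord_loglik(peaks, [220.0, 330.0]))   # wrong hypothesis scores lower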

  5. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  6. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  7. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  8. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
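
    For a piecewise-constant rate template, the generalized likelihood ratio statistic against a homogeneous null has a simple closed form, since the per-bin maximum-likelihood rates are just counts over bin widths. The sketch below is an illustrative reduction of that idea; the template and simulated data are assumptions.

      import numpy as np

      def glr_statistic(times, edges, T):
          """2*(max loglik under a piecewise-constant template - homogeneous loglik)."""
          times = np.asarray(times)
          counts, _ = np.histogram(times, bins=edges)
          widths = np.diff(edges)
          lam_hat = counts / widths                      # per-bin rate MLEs
          ll_alt = np.sum(counts * np.log(np.where(counts > 0, lam_hat, 1.0))) - counts.sum()
          lam0 = len(times) / T                          # homogeneous null MLE
          ll_null = len(times) * np.log(lam0) - lam0 * T
          return 2 * (ll_alt - ll_null)

      rng = np.random.default_rng(0)
      T = 100.0
      t_lo = rng.uniform(0, T, rng.poisson(1.0 * T))     # rate-1 background
      t_hi = rng.uniform(40, 60, rng.poisson(4.0 * 20))  # rate bump on [40, 60]
      times = np.sort(np.concatenate([t_lo, t_hi]))
      print(glr_statistic(times, edges=np.array([0, 40, 60, T]), T=T))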

  9. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, suggesting a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainty in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
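
    The Monte Carlo version of the idea can be sketched in a few lines: draw Gamma-distributed age increments per depth step, pin the cumulative ages to the absolutely dated tie points, and read off pointwise quantiles. The shape and scale below are illustrative assumptions; the normalization step is what yields the Beta-type marginals mentioned above.

      import numpy as np

      rng = np.random.default_rng(2)
      n_depth, n_sim = 100, 5000
      age_top, age_bottom = 0.0, 2000.0              # tie-point ages (years, assumed)

      incr = rng.gamma(shape=2.0, scale=1.0, size=(n_sim, n_depth))
      ages = np.cumsum(incr, axis=1)
      ages = age_top + (age_bottom - age_top) * ages / ages[:, -1:]  # pin tie points

      lo, mid, hi = np.percentile(ages, [2.5, 50, 97.5], axis=0)
      depth = np.linspace(0, 1, n_depth)             # normalised depth
      print(depth[50], mid[50], (lo[50], hi[50]))    # age and its CI at mid-core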

  10. A statistical model investigating the prevalence of tuberculosis in New York City using counting processes with two change-points

    PubMed Central

    ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.

    2008-01-01

    SUMMARY We considered a Bayesian analysis of the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during this period. We modelled the dataset using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data is carried out using Markov chain Monte Carlo methods. Simulated Gibbs samples for the parameters of interest were obtained using the WinBUGS software. PMID:18346287

  11. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Treesearch

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  12. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools that process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.

  13. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Barbak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED; in addition to a point design, the team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission.

  14. Size-Class Effect Contributes to Tree Species Assembly through Influencing Dispersal in Tropical Forests

    PubMed Central

    Hu, Yue-Hua; Kitching, Roger L.; Lan, Guo-Yu; Zhang, Jiao-Lin; Sha, Li-Qing; Cao, Min

    2014-01-01

    We have investigated the processes of community assembly using size classes of trees. Specifically, our work examined (1) whether point process models incorporating an effect of size-class produce more realistic summary outcomes than models without this effect; and (2) which of three selected models, incorporating respectively environmental effects, dispersal, and the joint effect of both, is most useful in explaining species-area relationships (SARs) and point dispersion patterns. For this evaluation we used tree species data from the 50-ha forest dynamics plot on Barro Colorado Island, Panama, and the comparable 20-ha plot at Bubeng, Southwest China. Our results demonstrated that incorporating a size-class effect dramatically improved the SAR estimation at both plots when the dispersal-only model was used. The joint-effect model produced a similar improvement, but only for the 50-ha plot in Panama. The point pattern results were not improved by the incorporation of size-class effects under any of the three models. Our results indicate that dispersal is likely to be a key process determining both SARs and point patterns. The environment-only model and the joint-effects model were effective at the species level and the community level, respectively. We conclude that it is critical to use multiple summary characteristics when modelling spatial patterns at the species and community levels if a comprehensive understanding of the ecological processes that shape species’ distributions is sought; without this, results may have inherent biases. By influencing dispersal, the effect of size-class contributes to species assembly and enhances our understanding of species coexistence. PMID:25251538

  15. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We address here the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
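
    The point-to-point and cloud-to-cloud comparisons reduce to nearest-neighbour queries, for which a k-d tree keeps the cost manageable on large clouds. Below is a small sketch with synthetic stand-ins for the TLS reference and the photogrammetric model (PM).

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(3)
      tls = rng.uniform(0, 10, size=(100000, 3))          # dense TLS reference (synthetic)
      pm = tls[rng.choice(len(tls), 20000)] + rng.normal(0, 0.01, (20000, 3))

      d_pm_to_tls = cKDTree(tls).query(pm, k=1)[0]        # point-to-point distances
      d_tls_to_pm = cKDTree(pm).query(tls, k=1)[0]        # reverse direction
      hausdorff = max(d_pm_to_tls.max(), d_tls_to_pm.max())
      print(d_pm_to_tls.mean(), np.percentile(d_pm_to_tls, 95), hausdorff)

    In practice the mean and a high quantile of the one-sided distances are more robust summaries than the Hausdorff distance, which is driven by a single worst-case point.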

  16. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increasing processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be created quickly using an inherent neighborhood structure established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and, consequently, on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.

  17. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology has led to a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, acquisition and registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud, a collection of points sampled from the surface of a 3D object; such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points, and so less detail, than a point cloud obtained in situ. The goal of this study was to facilitate the modeling of a building starting from 3D laser scanner data. To do this, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points, i.e. unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge between the wall and the roof. From the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of openings such as windows or doors. This script is based on the distances between the points and on vector characteristics. Two consecutive points at a relatively large distance will form the outline of an opening. Once those points are found, the interior outline can be drawn. For simple point clouds, the designed scripts are able to eliminate almost all noise points and to reconstruct a CAD model.

  18. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

    There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
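
    The core evaluation loop of such a toolbox can be sketched briefly: generate points as class means plus independent noise, cluster them, and count the points grouped inconsistently with the generating processes (minimizing over label permutations). The means, noise level and choice of K-means below are illustrative, not the toolbox's full protocol.

      import itertools
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(4)
      means = np.array([[0, 0], [3, 0], [0, 3]])          # hand-picked class means
      n_per, sigma = 50, 0.8
      X = np.vstack([m + sigma * rng.standard_normal((n_per, 2)) for m in means])
      truth = np.repeat(np.arange(3), n_per)              # generating-process labels

      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
      # clustering error = misclustered points under the best label matching
      err = min(np.sum(np.array([p[l] for l in labels]) != truth)
                for p in itertools.permutations(range(3)))
      print("clustering error:", err, "of", len(X), "points")

    Repeating this loop over many seeds and replication counts is what produces the error tables and learning curves described in the abstract.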

  19. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama; these parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure from motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud helped determine the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.

  20. A NASTRAN model of a large flexible swing-wing bomber. Volume 5: NASTRAN model development-fairing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fairing structure was expanded in detail to generate the NASTRAN model of this substructure. The grid point coordinates, element definitions, material properties, and sizing data for each element were specified. The fairing model was thoroughly checked out for continuity, connectivity, and constraints. The substructure was processed for structural influence coefficients (SIC) point loadings to determine the deflection characteristics of the fairing model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  1. The Evaluation of HOMER as a Marine Corps Expeditionary Energy Predeployment Tool

    DTIC Science & Technology

    2010-09-01

    experiment was used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power ... process was used to analyze HOMER's modeling capability: • Conduct photovoltaic (PV) experiment, • Develop a calibration process to match the HOMER ...

  2. The Evaluation of HOMER as a Marine Corps Expeditionary Energy Pre-deployment Tool

    DTIC Science & Technology

    2010-11-21

    used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power ... process was used to analyze HOMER's modeling capability: • Conduct photovoltaic (PV) experiment, • Develop a calibration process to match the HOMER ...

  3. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
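
    One standard way to realize parameter subset selection for a rank-deficient least-squares problem is a rank-revealing, column-pivoted QR factorization: keep the best-conditioned columns and estimate only those parameters. The pointing-model design matrix below is synthetic, with a narrow elevation range mimicking inadequate sky coverage; it is a sketch of the idea, not the DSN model itself.

      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(5)
      m = 200
      az = rng.uniform(0, 2 * np.pi, m)
      el = rng.uniform(0.6, 0.9, m)                      # narrow elevation coverage
      A = np.column_stack([np.ones(m), np.sin(az), np.cos(az), np.sin(el), np.cos(el)])
      x_true = np.array([0.01, 0.003, -0.002, 0.005, 0.004])
      y = A @ x_true + 1e-4 * rng.standard_normal(m)     # simulated pointing offsets

      Q, R, piv = qr(A, mode='economic', pivoting=True)  # rank-revealing QR
      tol = abs(R[0, 0]) * 1e-3
      k = int(np.sum(np.abs(np.diag(R)) > tol))          # numerical rank
      keep = piv[:k]                                     # selected parameter subset
      x_sub = np.linalg.lstsq(A[:, keep], y, rcond=None)[0]
      print("kept columns:", keep, "estimates:", x_sub)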

  4. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1994-01-01

    This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  6. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database, and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  7. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but the results are not completely satisfactory, owing to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modelled by failure processes that allow a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
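
    The stress release model is convenient because its exponential intensity integrates in closed form between events, so the log likelihood of a leader-event catalogue is cheap to evaluate. The sketch below assumes lam(t) = exp(alpha + beta*(rho*t - S(t))), with S(t) the stress released by past events; the toy catalogue and parameter values are illustrative, not the paper's fitted model.

      import numpy as np

      def srm_loglik(times, drops, T, alpha, beta, rho):
          """log L = sum log lam(t_i) - integral_0^T lam(t) dt for
          lam(t) = exp(alpha + beta * (rho * t - stress released before t))."""
          times = np.asarray(times)
          released = np.concatenate([[0.0], np.cumsum(drops)])  # stress released so far
          seg = np.concatenate([[0.0], times, [T]])             # inter-event segments
          ll, integral = 0.0, 0.0
          for k in range(len(seg) - 1):
              a, b = seg[k], seg[k + 1]
              base = alpha - beta * released[k]
              # closed-form integral of exp(base + beta*rho*t) over [a, b]
              integral += (np.exp(base + beta * rho * b)
                           - np.exp(base + beta * rho * a)) / (beta * rho)
              if k < len(times):
                  ll += base + beta * rho * times[k]            # log lam just before event k
          return ll - integral

      times = [3.1, 7.4, 12.0, 18.6]      # hypothetical leader-event times
      drops = [0.8, 1.1, 0.9, 1.3]        # stress released at each event (marks)
      print(srm_loglik(times, drops, T=25.0, alpha=-1.0, beta=0.5, rho=0.3))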

  8. The Improvement of the Closed Bounded Volume (CBV) Evaluation Methods to Compute a Feasible Rough Machining Area Based on Faceted Models

    NASA Astrophysics Data System (ADS)

    Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos

    2017-06-01

    Rough machining is aimed at shaping a workpiece toward its final form. This stage takes up a large proportion of the total machining time because it removes the bulk of the material. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect admissible areas in the machining process. While previous research detected the CBV area using a pair of normal vectors, in this research we simplify the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation resulted in three steps for this method: 1. triangulation of the CAD design models, 2. development of CC points from the point cloud, 3. a slicing-line method used to evaluate each point cloud position (under CBV and outer CBV). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position of feasible areas in rough machining.

  9. Modeling and Visualization Process of the Curve of Pen Point by GeoGebra

    ERIC Educational Resources Information Center

    Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba

    2013-01-01

    This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…

  10. A NASTRAN model of a large flexible swing-wing bomber. Volume 3: NASTRAN model development-wing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the wing structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The wing substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  11. A NASTRAN model of a large flexible swing-wing bomber. Volume 2: NASTRAN model development-horizontal stabilizer, vertical stabilizer and nacelle structures

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.; Tisher, E. D.

    1982-01-01

    The NASTRAN model plans for the horizontal stabilizer, vertical stabilizer, and nacelle structure were expanded in detail to generate the NASTRAN model for each of these substructures. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. Each substructure model was thoroughly checked out for continuity, connectivity, and constraints. These substructures were processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail models. Finally, a demonstration and validation processing of these substructures was accomplished using the NASTRAN finite element program installed at NASA/DFRC facility.

  12. A NASTRAN model of a large flexible swing-wing bomber. Volume 4: NASTRAN model development-fuselage structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fuselage structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The fuselage substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  13. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information. It has been widely used in the field of surveying and mapping engineering owing to its automation and high precision. The 3D laser scanning workflow mainly includes field laser data acquisition, in-office registration (splicing) of the laser data, and later 3D modeling and data integration. Point cloud modeling has been researched extensively, both domestically and abroad. Surface reconstruction techniques mainly include the point-shape model, the triangle model, the triangular Bezier surface model, the rectangular surface model and so on; neural networks and Alpha shapes are also used in curved surface reconstruction. These methods, however, often focus on fitting a single surface, or on automatic or manual block fitting, which ignores the model's integrity. This leads to a serious problem in the model after stitching: the separately fitted surfaces often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on modeling theory with dimension and position constraints, however, is not widely applied. One traditional modeling method that adds geometric constraints combines the penalty function method and the Levenberg-Marquardt algorithm (L-M algorithm), and its stability is fairly good; in the course of our research, however, we found that this method is greatly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results show that the internal accuracy is improved, and that the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
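
    As an illustration of the idea, the sketch below fits two planes jointly, with a penalty term pushing their normals to be parallel and a robust loss limiting the influence of outliers. scipy's trust-region least-squares solver with a soft-L1 loss stands in here for the paper's robust least squares plus Levenberg-Marquardt combination; the synthetic data and penalty weight are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(6)
      # two noisy horizontal planes, z = 0 and z = 0.5, plus a few gross outliers
      P1 = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.01, 200)]
      P2 = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.01, 200) + 0.5]
      P1[:10, 2] += 0.3

      def residuals(p, mu=10.0):
          n1, d1, n2, d2 = p[0:3], p[3], p[4:7], p[7]
          r1 = P1 @ n1 + d1                               # point-to-plane residuals
          r2 = P2 @ n2 + d2
          pen = [mu * np.linalg.norm(np.cross(n1, n2)),   # parallelism penalty
                 np.linalg.norm(n1) - 1.0,                # unit-normal constraints
                 np.linalg.norm(n2) - 1.0]
          return np.concatenate([r1, r2, pen])

      p0 = np.array([0, 0, 1, 0.0, 0, 0, 1, -0.5])        # initial value (from robust LS)
      fit = least_squares(residuals, p0, loss='soft_l1')
      print(fit.x[0:3], fit.x[4:7])                       # near-parallel fitted normals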

  14. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarify their interpretation.
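
    A spatial Poisson process with a non-uniform intensity, of the kind used to relate image properties to fixation density, is easy to simulate by thinning: generate a homogeneous pattern at the maximum intensity and keep each point with probability lam(x, y)/lam_max. The Gaussian intensity map below is an illustrative stand-in for an image-based predictor.

      import numpy as np

      rng = np.random.default_rng(7)
      lam = lambda x, y: 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.3) ** 2) / 0.05)
      lam_max = 200.0                                    # upper bound on the intensity

      n = rng.poisson(lam_max * 1.0)                     # homogeneous count, unit-area window
      xy = rng.uniform(0, 1, size=(n, 2))
      keep = rng.uniform(0, 1, n) < lam(xy[:, 0], xy[:, 1]) / lam_max
      fixations = xy[keep]                               # thinned inhomogeneous pattern
      print(len(fixations), "simulated fixation locations")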

  15. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and the point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  16. Self-Exciting Point Process Models of Civilian Deaths in Iraq

    DTIC Science & Technology

    2010-01-01

    Tita, 2009), we propose that violence in Iraq arises from a combination of exogenous and endogenous effects. Spatial heterogeneity in background ... Schoenberg, and Tita (2010), where they analyze burglary and robbery data in Los Angeles. Related work has also been done in Short et al. (2009) where ... Control, 4, 215–240. Mohler, G. O., Short, M. B., Brantingham, P. J., Schoenberg, F. P., & Tita, G. E. (2010). Self-exciting point process modeling of ...
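
    The endogenous (self-exciting) part of such models is typically a Hawkes-type conditional intensity: a background rate plus excitation contributed by every past event. The minimal sketch below uses an exponential kernel with illustrative parameters.

      import numpy as np

      def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.2):
          """lam(t) = mu + alpha * sum over t_i < t of exp(-beta * (t - t_i)).
          alpha/beta < 1 keeps the process stationary (subcritical branching)."""
          past = np.asarray(history)
          past = past[past < t]
          return mu + alpha * np.exp(-beta * (t - past)).sum()

      events = [1.0, 1.3, 1.4, 4.2, 4.25, 9.0]      # hypothetical event times
      for t in [1.5, 5.0, 10.0]:
          print(t, hawkes_intensity(t, events))     # intensity spikes after bursts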

  17. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED; in addition to a point design, the Team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission. This paper reviews the modeling method and its practical implementation in the CED environment. Example results illustrate the benefit of this approach.

  18. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
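
    The instantaneous steepest-descent update has a compact form: each parameter moves along the gradient of the point-process log-likelihood increment, log lam * dN - lam * dt. The sketch below tracks a drifting one-dimensional Gaussian place field; the simulated trajectory, drift and learning rates are illustrative assumptions, not the paper's exact algorithm.

      import numpy as np

      rng = np.random.default_rng(8)
      dt, n_steps = 0.001, 200000
      x = np.sin(np.linspace(0, 40, n_steps)) * 50 + 50   # animal position [cm]
      mu_true = np.linspace(40, 70, n_steps)              # drifting field centre

      def lam(xt, a, mu, s):                              # Gaussian place field intensity
          return np.exp(a - (xt - mu) ** 2 / (2 * s ** 2))

      a_true, s = np.log(20.0), 8.0                       # peak rate 20 Hz, width 8 cm
      spikes = rng.uniform(size=n_steps) < lam(x, a_true, mu_true, s) * dt

      a_hat, mu_hat, eps_a, eps_mu = np.log(10.0), 30.0, 0.02, 2.0
      for k in range(n_steps):
          l = lam(x[k], a_hat, mu_hat, s)
          innov = spikes[k] - l * dt                      # dN - lam*dt (innovation)
          a_hat += eps_a * innov                          # d log lam / da = 1
          mu_hat += eps_mu * ((x[k] - mu_hat) / s ** 2) * innov
      print(mu_hat, "vs true centre", mu_true[-1])        # estimate tracks the drift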

  19. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
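
    A minimal marked-point-process fracture simulation in 2D illustrates the underlying construction: Poisson fracture centres, with orientation, trace length and aperture attached as marks drawn from their own distributions. The von Mises/lognormal/exponential choices and parameter values below are illustrative assumptions, not the software's defaults.

      import numpy as np

      rng = np.random.default_rng(9)
      area, intensity = 100.0 * 100.0, 0.05             # domain [m^2], fractures per m^2
      n = rng.poisson(intensity * area)                 # Poisson number of fractures

      cx, cy = rng.uniform(0, 100, n), rng.uniform(0, 100, n)   # Poisson locations
      theta = rng.vonmises(mu=np.pi / 3, kappa=4.0, size=n)     # preferred orientation
      length = rng.lognormal(mean=1.0, sigma=0.5, size=n)       # trace-length marks
      aperture = rng.exponential(scale=1e-4, size=n)            # aperture marks

      # segment end points, e.g. for plotting or scanline sampling
      x0 = cx - 0.5 * length * np.cos(theta); x1 = cx + 0.5 * length * np.cos(theta)
      y0 = cy - 0.5 * length * np.sin(theta); y1 = cy + 0.5 * length * np.sin(theta)
      print(n, "fractures; mean trace length", length.mean())

    Swapping the uniform locations for a clustered or Cox-process generator changes only the first step, which is what makes the marked-point-process formulation so modular.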

  20. Final Report Collaborative Project. Improving the Representation of Coastal and Estuarine Processes in Earth System Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Frank; Dennis, John; MacCready, Parker

    This project aimed to improve long-term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability (used in objective 1), within POP, the ocean component of CESM.

  1. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the model construction process, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point of the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found, and from these we obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected onto the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are discretized to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
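
    The verification step is cheap because the distance image is precomputed offline: mapping candidate contour points through an RST hypothesis and averaging the sampled distances needs only additions and multiplications. The sketch below reproduces that mechanism on a synthetic circular contour; the shapes and poses are assumptions.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      H = W = 256
      template = np.ones((H, W), bool)
      tpts = np.array([(64 + int(60 * np.cos(a)), 128 + int(60 * np.sin(a)))
                       for a in np.linspace(0, 2 * np.pi, 200)])   # (x, y) contour
      template[tpts[:, 1], tpts[:, 0]] = False          # contour pixels set to 0
      dist_img = distance_transform_edt(template)       # distance to nearest contour pixel

      def mean_distance(points, scale, angle, tx, ty):
          """Verification score for one RST hypothesis: mean sampled distance."""
          c, s = np.cos(angle), np.sin(angle)
          R = scale * np.array([[c, -s], [s, c]])
          p = points @ R.T + [tx, ty]                   # apply the RST hypothesis
          ij = np.clip(np.rint(p).astype(int), 0, W - 1)
          return dist_img[ij[:, 1], ij[:, 0]].mean()

      search = tpts + [40, 10]                          # "detected" contour, shifted
      print(mean_distance(search, 1.0, 0.0, -40, -10))  # correct hypothesis: ~0
      print(mean_distance(search, 1.0, 0.0, 0, 0))      # wrong hypothesis: large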

  2. Qumquad: a UML-based approach for remodeling of legacy systems in health care.

    PubMed

    Garde, Sebastian; Knaup, Petra; Herold, Ralf

    2003-07-01

    Health care information systems still comprise legacy systems to a certain extent. For reengineering legacy systems, a thorough remodeling is indispensable. Current modeling techniques like the Unified Modeling Language (UML) do not offer a systematic and comprehensive process-oriented method for remodeling activities. We developed a systematic method for remodeling legacy systems in health care, called Qumquad. Qumquad consists of three major steps: (i) modeling the actual state of the application system, (ii) systematically identifying weak points in this model, and (iii) developing a target concept for the reimplementation that takes the identified weak points into account. We applied Qumquad to remodel a documentation and therapy planning system for pediatric oncology (DOSPO). As a result of our remodeling activities, we regained an abstract model of the system, an analysis of the current weak points of DOSPO, and possible (partly alternative) solutions to overcome the weak points. Qumquad proved to be very helpful in the reengineering process of DOSPO, since we now have at our disposal a comprehensive model for the reimplementation of DOSPO that current users of the system agree on. Qumquad can easily be applied to other reengineering projects in health care.

  3. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovénbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes such as snow accumulation on slopes or glacier mass balance. Acquiring DEMs at short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding Ground Control Point (GCP) constraints or by including the (GPS) positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows appear as a limitation because of the limited color dynamics of the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty on the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
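    The conversion from the arbitrary SfM frame to absolute coordinates can be illustrated with a least-squares similarity transform estimated from matched GCPs. This is a sketch of the standard Umeyama/Kabsch solution in NumPy, assumed here for illustration; it is not MicMac's actual georeferencing routine:

      import numpy as np

      def similarity_from_gcps(src, dst):
          # Least-squares scale, rotation, translation mapping src -> dst;
          # src, dst: (N, 3) matched points (model frame vs. GCP frame), N >= 3.
          mu_s, mu_d = src.mean(0), dst.mean(0)
          A, B = src - mu_s, dst - mu_d
          U, S, Vt = np.linalg.svd(B.T @ A / len(src))
          d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
          D = np.diag([1.0, 1.0, d])
          R = U @ D @ Vt
          scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
          t = mu_d - scale * R @ mu_s
          return scale, R, t                      # dst ~ scale * R @ src + t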

  4. LiDAR Change Detection Using Building Models

    NASA Astrophysics Data System (ADS)

    Kim, Angela M.; Runyon, Scott C.; Jalobeanu, Andre; Esterline, Chelsea H.; Kruse, Fred A.

    2014-06-01

    Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real-world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and the RIEGL. The second model, at 1:8 scale, consisted of large cardboard boxes placed outdoors and scanned from the rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of the effects of changes in scanner and scanning geometry, and of the performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is also less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
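    A minimal point-to-point comparison is a nearest-neighbour distance test between the two clouds. The sketch below, using SciPy's k-d tree, is an illustrative stand-in for the scheme evaluated in the paper; the threshold is arbitrary:

      import numpy as np
      from scipy.spatial import cKDTree

      def point_to_point_change(baseline, changed, threshold=0.05):
          # Flag points in `changed` farther than `threshold` (cloud units)
          # from their nearest neighbour in the baseline cloud.
          dist, _ = cKDTree(baseline).query(changed)   # baseline: (N, 3)
          return dist > threshold, dist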

  5. Fitting measurement models to vocational interest data: are dominance models ideal?

    PubMed

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
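    The contrast between the two response processes can be made concrete with toy IRFs: a 2-parameter logistic (dominance) curve rises monotonically, while a proximity-based (ideal point) curve peaks where the trait matches the item. The unfolding form below is a simplified stand-in for the generalized graded unfolding model:

      import numpy as np

      def irf_2pl(theta, a, b):
          # Dominance model: monotonically increasing in the latent trait.
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      def irf_ideal_point(theta, a, b):
          # Toy ideal point model: endorsement peaks at theta == b.
          return np.exp(-a * (theta - b) ** 2)

      theta = np.linspace(-3, 3, 7)
      print(irf_2pl(theta, a=1.5, b=0.0))          # keeps rising with theta
      print(irf_ideal_point(theta, a=1.5, b=0.0))  # highest near theta == b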

  6. Nonlinear digital signal processing in mental health: characterization of major depression using instantaneous entropy measures of heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo

    2015-01-01

    Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.

  7. Renormalized Energy Concentration in Random Matrices

    NASA Astrophysics Data System (ADS)

    Borodin, Alexei; Serfaty, Sylvia

    2013-05-01

    We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line (β = 1, 2, 4), and for the Ginibre point process and the zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.

  8. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  9. Local stability of a five dimensional food chain model in the ocean

    NASA Astrophysics Data System (ADS)

    Kusumawinahyu, W. M.; Hidayatulloh, M. R.

    2014-02-01

    This paper discusses a food chain model of a microbiological ecosystem in the ocean in which predation occurs. Four population growth rates are considered, namely those of bacteria, phytoplankton, zooplankton, and protozoa. When the growth of nutrient density is also considered, the model is governed by a five-dimensional dynamical system. The system considered in this paper is a modification of a model proposed by Hadley and Forbes [1], taking Holling Type I as the functional response. For the sake of simplicity, the model is scaled. Dynamical behavior, such as the existence conditions of the equilibrium points and their local stability, is addressed. There are eight equilibrium points, two of which exist under certain conditions. Three equilibrium points are unstable, two are stable under certain conditions, and the other three are stable if the Routh-Hurwitz criteria are satisfied. Numerical simulations are carried out to illustrate the analytical findings.
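    Local stability of each equilibrium reduces to the eigenvalues of the Jacobian evaluated there, which is what the Routh-Hurwitz conditions encode. Below is a generic NumPy check on a hypothetical 5x5 Jacobian, not the paper's actual system:

      import numpy as np

      def is_locally_stable(jacobian_at_eq):
          # Locally asymptotically stable iff every eigenvalue of the
          # Jacobian at the equilibrium has a negative real part.
          return bool(np.all(np.linalg.eigvals(jacobian_at_eq).real < 0))

      J = -np.eye(5) + 0.1 * np.random.default_rng(0).standard_normal((5, 5))
      print(is_locally_stable(J))   # hypothetical Jacobian, for illustration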

  10. Combining SVM and flame radiation to forecast BOF end-point

    NASA Astrophysics Data System (ADS)

    Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru

    2009-05-01

    Because of the complex reactions in the Basic Oxygen Furnace (BOF) for steelmaking, the main end-point control methods face insurmountable difficulties. Aiming at these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented based on flame radiation information. The basis is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structures of the SVM and the multilayer feed-forward neural network are similar, but the SVM model overcomes the inherent defects of the latter. The model is trained and used for forecasting with SVM and appropriate variables of light and image characteristic information. The model training process follows the structural risk minimization (SRM) criterion, and the design parameters can be adjusted automatically according to the sampled data in the training process. Experimental results indicate that the prediction precision of the SVM model and the execution time both meet the requirements of online end-point judgment.

  11. Modeling of turbulent transport as a volume process

    NASA Technical Reports Server (NTRS)

    Jennings, Mark J.; Morel, Thomas

    1987-01-01

    An alternative type of modeling was proposed for the turbulent transport terms in Reynolds-averaged equations. One particular implementation of the model was considered, based on the two-point velocity correlations. The model was found to reproduce the trends but not the magnitude of the nonisotropic behavior of the turbulent transport. Some interesting insights were developed concerning the shape of the contracted two-point correlation volume. This volume is strongly deformed by mean shear from the spherical shape found in unstrained flows. Of particular interest is the finding that the shape is sharply waisted, indicating preferential lines of communication, which should have a direct effect on turbulent transfer and on other processes.

  12. Self-Exciting Point Process Modeling of Conversation Event Sequences

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data of conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitude of the self-excitation, its temporal decay, and the base event rate independent of the self-excitation. These variables depend highly on individuals. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
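    A Hawkes process with exponential kernel, lam(t) = mu + sum_i alpha * exp(-beta * (t - t_i)), can be simulated by Ogata's thinning algorithm, which makes the burstiness easy to reproduce. The parameter values below are illustrative (stationarity needs alpha/beta < 1):

      import numpy as np

      def simulate_hawkes(mu, alpha, beta, t_end, seed=0):
          rng = np.random.default_rng(seed)
          events, t = [], 0.0
          while True:
              # Intensity just after t bounds the decaying intensity ahead.
              lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
              t += rng.exponential(1.0 / lam_bar)
              if t >= t_end:
                  return np.array(events)
              lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
              if rng.uniform() <= lam_t / lam_bar:   # thinning step
                  events.append(t)

      isi = np.diff(simulate_hawkes(mu=0.1, alpha=0.8, beta=1.0, t_end=1000.0))
      print(isi.std() / isi.mean())   # coefficient of variation > 1: bursty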

  13. MRMAide: a mixed resolution modeling aide

    NASA Astrophysics Data System (ADS)

    Treshansky, Allyn; McGraw, Robert M.

    2002-07-01

    The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that will allow developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand. This is a tedious, error-prone, and non-portable process. MRMAide, in contrast, will automatically suggest to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction those mapping points are linked via data abstraction algorithms. During validation developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the various technologies utilized by MRMAide to overcome those problems.

  14. An analysis of the Petri net based model of the human body iron homeostasis process.

    PubMed

    Sackmann, Andrea; Formanowicz, Dorota; Formanowicz, Piotr; Koch, Ina; Blazewicz, Jacek

    2007-02-01

    In this paper, a Petri net based model of human body iron homeostasis is presented and analyzed. Body iron homeostasis is an important but not fully understood complex process. The modeling presented in the paper is expressed in the language of Petri net theory. Applying this theory to the description of biological processes allows for very precise analysis of the resulting models. Here, such an analysis of the body iron homeostasis model from a mathematical point of view is given.
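    The underlying Petri net semantics is compact: a transition is enabled when its input places hold enough tokens, and firing moves tokens downstream. The sketch below encodes this rule on a hypothetical binding fragment; it is not the paper's iron-homeostasis net:

      def enabled(marking, pre):
          return all(marking[p] >= n for p, n in pre.items())

      def fire(marking, pre, post):
          assert enabled(marking, pre)
          m = dict(marking)
          for p, n in pre.items():
              m[p] -= n
          for p, n in post.items():
              m[p] = m.get(p, 0) + n
          return m

      # Hypothetical fragment: free iron binds to transferrin.
      m0 = {"plasma_iron": 1, "transferrin": 1, "bound_iron": 0}
      t_bind = ({"plasma_iron": 1, "transferrin": 1}, {"bound_iron": 1})
      print(fire(m0, *t_bind))   # {'plasma_iron': 0, 'transferrin': 0, 'bound_iron': 1}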

  15. The prediction of the flash point for binary aqueous-organic solutions.

    PubMed

    Liaw, Horng-Jang; Chiu, Yi-Yu

    2003-07-18

    A mathematical model, which may be used for predicting the flash point of aqueous-organic solutions, has been proposed and subsequently verified against experimentally derived data. The results reveal that this model is able to precisely predict the flash point over the entire composition range of binary aqueous-organic solutions by utilizing the flash point data of the flammable component. The derivative of the flash point with respect to composition (the effect of solution composition on flash point) can be applied in process safety design/operation to identify whether diluting a flammable liquid solution with water is effective in reducing the fire and explosion hazard of the solution at a specified composition. Such a derivative equation was derived from the flash point prediction model referred to above and then verified by the application of experimentally derived data.
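    For a binary mixture with a single flammable component, a Liaw-type criterion says the solution flashes when the flammable component's partial vapour pressure reaches the pure component's vapour pressure at its own flash point. The sketch below solves this with the Antoine equation; the coefficients and pure flash point are assumed ethanol-like values for illustration, and the ideal-solution choice gamma = 1 stands in for a proper activity-coefficient model:

      from scipy.optimize import brentq

      def p_sat(T, A=8.20417, B=1642.89, C=230.3):
          # Antoine equation, T in deg C, pressure in mmHg (assumed values).
          return 10 ** (A - B / (C + T))

      T_FP_PURE = 13.0   # approximate pure-component flash point, deg C

      def flash_point(x, gamma=1.0):
          # Flash when x * gamma * Psat(T) equals Psat at the pure flash point.
          target = p_sat(T_FP_PURE)
          return brentq(lambda T: x * gamma * p_sat(T) - target,
                        T_FP_PURE - 5.0, 120.0)

      for x in (1.0, 0.8, 0.5, 0.3):
          print(x, round(flash_point(x), 1))   # flash point rises on dilution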

  16. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  17. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
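    The structure of such a model can be illustrated with a toy evaluator in which binary coefficients simply select which lags and lag pairs enter the output, combined by AND and OR as described above. The lag choices here are arbitrary:

      import numpy as np

      def boolean_volterra(x, first_order_lags, second_order_pairs):
          # Output fires (OR) when any selected lag, or any selected AND pair
          # of lags, of the binary input train x carries a spike.
          y = np.zeros(len(x), dtype=int)
          for t in range(len(x)):
              terms = [x[t - l] for l in first_order_lags if t - l >= 0]
              terms += [x[t - l1] & x[t - l2]
                        for l1, l2 in second_order_pairs if t - max(l1, l2) >= 0]
              y[t] = int(any(terms))
          return y

      x = np.random.default_rng(1).integers(0, 2, 30)
      print(boolean_volterra(x, first_order_lags=[1], second_order_pairs=[(2, 5)]))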

  18. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source modeling scheme by adopting the nonparametric co-regionalization algorithm, originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.

  19. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity. PMID:28234899

  20. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs.

    PubMed

    Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson

    2017-02-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
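    A first-order version of the fixed-point analysis can be sketched as follows: for an intensity lam(t) = phi(mu + sum_i h(t - t_i)), the mean-field stationary rate solves r = phi(mu + r * H), with H the integral of the history kernel, and a fixed point is locally stable when phi'(.) * H < 1 there. This is a deliberately simplified stand-in for the paper's quasi-renewal treatment:

      import numpy as np

      def fixed_points(phi, dphi, mu, H, r_max=200.0, n=200001):
          # Solve r = phi(mu + r * H) on a grid and classify each root.
          grid = np.linspace(0.0, r_max, n)
          g = grid - phi(mu + grid * H)
          roots = []
          for i in np.nonzero(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]:
              # Linear interpolation between the bracketing grid points.
              r = grid[i] - g[i] * (grid[i + 1] - grid[i]) / (g[i + 1] - g[i])
              roots.append((r, dphi(mu + r * H) * H < 1.0))  # (rate, stable?)
          return roots

      phi = np.exp            # exponential link, common in PP-GLMs
      print(fixed_points(phi, phi, mu=-1.0, H=0.3))
      # Typically one stable low-rate and one unstable high-rate fixed point:
      # the configuration behind the metastable (fragile) regime.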

  1. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw n points at random, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than the rod length s (the packing gas), those for which hard-rod and packing constraints are both fulfilled (parking configurations), and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results on spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
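    The covering and gap probabilities are easy to explore numerically. The sketch below estimates the uncovered fraction of the circle on a fine grid of test points; for n independent uniform arcs of length s, each test point is uncovered with probability (1 - s)^n, roughly e^(-rho) at density rho = ns:

      import numpy as np

      def uncovered_fraction(n, s, seed=0, grid=20000):
          rng = np.random.default_rng(seed)
          starts = rng.uniform(0.0, 1.0, n)
          t = np.arange(grid) / grid
          # Test point t is covered by an arc iff (t - start) mod 1 < s.
          covered = (((t[:, None] - starts[None, :]) % 1.0) < s).any(axis=1)
          return float(np.mean(~covered))

      rho, s = 2.0, 0.01
      print(uncovered_fraction(int(rho / s), s))   # compare with exp(-2) ~ 0.135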

  2. Modeling of the reburning process using sewage sludge-derived syngas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werle, Sebastian, E-mail: sebastian.werle@polsl.pl

    2012-04-15

    Highlights: • Gasification provides an attractive method for sewage sludge treatment. • Gasification generates a fuel gas (syngas) which can be used as a reburning fuel. • The reburning potential of sewage sludge gasification gases was defined. • A numerical simulation of the co-combustion of syngases in a coal-fired boiler has been done. • Calculations show that the analysed syngases can provide higher than 80% reduction of NOx. - Abstract: Gasification of sewage sludge can provide a clean and effective reburning fuel for combustion applications. The motivation of this work was to define the reburning potential of sewage sludge gasification gas (syngas). A numerical simulation of the co-combustion process of syngas in a hard coal-fired boiler was performed. All calculations were done using the Chemkin programme with a plug-flow reactor model, using the GRI-Mech 2.11 mechanism. The highest conversions of nitric oxide (NO) were obtained at temperatures of approximately 1000-1200 K. The combustion of hard coal with sewage sludge-derived syngas reduces NO emissions. The highest reduction efficiency (>90%) was achieved when the molar flow ratio of the syngas was 15%. Calculations show that the analysed syngas can provide better results than advanced reburning (combined with ammonia injection), which is a more complicated process.

  3. Reconstruction of dynamical systems from resampled point processes produced by neuron models

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Pavlov, Alexey N.

    2018-04-01

    Characterization of the dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as sequences of interspike intervals (ISIs). This theoretical background confirms the ability to reconstruct attractors from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short data records and show that this effect is observed for different types of spike generation mechanisms.
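    The reconstruction itself is a time-delay embedding of the ISI sequence. The sketch below builds the embedding after one simple resampling choice (interpolating the ISIs, viewed as a function of spike time, onto a denser grid); the stand-in ISI series and the interpolation scheme are illustrative, not the authors' exact procedure:

      import numpy as np

      def delay_embed(series, dim=3, lag=1):
          # Row k is (x[k], x[k + lag], ..., x[k + (dim - 1) * lag]).
          n = len(series) - (dim - 1) * lag
          return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

      isi = 0.5 + 0.4 * np.abs(np.sin(0.17 * np.arange(400)))   # stand-in ISIs
      t = np.cumsum(isi)
      # Resample the ISI sequence onto a grid twice as dense in spike time.
      isi_resampled = np.interp(np.linspace(t[0], t[-1], 2 * len(t)), t, isi)
      X = delay_embed(isi_resampled, dim=3, lag=2)
      print(X.shape)   # (796, 3) reconstructed state vectors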

  4. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    ...systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise, g = Hf + n (1). The simplest... linear models for imaging systems are given by space invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is... where {I(k-1), ..., I(k-M)} is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure...

  5. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    NASA Astrophysics Data System (ADS)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of thermal stresses on the point defects is also investigated.

  6. Sensory processing and world modeling for an active ranging device

    NASA Technical Reports Server (NTRS)

    Hong, Tsai-Hong; Wu, Angela Y.

    1991-01-01

    In this project, we studied world modeling and sensory processing for laser range data. World model data representation and operations were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing at the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed and compared. Existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.

  7. A Typology for Modeling Processes in Clinical Guidelines and Protocols

    NASA Astrophysics Data System (ADS)

    Tu, Samson W.; Musen, Mark A.

    We analyzed the graphical representations that are used by various guideline-modeling methods to express process information embodied in clinical guidelines and protocols. From this analysis, we distilled four modeling formalisms and the processes they typically model: (1) flowcharts for capturing problem-solving processes, (2) disease-state maps that link decision points in managing patient problems over time, (3) plans that specify sequences of activities that contribute toward a goal, (4) workflow specifications that model care processes in an organization. We characterized the four approaches and showed that each captures some aspect of what a guideline may specify. We believe that a general guideline-modeling system must provide explicit representation for each type of process.

  8. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the great number of accessories of metal stents and electrical equipment mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are further used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted, by means of iteration, as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.
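    One simple way to realize the per-section filtering is to fit a conic to the projected cross-section points algebraically and reject points with large robust residuals. The sketch below is a stand-in for one iteration of the paper's elliptic-cylinder fit; data normalization, the iteration loop, and the full 3D treatment are omitted:

      import numpy as np

      def fit_conic(xy):
          # Algebraic least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
          # via the smallest right singular vector of the design matrix.
          x, y = xy[:, 0], xy[:, 1]
          D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
          return np.linalg.svd(D, full_matrices=False)[2][-1]

      def filter_section(xy, k=3.0):
          # Keep points whose algebraic residual lies within k robust sigmas.
          coef = fit_conic(xy)
          x, y = xy[:, 0], xy[:, 1]
          r = np.abs(np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)]) @ coef)
          sigma = 1.4826 * np.median(np.abs(r - np.median(r)))   # MAD scale
          return xy[r < k * sigma]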

  9. Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models were created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models were created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  10. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.

  11. Mathematical Modeling of Thermofrictional Milling Process Using ANSYS WB Software

    NASA Astrophysics Data System (ADS)

    Sherov, K. T.; Sikhimbayev, M. R.; Sherov, A. K.; Donenbayev, B. S.; Rakishev, A. K.; Mazdubai, A. B.; Musayev, M. M.; Abeuova, A. M.

    2017-06-01

    This article presents ANSYS WB-based mathematical modelling of the thermofrictional milling process, which allowed studying the dynamics of the thermal and physical processes occurring during machining. The technique also allows determination of the optimal cutting conditions of thermofrictional milling for processing various materials, in particular steels 40CN2MA, 30CGSA, 45 and 3sp. In our study, from among a number of existing models of cutting fracture, we chose the criterion first proposed by Prof. V. L. Kolmogorov. In order to increase calculation performance, a mathematical model was proposed that used only two objects: a parallelepiped-shaped workpiece and a cutting insert in the form of a pentagonal prism. In addition, the work takes into account the friction coefficient between the cutting insert and the workpiece, taken equal to 0.4. To determine the temperature in the subcontact layer of the workpiece, we introduced the coordinates of nine characteristic points with the same interval in the local coordinate system. As a result, temperature values were obtained for different materials at the studied points as the cutter speed changed. The research results showed the possibility of controlling thermal processes during machining by choosing optimum cutting modes.

  12. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    NASA Astrophysics Data System (ADS)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into the processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on the particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability.
    Figure 1. Isosurfaces representing the evolution of the shoreline and a z = 4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  13. Reconstruction of 3D Objects of Assets and Facilities by Using Benchmark Points

    NASA Astrophysics Data System (ADS)

    Baig, S. U.; Rahman, A. A.

    2013-08-01

    Acquiring and modeling 3D geo-data of building assets and facility objects is a key challenge. A number of methods and technologies are being utilized for this purpose; total station, GPS, photogrammetry and terrestrial laser scanning are a few of these. In this paper, points commonly shared by potential facades of assets and facilities modeled from point clouds are identified. These points are useful in the modeling process for reconstructing 3D models of assets and facilities, which are stored for management purposes. These models are segmented by different planes to produce accurate 2D plans. This novel method improves the efficiency and quality of constructing models of assets and facilities, with the aim of utilizing them in 3D management projects such as the maintenance of buildings or of groups of items that need to be replaced or renovated for new services.

  14. Theoretical model of dynamic spin polarization of nuclei coupled to paramagnetic point defects in diamond and silicon carbide

    NASA Astrophysics Data System (ADS)

    Ivády, Viktor; Szász, Krisztián; Falk, Abram L.; Klimov, Paul V.; Christle, David J.; Janzén, Erik; Abrikosov, Igor A.; Awschalom, David D.; Gali, Adam

    2015-09-01

    Dynamic nuclear spin polarization (DNP) mediated by paramagnetic point defects in semiconductors is a key resource for both initializing nuclear quantum memories and producing nuclear hyperpolarization. DNP is therefore an important process in the field of quantum-information processing, sensitivity-enhanced nuclear magnetic resonance, and nuclear-spin-based spintronics. DNP based on optical pumping of point defects has been demonstrated by using the electron spin of the nitrogen-vacancy (NV) center in diamond and, more recently, by using divacancy and related defect spins in hexagonal silicon carbide (SiC). Here, we describe a general model for these optical DNP processes that allows the effects of many microscopic processes to be integrated. Applying this theory, we gain a deeper insight into dynamic nuclear spin polarization and the physics of diamond and SiC defects. Our results are in good agreement with experimental observations and provide a detailed and unified understanding. In particular, our findings show that the defect electron spin coherence times and excited state lifetimes are crucial factors in the entire DNP process.

  15. Atmospheric Modeling And Sensor Simulation (AMASS) study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1984-01-01

    The capabilities of the atmospheric modeling and sensor simulation (AMASS) system were studied in order to enhance them. This system is used in processing atmospheric measurements which are utilized in the evaluation of sensor performance, conducting design-concept simulation studies, and also in the modeling of the physical and dynamical nature of atmospheric processes. The study tasks proposed in order to both enhance the AMASS system utilization and to integrate the AMASS system with other existing equipment to facilitate the analysis of data for modeling and image processing are enumerated. The following array processors were evaluated for anticipated effectiveness and/or improvements in throughput by attachment of the device to the P-e: (1) Floating Point Systems AP-120B; (2) Floating Point Systems 5000; (3) CSP, Inc. MAP-400; (4) Analogic AP500; (5) Numerix MARS-432; and (6) Star Technologies, Inc. ST-100.

  16. Mesoscale analysis of failure in quasi-brittle materials: comparison between lattice model and acoustic emission data.

    PubMed

    Grégoire, David; Verdon, Laura; Lefort, Vincent; Grassl, Peter; Saliba, Jacqueline; Regoin, Jean-Pierre; Loukili, Ahmed; Pijaudier-Cabot, Gilles

    2015-10-25

    The purpose of this paper is to analyse the development and the evolution of the fracture process zone during fracture and damage in quasi-brittle materials. A model taking into account the material details at the mesoscale is used to describe the failure process at the scale of the heterogeneities. This model is used to compute histograms of the relative distances between damaged points. These numerical results are compared with experimental data, where the damage evolution is monitored using acoustic emissions. Histograms of the relative distances between damage events in the numerical calculations and acoustic events in the experiments exhibit good agreement. It is shown that the mesoscale model provides relevant information from the point of view of both global responses and the local failure process. © 2015 The Authors. International Journal for Numerical and Analytical Methods in Geomechanics published by John Wiley & Sons Ltd.
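    The compared quantity is a normalized histogram of pairwise distances between events. A minimal SciPy sketch follows; the bin width and the stand-in event coordinates are arbitrary:

      import numpy as np
      from scipy.spatial.distance import pdist

      def distance_histogram(points, bin_width=5.0):
          # Normalized histogram of all pairwise Euclidean event distances.
          d = pdist(points)
          counts, edges = np.histogram(d, bins=np.arange(0.0, d.max() + bin_width, bin_width))
          return counts / counts.sum(), edges

      events = np.random.default_rng(0).uniform(0, 100, size=(200, 2))  # stand-in
      freq, edges = distance_histogram(events)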

  17. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE PAGES

    Li, Mingjie; Zhou, Ping; Wang, Hong; ...

    2017-09-19

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess models of the state process model for the HC refining system, and then the Wiener-type model can be obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and the SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking performance of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.

  18. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mingjie; Zhou, Ping; Wang, Hong

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess models of the state process model for the HC refining system, and then the Wiener-type model can be obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and the SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking performance of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.

  19. Integrated catchment modelling within a strategic planning and decision making process: Werra case study

    NASA Astrophysics Data System (ADS)

    Dietrich, Jörg; Funke, Markus

    Integrated water resources management (IWRM) redefines conventional water management approaches through a closer cross-linkage between environment and society. The role of public participation and socio-economic considerations becomes more important within the planning and decision making process. In this paper we address aspects of the integration of catchment models into such a process taking the implementation of the European Water Framework Directive (WFD) as an example. Within a case study situated in the Werra river basin (Central Germany), a systems analytic decision process model was developed. This model uses the semantics of the Unified Modeling Language (UML) activity model. As an example application, the catchment model SWAT and the water quality model RWQM1 were applied to simulate the effect of phosphorus emissions from non-point and point sources on water quality. The decision process model was able to guide the participants of the case study through the interdisciplinary planning and negotiation of actions. Further improvements of the integration framework include tools for quantitative uncertainty analyses, which are crucial for real life application of models within an IWRM decision making toolbox. For the case study, the multi-criteria assessment of actions indicates that the polluter pays principle can be met at larger scales (sub-catchment or river basin) without significantly compromising cost efficiency for the local situation.

  20. [Risk management--a new aspect of quality assessment in intensive care medicine: first results of an analysis of the DIVI's interdisciplinary quality assessment research group].

    PubMed

    Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch

    2006-10-01

    Patient security has become one of the major aspects of clinical management in recent years. Research has crucially focused on malpractice. In contrast to economic processes in non-medical fields, the analysis of errors during in-patient treatment has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register. The risk point model was evaluated in clinically working ICU departments participating in the register database. The results of the risk point evaluation will be integrated in the next database update. This might be a step toward improving the reliability of the register for measuring quality in the ICU.

  1. The business process management software for successful quality management and organization: A case study from the University of Split School of Medicine.

    PubMed

    Sapunar, Damir; Grković, Ivica; Lukšić, Davor; Marušić, Matko

    2016-05-01

    Our aim was to describe a comprehensive model of internal quality management (QM) at a medical school founded on a business process analysis (BPA) software tool. The BPA software tool was used as the core element for the description of all working processes in our medical school, and the system subsequently served as a comprehensive model of internal QM. The quality management system at the University of Split School of Medicine included the documentation and analysis of all business processes within the School. The analysis revealed 80 weak points related to one or several business processes. A precise analysis of medical school business processes allows identification of unfinished, unclear and inadequate points in these processes, and subsequently enables the respective improvements, an increase of the QM level, and ultimately a rationalization of the institution's work. Our approach offers a potential reference model for the development of a common QM framework allowing continuous quality control, i.e. adjustment and adaptation to the contemporary educational needs of medical students. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.

  2. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. The yield calculation typically requires a large number of SPICE simulations, and the circuit SPICE simulation accounts for the largest proportion of time in the yield calculation process. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a certain number of sample points; these points are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately, and it brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to additionally enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
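    The flow can be illustrated end to end with a linear stand-in for the simulator: train a lasso surrogate on a few hundred "expensive" samples, then run cheap Monte Carlo on the surrogate. The performance function, failure threshold, and sample sizes below are all illustrative; the paper's mixture surrogate and its treatment of very rare failures are more elaborate:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)

      def spice_like(v):
          # Stand-in for a SPICE run: a scalar performance metric of the
          # design and process variables (a real flow calls the simulator).
          return 1.0 - 0.6 * v[:, 0] + 0.3 * v[:, 1] + 0.05 * rng.standard_normal(len(v))

      train_x = rng.standard_normal((500, 8))     # sampled variables
      train_y = spice_like(train_x)               # 500 "expensive" simulations
      surrogate = Lasso(alpha=0.01).fit(train_x, train_y)

      mc_x = rng.standard_normal((200_000, 8))    # cheap surrogate Monte Carlo
      print("failure rate:", np.mean(surrogate.predict(mc_x) < 0.0))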

  3. DNA denaturation through a model of the partition points on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1994-08-01

    We have shown that, by using a model of a partition-point gas on a one-dimensional lattice, we can study not only the saturation curves obtained earlier for enzyme kinetics but also the denaturation process of DNA under heat treatment, i.e. the breaking of the hydrogen bonds connecting the two strands. We think that this model, being very simple and mathematically transparent, can be advantageous for pedagogic purposes or other theoretical investigations in chemistry or modern biology.

  4. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that changes in a book of prices happen at the jump points of a Poisson process with random intensity, i.e. the moments of change follow a random process of Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.
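
    For intuition, the jump mechanism is easy to simulate: draw an intensity path, then generate conditionally Poisson counts. The sketch below uses a zero-drift geometric Brownian motion as the martingale intensity (purely an illustrative assumption, not the paper's fitted model) and reports the empirical cumulative mean and variance of the counts at a few time points:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, dt, n_paths = 10.0, 0.01, 2000
    steps = int(T / dt)

    # Martingale intensity: zero-drift geometric Brownian motion (an
    # assumption chosen for illustration; any positive martingale would do).
    lam0, sigma = 5.0, 0.3
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, steps))
    lam = lam0 * np.exp(np.cumsum(sigma * dW - 0.5 * sigma**2 * dt, axis=1))

    # Conditional on the intensity path, count increments are Poisson(lam*dt).
    N = np.cumsum(rng.poisson(lam * dt), axis=1)      # N(t) along each path

    for k in (steps // 4, steps // 2, steps - 1):
        t = (k + 1) * dt
        print(f"t={t:5.2f}  mean N(t)={N[:, k].mean():7.2f}  "
              f"var N(t)={N[:, k].var():8.2f}")
    ```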

  5. Business process modeling in healthcare.

    PubMed

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  6. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  7. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.

  8. Modelling the Air–Surface Exchange of Ammonia from the Field to Global Scale

    EPA Science Inventory

    The Working Group addressed the current understanding and uncertainties in the processes controlling ammonia (NH3) bi-directional exchange, and in the application of numerical models to describe these processes. As a starting point for the discussion, the Working Group drew on th...

  9. Spatial perspectives in state-and-transition models: A missing link to land management?

    USDA-ARS?s Scientific Manuscript database

    Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...

  10. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives the probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return from the bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off under the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and the reference model is the Poisson model.
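
    The scoring rule for binary and probability forecasts translates directly into code. The following is a minimal sketch of the rule exactly as stated above (function name and example data are illustrative):

    ```python
    # Minimal sketch of the gambling score for binary/probability forecasts.
    def gambling_score(forecasts, outcomes, p0):
        """forecasts: probability p bet on "Yes" (1.0 or 0.0 for deterministic
        bets, None for a no-bet NA-prediction); outcomes: True if >= 1 event
        occurred; p0: reference-model probability for each window."""
        score = 0.0
        for p, hit, ref in zip(forecasts, outcomes, p0):
            if p is None:            # NA-prediction: nothing staked
                continue
            # Bet p on "Yes" (return ratio (1-ref)/ref) and 1-p on "No"
            # (return ratio ref/(1-ref)); expected gain is 0 under the
            # reference model, so the rule is fair.
            if hit:
                score += p * (1 - ref) / ref - (1 - p)
            else:
                score += (1 - p) * ref / (1 - ref) - p
        return score

    # Example: two probability forecasts against a Poisson reference.
    print(gambling_score([0.9, 0.2], [True, False], [0.5, 0.5]))  # 1.4
    ```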

  11. SPY: A new scission point model based on microscopic ingredients to predict fission fragments properties

    NASA Astrophysics Data System (ADS)

    Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

    2013-12-01

    Our purpose is to determine fission fragment characteristics in the framework of a scission point model named SPY, for Scission Point Yields. This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between the fragment properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which ones are dominant in the final state? What is the impact of the compound nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated with fixed properties. The most important property of the model is that the nuclear structure of the fragments is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei which are inaccessible to most of the fission models available.

  12. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
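
    A doubly stochastic Poisson process can be simulated by thinning once an intensity path is drawn. The sketch below uses a randomly phased diurnal cycle as a stand-in intensity (purely an assumption for illustration, not a fitted rainfall model):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_dspp(lam_func, lam_max, T):
        """Thinning: simulate event times on [0, T] for an intensity
        bounded above by lam_max."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)   # candidate from rate lam_max
            if t > T:
                break
            if rng.uniform() < lam_func(t) / lam_max:
                events.append(t)                  # accept with prob lam(t)/lam_max
        return np.array(events)

    # Toy random intensity path: a randomly phased diurnal cycle (assumption),
    # standing in for a fine time-scale rainfall intensity.
    phase = rng.uniform(0, 2 * np.pi)
    lam = lambda t: 4.0 * (1.0 + np.sin(2 * np.pi * t / 24.0 + phase))
    times = simulate_dspp(lam, lam_max=8.0, T=48.0)   # 48 hours
    print(len(times), "tip events")
    ```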

  13. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar --- Light Detection and Ranging --- is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured. The distance to the object is easily calculated using the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and records the distances to objects several thousand times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y and a z attribute. These 3 attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target or even the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object the point represents (e.g. ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become a necessity; processing, visualizing or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers and is very costly. Existing open source lidar software approaches the loading and processing of lidar in an iterative fashion that requires batch coding, with processing times that can stretch to months for a standard lidar dataset. This project attempts to build software with the best approach for creating, importing and exporting, manipulating and processing lidar, especially in the environmental field. Development of this software is described in 3 sections: (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing) and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (e.g. hydrology feature extraction, geomorphological studies, ecology modeling, etc.).

  14. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  15. Modeling human faces with multi-image photogrammetry

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2002-03-01

    Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are described here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve photorealistic visualization. The advantage of the presented method over laser scanning and coded light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility of measuring dynamic events such as the speech of a person.
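
    Forward intersection, the step that turns matched image points into 3-D coordinates, reduces to a small linear least squares problem once each camera contributes a calibrated ray. A minimal sketch (the five-camera geometry below is made up, not the paper's setup):

    ```python
    import numpy as np

    def forward_intersection(centers, directions):
        """Least-squares intersection of camera rays: find X minimizing the
        summed squared distance to rays X = c_i + t * d_i."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ c
        return np.linalg.solve(A, b)

    # Toy five-camera setup (assumption): rays pointing at a known point.
    X_true = np.array([0.1, -0.05, 1.0])
    centers = [np.array([x, y, 0.0]) for x, y in
               [(-0.2, 0), (-0.1, 0.1), (0, 0), (0.1, 0.1), (0.2, 0)]]
    dirs = [X_true - c for c in centers]
    print(forward_intersection(centers, dirs))  # ~ [0.1, -0.05, 1.0]
    ```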

  16. Numerical modeling of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2017-10-01

    Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-sea water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been a source of great interest, since the temperature at the nip point plays a key role in the mechanical interface performance. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, the optical behavior of laser radiation on circular surfaces is modeled numerically based on ray tracing and a non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape, using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface as well as the effect of process parameters on the temperature distribution are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which has not been considered in the literature up to now.

  17. Adaptive design of an X-ray magnetic circular dichroism spectroscopy experiment with Gaussian process modelling

    NASA Astrophysics Data System (ADS)

    Ueno, Tetsuro; Hino, Hideitsu; Hashimoto, Ai; Takeichi, Yasuo; Sawada, Masahiro; Ono, Kanta

    2018-01-01

    Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency, and we examined X-ray magnetic circular dichroism (XMCD) spectroscopy to assess its applicability. An XMCD spectrum was predicted by Gaussian process modelling, learning an experimental spectrum from a limited number of observed data points. Adaptive sampling of data points at the maximum variance of the predicted spectrum successfully reduced the total number of data points required for the evaluation of magnetic moments while providing the required accuracy. The present method reduces the time and cost of XMCD spectroscopy and is potentially applicable to various spectroscopies.
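
    The acquisition loop, fit a Gaussian process, measure where the posterior variance is largest, then refit, is compact in scikit-learn. The sketch below substitutes a synthetic two-peak curve for the real XMCD spectrum and a fixed RBF kernel (both assumptions):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    # Synthetic stand-in for an XMCD spectrum (assumption).
    true_spectrum = lambda e: (np.exp(-0.5 * ((e - 3.0) / 0.4) ** 2)
                               - 0.6 * np.exp(-0.5 * ((e - 4.2) / 0.3) ** 2))

    grid = np.linspace(0, 8, 400)[:, None]          # candidate energy points
    measured = list(rng.uniform(0, 8, size=3))      # a few seed measurements

    for _ in range(15):
        X = np.array(measured)[:, None]
        y = true_spectrum(X.ravel())
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
        mean, std = gp.predict(grid, return_std=True)
        # Next measurement at the point of maximum predictive variance.
        measured.append(float(grid[np.argmax(std), 0]))

    print(f"{len(measured)} points measured; max posterior std {std.max():.3f}")
    ```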

  18. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D.; Godtliebsen, F.; Rue, H.

    2012-04-01

    Detailed knowledge of past climate variations is of high importance for gaining better insight into possible future climate scenarios. The relative shortness of the available high-quality instrumental climate record necessitates the use of various climate proxy archives in making inference about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
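
    The normalized-Gamma construction behind the Beta-distributed age densities can be simulated directly. A minimal sketch, assuming two absolutely dated tie points and an arbitrary shape parameter for the accumulation increments (both assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Tie points (assumption): depth 0 at 0 yr, depth 100 at 5000 yr.
    depths = np.linspace(0, 100, 101)
    n_sims, a = 5000, 0.5          # gamma shape per unit depth (prior choice)

    # Gamma-process bridge: i.i.d. Gamma increments, normalized so that the
    # cumulative ages hit the dated tie points exactly (marginals are Beta).
    inc = rng.gamma(a, size=(n_sims, 100))
    ages = 5000 * np.cumsum(inc, axis=1) / inc.sum(axis=1, keepdims=True)
    ages = np.c_[np.zeros(n_sims), ages]

    lo, hi = np.percentile(ages, [2.5, 97.5], axis=0)
    print(f"95% age interval at depth {depths[50]:.0f}: "
          f"[{lo[50]:.0f}, {hi[50]:.0f}] yr")
    ```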

  19. Control surface in aerial triangulation

    NASA Astrophysics Data System (ADS)

    Jaw, Jen-Jer

    With the increased availability of surface-related sensors, the collection of surface information has become easier and more straightforward than ever before. In this study, the author proposes a model in which surface information is integrated into the aerial triangulation workflow by hypothesizing plane observations in the object space; the object points estimated via photo measurements (or matching), together with the adjusted surface points, provide a better point group describing the surface. The algorithms require no special structure of surface points and involve no interpolation process. The suggested measuring strategy (pairwise measurements) results in a fluent and favorable working environment when taking measurements. Furthermore, the extension of the model employing the surface plane proves useful in tying photo models. The proposed model has been validated by simulation and by experiments carried out in the photogrammetric laboratory.

  20. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution and complexity). Some filtering algorithms for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm on mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which respectively yield total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of the algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.

  1. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  2. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
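
    A common least-squares formulation for the initial vanishing point estimate (a standard technique, not necessarily the paper's exact refinement) takes each segment's homogeneous line, normalizes it, and finds the point minimizing the summed squared point-line distances as the smallest eigenvector:

    ```python
    import numpy as np

    def vanishing_point(segments):
        """Least-squares vanishing point from 2D line segments.
        segments: array (n, 4) of x1, y1, x2, y2 for image lines that are
        assumed parallel in 3D. Returns the VP in homogeneous coordinates."""
        L = np.zeros((3, 3))
        for x1, y1, x2, y2 in segments:
            p = np.array([x1, y1, 1.0])
            q = np.array([x2, y2, 1.0])
            l = np.cross(p, q)              # homogeneous line through p, q
            l /= np.linalg.norm(l[:2])      # so l.v is a point-line distance
            L += np.outer(l, l)
        w, V = np.linalg.eigh(L)
        return V[:, 0]                      # eigenvector, smallest eigenvalue

    # Three noisy segments converging near (5, 5) (toy data).
    segs = np.array([[0, 0, 1, 1.01], [0, 2, 1, 2.61], [0, 4, 1, 4.19]])
    v = vanishing_point(segs)
    print(v[:2] / v[2])                     # ~ the common intersection point
    ```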

  3. Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models

    DTIC Science & Technology

    2015-07-06

    Cubature/Unscented/Sigma Point Kalman Filtering with Angular Measurement Models. David Frederic Crouse, Naval Research Laboratory, 4555 Overlook Ave. ... measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular ... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms

  4. Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)

    NASA Astrophysics Data System (ADS)

    Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos

    2017-04-01

    Damage caused by meteorological and hydrological extreme events depends on many factors, not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern and underlying processes, the spatial dependency between values of damage recorded at sites of different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009) first steps of point pattern analysis have been carried out. The most severe events have been selected (severe, very severe and catastrophic, according to GEES classification, a total number of 784 damage points) and Ripley's K-test and L-test have been performed, amongst others. For this purpose, R's library spatstat has been used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to prevalence of damages near watercourses and also to rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small-size and middle-size catchments and the influence of spatial distribution of precipitation, temperature (snow melt and snow line) and other predisposing factors such as soil moisture, land-cover and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from watercourse or land use. The final goal will be to perform a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above mentioned covariates.
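
    Ripley's K-test of the kind applied here is available in R's spatstat; a bare-bones version of the estimator (without the edge correction spatstat applies, an omission to keep the sketch short) is easy to write directly, shown on simulated complete spatial randomness:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def ripley_K(points, rs, area):
        """Naive Ripley's K estimate (no edge correction, an assumption):
        K(r) = area * mean number of further points within r of a point."""
        n = len(points)
        d = pdist(points)                   # all pairwise distances
        return np.array([area * 2.0 * np.sum(d <= r) / (n * (n - 1))
                         for r in rs])

    rng = np.random.default_rng(4)
    pts = rng.uniform(0, 1, size=(500, 2))  # CSR on the unit square
    rs = np.linspace(0.01, 0.1, 10)
    K = ripley_K(pts, rs, area=1.0)
    L = np.sqrt(K / np.pi) - rs             # centered L-function
    # ~0 under CSR (slightly negative here from uncorrected edge effects);
    # clustered damage points would push L well above 0.
    print(np.round(L, 4))
    ```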

  5. Treatment of electronic waste to recover metal values using thermal plasma coupled with acid leaching - A response surface modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, Swagat S., E-mail: swagat.rath@gmail.com; Nayak, Pradeep; Mukherjee, P.S.

    2012-03-15

    The global crisis of hazardous electronic waste (E-waste) is on the rise due to increasing usage and disposal of electronic devices. A process was developed to treat E-waste in an environmentally benign manner. The process consisted of thermal plasma treatment followed by recovery of metal values through mineral acid leaching. In the thermal process, the E-waste was melted to recover the metal values as a metallic mixture. The metallic mixture was subjected to acid leaching in the presence of a depolarizer. The leached liquor mainly contained copper, as the other elements such as Al and Fe were mostly in alloy form according to the XRD and phase diagram studies. A response surface model was used to optimize the conditions for leaching. More than 90% leaching efficiency at room temperature was observed for Cu, Ni and Co with HCl as the solvent, whereas Fe and Al showed less than 40% efficiency.

  6. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C² continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.

  7. Mass Measurements beyond the Major r-Process Waiting Point ^80Zn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruah, S.; Herlert, A.; Schweikhard, L.

    2008-12-31

    High-precision mass measurements on neutron-rich zinc isotopes ^71m,72-81Zn have been performed with the Penning trap mass spectrometer ISOLTRAP. For the first time, the mass of ^81Zn has been experimentally determined. This makes ^80Zn the first of the few major waiting points along the path of the astrophysical rapid neutron-capture process where the neutron-separation energy and neutron-capture Q-value are determined experimentally. The astrophysical conditions required for this waiting point and its associated abundance signatures to occur in r-process models can now be mapped precisely. The measurements also confirm the robustness of the N=50 shell closure for Z=30.

  8. Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling

    NASA Astrophysics Data System (ADS)

    Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.

    2011-12-01

    Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
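
    The local-aggregation idea, merge all points lying within the expected random-error radius of one another into a single representative point, can be sketched with a k-d tree. This is a simplified stand-in for the authors' 3-D search algorithm; the radius and data are assumptions:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def aggregate(points, radius):
        """Greedy local aggregation: replace each cluster of points within
        `radius` (the assumed measurement-error scale) by its centroid."""
        tree = cKDTree(points)
        unused = np.ones(len(points), dtype=bool)
        out = []
        for i in range(len(points)):
            if not unused[i]:
                continue
            idx = [j for j in tree.query_ball_point(points[i], radius)
                   if unused[j]]
            unused[idx] = False
            out.append(points[idx].mean(axis=0))
        return np.array(out)

    rng = np.random.default_rng(5)
    cloud = rng.normal(size=(20000, 3)) * [50, 50, 5]  # synthetic TLS-like cloud
    thinned = aggregate(cloud, radius=0.5)
    print(len(cloud), "->", len(thinned), "points")
    ```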

  9. Management of reforming of housing-and-communal services

    NASA Astrophysics Data System (ADS)

    Skripnik, Oksana

    2017-10-01

    The international experience of reforming housing and communal services is considered, and the main scientific and methodological approaches to systemic transformation of the housing sphere are analyzed in the article. The main reform models are pointed out; the interaction of participants in the structural change process is characterized from the point of view of their commercial and social importance; advantages and shortcomings are revealed; and elements of the reform transformations are identified with respect to investment appeal, competitiveness, energy efficiency and the social importance of the measures carried out.

  10. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
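
    The brute-force baseline that the dual-tree algorithm improves upon is just a histogram of all pairwise distances; a short sketch makes the O(n²) cost the paper analyzes concrete (bucket width and data are assumptions):

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(6)
    atoms = rng.uniform(0, 100.0, size=(5000, 3))   # synthetic particle positions

    # Brute-force SDH: histogram every point-to-point distance, bucket width w.
    # The dual-tree algorithm avoids computing most of these distances by
    # resolving whole pairs of tree nodes into buckets at once.
    w = 5.0
    d = pdist(atoms)                                # O(n^2) pairwise distances
    edges = np.arange(0.0, d.max() + w, w)
    sdh, _ = np.histogram(d, bins=edges)
    print(sdh[:5])
    ```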

  11. Improving the process of process modelling by the use of domain process patterns

    NASA Astrophysics Data System (ADS)

    Koschmider, Agnes; Reijers, Hajo A.

    2015-01-01

    The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process models would be to reuse relevant parts of existing models. At this point, however, limited support exists to guide process modellers towards the usage of appropriate model content. In this paper, a set of content-oriented patterns is presented, which is extracted from a large set of process models from the order management and manufacturing production domains. The patterns are derived using a newly proposed set of algorithms, which are being discussed in this paper. The authors demonstrate how such Domain Process Patterns, in combination with information on their historic usage, can support process modellers in generating new models. To support the wider dissemination and development of Domain Process Patterns within and beyond the studied domains, an accompanying website has been set up.

  12. Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.; Chen, L. C.

    2012-07-01

    Airborne LIDAR point clouds, which provide considerable numbers of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed approaches to identify structure lines along two main lines, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain choices, such as thresholds, initial values, designed formulas, and predefined cues. With the development of laser scanning systems, scanning rates have increased and point clouds with higher point density can be provided. Therefore, this study proposes topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. The analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations show small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of a region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.
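
    The pseudo-grid pre-processing step, keep the highest return per cell while preserving its original 3D coordinates, can be written in a few vectorized lines. A minimal sketch with synthetic points (cell size and data are assumptions):

    ```python
    import numpy as np

    def pseudo_grid_dsm(points, cell):
        """Keep the highest point per grid cell, preserving its original 3D
        position (the paper's pre-processing step, sketched)."""
        ij = np.floor(points[:, :2] / cell).astype(int)
        ij -= ij.min(axis=0)
        ncols = ij[:, 1].max() + 1
        key = ij[:, 0] * ncols + ij[:, 1]            # linear cell index
        order = np.lexsort((points[:, 2], key))      # by cell, then elevation
        last = np.r_[np.diff(key[order]) != 0, True] # last per cell = highest
        return points[order][last]

    rng = np.random.default_rng(7)
    cloud = np.c_[rng.uniform(0, 50, (100000, 2)), rng.normal(10, 2, 100000)]
    dsm_pts = pseudo_grid_dsm(cloud, cell=1.0)
    print(len(dsm_pts), "cell-maximum points")
    ```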

  13. Construction and Updating of Event Models in Auditory Event Processing

    ERIC Educational Resources Information Center

    Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-01-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…

  14. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    NASA Astrophysics Data System (ADS)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set, and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
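
    The coarse RBF morphing stage can be sketched compactly: learn a radial-basis deformation from matched feature points, then apply it to every template vertex. The kernel choice and toy data below are assumptions, not the authors' exact formulation:

    ```python
    import numpy as np

    def rbf_morph(src_ctrl, dst_ctrl, pts, eps=1e-8):
        """RBF morph: learn a 3D deformation carrying src_ctrl onto dst_ctrl,
        then apply it to all mesh vertices `pts`. Kernel phi(r) = r is an
        assumption; other radial kernels are common."""
        d = np.linalg.norm(src_ctrl[:, None] - src_ctrl[None], axis=2)
        w = np.linalg.solve(d + eps * np.eye(len(src_ctrl)),
                            dst_ctrl - src_ctrl)     # per-control displacements
        k = np.linalg.norm(pts[:, None] - src_ctrl[None], axis=2)
        return pts + k @ w

    rng = np.random.default_rng(12)
    src = rng.normal(size=(16, 3))               # template feature points
    dst = src + 0.1 * rng.normal(size=(16, 3))   # matched target features
    verts = rng.normal(size=(1000, 3))           # template mesh vertices
    print(rbf_morph(src, dst, verts).shape)      # morphed vertices, (1000, 3)
    ```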

  15. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
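
    The proposed noise-robust similarity measure is not specified in the abstract; for comparison, the sketch below shows the widely used spectral angle between two image-point spectra, a measure insensitive to multiplicative brightness changes though, as the authors note for such measures, still perturbed by additive noise:

    ```python
    import numpy as np

    def spectral_angle(a, b):
        """Angle between two spectra; invariant to brightness scaling of
        either spectrum, but still shifted by additive uncorrelated noise."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    rng = np.random.default_rng(8)
    s = rng.uniform(0.2, 1.0, size=200)           # a reference spectrum
    noisy = s + rng.normal(scale=0.05, size=200)  # same material, noise added
    other = rng.uniform(0.2, 1.0, size=200)       # a different material
    print(spectral_angle(s, noisy), spectral_angle(s, other))
    ```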

  16. Bisous model-Detecting filamentary patterns in point processes

    NASA Astrophysics Data System (ADS)

    Tempel, E.; Stoica, R. S.; Kipper, R.; Saar, E.

    2016-07-01

    The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersections of filaments and filaments at the intersections of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, a marked point process built to model multi-dimensional patterns. The Bisous filament finder works directly with the galaxy distribution data, and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability of finding a filament at a given point) together with the filament orientation field. Using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications, and further development of the model will allow the filamentary network to be detected also in photometric redshift surveys, using the full redshift posterior. We also want to encourage the astro-statistical community to use the model and to connect it with other existing methods for filamentary pattern detection and characterisation.

  17. [Establishment of model of traditional Chinese medicine injections post-marketing safety monitoring].

    PubMed

    Guo, Xin-E; Zhao, Yu-Bin; Xie, Yan-Ming; Zhao, Li-Cai; Li, Yan-Feng; Hao, Zhe

    2013-09-01

    To establish a nurse-based post-marketing safety surveillance model for traditional Chinese medicine injections (TCMIs), a TCMI safety monitoring team and a research hospital team were established and engaged in the research, monitoring processes, and quality control processes, in order to achieve comprehensive, timely, accurate and real-time access to research data and to eliminate errors in data collection. A triage system involving a study nurse as the first point of contact, clinicians and clinical pharmacists was set up in a TCM hospital. Following the specified workflow, involving labeling of TCM injections and using improved monitoring forms, it was found that there were no missing reports and the error ratio was zero. A research nurse as the first and main point of contact in post-marketing safety monitoring of TCM, as part of a triage model, ensures that the research data collected are authentic, accurate, timely and complete, and eliminates errors during data collection. Hospital-based monitoring is a robust and operable process.

  18. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right around the world. This field has hitherto relied on using very expensive high-resolution ensembles; by their very nature these can only run over small regions, and only for lead times up to about 2 days.
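
    The combination step can be illustrated with toy numbers: map each member's gridbox total to a point-rainfall pdf whose spread grows with the convective fraction (the functional form below is purely an illustrative assumption, not ECMWF's calibration), then average the member pdfs:

    ```python
    import numpy as np
    from scipy.stats import gamma

    def member_pdf(gridbox_total, conv_frac, x):
        """Toy point-rainfall pdf for one ensemble member: a gamma density
        whose coefficient of variation widens with the convective fraction
        (an illustrative assumption standing in for the trained relation)."""
        cv = 0.3 + 0.7 * conv_frac          # sub-grid variability proxy
        shape = 1.0 / cv**2
        scale = gridbox_total / shape       # mean equals the gridbox total
        return gamma.pdf(x, a=shape, scale=scale)

    x = np.linspace(0.01, 50, 500)
    totals = [5.0, 8.0, 12.0]               # mm, three toy members
    fracs = [0.2, 0.6, 0.9]                 # convective rainfall fractions
    final_pdf = np.mean([member_pdf(t, f, x)
                         for t, f in zip(totals, fracs)], axis=0)
    print(x[np.argmax(final_pdf)], "mm (mode of combined point-rainfall pdf)")
    ```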

  19. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.

  20. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.

  1. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    PubMed

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-07-01

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. For the purpose of obtaining the specific 3D model of the breast of a patient, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point in the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings: scans and 2D pictures, assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
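
    In matrix form, a weighted regularized projection of this kind is a small linear solve: minimize the weighted residual to the shape basis plus a regularizer on the coefficients. A sketch with a toy basis (dimensions, weights and regularization strength are assumptions):

    ```python
    import numpy as np

    def wr_projection(B, mu, x, w, lam):
        """Weighted regularized projection into a 3DMM shape space:
        minimize ||W^(1/2)(B c - (x - mu))||^2 + lam ||c||^2 over c.
        B: (3n, k) shape basis, mu: (3n,) mean shape, x: (3n,) target
        points, w: (3n,) per-coordinate weights (larger = harder constraint)."""
        W = np.diag(w)
        A = B.T @ W @ B + lam * np.eye(B.shape[1])
        c = np.linalg.solve(A, B.T @ W @ (x - mu))
        return mu + B @ c, c

    # Toy basis (assumption): 30 points, 5 shape modes.
    rng = np.random.default_rng(9)
    B = rng.normal(size=(90, 5)); mu = rng.normal(size=90)
    x = (mu + B @ np.array([1.0, -0.5, 0.2, 0.0, 0.0])
         + rng.normal(scale=0.05, size=90))
    w = np.ones(90); w[:9] = 10.0            # weight 3 landmark points strongly
    fit, coeffs = wr_projection(B, mu, x, w, lam=0.1)
    print(np.round(coeffs, 2))               # ~ [1.0, -0.5, 0.2, 0.0, 0.0]
    ```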

  2. Business Process Modeling: Perceived Benefits

    NASA Astrophysics Data System (ADS)

    Indulska, Marta; Green, Peter; Recker, Jan; Rosemann, Michael

    The process-centered design of organizations and information systems is globally seen as an appropriate response to the increased economic pressure on organizations. At the methodological core of process-centered management is process modeling. However, business process modeling in large initiatives can be a time-consuming and costly exercise, making it potentially difficult to convince executive management of its benefits. To date, and despite substantial interest and research in the area of process modeling, the understanding of the actual benefits of process modeling in academia and practice is limited. To address this gap, this paper explores the perception of benefits derived from process modeling initiatives, as reported through a global Delphi study. The study incorporates the views of three groups of stakeholders - academics, practitioners and vendors. Our findings lead to the first identification and ranking of 19 unique benefits associated with process modeling. The study in particular found that process modeling benefits vary significantly between practitioners and academics. We argue that the variations may point to a disconnect between research projects and practical demands.

  3. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    Terrestrial Laser Scanners (TLS) have been widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since a building is an important object, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works this task usually involved the integration of several sensors; in this research, however, the point cloud from TLS alone is processed to generate the LOD3 model, from which LOD2 and LOD1 are then generalized. The result of this research is a guiding process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  4. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view (FOV), which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and it is therefore desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products, which is significantly faster than using unprocessed points. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
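
    The nearest-cell binning step can be sketched as follows; for brevity the grid is left at the un-subdivided icosahedron level and the per-observation FOV probability distributions are ignored, so this is only a schematic of the approach:

    ```python
    # Bin point observations to the nearest cell centre of an icosahedral grid.
    import numpy as np
    from scipy.spatial import cKDTree

    def icosahedron():
        """Unit-sphere vertices of an icosahedron (cell centres here)."""
        p = (1 + 5 ** 0.5) / 2
        v = np.array([(-1, p, 0), (1, p, 0), (-1, -p, 0), (1, -p, 0),
                      (0, -1, p), (0, 1, p), (0, -1, -p), (0, 1, -p),
                      (p, 0, -1), (p, 0, 1), (-p, 0, -1), (-p, 0, 1)], float)
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def lonlat_to_xyz(lon, lat):
        lon, lat = np.radians(lon), np.radians(lat)
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])

    centres = icosahedron()          # real grids subdivide each face further
    tree = cKDTree(centres)

    # Synthetic observations, uniform on the sphere, binned and averaged.
    rng = np.random.default_rng(1)
    lon = rng.uniform(0, 360, 10000)
    lat = np.degrees(np.arcsin(rng.uniform(-1, 1, 10000)))
    radiance = rng.normal(100, 10, lon.size)
    _, cell = tree.query(lonlat_to_xyz(lon, lat))
    sums = np.bincount(cell, weights=radiance, minlength=len(centres))
    counts = np.bincount(cell, minlength=len(centres))
    mean_map = np.divide(sums, counts,
                         out=np.full(len(centres), np.nan), where=counts > 0)
    ```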

  5. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
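
    A minimal version of the core idea, a maximum-likelihood scan for a single change in the mean of a Gaussian signal, is sketched below; the paper's frequentist information criterion and multi-state segmentation are not reproduced here:

    ```python
    # Single change-point scan for a piecewise-constant Gaussian signal.
    import numpy as np

    def best_changepoint(y, sigma=1.0):
        """Return the split index k maximizing the two-segment log-likelihood."""
        n = len(y)
        best_k, best_ll = None, -np.inf
        for k in range(1, n):
            left, right = y[:k], y[k:]
            # With known sigma, the ML segment means are the averages, so the
            # log-likelihood depends only on the residual sums of squares.
            rss = ((left - left.mean()) ** 2).sum() \
                + ((right - right.mean()) ** 2).sum()
            ll = -rss / (2 * sigma ** 2)
            if ll > best_ll:
                best_k, best_ll = k, ll
        return best_k, best_ll

    rng = np.random.default_rng(2)
    y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])
    k, _ = best_changepoint(y)     # k should land near the true split at 120
    ```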

  6. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for the assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model's statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for the definition of instantaneous metrics of non-linearity. Here we present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing to our mathematical approach as a promising monitoring tool for accurate, non-invasive assessment in clinical practice. We also discuss the limitations of our point process approach and alternative modeling strategies. PMID:22375120

  7. An Integrated Photogrammetric and Photoclinometric Approach for Pixel-Resolution 3d Modelling of Lunar Surface

    NASA Astrophysics Data System (ADS)

    Liu, W. C.; Wu, B.

    2018-04-01

    High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations when the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry interacts with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with the traditional photogrammetric technique. The results and findings from this research contribute to the optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.

  8. Dynamics of Entropy in Quantum-like Model of Decision Making

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei; Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu

    2011-03-01

    We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model the brain processes information by representing mental states in a complex Hilbert space. Driven by the master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. Using this equilibrium point, Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of an initially pure mental state. Decoherence is induced by interaction with memory and the external environment. In this paper we study (numerically) the dynamics of the quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that these dynamics depend nontrivially on the initial state of Alice's mind regarding her own actions and her prediction state (for possible actions of Bob).
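
    A toy dephasing example illustrates how the entropy of an initially pure state grows under decoherence; the Hamiltonian, rates and single-qubit state space below are invented stand-ins for the paper's richer game-theoretic model:

    ```python
    # Von Neumann entropy of a density matrix evolving under pure dephasing.
    import numpy as np

    sz = np.array([[1, 0], [0, -1]], dtype=complex)       # dephasing operator
    H = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)   # toy Hamiltonian

    def entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log(evals)).sum())

    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # initially pure state
    rho = np.outer(psi, psi.conj())

    gamma, dt = 0.2, 0.01
    S = []
    for _ in range(1000):
        drho = -1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)
        rho = rho + dt * drho                 # forward-Euler master equation
        rho = 0.5 * (rho + rho.conj().T)      # keep rho Hermitian numerically
        S.append(entropy(rho))
    # S rises from ~0 toward ln 2 as the off-diagonal coherence decays.
    ```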

  9. 1/f Noise from nonlinear stochastic differential equations.

    PubMed

    Ruseckas, J; Kaulakys, B

    2010-03-01

    We consider a class of nonlinear stochastic differential equations giving power-law behavior of the power spectral density in any desirably wide range of frequencies. Such equations were obtained starting from point process models of 1/f^β noise. In this article the power-law behavior of the spectrum is derived directly from the stochastic differential equations, without using the point process models. The analysis reveals that the power spectrum may be represented as a sum of Lorentzian spectra. Such a derivation provides additional justification of the equations, expands the class of equations generating 1/f^β noise, and provides further insights into the origin of 1/f^β noise.
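
    One member of this class can be simulated directly; the sketch below uses the η = 1 case with reflecting boundaries to keep a plain Euler-Maruyama scheme stable (parameter values are illustrative, and the mapping from the equation parameters to the spectral exponent β is given in the paper, not by this code):

    ```python
    # Euler-Maruyama simulation of dx = sigma^2 (eta - lam/2) x^(2 eta - 1) dt
    #                                  + sigma x^eta dW, reflected in [xmin, xmax].
    import numpy as np

    eta, lam, sigma = 1.0, 3.0, 1.0          # eta = 1 keeps fixed steps stable
    xmin, xmax = 1.0, 100.0                  # reflecting boundaries
    dt, n = 1e-3, 500_000
    rng = np.random.default_rng(3)

    x = np.empty(n)
    x[0] = 10.0
    for i in range(1, n):
        xp = x[i - 1]
        drift = sigma ** 2 * (eta - lam / 2) * xp ** (2 * eta - 1)
        xp += drift * dt + sigma * xp ** eta * np.sqrt(dt) * rng.standard_normal()
        x[i] = min(max(xp, xmin), xmax)      # reflect by clipping to the range

    # Plain periodogram estimate of the power spectral density.
    f = np.fft.rfftfreq(n, dt)[1:]
    psd = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2 * dt / n
    # A log-log fit of psd vs f over an intermediate band approximates -beta.
    ```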

  10. From global circulation to flood loss: Coupling models across the scales

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Gomez-Navarro, Juan Jose; Bozhinova, Denica; Zischg, Andreas; Raible, Christoph C.; Ole, Roessler; Martius, Olivia; Weingartner, Rolf

    2017-04-01

    The prediction and prevention of flood losses require an extensive understanding of the underlying meteorological, hydrological, hydraulic and damage processes. Coupled models help to improve the understanding of these underlying processes and therefore contribute to the understanding of flood risk. Using such a modelling approach to determine potentially flood-affected areas and damages requires a complex coupling between several models operating at different spatial and temporal scales. Although the isolated parts of the single modelling components are well established and commonly used in the literature, a full coupling including a mesoscale meteorological model driven by a global circulation model, a hydrologic model, a hydrodynamic model and a flood impact and loss model has not been reported so far. In the present study, we tackle the application of such a coupled model chain in terms of computational resources, scale effects, and model performance. From a technical point of view, the results show the general applicability of such a coupled model, as well as good model performance. From a practical point of view, such an approach enables the prediction of flood-induced damages, although some future challenges have been identified.

  11. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array—Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-01-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array—application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
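
    A back-of-envelope illustration of the word-length question: quantize the input to b-bit fixed point, run the FFT, and measure the signal-to-quantization-noise ratio against a full-precision reference. The paper's error-propagation model tracks quantization inside every butterfly stage, which this sketch does not:

    ```python
    # Effect of input word length on FFT output error (simplified).
    import numpy as np

    def quantize(x, bits, full_scale=1.0):
        """Round to b-bit two's-complement fixed point at the given full scale."""
        step = full_scale / 2 ** (bits - 1)
        levels = np.round(x / step)
        return np.clip(levels, -2 ** (bits - 1), 2 ** (bits - 1) - 1) * step

    rng = np.random.default_rng(4)
    x = rng.uniform(-0.5, 0.5, 16384)        # stand-in for one raw-data line
    ref = np.fft.fft(x)                      # floating-point reference

    for bits in (10, 12, 14, 16):
        err = np.fft.fft(quantize(x, bits)) - ref
        sqnr = 10 * np.log10((np.abs(ref) ** 2).sum() / (np.abs(err) ** 2).sum())
        print(f"{bits:2d}-bit input quantization: SQNR = {sqnr:5.1f} dB")
    ```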

  13. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.

  14. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on caster set points such as casting speed and cooling rates. Changes in the caster set points are typically carried out based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages; therefore, they can be used to make accurate decisions about the caster set points in real time. However, this requires thermal prediction models that are both fast and accurate. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96% of the predicted values are within an error range of ±5 K (±5 °C) when compared against the corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and caster.
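
    The surrogate idea can be sketched with an off-the-shelf polynomial regression trained on stand-ins for the CFD design points; the feature list follows the abstract, but the response function and all values below are synthetic:

    ```python
    # Polynomial-regression surrogate trained on (synthetic) CFD design points.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    # Inputs: initial steel temperature (K), refractory heat content (arb.),
    # slag thickness (m), holding time (s).
    X = np.column_stack([rng.uniform(1800, 1900, 200),
                         rng.uniform(0.0, 1.0, 200),
                         rng.uniform(0.02, 0.10, 200),
                         rng.uniform(600, 3600, 200)])
    # Stand-in for the CFD response: temperature at one probe height.
    y = X[:, 0] - 8e-3 * X[:, 3] + 50 * X[:, 2] + 5 * X[:, 1] \
        + rng.normal(0, 1, 200)

    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(X, y)
    pred = model.predict(X[:5])   # instantaneous prediction, unlike a CFD run
    ```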

  15. The impact of mesoscale convective systems on global precipitation: A modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo

    2017-04-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.

  16. Do High Dynamic Range treatments improve the results of Structure from Motion approaches in Geomorphology?

    NASA Astrophysics Data System (ADS)

    Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando

    2015-04-01

    In this work, the hypothesis that 3D models obtained with Structure from Motion (SfM) approaches can be improved by using images pre-processed with High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm) and under different weather conditions for the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced using the different EV steps within Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced and compared. Results indicated that for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not significant from a statistical viewpoint. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the 5 studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages can be improved by HDR techniques (e.g. ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM).

  17. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes in the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose-rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
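
    The Point-Kernel summation itself is compact; the sketch below uses a placeholder attenuation coefficient and a linear buildup factor where the paper fits Geometric-Progression coefficients:

    ```python
    # Point-Kernel estimate of flux at a detector from sampled source kernels.
    import numpy as np

    def dose_rate(detector, kernels, strengths, mu=0.06,
                  buildup=lambda mu_r: 1.0 + mu_r):
        """
        detector  : (3,) detector position (cm)
        kernels   : (N, 3) sampled source-point positions (cm)
        strengths : (N,) photon emission rate assigned to each kernel
        mu        : linear attenuation coefficient of the medium (1/cm)
        """
        r = np.linalg.norm(kernels - detector, axis=1)
        mu_r = mu * r
        # Uncollided flux times buildup, summed over all point kernels.
        flux = strengths * buildup(mu_r) * np.exp(-mu_r) / (4 * np.pi * r ** 2)
        return flux.sum()   # multiply by a flux-to-dose factor as needed

    rng = np.random.default_rng(6)
    kernels = rng.uniform(-10, 10, (1000, 3))     # points sampled in the body
    rate = dose_rate(np.array([100.0, 0.0, 0.0]),
                     kernels, np.full(1000, 1e6 / 1000))
    ```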

  18. Process-based soil erodibility estimation for empirical water erosion models

    USDA-ARS?s Scientific Manuscript database

    A variety of modeling technologies exist for water erosion prediction, each with specific parameters. It is of interest to scrutinize the parameters of a particular model from the point of view of their compatibility with the datasets of other models. In this research, functional relationships between soil erodibilit...

  19. Algorithms used in the Airborne Lidar Processing System (ALPS)

    USGS Publications Warehouse

    Nagle, David B.; Wright, C. Wayne

    2016-05-23

    The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
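
    The centroid algorithm reduces to an amplitude-weighted mean of the digitized waveform; the baseline handling below is a guess at the idea, not ALPS's calibrated processing:

    ```python
    # Centroid target detection on a digitized laser-return waveform.
    import numpy as np

    def centroid_range_bin(waveform, baseline=None):
        w = np.asarray(waveform, float)
        if baseline is None:
            baseline = np.median(w)          # crude noise-floor estimate
        w = np.clip(w - baseline, 0, None)   # keep signal above the noise floor
        idx = np.arange(w.size)
        return (idx * w).sum() / w.sum()     # fractional sample of the return

    pulse = np.exp(-0.5 * ((np.arange(120) - 42.3) / 4.0) ** 2) + 0.02
    print(centroid_range_bin(pulse))   # ~42.3; scale to range via ns per sample
    ```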

  20. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. The assessment of the real quality of the DTM is, however, very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present an approach to DTM quality assessment and to give a practical example on the test field data. Data processing and DTM quality assessment presented in this paper were performed mainly with in-house developed computer programs. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures, such as the RMSE, median, normalized median absolute deviation and their confidence intervals, and quantiles, are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud this should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
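
    The robust accuracy measures named above can be computed directly from checkpoint height differences; the residuals below are synthetic, and the confidence intervals mentioned in the abstract are omitted for brevity:

    ```python
    # Robust DTM accuracy measures from height differences dz = DTM - checkpoint.
    import numpy as np

    def accuracy_measures(dz):
        dz = np.asarray(dz, float)
        rmse = np.sqrt((dz ** 2).mean())
        med = np.median(dz)
        nmad = 1.4826 * np.median(np.abs(dz - med))   # normalized MAD
        q = np.quantile(np.abs(dz), [0.68, 0.95])     # absolute-error quantiles
        return {"RMSE": rmse, "median": med, "NMAD": nmad,
                "Q68": q[0], "Q95": q[1]}

    rng = np.random.default_rng(7)
    dz = rng.normal(0.02, 0.05, 500)                  # synthetic residuals (m)
    dz[:10] += 0.5                                    # a few gross outliers
    print(accuracy_measures(dz))   # NMAD resists the outliers; RMSE does not
    ```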

  1. Application of System Operational Effectiveness Methodology to Space Launch Vehicle Development and Operations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Kelley, Gary W.

    2012-01-01

    The System Operational Effectiveness (SOE) model defined by the Department of Defense (DoD) provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure the technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model's key points of measurement, including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined, addressing the unique aspects of space launch vehicle production and operations rather than the traditional, broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into operational design drivers, capability phasing, and operational support systems.

  2. Adaptation, Learning, and the Art of War: A Cybernetic Perspective

    DTIC Science & Technology

    2014-05-14

    Drawing from the works of William Ross Ashby and contemporary cybernetic thought, the study modeled adaptive systems as control loops and the processes of adaptive systems as a Markov process. Using this model, the study concluded that systems would return to the same relative equilibrium point, expressed in terms of ... an uncertain and ever-changing environment.

  3. Effect of river flow fluctuations on riparian vegetation dynamics: Processes and models

    NASA Astrophysics Data System (ADS)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca

    2017-12-01

    Several decades of field observations, laboratory experiments and mathematical modeling have demonstrated that the riparian environment is a disturbance-driven ecosystem, and that the main source of disturbance is river flow fluctuations. The focus of the present work has been on the key role that flow fluctuations play in determining the abundance, zonation and species composition of patches of riparian vegetation. To this aim, the scientific literature on the subject over the last 20 years has been reviewed. First, the most relevant ecological, morphological and chemical mechanisms induced by river flow fluctuations are described from a process-based perspective. The role of flow variability is discussed for the processes that affect the recruitment of vegetation, the vegetation during its adult life, and the morphological and nutrient dynamics occurring in the riparian habitat. Particular emphasis has been given to studies that were aimed at quantifying the effect of these processes on vegetation, and at linking them to the statistical characteristics of the river hydrology. Second, the advances made from a modeling point of view have been considered and discussed. The main models that have been developed to describe the dynamics of riparian vegetation are presented. Different modeling approaches are compared, and the corresponding advantages and drawbacks are pointed out. Finally, attention is paid to identifying the processes considered by the models, and these processes are compared with those that have actually been observed or measured in field and laboratory studies.

  4. nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab

    PubMed Central

    Cajigas, I.; Malik, W.Q.; Brown, E.N.

    2012-01-01

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT – an open-source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
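
    nSTAT itself is Matlab, but the underlying PP-GLM step, Poisson regression of binned spikes on spike-history covariates, can be sketched in Python; the bin width, number of lags and regularization below are illustrative:

    ```python
    # Point-process GLM: Poisson regression of spikes on their own history.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(8)
    n_bins, p = 5000, 5                          # 1-ms bins, 5 history lags
    spikes = (rng.uniform(size=n_bins) < 0.02).astype(float)

    # Design matrix: the previous p bins of the cell's own spiking history.
    X = np.column_stack([np.roll(spikes, k) for k in range(1, p + 1)])[p:]
    y = spikes[p:]

    glm = PoissonRegressor(alpha=1e-4).fit(X, y)
    rate = glm.predict(X)                        # conditional intensity per bin
    ```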

  5. Point-point and point-line moving-window correlation spectroscopy and its applications

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced for moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphotidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using conventional displays of FTIR spectra.
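
    The moving-window computation can be sketched as a generalized 2D synchronous correlation evaluated in a window sliding along the perturbation axis; the band positions, window length and the switch near 56 degrees in the synthetic data below are illustrative:

    ```python
    # Moving-window synchronous 2D correlation maps over a perturbation axis.
    import numpy as np

    def mw2d_synchronous(Y, window=5):
        """
        Y : (m, n) spectra; m perturbation steps (e.g., temperatures), n points.
        Returns (m - window + 1, n, n) synchronous maps, one per window.
        """
        m, n = Y.shape
        maps = []
        for start in range(m - window + 1):
            seg = Y[start:start + window]
            dyn = seg - seg.mean(axis=0)             # dynamic spectra
            maps.append(dyn.T @ dyn / (window - 1))  # synchronous spectrum
        return np.array(maps)

    rng = np.random.default_rng(9)
    temps = np.linspace(30, 80, 26)
    wn = np.linspace(1000, 1800, 200)
    # Two bands: one grows smoothly, one switches on near 56 "degrees".
    Y = (np.outer(temps / 80, np.exp(-0.5 * ((wn - 1200) / 20) ** 2))
         + np.outer(temps > 56, np.exp(-0.5 * ((wn - 1650) / 15) ** 2))
         + rng.normal(0, 0.01, (temps.size, wn.size)))
    maps = mw2d_synchronous(Y)   # the switch appears in windows spanning ~56
    ```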

  6. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes some probability taking several constraints into account, such as the relevance with respect to the point cloud and to the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  7. Process compensated resonance testing modeling for damage evolution and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Biedermann, Eric; Heffernan, Julieanne; Mayes, Alexander; Gatewood, Garrett; Jauriqui, Leanne; Goodlet, Brent; Pollock, Tresa; Torbet, Chris; Aldrin, John C.; Mazdiyasni, Siamack

    2017-02-01

    Process Compensated Resonance Testing (PCRT) is a nondestructive evaluation (NDE) method based on the fundamentals of Resonant Ultrasound Spectroscopy (RUS). PCRT is used for material characterization, defect detection, process control and life monitoring of critical gas turbine engine and aircraft components. Forward modeling and model inversion for PCRT have the potential to greatly increase the method's material characterization capability while reducing its dependence on compiling a large population of physical resonance measurements. This paper presents progress on forward modeling studies for damage mechanisms and defects common to structural materials for gas turbine engines. Finite element method (FEM) models of single-crystal (SX) Ni-based superalloy Mar-M247 dog-bone samples and Ti-6Al-4V cylindrical bars were created, and FEM modal analyses calculated the resonance frequencies for the samples in their baseline condition. Then the frequency effects of superalloy creep (high-temperature plastic deformation) and macroscopic texture (preferred crystallographic orientation of grains, detrimental to fatigue properties) were evaluated. A PCRT sorting module for creep damage in Mar-M247 was trained with a virtual database made entirely of modeled design points. The sorting module demonstrated successful discrimination of design points with as little as 1% creep strain in the gauge section from a population of acceptable design points with a range of material and geometric variation. The resonance frequency effects of macro-scale texture in Ti-6Al-4V were quantified with forward models of cylinder samples. FEM-based model inversion was demonstrated for Mar-M247 bulk material properties and variations in crystallographic orientation. PCRT uncertainty quantification (UQ) was performed using Monte Carlo studies for Mar-M247 that quantified the overall uncertainty in resonance frequencies resulting from coupled variation in geometry, material properties, crystallographic orientation and creep damage. A model calibration process was also developed that evaluates inversion fitting against differences from a designated reference sample rather than absolute property values, yielding a reduction in fit error.

  8. Strategy for Texture Management in Metals Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  10. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
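
    The data-generating side of the design can be sketched directly; the class proportion, slopes, change-point location and drop-out rule below are invented, not the paper's simulation settings:

    ```python
    # Simulate two-class broken-stick trajectories with incomplete follow-up.
    import numpy as np

    rng = np.random.default_rng(10)
    n = 500
    t = np.arange(0.0, 10.0)                     # annual visits, 10 waves
    cp, slope1, slope2 = 6.0, -0.2, -1.5         # change-point, segment slopes
    is_change = rng.uniform(size=n) < 0.4        # unlabelled "change" sub-group

    Y = np.empty((n, t.size))
    for i in range(n):
        if is_change[i]:
            # Broken stick: first slope up to cp, accelerated decline after.
            mean = 28.0 + slope1 * np.minimum(t, cp) \
                 + slope2 * np.maximum(t - cp, 0.0)
        else:
            mean = 28.0 + slope1 * t
        Y[i] = mean + rng.normal(0.0, 1.0, t.size)  # MMSE-like noisy scores
        drop = rng.integers(4, t.size + 1)          # incomplete follow-up
        Y[i, drop:] = np.nan                        # missing after drop-out
    ```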

  11. Global-to-local, shape-based, real and virtual landmarks for shape modeling by recursive boundary subdivision

    NASA Astrophysics Data System (ADS)

    Rueda, Sylvia; Udupa, Jayaram K.

    2011-03-01

    Landmark-based statistical object modeling techniques, such as the Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and proceeds by subdividing the shapes into connected boundary segments by a line determined by these points. A shape analysis method β is applied to each segment to determine a landmark on the segment. This point introduces more pairs of points, and the lines they define are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby automatically keeping correspondence among generated points through the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines remain for which method β indicates that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.

  12. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, which is a step towards closed-loop therapies for psychological diseases using real-time neural stimulation. These methods are suitable for real-time implementation for content-based feedback experiments.

  13. A delta-rule model of numerical and non-numerical order processing.

    PubMed

    Verguts, Tom; Van Opstal, Filip

    2014-06-01

    Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing have developed largely separately. Here, we combine insights from two earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  14. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow the presence of such gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.

  15. Strategic Industry Attack.

    DTIC Science & Technology

    1980-01-15

    Keywords: Strategic Targeting; Copper Industry; INDATAK. ... develop, debug and test an industrial simulation model (INDATAK) using the LOGATAK model as a point of departure. The copper processing industry is ... significant processes in the copper industry, including the transportation network connecting the processing elements, have been formatted for use in ...

  16. Relation of structural and vibratory kinematics of the vocal folds to two acoustic measures of breathy voice based on computational modeling

    PubMed Central

    Samlan, Robin A.; Story, Brad H.

    2011-01-01

    Purpose To relate vocal fold structure and kinematics to two acoustic measures: cepstral peak prominence (CPP) and the amplitude of the first harmonic relative to the second (H1-H2). Method A computational, kinematic model of the medial surfaces of the vocal folds was used to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: degree of vocal fold adduction, surface bulging, vibratory nodal point, and supraglottal constriction. CPP and H1-H2 were measured from simulated glottal area, glottal flow and acoustic waveforms and related to the underlying vocal fold kinematics. Results CPP decreased with increased separation of the vocal processes, whereas the nodal point location had little effect. H1-H2 increased as a function of separation of the vocal processes in the range of 1–1.5 mm and decreased with separation > 1.5 mm. Conclusions CPP is generally a function of vocal process separation. H1*-H2* will increase or decrease with vocal process separation based on vocal fold shape, pivot point for the rotational mode, and supraglottal vocal tract shape, limiting its utility as an indicator of breathy voice. Future work will relate the perception of breathiness to vocal fold kinematics and acoustic measures. PMID:21498582
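
    CPP can be sketched as the height of the cepstral peak above a linear trend within the typical F0 quefrency band; windowing and trend-fitting conventions differ between implementations, so this is one plausible variant rather than the authors' exact procedure:

    ```python
    # Cepstral peak prominence: cepstral peak height above a linear trend.
    import numpy as np

    def cpp(x, fs, fmin=60.0, fmax=300.0):
        spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        log_spec = np.log(np.maximum(spec, 1e-12))
        ceps = np.fft.irfft(log_spec)                  # real cepstrum
        q = np.arange(ceps.size) / fs                  # quefrency axis (s)
        band = (q >= 1.0 / fmax) & (q <= 1.0 / fmin)   # plausible F0 periods
        # Linear trend over the searched band, evaluated at the peak quefrency.
        coef = np.polyfit(q[band], ceps[band], 1)
        k = np.argmax(ceps[band])
        peak_q, peak_v = q[band][k], ceps[band][k]
        return peak_v - np.polyval(coef, peak_q)       # log-amplitude units

    fs = 16000
    t = np.arange(0, 0.04, 1 / fs)
    voiced = np.sign(np.sin(2 * np.pi * 120 * t)) * 0.5   # crude 120-Hz source
    print(cpp(voiced, fs))           # decreases as aspiration noise is added
    ```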

  17. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promotor states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promotor of the other gene. When the repressor is bound to a promotor, the corresponding gene is not transcribed and no protein is produced. Either one of the promotors can be repressed at any given time or both can be unrepressed, leaving three possible promotor states. This model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promotor state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.
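
    A compact Gillespie simulation of this three-promotor-state toggle gives the flavor of the Monte Carlo comparisons reported; all rate constants below are invented, and dimer binding is lumped into a single propensity:

    ```python
    # Gillespie simulation: two proteins, three promotor states
    # ("free", "A" = gene A repressed, "B" = gene B repressed).
    import numpy as np

    rng = np.random.default_rng(11)
    k_prod, k_deg = 50.0, 1.0        # protein birth/death rates
    k_on, k_off = 1e-3, 0.5          # repressor binding/unbinding

    def simulate(t_end=200.0):
        a, b, state = 10, 10, "free"
        t, traj = 0.0, []
        while t < t_end:
            rates = {
                "make_a": k_prod if state != "A" else 0.0,  # gene A transcribed
                "make_b": k_prod if state != "B" else 0.0,
                "deg_a": k_deg * a,
                "deg_b": k_deg * b,
                # B-dimers repress gene A and vice versa; only from "free".
                "rep_A": k_on * b * (b - 1) if state == "free" else 0.0,
                "rep_B": k_on * a * (a - 1) if state == "free" else 0.0,
                "unrep": k_off if state != "free" else 0.0,
            }
            total = sum(rates.values())
            t += rng.exponential(1.0 / total)           # time to next event
            r = rng.uniform(0, total)                   # pick the event
            for name, rate in rates.items():
                r -= rate
                if r < 0:
                    break
            if name == "make_a": a += 1
            elif name == "make_b": b += 1
            elif name == "deg_a": a -= 1
            elif name == "deg_b": b -= 1
            elif name in ("rep_A", "rep_B"): state = name[-1]
            else: state = "free"
            traj.append((t, a, b, state))
        return traj

    traj = simulate()   # long runs show metastable switching between states
    ```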

  18. Model-Based Engineering Design for Trade Space Exploration throughout the Design Cycle

    NASA Technical Reports Server (NTRS)

    Lamassoure, Elisabeth S.; Wall, Stephen D.; Easter, Robert W.

    2004-01-01

    This paper presents ongoing work to standardize model-based system engineering as a complement to point design development in the conceptual design phase of deep space missions. It summarizes two first steps towards practical application of this capability within the framework of concurrent engineering design teams and their customers. The first step is standard generation of system sensitivities models as the output of concurrent engineering design sessions, representing the local trade space around a point design. A review of the chosen model development process, and the results of three case study examples, demonstrate that a simple update to the concurrent engineering design process can easily capture sensitivities to key requirements. It can serve as a valuable tool to analyze design drivers and uncover breakpoints in the design. The second step is development of rough-order-of-magnitude, broad-range-of-validity design models for rapid exploration of the trade space, before selection of a point design. At least one case study demonstrated the feasibility of generating such models in a concurrent engineering session. The experiment indicated that such a capability could yield valid system-level conclusions for a trade space composed of understood elements. Ongoing efforts are assessing the practicality of developing end-to-end system-level design models for use before even convening the first concurrent engineering session, starting with modeling an end-to-end Mars architecture.

  19. Photogrammetric Recording and Reconstruction of Town Scale Models - the Case of the Plan-Relief of Strasbourg

    NASA Astrophysics Data System (ADS)

    Macher, H.; Grussenmeyer, P.; Landes, T.; Halin, G.; Chevrier, C.; Huyghe, O.

    2017-08-01

The French collection of Plan-Reliefs, scale models of fortified towns, constitutes a precious testimony of the history of France. The aim of the URBANIA project is the valorisation and the diffusion of this Heritage through the creation of virtual models. The town scale model of Strasbourg at 1/600, currently exhibited in the Historical Museum of Strasbourg, was selected as a case study. In this paper, the photogrammetric recording of this scale model is first presented. The acquisition protocol as well as the data post-processing are detailed. Then, the modelling of the city, and more specifically of building blocks, is investigated. Based on point clouds of the scale model, the extraction of roof elements is considered. It deals first with the segmentation of the point cloud into building blocks. Then, for each block, points belonging to roofs are identified and the extraction of chimney point clouds as well as roof ridges and roof planes is performed. Finally, the 3D parametric modelling of the building blocks is studied by considering roof polygons and polylines describing chimneys as input. In a future works section, the semantic enrichment and the potential usage scenarios of the scale model are envisaged.

  20. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  1. Putting mechanisms into crop production models

    USDA-ARS?s Scientific Manuscript database

    Crop simulation models dynamically predict processes of carbon, nitrogen, and water balance on daily or hourly time-steps to the point of predicting yield and production at crop maturity. A brief history of these models is reviewed, and their level of mechanism for assimilation and respiration, ran...

  2. Capturing interactions between nitrogen and hydrological cycles under historical climate and land use: Susquehanna watershed analysis with the GFDL land model LM3-TAN

    USGS Publications Warehouse

    Lee, M.; Malyshev, S.; Shevliakova, E.; Milly, Paul C. D.; Jaffé, P. R.

    2014-01-01

    We developed a process model LM3-TAN to assess the combined effects of direct human influences and climate change on terrestrial and aquatic nitrogen (TAN) cycling. The model was developed by expanding NOAA's Geophysical Fluid Dynamics Laboratory land model LM3V-N of coupled terrestrial carbon and nitrogen (C-N) cycling and including new N cycling processes and inputs such as a soil denitrification, point N sources to streams (i.e., sewage), and stream transport and microbial processes. Because the model integrates ecological, hydrological, and biogeochemical processes, it captures key controls of the transport and fate of N in the vegetation–soil–river system in a comprehensive and consistent framework which is responsive to climatic variations and land-use changes. We applied the model at 1/8° resolution for a study of the Susquehanna River Basin. We simulated with LM3-TAN stream dissolved organic-N, ammonium-N, and nitrate-N loads throughout the river network, and we evaluated the modeled loads for 1986–2005 using data from 16 monitoring stations as well as a reported budget for the entire basin. By accounting for interannual hydrologic variability, the model was able to capture interannual variations of stream N loadings. While the model was calibrated with the stream N loads only at the last downstream Susquehanna River Basin Commission station Marietta (40°02' N, 76°32' W), it captured the N loads well at multiple locations within the basin with different climate regimes, land-use types, and associated N sources and transformations in the sub-basins. Furthermore, the calculated and previously reported N budgets agreed well at the level of the whole Susquehanna watershed. Here we illustrate how point and non-point N sources contributing to the various ecosystems are stored, lost, and exported via the river. Local analysis of six sub-basins showed combined effects of land use and climate on soil denitrification rates, with the highest rates in the Lower Susquehanna Sub-Basin (extensive agriculture; Atlantic coastal climate) and the lowest rates in the West Branch Susquehanna Sub-Basin (mostly forest; Great Lakes and Midwest climate). In the re-growing secondary forests, most of the N from non-point sources was stored in the vegetation and soil, but in the agricultural lands most N inputs were removed by soil denitrification, indicating that anthropogenic N applications could drive substantial increase of N2O emission, an intermediate of the denitrification process.

  3. The Influence of Consumer Goals and Marketing Activities on Product Bundling

    NASA Astrophysics Data System (ADS)

    Haijun, Wang

Upon entering a store, consumers are faced with the questions of whether to buy, what to buy, and how much to buy. Consumers include products from different categories in their decision process, and product categories can be related in different ways. Product bundling is a process that involves the choice of at least two non-substitutable items. This research focuses on consumers' explicit product bundling activity at the point of sale. We take the retailers' perspective and therefore leave out consumers' brand choice decisions, concentrating on purchase incidence and quantity. Building on models from the existing research, we integrate behavioural choice analysis and predictive choice modelling through the underlying behavioural models, called random utility maximization (RUM) models. The methodological contribution of this research lies in combining a nested logit choice model with a latent variable factor model. We point out several limitations for both theory and practice at the end.

  4. Discovering Implicit Networks from Point Process Data

    DTIC Science & Technology

    2013-08-03

[Report documentation form and briefing-slide residue; the recoverable content concerns social network analysis (Szell et al., Nature 2012) and modeling dependence in point processes "beyond Poisson" — Strauss, Gibbs, and determinantal processes — with applications in seismology, epidemiology, and economics.]

  5. Tailoring point counts for inference about avian density: dealing with nondetection and availability

    USGS Publications Warehouse

    Johnson, Fred A.; Dorazio, Robert M.; Castellón, Traci D.; Martin, Julien; Garcia, Jay O.; Nichols, James D.

    2014-01-01

    Point counts are commonly used for bird surveys, but interpretation is ambiguous unless there is an accounting for the imperfect detection of individuals. We show how repeated point counts, supplemented by observation distances, can account for two aspects of the counting process: (1) detection of birds conditional on being available for observation and (2) the availability of birds for detection given presence. We propose a hierarchical model that permits the radius in which birds are available for detection to vary with forest stand age (or other relevant habitat features), so that the number of birds available at each location is described by a Poisson-gamma mixture. Conditional on availability, the number of birds detected at each location is modeled by a beta-binomial distribution. We fit this model to repeated point count data of Florida scrub-jays and found evidence that the area in which birds were available for detection decreased with increasing stand age. Estimated density was 0.083 (95%CI: 0.060–0.113) scrub-jays/ha. Point counts of birds have a number of appealing features. Based on our findings, however, an accounting for both components of the counting process may be necessary to ensure that abundance estimates are comparable across time and space. Our approach could easily be adapted to other species and habitats.
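For readers wanting to experiment with this hierarchy, the following is a minimal forward simulation of the described Poisson-gamma abundance and beta-binomial detection layers; all parameter values are illustrative, not the fitted scrub-jay estimates:

```python
# Forward simulation of the hierarchical point-count model sketched
# above: available birds per location follow a Poisson-gamma
# (negative-binomial) mixture, and detections given availability are
# beta-binomial. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_visits = 100, 3

lam = rng.gamma(shape=2.0, scale=1.5, size=n_sites)  # site-level mean abundance
N = rng.poisson(lam)                                 # birds available per site

# beta-binomial detection: per-visit detection probability varies
p = rng.beta(a=4.0, b=6.0, size=(n_sites, n_visits))
counts = rng.binomial(N[:, None], p)                 # repeated point counts

print("mean available:", N.mean(), "mean counted:", counts.mean())
```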

  6. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  7. Pattern analysis of community health center location in Surabaya using spatial Poisson point process

    NASA Astrophysics Data System (ADS)

    Kusumaningrum, Choriah Margareta; Iriawan, Nur; Winahju, Wiwiek Setya

    2017-11-01

Community health centers (puskesmas) are among the closest health service facilities for the community; they provide healthcare at the sub-district level as government-mandated community health clinics located across Indonesia. An increasing number of puskesmas does not by itself guarantee the fulfillment of the basic health services needed in a region. Ideally, a puskesmas should cover at most 30,000 people. The number of puskesmas in Surabaya indicates an unbalanced spread across the city. This research aims to analyze the spread of puskesmas in Surabaya using a spatial Poisson point process model in order to identify effective locations for Surabaya's puskesmas. The results of the analysis showed that the distribution pattern of puskesmas in Surabaya is a non-homogeneous Poisson process and is approximated by a mixture Poisson model. Based on the model estimated using a Bayesian mixture model coupled with an MCMC process, the characteristics of individual puskesmas have no significant influence as factors in deciding whether to add a health center at a given location. Factors related to the areas of the sub-districts have to be considered as covariates when deciding to add puskesmas in Surabaya.
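A non-homogeneous spatial Poisson process of the kind fitted here can be simulated by Lewis-Shedler thinning; the sketch below uses an arbitrary illustrative intensity surface, not the estimated puskesmas model:

```python
# Simulating an inhomogeneous spatial Poisson point process on the
# unit square by thinning (Lewis-Shedler). The intensity surface is
# an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(2)

def intensity(x, y):
    return 200.0 * np.exp(-3.0 * ((x - 0.3) ** 2 + (y - 0.6) ** 2))

lam_max = 200.0                                   # upper bound on intensity
n = rng.poisson(lam_max)                          # homogeneous candidate count
xy = rng.uniform(size=(n, 2))                     # candidate locations
keep = rng.uniform(size=n) < intensity(xy[:, 0], xy[:, 1]) / lam_max
points = xy[keep]                                 # retained points form the
print(len(points), "points retained")             # inhomogeneous process
```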

  8. Current aspects of Salmonella contamination in the US poultry production chain and the potential application of risk strategies in understanding emerging hazards.

    PubMed

    Rajan, Kalavathy; Shi, Zhaohao; Ricke, Steven C

    2017-05-01

    One of the leading causes of foodborne illness in poultry products is Salmonella enterica. Salmonella hazards in poultry may be estimated and possible control methods modeled and evaluated through the use of quantitative microbiological risk assessment (QMRA) models and tools. From farm to table, there are many possible routes of Salmonella dissemination and contamination in poultry. From the time chicks are hatched through growth, transportation, processing, storage, preparation, and finally consumption, the product could be contaminated through exposure to different materials and sources. Examination of each step of the process is necessary as well as an examination of the overall picture to create effective countermeasures against contamination and prevent disease. QMRA simulation models can use either point estimates or probability distributions to examine variables such as Salmonella concentrations at retail or at any given point of processing to gain insight on the chance of illness due to Salmonella ingestion. For modeling Salmonella risk in poultry, it is important to look at variables such as Salmonella transfer and cross contamination during processing. QMRA results may be useful for the identification and control of critical sources of Salmonella contamination.

  9. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.

    PubMed

    Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel

    2016-05-01

    Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.

  10. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time scale distortion on each path is assumed to be the same (a single scale channel model, in contrast to a more general multi scale model). A novel technique, called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi scale model for the underwater channel and assume that single scale processing is performed at the receiver. We then derive optimized front end pre-processing techniques to reduce the interference caused during single scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  11. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  12. A stochastic model for eye movements during fixation on a stationary target.

    NASA Technical Reports Server (NTRS)

    Vasudevan, R.; Phatak, A. V.; Smith, J. D.

    1971-01-01

    A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation taking into consideration the microsaccadic jumps pictured as being due to point processes and the high frequency muscle tremor, represented as a white noise. The transform of the probability density function for local area motion is obtained, leading to explicit expressions for their means and moments. Evaluation of these moments based on the model is comparable with experimental results.
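The structure of such a model — relaxational drift plus white-noise tremor plus Poisson-timed microsaccadic jumps — can be sketched with an Euler-Maruyama scheme; all coefficients below are illustrative stand-ins for the paper's parameters:

```python
# Euler-Maruyama integration of a Langevin-type model of fixational
# eye drift, with microsaccades arriving as a Poisson point process.
# Coefficients are illustrative, not the paper's parameters.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 1e-3, 2.0                 # step (s), duration (s)
n = int(T / dt)
beta, sigma = 20.0, 0.05          # drift relaxation rate, tremor strength
rate, jump_sd = 1.5, 0.2          # microsaccade rate (1/s) and size (deg)

x = np.zeros(n)
for k in range(1, n):
    jump = rng.normal(0, jump_sd) if rng.uniform() < rate * dt else 0.0
    x[k] = (x[k - 1]
            - beta * x[k - 1] * dt                 # restoring drift
            + sigma * np.sqrt(dt) * rng.normal()   # white-noise tremor
            + jump)                                # microsaccadic jump
print("RMS eye position:", x.std())
```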

  13. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least square approximation is used in order to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.

  14. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
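The basic quantity of this analysis, point density on a grid, is straightforward to compute from a point cloud; below is a minimal sketch with synthetic data (a real cloud would be loaded from file):

```python
# Estimating LiDAR point density (pt/m^2) on a grid from an N x 3
# point array, the quantity the study uses to model occlusion.
# `points` here is synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(4)
points = rng.uniform(0, 100, size=(500_000, 3))    # synthetic x, y, z (m)

cell = 5.0                                         # grid cell size (m)
bins = np.arange(0, 100 + cell, cell)
H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[bins, bins])
density = H / cell**2                              # points per m^2 per cell
print("median density: %.1f pt/m^2" % np.median(density))
```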

  15. Insights into mortality patterns and causes of death through a process point of view model

    PubMed Central

    Anderson, James J.; Li, Ting; Sharrow, David J.

    2016-01-01

    Process point of view models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality where distal factors defined the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process point of view, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the 20th century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high magnitude disease challenges on individuals at all vitality levels to low magnitude stress challenges on low vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality presumably resulting from reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age are discussed. PMID:27885527

  16. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

For faded relics such as the Terracotta Army, the 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping between an image sequence and the three-dimensional (3D) point cloud model collected by Handyscan3D. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of a kneeling figurine, a Tangsancai lady, and a general figurine. These results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  17. Effect of pH and pulsed electric field process parameters on the aflatoxin reduction in model system using response surface methodology: Effect of pH and PEF on Aflatoxin Reduction.

    PubMed

    Vijayalakshmi, Subramanian; Nadanasabhapathi, Shanmugam; Kumar, Ranganathan; Sunny Kumar, S

    2018-03-01

The presence of aflatoxin, a carcinogenic and toxigenic secondary metabolite produced by Aspergillus species, in food matrices has been a major worldwide problem for years. Food processing methods such as roasting, extrusion, etc. have been employed for effective destruction of aflatoxins, which are known for their thermo-stable nature. The high temperature treatment adversely affects the nutritive and other quality attributes of the food, leading to the necessity of applying non-thermal processing techniques such as ultrasonication, gamma irradiation, high pressure processing, pulsed electric field (PEF), etc. The present study focused on analysing the efficacy of the PEF process in the reduction of the toxin content, which was subsequently quantified using HPLC. The process parameters for model systems (potato dextrose agar) of different pH artificially spiked with an aflatoxin mix standard were optimized using response surface methodology. The effects of pH (4-10), pulse width (10-26 µs) and output voltage (20-65%) on the responses, aflatoxin B1 reduction and total aflatoxin reduction (%), fitted a 2FI model and a quadratic model, respectively. The response surface plots obtained for the processes were of the saddle point type, with no minimum or maximum response at the centre point. The implemented numerical optimization showed that the predicted and actual values were similar, proving the adequacy of the fitted models, and also demonstrated the possible application of PEF in toxin reduction.
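The quadratic response surface family used here can be fitted by ordinary least squares on an expanded design matrix; the sketch below uses synthetic placeholder data over the stated factor ranges, not the study's measurements:

```python
# Least-squares fit of a quadratic response surface in three factors
# (pH, pulse width, output voltage), the model family used for total
# aflatoxin reduction. Data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform([4, 10, 20], [10, 26, 65], size=(30, 3))     # pH, us, %
y = 40 + 2 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 2, 30)  # fake response

def quad_design(X):
    """Columns: 1, x_i, x_i*x_j (i<j), x_i^2 -- a full quadratic model."""
    n, p = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(p)]
    cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i + 1, p)]
    cols += [X[:, i] ** 2 for i in range(p)]
    return np.column_stack(cols)

A = quad_design(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```

A saddle-shaped surface, as reported above, corresponds to the fitted quadratic form having eigenvalues of mixed sign.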

  18. Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.

    PubMed

    Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran

    2018-06-15

Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closing, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters which would define the physical, chemical and biological processes in the closed methanogenic landfill and to make the monitoring process less expensive. The research included development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research, the minimal set of parameters which should be included in landfill monitoring, is precisely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and minimum spanning tree. By projecting local sample points to the fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Above all, an appropriate k-neighborhood can help to recognize insufficiently sampled areas and boundary parts, and the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
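The tangent-plane projection step mentioned above is commonly implemented as a PCA of each point's k-neighborhood; below is a minimal sketch (synthetic data; a real implementation would use a k-d tree for the neighbor search):

```python
# Local tangent-plane estimation by PCA of a point's k-neighborhood,
# the projection step used before local triangulation. Synthetic
# near-planar data stand in for a scanned point cloud.
import numpy as np

rng = np.random.default_rng(6)
cloud = rng.uniform(size=(2000, 3))
cloud[:, 2] *= 0.05                       # flatten: roughly planar cloud

def tangent_plane(cloud, idx, k=12):
    """Return (centroid, unit normal) of the k-neighborhood of point idx."""
    d = np.linalg.norm(cloud - cloud[idx], axis=1)
    nbrs = cloud[np.argsort(d)[:k]]
    c = nbrs.mean(axis=0)
    # the normal is the direction of least variance: the right singular
    # vector with the smallest singular value of the centered neighbors
    _, _, vt = np.linalg.svd(nbrs - c)
    return c, vt[-1]

c, n = tangent_plane(cloud, 0)
print("estimated normal:", np.round(n, 3))
```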

  20. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
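Models in this family build on Le Chatelier's rule, under which the mixture flash point T solves sum_i x_i γ_i P_i(T) / P_i(T_fp,i) = 1. The sketch below is the ideal-solution special case (all γ_i = 1, so it omits the salt effect the paper models), with illustrative Antoine constants:

```python
# Minimal ideal-solution flash-point sketch based on Le Chatelier's
# rule: sum_i x_i * P_i(T) / P_i(T_fp,i) = 1, solved for T. The
# non-ideal model weights each term with an activity coefficient
# (affected by the dissolved salt); here gamma_i = 1. Antoine
# constants and pure-component flash points are illustrative only.
import numpy as np
from scipy.optimize import brentq

components = [
    # (mole fraction, A, B, C, pure flash point degC)
    (0.6, 7.2, 1600.0, 220.0, 13.0),
    (0.4, 7.0, 1500.0, 210.0, 40.0),
]

def psat(A, B, C, T):
    """Antoine vapor pressure, log10 P(mmHg) = A - B / (C + T)."""
    return 10 ** (A - B / (C + T))

def le_chatelier(T):
    return sum(x * psat(A, B, C, T) / psat(A, B, C, Tfp)
               for x, A, B, C, Tfp in components) - 1.0

T_fp = brentq(le_chatelier, -20.0, 100.0)   # root = mixture flash point
print("predicted mixture flash point: %.1f degC" % T_fp)
```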

  1. [On-line processing mechanisms in text comprehension: a theoretical review on constructing situation models].

    PubMed

    Iseki, Ryuta

    2004-12-01

This article reviewed research on the construction of situation models during reading. To position the variety of research within the overall process appropriately, a unitary framework was devised in terms of three theories of on-line processing: the resonance process, the event-indexing model, and constructionist theory. The resonance process was treated as a basic activation mechanism in the framework. The event-indexing model was regarded as a screening system which selects and encodes activated information into situation models along situational dimensions. Constructionist theory was considered to have a supervisory role based on coherence and explanation. From the view of the unitary framework, some problems concerning each theory were examined and possible interpretations were given. Finally, it was pointed out that there has been little theoretical argument about associative processing at the global level and about encoding text- and inference-information into long-term memory.

  2. OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation

    NASA Astrophysics Data System (ADS)

    Empereur-Mot, Luc; Villemin, Thierry

    2003-03-01

    Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be plane, allows data from natural networks to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.

  3. An Evaluation of Alternatives for Processing of Administrative Pay Vouchers: A Simulation Approach.

    DTIC Science & Technology

    1982-09-01

[Report documentation form residue. Keywords: Finance, Travel Voucher, Q-GERT, Productivity, Personnel Forecasts, Simulation Model. Recoverable abstract: the ... Finance Office (ACF) has devised a Point System for use in determining the productivity of the ACF Travel Section (ACFTT). This Point System sets values (...5 to 5+) to be assigned to incoming travel vouchers based on voucher complexity. This research set objectives of (1) building an ACFTT model that ...]

  4. An Improved Model of Nonuniform Coleochaete Cell Division.

    PubMed

    Wang, Yuandi; Cong, Jinyu

    2016-08-01

    Cell division is a key biological process in which cells divide forming new daughter cells. In the present study, we investigate continuously how a Coleochaete cell divides by introducing a modified differential equation model in parametric equation form. We discuss both the influence of "dead" cells and the effects of various end-points on the formation of the new cells' boundaries. We find that the boundary condition on the free end-point is different from that on the fixed end-point; the former has a direction perpendicular to the surface. It is also shown that the outer boundaries of new cells are arc-shaped. The numerical experiments and theoretical analyses for this model to construct the outer boundary are given.

  5. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574

  6. Distinguishing synchronous and time-varying synergies using point process interval statistics: motor primitives in frog and rat

    PubMed Central

    Hart, Corey B.; Giszter, Simon F.

    2013-01-01

We present and apply a method that uses point process statistics to discriminate the forms of synergies in motor pattern data, prior to explicit synergy extraction. The method uses electromyogram (EMG) pulse peak timing or onset timing. Peak timing is preferable in complex patterns where pulse onsets may be overlapping. An interval statistic derived from the point processes of EMG peak timings distinguishes time-varying synergies from synchronous synergies (SS). Model data show that the statistic is robust under most conditions. Its application to both frog hindlimb EMG and rat locomotion hindlimb EMG shows that data from these preparations are clearly most consistent with synchronous synergy models (p < 0.001). Additional direct tests of pulse and interval relations in frog data further bolster the support for synchronous synergy mechanisms in these data. Our method and analyses support separated control of rhythm and pattern of motor primitives, with the low level execution primitives comprising pulsed SS in both frog and rat, and both episodic and rhythmic behaviors. PMID:23675341
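The paper's exact statistic is not reproduced here, but the underlying idea can be illustrated: treat per-muscle peak times as point processes and examine cross-muscle lag intervals, which concentrate near zero for synchronous synergies and at fixed nonzero offsets for time-varying ones. A hypothetical two-muscle sketch:

```python
# A generic illustration of the approach (not the paper's exact
# statistic): per-muscle EMG peak times as point processes, with
# cross-muscle peak-lag intervals as the discriminating quantity.
import numpy as np

rng = np.random.default_rng(7)
bursts = np.cumsum(rng.exponential(0.5, size=200))   # shared burst times (s)
peaks_a = bursts + rng.normal(0, 0.005, 200)         # muscle A peak times
peaks_b = bursts + rng.normal(0, 0.005, 200)         # muscle B: synchronous

# for each A peak, the lag to the nearest B peak
nearest = np.abs(peaks_b[:, None] - peaks_a[None, :]).argmin(axis=0)
lags = peaks_b[nearest] - peaks_a
print("median |lag| = %.1f ms" % (1e3 * np.median(np.abs(lags))))
```

For a time-varying synergy one would instead shift peaks_b by a fixed delay, and the lag distribution would concentrate at that offset rather than at zero.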

  7. Modelling of point and diffuse pollution: application of the Moneris model in the Ipojuca river basin, Pernambuco State, Brazil.

    PubMed

    de Lima Barros, Alessandra Maciel; do Carmo Sobral, Maria; Gunkel, Günter

    2013-01-01

    Emissions of pollutants and nutrients are causing several problems in aquatic ecosystems, and in general an excess of nutrients, specifically nitrogen and phosphorus, is responsible for the eutrophication process in water bodies. In most developed countries, more attention is given to diffuse pollution because problems with point pollution have already been solved. In many non-developed countries basic data for point and diffuse pollution are not available. The focus of the presented studies is to quantify nutrient emissions from point and diffuse sources in the Ipojuca river basin, Pernambuco State, Brazil, using the Moneris model (Modelling Nutrient Emissions in River Systems). This model has been developed in Germany and has already been implemented in more than 600 river basins. The model is mainly based on river flow, water quality and geographical information system data. According to the Moneris model results, untreated domestic sewage is the major source of nutrients in the Ipojuca river basin. The Moneris model has shown itself to be a useful tool that allows the identification and quantification of point and diffuse nutrient sources, thus enabling the adoption of measures to reduce them. The Moneris model, conducted for the first time in a tropical river basin with intermittent flow, can be used as a reference for implementation in other watersheds.

  8. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topography mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.

  9. A Model of Small Group Facilitator Competencies

    ERIC Educational Resources Information Center

    Kolb, Judith A.; Jin, Sungmi; Song, Ji Hoon

    2008-01-01

    This study used small group theory, quantitative and qualitative data collected from experienced practicing facilitators at three points of time, and a building block process of collection, analysis, further collection, and consolidation to develop a model of small group facilitator competencies. The proposed model has five components:…

  10. Can the History of Science Contribute to Modelling in Physics Teaching?

    NASA Astrophysics Data System (ADS)

    Machado, Juliana; Braga, Marco Antônio Barbosa

    2016-10-01

    A characterization of the modelling process in science is proposed for science education, based on Mario Bunge's ideas about the construction of models in science. Galileo's Dialogues are analysed as a potentially fruitful starting point to implement strategies aimed at modelling in the classroom in the light of that proposal. It is argued that a modelling process for science education can be conceived as the evolution from phenomenological approaches towards more representational ones, emphasizing the role of abstraction and idealization in model construction. The shift of reference of theories—from sensible objects to conceptual objects—and the black-box models construction process, which are both explicitly presented features in Galileo's Dialogues, are indicated as highly relevant aspects for modelling in science education.

  11. The Impact of Mutation and Gene Conversion on the Local Diversification of Antigen Genes in African Trypanosomes

    PubMed Central

    Gjini, Erida; Haydon, Daniel T.; Barry, J. David; Cobbold, Christina A.

    2012-01-01

    Patterns of genetic diversity in parasite antigen gene families hold important information about their potential to generate antigenic variation within and between hosts. The evolution of such gene families is typically driven by gene duplication, followed by point mutation and gene conversion. There is great interest in estimating the rates of these processes from molecular sequences for understanding the evolution of the pathogen and its significance for infection processes. In this study, a series of models are constructed to investigate hypotheses about the nucleotide diversity patterns between closely related gene sequences from the antigen gene archive of the African trypanosome, the protozoan parasite causative of human sleeping sickness in Equatorial Africa. We use a hidden Markov model approach to identify two scales of diversification: clustering of sequence mismatches, a putative indicator of gene conversion events with other lower-identity donor genes in the archive, and at a sparser scale, isolated mismatches, likely arising from independent point mutations. In addition to quantifying the respective probabilities of occurrence of these two processes, our approach yields estimates for the gene conversion tract length distribution and the average diversity contributed locally by conversion events. Model fitting is conducted using a Bayesian framework. We find that diversifying gene conversion events with lower-identity partners occur at least five times less frequently than point mutations on variant surface glycoprotein (VSG) pairs, and the average imported conversion tract is between 14 and 25 nucleotides long. However, because of the high diversity introduced by gene conversion, the two processes have almost equal impact on the per-nucleotide rate of sequence diversification between VSG subfamily members. We are able to disentangle the most likely locations of point mutations and conversions on each aligned gene pair. PMID:22735079

  12. Toward the Darwinian transition: Switching between distributed and speciated states in a simple model of early life.

    PubMed

    Arnoldt, Hinrich; Strogatz, Steven H; Timme, Marc

    2015-01-01

    It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for stochastic processes potentially contributing to this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and essentially contributed to the emergence of vertical descent and the first well-defined species in early evolution. A systematic nonlinear analysis of the stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable direction out of early collective evolution, potentially enabling the start of individuality and vertical Darwinian evolution.

  13. A two-phase Poisson process model and its application to analysis of cancer mortality among A-bomb survivors.

    PubMed

    Ohtaki, Megu; Tonda, Tetsuji; Aihara, Kazuyuki

    2015-10-01

    We consider a two-phase Poisson process model where only early successive transitions are assumed to be sensitive to exposure. In the case where intensity transitions are low, we derive analytically an approximate formula for the distribution of time to event for the excess hazard ratio (EHR) due to a single point exposure. The formula for EHR is a polynomial in exposure dose. Since the formula for EHR contains no unknown parameters except for the number of total stages, number of exposure-sensitive stages, and a coefficient of exposure effect, it is applicable easily under a variety of situations where there exists a possible latency time from a single point exposure to occurrence of event. Based on the multistage hypothesis of cancer, we formulate a radiation carcinogenesis model in which only some early consecutive stages of the process are sensitive to exposure, whereas later stages are not affected. An illustrative analysis using the proposed model is given for cancer mortality among A-bomb survivors. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control; shunt regulators; and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed. Stability of the solar array system operating points with a nonlinear load is analyzed. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system mode of operation is described using the DET spacecraft power system. The DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  15. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
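As a concrete illustration of the bootstrap use case described above, the following sketch computes a percentile confidence interval for a summary statistic of replicated planar point patterns, resampling the patterns (not the points) with replacement; the data are synthetic:

```python
# Bootstrap percentile confidence interval for a summary statistic
# (mean nearest-neighbour distance) of replicated planar point
# patterns. The replicated patterns are resampled with replacement.
import numpy as np

rng = np.random.default_rng(8)
patterns = [rng.uniform(size=(rng.poisson(80), 2)) for _ in range(12)]

def mean_nn(p):
    """Mean nearest-neighbour distance of one pattern."""
    d = np.linalg.norm(p[:, None] - p[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

stats = np.array([mean_nn(p) for p in patterns])
boot = np.array([rng.choice(stats, size=len(stats)).mean()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print("mean NN distance %.4f, 95%% bootstrap CI [%.4f, %.4f]"
      % (stats.mean(), lo, hi))
```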

  16. Proactive action preparation: seeing action preparation as a continuous and proactive process.

    PubMed

    Pezzulo, Giovanni; Ognibene, Dimitri

    2012-07-01

    In this paper, we aim to elucidate the processes that occur during action preparation from both a conceptual and a computational point of view. We first introduce the traditional, serial model of goal-directed action and discuss from a computational viewpoint its subprocesses occurring during the two phases of covert action preparation and overt motor control. Then, we discuss recent evidence indicating that these subprocesses are highly intertwined at representational and neural levels, which undermines the validity of the serial model and points instead to a parallel model of action specification and selection. Within the parallel view, we analyze the case of delayed choice, arguing that action preparation can be proactive, and preparatory processes can take place even before decisions are made. Specifically, we discuss how prior knowledge and prospective abilities can be used to maximize utility even before deciding what to do. To support our view, we present a computational implementation of (an approximated version of) proactive action preparation, showing its advantages in a simulated tennis-like scenario.

  17. Some Applications of the Model of the Partition Points on a One Dimensional Lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1996-02-01

We have shown that by using a model of a gas of partition points on a one-dimensional lattice, we can find some results about the saturation curves for enzyme kinetics or the average domain size, which we had obtained before by using a correlated walks' theory or a probabilistic (combinatoric) approach. We have studied, using the same model and the same technique, the denaturation process, i.e., the breaking of the hydrogen bonds connecting the two strands, under treatment by heat. Also, we have discussed, without entering into details, the problem related to the spread of an infectious disease and the stochastic model of partition points. We think that this model, being simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. PACS NOS.: 05.50.+q; 05.70.Ce; 64.10.+h; 87.10.+e; 87.15.Rn
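The average domain size is easy to reproduce numerically: dropping M partition points on a lattice of N sites yields M+1 domains of mean size N/(M+1). A minimal simulation with illustrative parameters:

```python
# Gas of partition points on a one-dimensional lattice: place M
# partition points at random among the N-1 internal bonds and measure
# the average domain (segment) size. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(9)
N, M, reps = 1000, 50, 200          # sites, partition points, replicates

sizes = []
for _ in range(reps):
    cuts = np.sort(rng.choice(np.arange(1, N), size=M, replace=False))
    edges = np.concatenate(([0], cuts, [N]))
    sizes.extend(np.diff(edges))    # domain lengths between cuts
print("average domain size: %.2f (theory N/(M+1) = %.2f)"
      % (np.mean(sizes), N / (M + 1)))
```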

  18. Multiple tipping points and optimal repairing in interacting networks

    PubMed Central

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-01-01

    Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803

  19. Highway extraction from high resolution aerial photography using a geometric active contour model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model. This novel model seeks to minimize an objective function that transforms a problem of propagation of regular curves into an optimization problem. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can deal with topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process. An error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint to the original GAC model, thereby giving the proposed method the ability to integrate both boundary and region-based information during curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model. It effectively saves processing time and computations. Second, a seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. A seed point is usually placed at an end node of highway segments close to the boundary of the image or at a position where possible blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic. In the process, vehicles on the highway extracted from the mosaic were detected with an 83% success rate.

  20. Groundwater recharge from point to catchment scale

    NASA Astrophysics Data System (ADS)

    Leterme, Bertrand; Di Ciacca, Antoine; Laloy, Eric; Jacques, Diederik

    2016-04-01

    Accurate estimation of groundwater recharge is a challenging task, as only a few devices (if any) can measure it directly. In this study, we discuss how groundwater recharge can be calculated at different temporal and spatial scales in the Kleine Nete catchment (Belgium). A small monitoring network is being installed to monitor changes in the dominant processes and to address data availability as one moves from the point to the catchment scale. At the point scale, groundwater recharge is estimated using inversion of soil moisture and/or water potential data and stable isotope concentrations (Koeniger et al. 2015). At the plot scale, it is proposed to monitor the discharge of a small drainage ditch in order to calculate the field groundwater recharge. Electrical conductivity measurements are necessary to separate the shallow from the deeper groundwater contribution to the ditch discharge (see Di Ciacca et al. poster in session HS8.3.4). At this scale, two- or three-dimensional process-based vadose zone models will be used to model subsurface flow. At the catchment scale, however, using a mechanistic, process-based model to estimate groundwater recharge is debatable (because of, e.g., the presence of numerous drainage ditches, mixed land use pixels, etc.). We therefore investigate to what extent various types of surrogate models can be used to make the necessary upscaling from the plot scale to the scale of the whole Kleine Nete catchment. Ref. Koeniger P, Gaj M, Beyer M, Himmelsbach T (2015) Review on soil water isotope based groundwater recharge estimations. Hydrological Processes, DOI: 10.1002/hyp.10775

  1. Modeling nuclear processes by Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic system behaviour. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes, including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutron effect, the reactor point kinetics equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
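
    The reactor point kinetics equations with delayed neutron groups mentioned above are equally easy to set up outside Simulink. Below is a minimal Python sketch (not from the paper) that integrates the six-group point kinetics ODEs, dn/dt = ((ρ − β)/Λ)n + Σ λᵢCᵢ and dCᵢ/dt = (βᵢ/Λ)n − λᵢCᵢ, for a step reactivity insertion; the generation time, reactivity, and group constants are illustrative values only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Six-group delayed-neutron data (typical U-235 figures; placeholders here)
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants, 1/s
beta   = beta_i.sum()
Lambda = 1.0e-4            # neutron generation time in s (assumed)
rho    = 0.5 * beta        # step insertion of +0.5 dollars (assumed)

def point_kinetics(t, y):
    n, C = y[0], y[1:]
    dn = (rho - beta) / Lambda * n + np.dot(lam_i, C)
    dC = beta_i / Lambda * n - lam_i * C
    return np.concatenate(([dn], dC))

C0 = beta_i / (lam_i * Lambda)        # equilibrium precursors for n(0) = 1
sol = solve_ivp(point_kinetics, (0.0, 10.0), np.concatenate(([1.0], C0)),
                method="LSODA", rtol=1e-8)
print("relative power after 10 s:", sol.y[0, -1])
```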

  2. Investigating the 3-D Subduction Initiation Processes at Transform Faults and Passive Margins

    NASA Astrophysics Data System (ADS)

    Peng, H.; Leng, W.

    2017-12-01

    Studying the processes of subduction initiation is key to understanding the Wilson cycle and improving the theory of plate tectonics. Previous studies investigated subduction initiation with geological synthesis and geodynamic modeling methods, finding that subduction tends to initiate at transform faults close to oceanic arcs, and that its evolutionary processes and surface volcanic expressions are controlled by plate strength. However, these studies were mainly conducted with 2-D models, which cannot deal with lateral heterogeneities of crustal thickness and strength along the plate interfaces. Here we extend the 2-D model to a 3-D parallel subduction model with high computational efficiency. With the new model, we study the dynamic controlling factors, morphological evolution, and surface expressions of subduction initiation with lateral heterogeneities of material properties along transform faults and passive margins. We find that lateral lithospheric heterogeneities control the starting point of subduction initiation along the newly formed trenches and the propagation speed of trench formation. New subduction tends to initiate first at the point where material properties change along the transform faults or passive margins. Such findings may be applied to explain the formation of the Izu-Bonin-Mariana (IBM) subduction zone in the western Pacific and the Scotia subduction zone at the southern end of South America. Our results enhance our understanding of the formation of new trenches and help provide geodynamic modeling explanations for the observed remnant slabs in the upper mantle and the surface volcanic expressions.

  3. An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control

    DTIC Science & Technology

    1989-12-01

    Zicker, William L. MMAE-Based Control with Space-Time Point Process Observations. IEEE Transactions on Aerospace and Electronic Systems, AES-21(3):292-300, 1985. … Transactions of the Conference of Army Mathematicians, Bethesda MD, 1982. (AD-P001 033). 65. William L. Zicker. Pointing and Tracking of Particle …

  4. Codimension-Two Bifurcation, Chaos and Control in a Discrete-Time Information Diffusion Model

    NASA Astrophysics Data System (ADS)

    Ren, Jingli; Yu, Liping

    2016-12-01

    In this paper, we present a discrete model to illustrate how two pieces of information interact within online social networks, and we investigate the dynamics of the discrete-time information diffusion model in three types: reverse type, intervention type and mutualistic type. It is found that the model has orbits with period 2, 4, 6, 8, 12, 16, 20, 30, a quasiperiodic orbit, and undergoes a heteroclinic bifurcation near the 1:2 resonance point, a homoclinic structure near the 1:3 resonance point, and an invariant cycle bifurcating from the period-4 orbit near the 1:4 resonance point. Moreover, in order to regulate the information diffusion process and information security, we give two control strategies, a hybrid control method and a feedback controller of polynomial functions, to control chaos, flip bifurcation, and the 1:2, 1:3 and 1:4 resonances, respectively.

  5. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    NASA Astrophysics Data System (ADS)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to this set according to the previously selected salient points, and the decision function is updated with every addition. This creates a selection condition under which the next point is not extracted from the same protrusion part, so that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
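
    The first step above, scoring vertices by average geodesic distance, can be approximated on a point cloud with shortest paths over a k-nearest-neighbour graph. The sketch below is a generic approximation under that assumption, not the authors' code; `k` and `n_samples` are arbitrary choices.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def first_salient_point(points, k=8, n_samples=50, seed=0):
    """Return the index of the vertex with the largest average geodesic
    distance to a random sample, approximated on a k-NN graph."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)   # first neighbour is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(len(points), len(points)))
    sample = rng.choice(len(points), size=min(n_samples, len(points)), replace=False)
    d = dijkstra(graph, directed=False, indices=sample)  # sample-to-all path lengths
    return int(np.argmax(d.mean(axis=0)))     # protrusion tips score highest
```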

  6. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
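
    The replacement of n time bins by q quadrature nodes is easy to see in code. The sketch below evaluates the continuous-time log-likelihood log L = Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt with Gauss-Legendre quadrature for a plain inhomogeneous Poisson model; the paper's conditional-intensity models are richer, and the example rate function is invented.

```python
import numpy as np

def loglik_continuous(spike_times, rate_fn, T, q=60):
    """Continuous-time Poisson log-likelihood using q Gauss-Legendre nodes
    in place of a fine discretization of [0, T]."""
    nodes, weights = np.polynomial.legendre.leggauss(q)   # nodes on [-1, 1]
    t = 0.5 * T * (nodes + 1.0)                           # mapped to [0, T]
    integral = 0.5 * T * np.dot(weights, rate_fn(t))
    return np.sum(np.log(rate_fn(np.asarray(spike_times)))) - integral

# Example: slowly modulated rate and a handful of spikes over 10 s
rate = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * 0.2 * t)
print(loglik_continuous([0.4, 1.9, 3.2, 5.5, 7.1], rate, T=10.0, q=60))
```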

  7. Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies

    NASA Astrophysics Data System (ADS)

    Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.

    2017-12-01

    Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to specify accurately. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models, enabling direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide the necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects the scaling of biophysical processes from leaves to canopies.

  8. Quantum information processing by a continuous Maxwell demon

    NASA Astrophysics Data System (ADS)

    Stevens, Josey; Deffner, Sebastian

    Quantum computing is believed to be fundamentally superior to classical computing; however, quantifying the specific thermodynamic advantage has been elusive. Experimentally motivated, we generalize previous minimal models of discrete demons to continuous state space. Analyzing our model allows one to quantify the thermodynamic resources necessary to process quantum information. By further invoking the semi-classical limit, we compare the quantum demon with its classical analogue. Finally, this model also serves as a starting point for studying open quantum systems.

  9. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities are routine in nature, and such routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools that allow building custom software to handle different engineering activities. In this paper, original software developed to automate engineering tasks at the stage of a product's geometrical shape design is presented. The software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio using the .NET technology and the NX SNAP library. The software allows designing and modelling of spur and helicoidal involute gears; moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with standard tools in specialized CAD systems. This stems from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear, and the base circle, respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base and addendum circles; therefore the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing gear wheel modelling time to several seconds. During the conducted research, an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from the analysis are presented in detail.
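
    For readers who want to reproduce the denser involute sampling, the sketch below generates n evenly spaced points on a circle involute between the base and addendum circles. It uses the standard parametrization x(t) = r_b(cos t + t sin t), y(t) = r_b(sin t − t cos t); it is generic, not the Generator module's code, and the radii in the usage line are made up.

```python
import numpy as np

def involute_points(r_base, r_addendum, n=11):
    """Sample n points on a circle involute from the base circle out to the
    addendum circle; at radius r the roll angle is sqrt((r/r_b)**2 - 1)."""
    t_max = np.sqrt((r_addendum / r_base) ** 2 - 1.0)
    t = np.linspace(0.0, t_max, n)
    x = r_base * (np.cos(t) + t * np.sin(t))
    y = r_base * (np.sin(t) - t * np.cos(t))
    return np.column_stack([x, y])

pts = involute_points(40.0, 48.0)   # e.g. base radius 40 mm, addendum 48 mm
```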

  10. A user-friendly modified pore-solid fractal model

    PubMed Central

    Ding, Dian-yuan; Zhao, Ying; Feng, Hao; Si, Bing-cheng; Hill, Robert Lee

    2016-01-01

    The primary objective of this study was to evaluate a range of calculation points on water retention curves (WRC) instead of the singularity point at air-entry suction in the pore-solid fractal (PSF) model, which additionally considered the hysteresis effect based on the PSF theory. The modified pore-solid fractal (M-PSF) model was tested using 26 soil samples from Yangling on the Loess Plateau in China and 54 soil samples from the Unsaturated Soil Hydraulic Database. The derivation results showed that the M-PSF model is user-friendly and flexible for a wide range of calculation point options. The model theoretically describes the primary differences between the soil moisture desorption and adsorption processes by means of fractal dimensions. The M-PSF model demonstrated good performance, particularly at calculation points corresponding to suctions from 100 cm to 1000 cm. Furthermore, the M-PSF model, using the fractal dimension of the particle size distribution, exhibited acceptable performance in WRC predictions for differently textured soils when the suction values were ≥100 cm. To fully understand the function of hysteresis in the PSF theory, the role of allowable and accessible pores must be examined. PMID:27996013

  11. Anatomic modeling using 3D printing: quality assurance and optimization.

    PubMed

    Leng, Shuai; McGee, Kiaran; Morris, Jonathan; Alexander, Amy; Kuhlmann, Joel; Vrieze, Thomas; McCollough, Cynthia H; Matsumoto, Jane

    2017-01-01

    The purpose of this study is to provide a framework for the development of a quality assurance (QA) program for use in medical 3D printing applications. An interdisciplinary QA team was built with expertise from all aspects of 3D printing. A systematic QA approach was established to assess the accuracy and precision of each step of the 3D printing process, including image data acquisition, segmentation and processing, and 3D printing and cleaning. Validation of printed models was performed by qualitative inspection and quantitative measurement. The latter was achieved by scanning the printed model with a high-resolution CT scanner to obtain images of the printed model, which were registered to the original patient images; the distance between them was then calculated on a point-by-point basis. A phantom-based QA process, with two QA phantoms, was also developed. The phantoms went through the same 3D printing process as the patient models to generate printed QA models. Physical measurements, fit tests, and image-based measurements were performed to compare the printed 3D model to the original QA phantom, with its known size and shape, providing an end-to-end assessment of the errors involved in the complete 3D printing process. Measured differences between the printed model and the original QA phantom ranged from -0.32 mm to 0.13 mm for the line pair pattern. For a radial-ulna patient model, the mean distance between the original data set and the scanned printed model was -0.12 mm (ranging from -0.57 to 0.34 mm), with a standard deviation of 0.17 mm. A comprehensive QA process from image acquisition to completed model has been developed. Such a program is essential to ensure the required accuracy of 3D printed models for medical applications.
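
    The point-by-point comparison step generalizes readily: after registering the scan of the printed model to the reference surface, compute nearest-neighbour distances and summarize them. The sketch below reports unsigned distances only; the signed deviations quoted in the study additionally require surface normals to assign a sign. It is a schematic outline, not the study's pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(reference_pts, printed_pts):
    """Unsigned point-by-point deviation between a registered scan of the
    printed model and the reference segmentation surface ((N, 3) arrays).
    Signed distances would additionally need surface normals."""
    d, _ = cKDTree(reference_pts).query(printed_pts)
    return {"mean": d.mean(), "sd": d.std(), "max": d.max()}
```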

  12. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
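
    A convenient diagnostic for the long-range dependence discussed above is the Fano-factor curve of a spike train: for renewal processes the Fano factor of window counts levels off as the window grows, whereas long-range-dependent trains show sustained power-law-like growth. The sketch below is a generic check, not the paper's analysis code.

```python
import numpy as np

def fano_factor(spike_times, window_sizes, T):
    """Fano factor F(w) = var(N_w) / mean(N_w) of spike counts in windows of
    width w; growth of F(w) with w suggests long-range dependence."""
    spike_times = np.asarray(spike_times)
    out = []
    for w in window_sizes:
        counts, _ = np.histogram(spike_times, bins=np.arange(0.0, T + w, w))
        out.append(counts.var() / counts.mean())
    return np.array(out)
```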

  13. Meshless Local Petrov-Galerkin Method for Solving Contact, Impact and Penetration Problems

    DTIC Science & Technology

    2006-11-30

    Crack Growth 3 … point of view, this approach makes full use of the existing FE models to avoid any model regeneration, which is extremely high in … process, at point C, the pressure reduces to zero, but the volumetric strain does not go to zero due to the collapsed void volume. 2.2 Damage … lease rate to go beyond the critical strain energy release rate. Thus, the micro-cracks begin to grow inside these areas. At 10 micro-seconds, these …

  14. Use of refinery computer model to predict fuel production

    NASA Technical Reports Server (NTRS)

    Flores, F. J.

    1979-01-01

    Several factors (crudes, refinery operation, and specifications) that affect the yields and properties of broad-specification jet fuel were parameterized using a refinery simulation model capable of simulating different types of refineries. Results obtained from the program are used to correlate yield as a function of final boiling point, hydrogen content, and freezing point for jet fuels produced in two refinery configurations, each processing a different crude mix. Refinery performances are also compared in terms of energy consumption.

  15. Topographic lidar survey of the Chandeleur Islands, Louisiana, February 6, 2012

    USGS Publications Warehouse

    Guy, Kristy K.; Plant, Nathaniel G.; Bonisteel-Cormier, Jamie M.

    2014-01-01

    This Data Series Report contains lidar elevation data collected February 6, 2012, for Chandeleur Islands, Louisiana. Point cloud data in lidar data exchange format (LAS) and bare earth digital elevation models (DEMs) in ERDAS Imagine raster format (IMG) are available as downloadable files. The point cloud data—data points described in three dimensions—were processed to extract bare earth data; therefore, the point cloud data are organized into the following classes: 1– and 17–unclassified, 2–ground, 9–water, and 10–breakline proximity. Digital Aerial Solutions, LLC, (DAS) was contracted by the U.S. Geological Survey (USGS) to collect and process these data. The lidar data were acquired at a horizontal spacing (or nominal pulse spacing) of 0.5 meters (m) or less. The USGS conducted two ground surveys in small areas on the Chandeleur Islands on February 5, 2012. DAS calculated a root mean square error (RMSEz) of 0.034 m by comparing the USGS ground survey point data to triangulated irregular network (TIN) models built from the lidar elevation data. This lidar survey was conducted to document the topography and topographic change of the Chandeleur Islands. The survey supports detailed studies of Louisiana, Mississippi and Alabama barrier islands that resolve annual and episodic changes in beaches, berms and dunes associated with processes driven by storms, sea-level rise, and even human restoration activities. These lidar data are available to Federal, State and local governments, emergency-response officials, resource managers, and the general public.

  16. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets

    EPA Science Inventory

    Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at whic...

  17. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
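
    The detection step above rests on a likelihood ratio test between point-process models fitted with and without a change in firing structure. The sketch below shows the generic statistic applied to discrete-time (Bernoulli) spike-train log-likelihoods; it illustrates the test itself, not the authors' two-state model or their state-space tracking.

```python
import numpy as np
from scipy.stats import chi2

def bernoulli_loglik(spikes, p):
    """Discrete-time point-process log-likelihood, per-bin spike probability p."""
    spikes, p = np.asarray(spikes), np.asarray(p)
    return np.sum(spikes * np.log(p) + (1 - spikes) * np.log1p(-p))

def lr_test(loglik_null, loglik_alt, df):
    """Likelihood ratio test: 2*(logL_alt - logL_null) ~ chi-square(df)
    under the null of no change in the firing structure."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df)
```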

  18. Numerical Simulation of Pollutants' Transport and Fate in AN Unsteady Flow in Lower Bear River, Box Elder County, Utah

    NASA Astrophysics Data System (ADS)

    Salha, A. A.; Stevens, D. K.

    2013-12-01

    This study presents the numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. Stream water quality modeling requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of investigation. The current model is i) one-dimensional (1D), ii) numerical, iii) unsteady, iv) mechanistic, v) dynamic, and vi) spatial (distributed). The basic principle of the study is the use of mass balance equations and numerical methods (a Fickian advection-dispersion approach) to solve the related partial differential equations. Model error decreases and sensitivity increases as a model becomes more complex; accordingly, i) uncertainty (in parameters, data input, and model structure) and ii) model complexity are investigated. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, validity assessment, and distribution of the time-series data were explored using the GNU R language (a statistical computing and graphics environment). The physical, chemical, and biological process equations were written in Fortran (High Performance Fortran) to compute and solve their hyperbolic and parabolic complexities. Post-analysis of the results was conducted using the GNU R language. High-performance computing (HPC) will be introduced to expedite complex computations using parallel programming. It is expected that the model will assess nonpoint source and specific point source data to understand the causes, transfer, dispersion, and concentration of pollutants at different locations along the Bear River. The impact of reducing or removing non-point nutrient loading on Bear River water quality management could also be investigated. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high performance computing; water quality.
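
    In one dimension, the Fickian advection-dispersion transport named above reduces to ∂c/∂t = −u ∂c/∂x + D ∂²c/∂x². The sketch below is a minimal explicit upwind/central finite-difference step with assumed coefficients and crude open boundaries; it illustrates the numerical core only, not the study's Fortran code.

```python
import numpy as np

def advect_disperse(c, u, D, dx, dt, nsteps):
    """Explicit upwind advection plus central-difference dispersion in 1D.
    Stability needs u*dt/dx <= 1 and D*dt/dx**2 <= 0.5 (assumed u > 0)."""
    for _ in range(nsteps):
        adv = -u * (c - np.roll(c, 1)) / dx                       # upwind
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + dif)
        c[0], c[-1] = c[1], c[-2]        # crude zero-gradient boundaries
    return c
```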

  19. Stimulus Sensitivity of a Spiking Neural Network Model

    NASA Astrophysics Data System (ADS)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed, and we provide a notion of stimulus sensitivity. It appears that maximal sensitivity is achieved in the sub-critical, yet almost critical, regime for a range of biologically relevant parameters.
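
    For intuition, the classical exponential-kernel Hawkes process (a simpler relative of the age-dependent variant studied in the paper) can be simulated by Ogata's thinning algorithm, as sketched below. The parameters are arbitrary; stationarity requires alpha/beta < 1.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).
    Between events the intensity decays, so its value just after the current
    time upper-bounds it until the next event."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)          # candidate point
        if t >= T:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:         # accept w.p. lambda/lam_bar
            events.append(t)
```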

  20. Upscaling Empirically Based Conceptualisations to Model Tropical Dominant Hydrological Processes for Historical Land Use Change

    NASA Astrophysics Data System (ADS)

    Toohey, R.; Boll, J.; Brooks, E.; Jones, J.

    2009-12-01

    Surface runoff and percolation to ground water are two hydrological processes of concern on the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m²), and field (1-6 ha) scales, to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically-based distributed hydrological model at the field, watershed (130 km²), and regional (1500 km²) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. However, at the field scale for the sugar cane and pasture sites, saturation-excess runoff increased as irrigation intensity and duration (e.g., quantity) increased. Upscaling our conceptual models to the watershed and regional scales, historical data (1970-2004) were used to investigate whether dominant hydrological processes changed over time due to land use change. Preliminary investigations reveal much higher runoff coefficients (<30%) at the larger watershed scales. The increase in the importance of runoff at the larger geographic scales suggests an emerging process and process non-linearity between the smaller and larger scales. Upscaling is an important and useful concept when investigating catchment response using the tools of field work and/or physically distributed hydrological modeling.

  1. Climbing the ladder: capability maturity model integration level 3

    NASA Astrophysics Data System (ADS)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

    This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  2. A New Activity-Based Financial Cost Management Method

    NASA Astrophysics Data System (ADS)

    Qingge, Zhang

    The standard activity-based financial cost management model is a new model of financial cost management that builds on the standard cost system and activity-based costing, integrating the advantages of both. By taking R&D expenses as the accounting starting point and after-sale service expenses as the terminal point, it covers the whole production and operating process, the whole activity chain, and the value chain, providing more accurate and more adequate cost information to serve internal management and decision-making.

  3. Problems in mechanistic theoretical models for cell transformation by ionizing radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, A.; Holley, W.R.

    1991-10-01

    A mechanistic model based on yields of double strand breaks has been developed to determine the dose response curves for cell transformation frequencies. At its present stage, the model is applicable to immortal cell lines and to various qualities of ionizing radiation (X-rays, neon, and iron ions). Presently, we have considered four types of processes which can lead to activation phenomena: (1) point mutation events on a regulatory segment of selected oncogenes, (2) inactivation of suppressor genes through point mutation, (3) deletion of a suppressor gene by a single track, and (4) deletion of a suppressor gene by two tracks.

  4. Relation of structural and vibratory kinematics of the vocal folds to two acoustic measures of breathy voice based on computational modeling.

    PubMed

    Samlan, Robin A; Story, Brad H

    2011-10-01

    To relate vocal fold structure and kinematics to 2 acoustic measures: cepstral peak prominence (CPP) and the amplitude of the first harmonic relative to the second (H1-H2). The authors used a computational, kinematic model of the medial surfaces of the vocal folds to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: degree of vocal fold adduction, surface bulging, vibratory nodal point, and supraglottal constriction. CPP and H1-H2 were measured from simulated glottal area, glottal flow, and acoustic waveforms and were related to the underlying vocal fold kinematics. CPP decreased with increased separation of the vocal processes, whereas the nodal point location had little effect. H1-H2 increased as a function of separation of the vocal processes in the range of 1.0 mm to 1.5 mm and decreased with separation > 1.5 mm. CPP is generally a function of vocal process separation. H1*-H2* (see paragraph 6 of article text for an explanation of the asterisks) will increase or decrease with vocal process separation on the basis of vocal fold shape, pivot point for the rotational mode, and supraglottal vocal tract shape, limiting its utility as an indicator of breathy voice. Future work will relate the perception of breathiness to vocal fold kinematics and acoustic measures.
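
    For reference, cepstral peak prominence is commonly computed as the height of the cepstral peak in the expected pitch-period range above a regression line fit to the cepstrum. The sketch below is one simplified recipe (window, log magnitude spectrum, real cepstrum, in-band peak minus linear trend), not the exact procedure used in the study; the pitch range is an assumption.

```python
import numpy as np

def cpp(x, fs, f0_range=(60.0, 330.0)):
    """Cepstral peak prominence (dB): cepstral peak in the pitch-period band
    minus a straight line regressed over that band (simplified variant)."""
    x = np.asarray(x, float) * np.hanning(len(x))
    log_mag = 20.0 * np.log10(np.abs(np.fft.fft(x)) + 1e-12)
    cep = np.real(np.fft.ifft(log_mag))              # real cepstrum, in dB
    q = np.arange(len(cep)) / fs                     # quefrency axis, seconds
    band = (q >= 1.0 / f0_range[1]) & (q <= 1.0 / f0_range[0])
    slope, intercept = np.polyfit(q[band], cep[band], 1)
    i = np.argmax(cep[band])
    return cep[band][i] - (slope * q[band][i] + intercept)
```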

  5. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  6. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corvellec, Herve, E-mail: herve.corvellec@ism.lu.se; Bramryd, Torleif

    Highlights: • Swedish municipally owned waste management companies are active on political, material, technical, and commercial markets. • These markets differ in kind and their demands follow different logics. • These markets affect the public service, processing, and marketing of Swedish waste management. • Articulating these markets is a strategic challenge for Swedish municipally owned waste management. - Abstract: This paper describes how the business model of two leading Swedish municipally owned solid waste management companies exposes them to four different but related markets: a political market in which their legitimacy as an organization is determined; a waste-as-material market that determines their access to waste as a process input; a technical market in which these companies choose what waste processing technique to use; and a commercial market in which they market their products. Each of these markets has a logic of its own. Managing these logics and articulating the interrelationships between these markets is a key strategic challenge for these companies.

  8. A technique for processing of planetary images with heterogeneous characteristics for estimating geodetic parameters of celestial bodies with the example of Ganymede

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.

    2016-09-01

    A new technique for the generation of coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, with different illumination, or with oblique views) is developed. The technique is verified by processing the heterogeneous remote sensing data of Ganymede obtained by the Voyager 1 and 2 and Galileo spacecraft. Using this technique, the first 3D control point network for Ganymede is formed; the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodetic parameters of the body (axis sizes) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede at different levels of detail (Zubarev et al., 2015b).

  9. Creation of the Driver Fixed Heel Point (FHP) CAD Accommodation Model for Military Ground Vehicle Design

    DTIC Science & Technology

    2016-08-04

    Interior surfaces and direct field of view have been added per MIL-STD-1472G. This CAD model can be applied early in the vehicle design process to ensure … Accommodation Model for Military Ground Vehicle Design. Paper presented at the 2016 NDIA/GVSETS Conference, 4 August 2016.

  10. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. © The Author(s) 2016.

  11. Employee Communication during Crises: The Effects of Stress on Information Processing.

    ERIC Educational Resources Information Center

    Pincus, J. David; Acharya, Lalit

    Based on multidisciplinary research findings, this report proposes an information processing model of employees' response to highly stressful information environments arising during organizational crises. The introduction stresses the importance of management's handling crisis communication with employees skillfully. The second section points out…

  12. Automatic initialization for 3D bone registration

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Taylor, Russell H.; Fichtinger, Gabor

    2008-03-01

    In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques are generally very sensitive to the initial alignment of the datasets: poor initialization significantly increases the chance of getting trapped in local minima. In order to reduce this risk, the registration is manually initialized by locating the sample points close to the corresponding points on the CT model. In this paper, we present an automatic initialization method that aligns the sample points collected from the surface of the pelvis with the CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant for all registrations and facilitates the inclusion of application-specific information into the registration process. The CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape. This, in turn, leads to an initial alignment of the sample points with the CT model. The experiments using a dry pelvis and two cadavers show that the method can align randomly dislocated datasets closely enough for successful registration. The standard ICP has been used for the final registration of the datasets.
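
    Once initialized, the final registration is standard ICP. A minimal sketch of one iteration (nearest-neighbour matching followed by a closed-form Kabsch/SVD rigid fit) is shown below; it is the textbook step, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, target_tree):
    """One ICP iteration: match each source point to its nearest target point,
    then solve the optimal rigid transform in closed form (Kabsch/SVD)."""
    _, idx = target_tree.query(source)
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:      # guard against reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# usage: tree = cKDTree(target); repeat source = icp_step(source, target, tree)
```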

  13. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume, and crown contours at different height levels can be derived. PMID:27879916

  14. Points of View Analysis Revisited: Fitting Multidimensional Structures to Optimal Distance Components with Cluster Restrictions on the Variables.

    ERIC Educational Resources Information Center

    Meulman, Jacqueline J.; Verboon, Peter

    1993-01-01

    Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)

  15. Improving Visibility of Stereo-Radiographic Spine Reconstruction with Geometric Inferences.

    PubMed

    Kumar, Sampath; Nayak, K Prabhakar; Hareesha, K S

    2016-04-01

    Complex deformities of the spine, like scoliosis, are evaluated more precisely using stereo-radiographic 3D reconstruction techniques. Primarily, these use six stereo-corresponding points available on the vertebral body for the 3D reconstruction of each vertebra. The wireframe structure obtained in this process has poor visualization and is hence difficult to diagnose from. In this paper, a novel method is proposed to improve the visibility of this wireframe structure using the deformation of a generic spine model in accordance with the 3D-reconstructed corresponding points. Geometric inferences such as vertebral orientations are then automatically extracted from the radiographs to improve the visibility of the 3D model. Biplanar radiographs were acquired from five scoliotic subjects on a specifically designed calibration bench. The stereo-corresponding point reconstruction method is used to build six-point wireframe vertebral structures and thus the entire spine model. Using the 3D spine midline and automatically extracted vertebral orientation features, a more realistic 3D spine model is generated. To validate the method, the 3D spine model is back-projected onto the biplanar radiographs and the error difference is computed. Though this difference is within the error limits reported in the literature, the proposed approach is simple and economical. The proposed method does not require additional corresponding points or image features to improve the visibility of the model; hence, it reduces the computational complexity. Expensive 3D digitizers and vertebral CT scan models are also excluded from this study. Thus, the visibility of stereo-corresponding point reconstruction is improved to obtain a low-cost spine model for a better diagnosis of spinal deformities.

  16. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
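
    A minimal sketch of the idea, assuming SciPy's RBFInterpolator as the thin-plate-spline fitter (the paper's own implementation may differ): resample the points, fit with a candidate smoothing parameter, score on the out-of-bag points, and keep the parameter with the lowest bootstrap test error.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def bootstrap_smoothing(xy, z, candidates, n_boot=20, seed=0):
    """Choose a thin-plate-spline smoothing parameter by bootstrap test-error
    estimation. xy: (N, 2) point locations, z: (N,) heights to be smoothed."""
    rng = np.random.default_rng(seed)
    n, errs = len(xy), []
    for s in candidates:
        e = []
        for _ in range(n_boot):
            boot = np.unique(rng.integers(0, n, size=n))   # unique: no duplicate sites
            oob = np.setdiff1d(np.arange(n), boot)         # out-of-bag test points
            f = RBFInterpolator(xy[boot], z[boot],
                                kernel="thin_plate_spline", smoothing=s)
            e.append(np.mean((f(xy[oob]) - z[oob]) ** 2))
        errs.append(np.mean(e))
    return candidates[int(np.argmin(errs))]
```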

  17. Data processing workflows from low-cost digital survey to various applications: three case studies of Chinese historic architecture

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Cao, Y. K.

    2015-08-01

    The paper focuses on the versatility of data processing workflows ranging from BIM-based survey to structural analysis and reverse modeling. In China today, a large number of historic buildings are in need of restoration, reinforcement, and renovation, but architects are not prepared for the conversion from the booming AEC industry to architectural preservation. As surveyors working with architects on such projects, we have to develop an efficient low-cost digital survey workflow robust to various types of architecture, and to process the captured data for the architects. Although laser scanning yields high accuracy in architectural heritage documentation and the workflow is quite straightforward, its cost and portability hinder it from being used in projects where budget and efficiency are of prime concern. We integrate Structure from Motion techniques with UAV and total station in data acquisition. The captured data are processed for various purposes, illustrated with three case studies: the first is an as-built BIM for a historic building based on point clouds registered according to Ground Control Points; the second concerns structural analysis for a damaged bridge using Finite Element Analysis software; the last relates to parametric automated feature extraction from captured point clouds for reverse modeling and fabrication.

  18. Objectively-Measured Physical Activity and Cognitive Functioning in Breast Cancer Survivors

    PubMed Central

    Marinac, Catherine R.; Godbole, Suneeta; Kerr, Jacqueline; Natarajan, Loki; Patterson, Ruth E.; Hartman, Sheri J.

    2015-01-01

    Purpose To explore the relationship between objectively measured physical activity and cognitive functioning in breast cancer survivors. Methods Participants were 136 postmenopausal breast cancer survivors. Cognitive functioning was assessed using a comprehensive computerized neuropsychological test. Seven-day physical activity was assessed using hip-worn accelerometers. Linear regression models examined associations between minutes per day of physical activity at various intensities and individual cognitive functioning domains. The partially adjusted model controlled for primary confounders (model 1), and subsequent adjustments were made for chemotherapy history (model 2) and BMI (model 3). Interaction and stratified models examined BMI as an effect modifier. Results Moderate-to-vigorous physical activity (MVPA) was associated with Information Processing Speed. Specifically, ten minutes of MVPA was associated with a 1.35-point higher score (out of 100) on the Information Processing Speed domain in the partially adjusted model, and a 1.29-point higher score when chemotherapy was added to the model (both p<.05). There was a significant BMI x MVPA interaction (p=.051). In models stratified by BMI (<25 vs. ≥25 kg/m²), the favorable association between MVPA and Information Processing Speed was stronger in the subsample of overweight and obese women (p<.05), but not statistically significant in the leaner subsample. Light-intensity physical activity was not significantly associated with any of the measured domains of cognitive function. Conclusions MVPA may have favorable effects on Information Processing Speed in breast cancer survivors, particularly among overweight or obese women. Implications for Cancer Survivors Interventions targeting increased physical activity may enhance aspects of cognitive function among breast cancer survivors. PMID:25304986

  19. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for the three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR with an advanced indoor mobile measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images; thus they carry both geometric and spectral information, which can be used to construct object surfaces and to restore the color and texture of the geometric model. Based on the Autodesk CAD platform with the help of the PointSence plug-in, three-dimensional reconstruction of indoor whole elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and the different types of indoor point cloud data were then processed, including data format conversion, outline extraction, and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for the 3D reconstruction of indoor whole elements, and that the methods proposed in this paper can efficiently realize that reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.

  20. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the eccentricity of the data points in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods, as a function of data noise level, is also analysed. We show that almost every method tested, whether it minimizes algebraic or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
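
    A minimal sketch of the replicate-then-fit idea, using a simple algebraic (least-squares conic) fit; the paper's exact definition of data-point eccentricity is not reproduced here, so eccentricity_score is a hypothetical user-supplied stand-in:

        import numpy as np

        def fit_conic(x, y):
            # least-squares conic: [x^2, xy, y^2, x, y, 1] . c = 0
            D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
            return np.linalg.svd(D)[2][-1]        # coefficients, up to scale

        def augment_then_fit(x, y, eccentricity_score, k=3, q=0.75):
            s = eccentricity_score(x, y)          # hypothetical scoring rule
            hi = s >= np.quantile(s, q)           # high-eccentricity points
            x_aug = np.concatenate([x] + [x[hi]] * k)   # replicate k times
            y_aug = np.concatenate([y] + [y[hi]] * k)
            return fit_conic(x_aug, y_aug)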

  1. Knowledge-Production-and Utilization: A General Model. Third Approximation. Iowa Agricultural and Home Economics Experiment Station Project No. 2218. Sociology Report No. 138.

    ERIC Educational Resources Information Center

    Meehan, Peter M.; Beal, George M.

    The objective of this monograph is to contribute to the further understanding of the knowledge-production-and-utilization process. Its primary focus is on a model both general and detailed enough to provide a comprehensive overview of the diverse functions, roles, and processes required to understand the flow of knowledge from its point of origin…

  2. Progress of Aircraft System Noise Assessment with Uncertainty Quantification for the Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    Aircraft system noise predictions have been performed for NASA-modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions were developed over the period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system-level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level; this uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.
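
    A minimal sketch of the direct Monte Carlo step, with a placeholder prediction function and input uncertainties standing in for the actual system noise prediction process:

        import numpy as np

        rng = np.random.default_rng(0)

        def predict_epnl(inputs):
            # stand-in for the system noise prediction process (EPNL dB)
            return 270.0 + 0.5 * inputs.sum()

        n_sims = 10_000
        samples = np.empty(n_sims)
        for i in range(n_sims):
            inputs = rng.normal(0.0, 1.0, size=8)   # sampled input uncertainties
            samples[i] = predict_epnl(inputs)

        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"95% uncertainty band: {hi - lo:.1f} EPNL dB")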

  3. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data on sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as the visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
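
    A minimal sketch of the generic binomial N-mixture likelihood underlying this kind of approach (site abundance N_i ~ Poisson(lambda), replicate counts y[i,t] ~ Binomial(N_i, p)); toy data, not the authors' code:

        import numpy as np
        from scipy import stats, optimize

        def nmix_nll(params, y, n_max=200):
            lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
            ns = np.arange(n_max + 1)
            prior = stats.poisson.pmf(ns, lam)            # P(N = n)
            nll = 0.0
            for counts in y:                              # one site at a time
                like_n = prior * np.prod(
                    stats.binom.pmf(counts[:, None], ns[None, :], p), axis=0)
                nll -= np.log(like_n.sum())               # marginalize over N
            return nll

        y = np.array([[3, 4, 3], [0, 1, 0], [7, 5, 6]])   # sites x images
        res = optimize.minimize(nmix_nll, x0=[np.log(5.0), 0.0], args=(y,))
        print("lambda:", np.exp(res.x[0]),
              "detection p:", 1 / (1 + np.exp(-res.x[1])))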

  4. Assessing Argumentative Representation with Bayesian Network Models in Debatable Social Issues

    ERIC Educational Resources Information Center

    Zhang, Zhidong; Lu, Jingyan

    2014-01-01

    This study seeks to obtain argumentation models, which represent argumentative processes and an assessment structure in secondary school debatable issues in the social sciences. The argumentation model was developed based on mixed methods, a combination of both theory-driven and data-driven methods. The coding system provided a combining point by…

  5. Statistical Methods for Identifying Sequence Motifs Affecting Point Mutations

    PubMed Central

    Zhu, Yicheng; Neeman, Teresa; Yap, Von Bing; Huttley, Gavin A.

    2017-01-01

    Mutation processes differ between types of point mutation, genomic locations, cells, and biological species. For some point mutations, specific neighboring bases are known to be mechanistically influential. Beyond these cases, numerous questions remain unresolved, including: what are the sequence motifs that affect point mutations? How large are the motifs? Are they strand symmetric? And, do they vary between samples? We present new log-linear models that allow explicit examination of these questions, along with sequence logo style visualization to enable identifying specific motifs. We demonstrate the performance of these methods by analyzing mutation processes in human germline and malignant melanoma. We recapitulate the known CpG effect, and identify novel motifs, including a highly significant motif associated with A→G mutations. We show that major effects of neighbors on germline mutation lie within ±2 of the mutating base. Models are also presented for contrasting the entire mutation spectra (the distribution of the different point mutations). We show the spectra vary significantly between autosomes and X-chromosome, with a difference in T→C transition dominating. Analyses of malignant melanoma confirmed reported characteristic features of this cancer, including statistically significant strand asymmetry, and markedly different neighboring influences. The methods we present are made freely available as a Python library https://bitbucket.org/pycogent3/mutationmotif. PMID:27974498
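
    A much-simplified sketch in the spirit of the log-linear models above: a Poisson GLM likelihood-ratio test for whether the 5' neighboring base is associated with mutation status; the counts are toy numbers, not data from the paper:

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf
        from scipy import stats

        df = pd.DataFrame({
            "neighbor": list("ACGT") * 2,
            "mutated":  [1, 1, 1, 1, 0, 0, 0, 0],
            "count":    [30, 12, 55, 10, 970, 988, 945, 990],
        })
        kw = dict(data=df, family=sm.families.Poisson())
        full = smf.glm("count ~ neighbor * mutated", **kw).fit()
        null = smf.glm("count ~ neighbor + mutated", **kw).fit()
        lr = 2 * (full.llf - null.llf)            # likelihood-ratio statistic
        dof = full.df_model - null.df_model
        print("neighbor effect p-value:", stats.chi2.sf(lr, dof))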

  6. Optimal Management of the Critically Ill: Anaesthesia, Monitoring, Data Capture, and Point-of-Care Technological Practices in Ovine Models of Critical Care

    PubMed Central

    Shekar, Kiran; Tung, John-Paul; Dunster, Kimble R.; Platts, David; Watts, Ryan P.; Gregory, Shaun D.; Simonova, Gabriela; McDonald, Charles; Hayes, Rylan; Bellpart, Judith; Timms, Daniel; Fung, Yoke L.; Toon, Michael; Maybauer, Marc O.; Fraser, John F.

    2014-01-01

    Animal models of critical illness are vital in biomedical research. They provide possibilities for the investigation of pathophysiological processes that may not otherwise be possible in humans. In order to be clinically applicable, the model should simulate the critical care situation realistically, including anaesthesia, monitoring, sampling, utilising appropriate personnel skill mix, and therapeutic interventions. There are limited data documenting the constitution of ideal technologically advanced large animal critical care practices and all the processes of the animal model. In this paper, we describe the procedure of animal preparation, anaesthesia induction and maintenance, physiologic monitoring, data capture, point-of-care technology, and animal aftercare that has been successfully used to study several novel ovine models of critical illness. The relevant investigations are on respiratory failure due to smoke inhalation, transfusion related acute lung injury, endotoxin-induced proteogenomic alterations, haemorrhagic shock, septic shock, brain death, cerebral microcirculation, and artificial heart studies. We have demonstrated the functionality of monitoring practices during anaesthesia required to provide a platform for undertaking systematic investigations in complex ovine models of critical illness. PMID:24783206

  7. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as the data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained personnel are needed to acquire point clouds with them. Potentially effective 3D models can instead be generated from a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which lack color information can be colored using smartphones. The experiments, discussions and scientific findings may be insightful for future studies in fast, easy and low-cost 3D urban model generation.

  8. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are analyzed: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models were developed for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRPs are used were estimated.

  9. Ground-state proton decay of 69Br and implications for the rp-process 68Se waiting point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, Andrew M; Shapira, Dan; Lynch, William

    2011-01-01

    The first direct measurement of the proton separation energy, S_p, for the proton-unbound nucleus 69Br is reported. Of interest is the exponential dependence on S_p of the 2p-capture rate, which can bypass the 68Se waiting point in the astrophysical rp-process. An analysis of the observed proton decay spectrum is given in terms of the 69Se mirror nucleus, and the influence of S_p is explored within the context of a single-zone X-ray burst model.

  10. Change-point detection of induced and natural seismicity

    NASA Astrophysics Data System (ADS)

    Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.

    2016-12-01

    Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be modeled well because the source is known, transient aseismic processes are more difficult to detect. However, detecting the associated changes in earthquake activity is of great interest, because it may help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data which are modeled by Poisson processes. By means of a likelihood-ratio test, we prove the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
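
    A minimal frequentist sketch of the likelihood-ratio part: a single rate change in binned Poisson counts, located by maximizing the two-segment likelihood (the paper's Bayesian and spatiotemporal machinery is not reproduced):

        import numpy as np
        from scipy import stats

        def changepoint_lrt(counts):
            ll0 = stats.poisson.logpmf(counts, counts.mean()).sum()  # no change
            best_ll, best_k = -np.inf, None
            for k in range(1, len(counts)):       # candidate change-points
                ll = (stats.poisson.logpmf(counts[:k], counts[:k].mean()).sum()
                      + stats.poisson.logpmf(counts[k:], counts[k:].mean()).sum())
                if ll > best_ll:
                    best_ll, best_k = ll, k
            return best_k, 2 * (best_ll - ll0)    # bin index, LR statistic

        rng = np.random.default_rng(1)
        counts = np.concatenate([rng.poisson(2, 50), rng.poisson(6, 50)])
        print(changepoint_lrt(counts))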

  11. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This property results from neural plasticity, motor learning, etc., and leads to degeneration of decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point process filtering that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, suggesting a promising way to design a long-term-performing model for Brain Machine Interface decoders.

  12. Critical behavior of the contact process on small-world networks

    NASA Astrophysics Data System (ADS)

    Ferreira, Ronan S.; Ferreira, Silvio C.

    2013-11-01

    We investigate the role of clustering on the critical behavior of the contact process (CP) on small-world networks using the Watts-Strogatz (WS) network model with an edge rewiring probability p. The critical point is well predicted by a homogeneous cluster approximation in the limit of vanishing clustering (p → 1). The critical exponents and dimensionless moment ratios of the CP agree with those predicted by mean-field theory for any p > 0. This independence from network clustering shows that the small-world property is a sufficient condition for mean-field theory to correctly predict the universality of the model. Moreover, we compare the CP dynamics on WS networks with rewiring probability p = 1 and on random regular networks, and show that the weak heterogeneity of the WS network slightly shifts the critical point but does not alter the other critical quantities of the model.
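
    A minimal discrete-time sketch of the contact process on a Watts-Strogatz graph (infected nodes heal at rate 1 and infect each neighbor at rate lam); a crude Euler-type approximation of the continuous-time dynamics studied in the paper:

        import numpy as np
        import networkx as nx

        def contact_process(n=300, k=4, p=1.0, lam=2.0, steps=500, dt=0.02,
                            seed=0):
            rng = np.random.default_rng(seed)
            g = nx.watts_strogatz_graph(n, k, p, seed=seed)
            infected = np.ones(n, dtype=bool)        # fully infected start
            for _ in range(steps):
                new = infected.copy()
                for i in np.flatnonzero(infected):
                    if rng.random() < dt:            # heal at rate 1
                        new[i] = False
                    for j in g.neighbors(i):
                        if rng.random() < lam * dt:  # infect each neighbor
                            new[j] = True
                infected = new
                if not infected.any():
                    break
            return infected.mean()                   # rough stationary density

        print("density of infected nodes:", contact_process())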

  13. About one counterexample of applying method of splitting in modeling of plating processes

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.

    2018-05-01

    The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so adequate simulation of plating processes is highly relevant. The finite-difference approximation using seven-point and five-point stencils, in combination with the splitting method, is considered for solving the equations of the model. To study the correctness of the solutions of the equations of the mathematical model obtained by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was fixed. The studies showed that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.

  14. Infinite-disorder critical points of models with stretched exponential interactions

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert

    2014-09-01

    We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ e^{-c l^a}, is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.

  15. Message survival and decision dynamics in a class of reactive complex systems subject to external fields

    NASA Astrophysics Data System (ADS)

    Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.

    2014-07-01

    In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
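
    A hypothetical numerical check of an extinction-type threshold of the form p * lambda_max(A) < 1, where A is the network adjacency matrix and p a convincement probability; the paper's actual sufficient conditions follow from its model and are not reproduced here:

        import numpy as np
        import networkx as nx

        g = nx.barabasi_albert_graph(500, 3, seed=1)   # scale-free topology
        A = nx.to_numpy_array(g)
        lam_max = np.linalg.eigvalsh(A).max()          # A is symmetric

        p = 0.05                                       # convincement probability
        verdict = "extinction expected" if p * lam_max < 1 else "may survive"
        print(f"p * lambda_max = {p * lam_max:.2f} -> {verdict}")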

  16. Obesity Energetics: Body Weight Regulation and the Effects of Diet Composition

    PubMed Central

    Hall, Kevin D.; Guo, Juen

    2017-01-01

    Weight changes are accompanied by imbalances between calorie intake and expenditure. This fact is often misinterpreted to suggest that obesity is caused by gluttony and sloth and can be treated by simply advising people to eat less and move more. However, various components of energy balance are dynamically interrelated and weight loss is resisted by counterbalancing physiological processes. While low-carbohydrate diets have been suggested to partially subvert these processes by increasing energy expenditure and promoting fat loss, our meta-analysis of 32 controlled feeding studies with isocaloric substitution of carbohydrate for fat found that both energy expenditure (26 kcal/d; P <.0001) and fat loss (16 g/d; P <.0001) were greater with lower fat diets. We review the components of energy balance and the mechanisms acting to resist weight loss in the context of static, settling point, and set-point models of body weight regulation, with the set-point model being most commensurate with current data. PMID:28193517

  17. Transversal Fluctuations of the ASEP, Stochastic Six Vertex Model, and Hall-Littlewood Gibbsian Line Ensembles

    NASA Astrophysics Data System (ADS)

    Corwin, Ivan; Dimitrov, Evgeni

    2018-05-01

    We consider the ASEP and the stochastic six vertex model started with step initial data. After a long time, T, it is known that the one-point height function fluctuations for these systems are of order T^{1/3}. We prove the KPZ prediction of T^{2/3} scaling in space. Namely, we prove tightness (and Brownian absolute continuity of all subsequential limits) as T goes to infinity of the height function with spatial coordinate scaled by T^{2/3} and fluctuations scaled by T^{1/3}. The starting point for proving these results is a connection discovered recently by Borodin-Bufetov-Wheeler between the stochastic six vertex height function and the Hall-Littlewood process (a certain measure on plane partitions). Interpreting this process as a line ensemble with a Gibbsian resampling invariance, we show that the one-point tightness of the top curve can be propagated to tightness of the entire curve.

  18. New Experiments and a Model-Driven Approach for Interpreting Middle Stone Age Lithic Point Function Using the Edge Damage Distribution Method.

    PubMed

    Schoville, Benjamin J; Brown, Kyle S; Harris, Jacob A; Wilkins, Jayne

    2016-01-01

    The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies are unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages-Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape consistent with armature tips at KP1, and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed.

  20. Yield strength mapping in the cross section of ERW pipes considering kinematic hardening and residual stress

    NASA Astrophysics Data System (ADS)

    Kim, Dongwook; Quagliato, Luca; Lee, Wontaek; Kim, Naksoo

    2017-09-01

    In ERW (electric resistance welding) pipe manufacturing, material properties, process conditions and settings strongly influence the mechanical performance of the final product, and can make that performance non-uniform, changing from point to point in the pipe. The present research work proposes an integrated numerical model for the study of the whole ERW process, considering the roll forming, welding and sizing stations, allowing the influence of the process parameters on the final quality of the pipe, in terms of final shape and residual stress, to be inferred. The developed numerical model was initially validated by comparing the pipe dimensions derived from the simulation results with those of industrial production, proving the reliability of the approach. Afterwards, by varying the process parameters in the numerical simulation, namely the roll speed, the sizing ratio and the friction factor, the influence on the residual stress in the pipe, at the end of the process and after each station, is studied and discussed throughout the paper.

  1. A study on using pre-forming blank in single point incremental forming process by finite element analysis

    NASA Astrophysics Data System (ADS)

    Abass, K. I.

    2016-11-01

    Single Point Incremental Forming (SPIF) is a forming technique for sheet material based on layered manufacturing principles. The edges of the sheet material are clamped while the forming tool is moved along the tool path; a CNC milling machine is used to manufacture the product. SPIF involves extensive plastic deformation, and the description of the process is further complicated by highly nonlinear boundary conditions, namely contact and frictional effects. Due to the complex nature of such models, numerical approaches dominated by Finite Element Analysis (FEA) are now in widespread use. The paper presents the data and main results of a study, carried out through FEA, of the effect of using a pre-formed blank in SPIF. The considered SPIF process has been studied under certain process conditions referring to the test workpiece, tool, etc., applying ANSYS 11. The results show that the simulation model can predict an ideal profile of the processing track, the behaviour of the tool-workpiece contact, and the product accuracy, by evaluating its thickness, surface strain and the stress distribution along the deformed blank section during the deformation stages.

  2. Analysis of thermal processing of table olives using computational fluid dynamics.

    PubMed

    Dimou, A; Panagou, E; Stoforos, N G; Yanniotis, S

    2013-11-01

    In the present work, the thermal processing of table olives in brine in a stationary metal can was studied through computational fluid dynamics (CFD). The flow patterns of the brine and the temperature evolution in the olives and brine during the heating and the cooling cycles of the process were calculated using the CFD code. Experimental temperature measurements at 3 points (2 inside model olive particles and 1 at a point in the brine) in a can (with dimensions of 75 mm × 105 mm) filled with 48 olives in 4% (w/v) brine, initially held at 20 °C, heated in water at 100 °C for 10 min, and thereafter cooled in water at about 20 °C for 10 min, validated model predictions. The distribution of temperature and F-values and the location of the slowest heating zone and the critical point within the product, as far as microbial destruction is concerned, were assessed for several cases. For the cases studied, the critical point was located at the interior of the olives at the 2nd, or between the 1st and the 2nd olive row from the bottom of the container, the exact location being affected by olive size, olive arrangement, and geometry of the container. © 2013 Institute of Food Technologists®
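
    A minimal sketch of the F-value computation at the critical point, using the standard lethality integral F = ∫ 10^((T - T_ref)/z) dt; the temperature history, T_ref and z below are illustrative, not the paper's values:

        import numpy as np
        from scipy.integrate import trapezoid

        def f_value(t_min, temp_c, t_ref=121.1, z=10.0):
            lethality = 10.0 ** ((np.asarray(temp_c) - t_ref) / z)
            return trapezoid(lethality, t_min)   # equivalent minutes at T_ref

        t = np.linspace(0.0, 20.0, 201)          # 10 min heating + 10 min cooling
        T = np.where(t <= 10, 20 + 8 * t, 100 - 8 * (t - 10))
        print("F-value at this point: %.4f min" % f_value(t, T))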

  3. Organic and nitrogen removal from landfill leachate in aerobic granular sludge sequencing batch reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Yanjie; Ji, Min

    2012-03-15

    Highlights: Aerobic granular sludge SBR was used to treat real landfill leachate. COD removal was analyzed kinetically using a modified model. Characteristics of nitrogen removal at different ammonium inputs were explored. DO variations were consistent with the GSBR performances at low ammonium inputs. - Abstract: Granule sequencing batch reactors (GSBR) were established for landfill leachate treatment, and the COD removal was analyzed kinetically using a modified model. Results showed that the COD removal rate decreased as the influent ammonium concentration increased. Characteristics of nitrogen removal at different influent ammonium levels were also studied. When the ammonium concentration in the landfill leachate was 366 mg L⁻¹, the dominant nitrogen removal process in the GSBR was simultaneous nitrification and denitrification (SND). At an ammonium concentration of 788 mg L⁻¹, nitrite accumulation occurred and the accumulated nitrite was reduced to nitrogen gas by the shortcut denitrification process. When the influent ammonium increased to the higher level of 1105 mg L⁻¹, accumulation of nitrite and nitrate persisted over the whole cycle, and the removal efficiencies of total nitrogen and ammonium decreased to only 35.0% and 39.3%, respectively. Results also showed that DO was a useful process control parameter for organics and nitrogen removal at low ammonium input.

  4. Effect of processing conditions on oil point pressure of moringa oleifera seed.

    PubMed

    Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A

    2015-07-01

    Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques for carrying out the operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in expressing oil from Moringa oleifera seed and to develop efficient equipment for the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increasing seed moisture content, but decreased with increasing heating temperature and heating time within the above ranges. The highest oil point pressure, 1.1239 MPa, was obtained at 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure, 0.3164 MPa, occurred at 4.78 % moisture content, 100 °C heating temperature and 30 min heating time. Analysis of Variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1 % level of significance. This was further demonstrated using Response Surface Methodology (RSM). Tukey's test and Duncan's Multiple Range Analysis successfully separated the means, and a multiple regression equation was used to express the relationship between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.

  5. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information, based on steady-state information extracted from available nominal engine measurement data, is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
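
    A minimal sketch of residual-based anomaly detection against a piecewise linear model; the trim-point table, interpolation and threshold are illustrative stand-ins, not the paper's calibrated design:

        import numpy as np

        def model_output(u):
            # stand-in piecewise linear model: interpolate between trim points
            trim_u = np.array([0.0, 0.5, 1.0])         # e.g. power lever position
            trim_y = np.array([200.0, 450.0, 800.0])   # e.g. spool speed at trim
            return np.interp(u, trim_u, trim_y)

        def detect(u_stream, y_stream, threshold=30.0):
            residuals = np.asarray(y_stream) - model_output(np.asarray(u_stream))
            return np.abs(residuals) > threshold       # True where anomalous

        u = np.linspace(0, 1, 100)
        y = model_output(u) + np.random.default_rng(0).normal(0, 5, 100)
        y[60:65] += 80.0                               # seeded fault
        print("anomalous samples:", np.flatnonzero(detect(u, y)))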

  7. Modelling the crystallization of the globular proteins

    NASA Astrophysics Data System (ADS)

    Shiryayev, Andrey S.

    Crystallization of globular proteins has become a very important subject in recent years. However, there is still no understanding of the particular conditions that lead to crystallization. Since nucleation of a crystalline droplet is the critical step toward the formation of the solid phase from a supersaturated solution, it is the focus of current studies. In this work we use different approaches to investigate the collective behavior of a system of globular proteins. In particular, we focus on models which have a metastable critical point, because this reflects the properties of solutions of globular proteins. The first approach is a continuum model of globular proteins, first presented by Talanquer and Oxtoby and based on van der Waals theory. The model can have either a stable or a metastable critical point. For the system with a metastable critical point we studied the behavior of the free energy barrier to nucleation; we found that along particular pathways the barrier to nucleation has a minimum around the critical point. Moreover, the number of molecules in the critical cluster was found to diverge as one approaches the critical point, though most of the molecules are in the fluid tail of the droplet. Our results are an extension of earlier work [17, 7]. The properties of the solvent affect the behavior of the solution. In our second approach, we propose a model that takes into account the contribution of the solvent free energy to the free energy of the globular proteins. We show that one can map the phase diagram of a system interacting through a repulsive hard core plus an attractive square well onto the same system of particles in a solvent environment. In particular, we show that this leads to phase diagrams with upper critical points, lower critical points and even closed loops with both upper and lower critical points, similar to one found before [10]. For systems with interactions different from the square well, in the presence of the solvent this mapping procedure can be a first approximation to understanding the phase diagram. The final part of this work is dedicated to the behavior of sickle hemoglobin. While the fluid behavior of HbS molecules can be approximately explained by a uniform interparticle potential, this model fails to describe the polymerization process and the particular structure of the fibers. We develop an anisotropic "patchy" model to describe some features of the HbS polymerization process. To determine the degree of polymerization of the system, a "patchy" order parameter was defined. Monte Carlo simulations for the simple two-patch model were performed and reveal the possibility of obtaining chains that can be considered one-dimensional crystals.

  8. Facilitating Authentic Becoming

    ERIC Educational Resources Information Center

    Eriksen, Matthew

    2012-01-01

    A "Model of Authentic Becoming" that conceptualizes learning as a continuous and ongoing embodied and relational process, and uses social constructionism assumptions as well as Kolb's experiential learning model as its point of departure, is presented. Through a focus on the subjective, embodied, and relational nature of organizational life, the…

  9. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives either through onboard GPS (Global Positioning System) geotagging or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only in instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm, applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data; roads and buildings with minimal deviations given the differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations between the CANUPO and manual skeleton clouds gave values for both of around 0.67 meters at 1.73 standard deviations.
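
    A minimal point-to-point ICP sketch (nearest neighbours plus a Kabsch rigid fit), standing in for the CloudCompare registration used above; a real workflow would rely on CloudCompare or an equivalent library:

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid(src, dst):
            # least-squares rotation/translation mapping src onto dst (Kabsch)
            cs, cd = src.mean(0), dst.mean(0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cd - R @ cs

        def icp(source, target, iters=30):
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iters):
                _, idx = tree.query(src)        # closest target point for each
                R, t = best_rigid(src, target[idx])
                src = src @ R.T + t
            return src                          # registered source cloud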

  10. Single- and Dual-Process Models of Biased Contingency Detection.

    PubMed

    Vadillo, Miguel A; Blanco, Fernando; Yarritu, Ion; Matute, Helena

    2016-01-01

    Decades of research in causal and contingency learning show that people's estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.

  11. Numerical analysis of stress effects on Frank loop evolution during irradiation in austenitic Fe-Cr-Ni alloy

    NASA Astrophysics Data System (ADS)

    Tanigawa, Hiroyasu; Katoh, Yutai; Kohyama, Akira

    1995-08-01

    Effects of applied stress on the early stages of interstitial-type Frank loop evolution were investigated by both numerical calculation and irradiation experiments. The final objective of this research is to propose a comprehensive model of the complex stress effects on microstructural evolution under various conditions. In the experimental part of this work, microstructural analysis revealed that differences in resolved normal stress caused differences in the nucleation rates of Frank loops on the {111} family of crystallographic planes, and that the total nucleation rate of Frank loops increased with increasing external applied stress. A numerical calculation was carried out primarily to evaluate the validity of models of stress effects on the nucleation processes of Frank loop evolution. The calculation rests on rate equations which describe the evolution of point defects, small point defect clusters and Frank loops. The rate equations of Frank loop evolution were formulated for the {111} planes, considering the effects of resolved normal stress on the clustering processes of small point defects and on the growth processes of Frank loops separately. The experimental results and the predictions of the numerical calculation coincided well qualitatively.

  12. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  13. Optimization of thermal processing of canned mussels.

    PubMed

    Ansorena, M R; Salvadori, V O

    2011-10-01

    The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires the knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for the prediction of the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were experimentally obtained, and prediction equations fitted to consider other can dimensions and sampling interval. This model was coupled with an optimization routine in order to search for different retort temperature profiles to maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its utilization in the industry.

  14. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

    NASA Astrophysics Data System (ADS)

    Howard, Roy

    2017-10-01

    An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
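
    A minimal sketch of one of the processes listed above, the homogeneous Poisson point process, simulated via independent exponential inter-arrival times:

        import numpy as np

        def poisson_process(rate, horizon, seed=0):
            rng = np.random.default_rng(seed)
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate)   # inter-arrival ~ Exp(rate)
                if t >= horizon:
                    return np.array(times)
                times.append(t)

        events = poisson_process(rate=2.0, horizon=10.0)
        print(len(events), "events; empirical rate:", len(events) / 10.0)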

  15. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association

  16. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly-used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.

  17. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
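
    A minimal sketch of the MINE selection rule: given parameter samples representing the current distribution, choose the candidate measurement point where the model-output variance across samples is largest; the toy model and grid are illustrative:

        import numpy as np

        def model(t, theta):
            # toy dynamics: exponential decay with rate theta
            return np.exp(-theta * t)

        def next_experiment(candidates, theta_samples):
            outputs = np.array([[model(t, th) for t in candidates]
                                for th in theta_samples])   # samples x candidates
            return candidates[np.argmax(outputs.var(axis=0))]

        rng = np.random.default_rng(0)
        theta_samples = rng.normal(1.0, 0.3, size=200)      # current belief
        candidates = np.linspace(0.0, 5.0, 51)
        print("measure next at t =", next_experiment(candidates, theta_samples))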

  18. Automated Classification of Heritage Buildings for As-Built Bim Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Vergauwen, M.; Van Genechten, B.

    2017-08-01

    Semantically rich three-dimensional models such as Building Information Models (BIMs) are increasingly used in digital heritage. They provide the information required by varying stakeholders during the different stages of the historic building's life cycle, which is crucial in the conservation process. The creation of as-built BIM models is based on point cloud data. However, manually interpreting these data is labour-intensive and often leads to misinterpretations. By automatically classifying the point cloud, the information can be processed more efficiently. A key aspect in this automated scan-to-BIM process is the classification of building objects. In this research we look to automatically recognise elements in existing buildings to create compact semantic information models. Our algorithm efficiently extracts the main structural components such as floors, ceilings, roofs, walls and beams despite the presence of significant clutter and occlusions. More specifically, Support Vector Machines (SVM) are proposed for the classification. The algorithm is evaluated using real data from a variety of existing buildings. The results prove that the classifier recognizes the objects with both high precision and recall. As a result, entire data sets are reliably labelled at once. The approach enables experts to better document and process heritage assets.
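
    A minimal sketch of the classification step, assuming per-segment features (e.g. normal direction, height, planarity) have already been extracted from the point cloud; the features and labels below are toy data, not the paper's feature set:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        # columns: |normal_z|, mean height (m), planarity in [0, 1]
        X = rng.random((200, 3))
        y = np.where(X[:, 0] > 0.8, "floor/ceiling", "wall")  # toy labelling

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))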

  19. A first packet processing subdomain cluster model based on SDN

    NASA Astrophysics Data System (ADS)

    Chen, Mingyong; Wu, Weimin

    2017-08-01

    To address the packet processing performance bottlenecks and controller downtime problems of current controller clusters, a subdomain cluster model is proposed in which an SDN (Software Defined Network) controller assigns a priority to each device in the network. Each domain contains several network devices and a controller; the controller is responsible for managing the network equipment within its domain, and switches deliver data according to the load of the controller processing the network equipment data. The experimental results show that the model can effectively remove the risk of single point failure of the controller and can relieve the performance bottleneck of first packet processing.

  20. Multilevel principal component analysis (mPCA) in shape analysis: A feasibility study in medical and dental imaging.

    PubMed

    Farnell, D J J; Popat, H; Richmond, S

    2016-06-01

    Methods used in image processing should reflect any multilevel structures inherent in the image dataset or they run the risk of functioning inadequately. We wish to test the feasibility of multilevel principal components analysis (PCA) to build active shape models (ASMs) for cases relevant to medical and dental imaging. Multilevel PCA was used to carry out model fitting to sets of landmark points and it was compared to the results of "standard" (single-level) PCA. Proof of principle was tested by applying mPCA to model basic peri-oral expressions (happy, neutral, sad) approximated to the junction between the mouth/lips. Monte Carlo simulations were used to create this data which allowed exploration of practical implementation issues such as the number of landmark points, number of images, and number of groups (i.e., "expressions" for this example). To further test the robustness of the method, mPCA was subsequently applied to a dental imaging dataset utilising landmark points (placed by different clinicians) along the boundary of mandibular cortical bone in panoramic radiographs of the face. Changes of expression that varied between groups were modelled correctly at one level of the model and changes in lip width that varied within groups at another for the Monte Carlo dataset. Extreme cases in the test dataset were modelled adequately by mPCA but not by standard PCA. Similarly, variations in the shape of the cortical bone were modelled by one level of mPCA and variations between the experts at another for the panoramic radiographs dataset. Results for mPCA were found to be comparable to those of standard PCA for point-to-point errors via miss-one-out testing for this dataset. These errors reduce with increasing number of eigenvectors/values retained, as expected. We have shown that mPCA can be used in shape models for dental and medical image processing. mPCA was found to provide more control and flexibility when compared to standard "single-level" PCA. Specifically, mPCA is preferable to "standard" PCA when multiple levels occur naturally in the dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
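
    The two-level decomposition at the heart of mPCA can be sketched as follows: one PCA over group mean shapes captures between-group variation (e.g., expression), and a second PCA over residuals about each group mean captures within-group variation (e.g., lip width). The landmark data below are random stand-ins.

```python
# Minimal two-level PCA sketch in the spirit of mPCA: PCA on group mean shapes
# (between-group level) and PCA on residuals about each group mean
# (within-group level). Data are random stand-ins for landmark vectors.
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per_group, n_landmarks = 3, 50, 20
data = {g: rng.normal(g, 1.0, (n_per_group, 2 * n_landmarks))
        for g in range(n_groups)}

group_means = np.array([d.mean(axis=0) for d in data.values()])
residuals = np.vstack([d - d.mean(axis=0) for d in data.values()])

def pca(X):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return s ** 2 / (len(X) - 1), Vt              # eigenvalues, eigenvectors

evals_between, evecs_between = pca(group_means)   # level 1: between groups
evals_within, evecs_within = pca(residuals)       # level 2: within groups

# A shape is then modelled as grand mean + between-group modes + within-group
# modes, each weighted by its own set of shape parameters.
print("between-group eigenvalues:", np.round(evals_between, 2))
print("leading within-group eigenvalue:", round(float(evals_within[0]), 2))
```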

  1. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not reproduced enough at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity for improving the pseudo point-source model, by introducing azimuth-dependent corner frequency for example, so that it can incorporate the effect of rupture directivity.
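
    The source side of such a simulation rests on the omega-square spectrum, which can be sketched as follows; the moments and corner frequencies below are nominal, and the subevent contributions are simply summed for illustration, whereas the actual model combines them with empirical phase characteristics.

```python
# Illustrative omega-square displacement source spectrum for point subevents:
# flat at low frequency and falling off as f^-2 above the corner frequency.
# Constants are nominal, not the paper's calibrated values.
import numpy as np

def omega_square_displacement(f, M0, fc):
    """Displacement source spectrum ~ M0 / (1 + (f/fc)^2)."""
    return M0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 1.3, 200)                          # 0.1-20 Hz
# Three subevents with different seismic moments and corner frequencies.
subevents = [(1.0e18, 0.3), (5.0e17, 0.5), (2.0e17, 0.8)]  # (M0 [N m], fc [Hz])
total = sum(omega_square_displacement(f, M0, fc) for M0, fc in subevents)

# In the pseudo point-source model this source spectrum would then be
# multiplied by a path spectrum and an empirical site amplification factor.
print("spectral amplitude at 0.2, 1.0 and 5.0 Hz:",
      [float(np.interp(x, f, total)) for x in (0.2, 1.0, 5.0)])
```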

  2. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors.

    PubMed

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-08-31

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto the structural entities. The physical model in geotechnical engineering, which can accurately simulate the construction processes and their effects on the stability of underground caverns on the basis of satisfying the similarity principles, is an actual physical entity. Using a physical model test of underground caverns in Shuangjiangkou Hydropower Station, FBG sensors were used to determine how to measure the small displacements of key monitoring points in the large-scale physical model during excavation. When building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by cutting an opening and fixing them with quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than other conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout the physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction.

  4. Fragmentation approach to the point-island model with hindered aggregation: Accessing the barrier energy

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2017-07-01

    We study the effect of hindered aggregation on the island formation process in a one- (1D) and two-dimensional (2D) point-island model for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by the length l_a. For l_a = 0 the islands behave as perfect sinks while for l_a → ∞ they behave as reflecting boundaries. For intermediate values of l_a, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and l_a. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model, which is based on an approximate description of nucleation inside of the gaps for 1D and the CZs for 2D. In both cases, the nucleation is described by using two different physically rooted probabilities, which are related with the microscopic parameters of the model (i and l_a). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of l_a and i = 1, 2, and 3.
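
    A rough kinetic Monte Carlo rendering of the 1D point-island model with hindered aggregation (critical nucleus size i = 1) is sketched below; the sticking probability 1/(1 + l_a) is one plausible discretization of the attachment barrier, and all rates and sizes are illustrative rather than the paper's.

```python
# Rough KMC sketch of the 1D point-island model with a hindered-aggregation
# barrier for i = 1: a monomer stepping onto an island sticks with probability
# 1/(1 + l_a), interpolating between perfect sinks (l_a = 0) and reflecting
# islands (l_a -> infinity). Rates and sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
L, n_deposit, hops_per_deposit, l_a = 10_000, 1_000, 50, 5.0
p_stick = 1.0 / (1.0 + l_a)

EMPTY, MONOMER, ISLAND = 0, 1, 2
lattice = np.zeros(L, dtype=np.int8)
monomers = []                          # positions of free monomers

for _ in range(n_deposit):
    site = rng.integers(L)
    if lattice[site] == EMPTY:         # deposition on occupied sites discarded
        lattice[site] = MONOMER
        monomers.append(site)
    for _ in range(hops_per_deposit):
        if not monomers:
            break
        k = rng.integers(len(monomers))
        x = monomers[k]
        nx = (x + rng.choice((-1, 1))) % L
        if lattice[nx] == EMPTY:                      # free diffusion step
            lattice[x], lattice[nx] = EMPTY, MONOMER
            monomers[k] = nx
        elif lattice[nx] == MONOMER:                  # i = 1: nucleate island
            lattice[x], lattice[nx] = EMPTY, ISLAND
            monomers.pop(k)
            monomers.remove(nx)
        elif rng.random() < p_stick:                  # hindered attachment
            lattice[x] = EMPTY
            monomers.pop(k)
        # otherwise the monomer is reflected and stays at x

island_density = np.count_nonzero(lattice == ISLAND) / L
print(f"l_a = {l_a}: island density {island_density:.4f}, "
      f"monomer density {len(monomers) / L:.4f}")
```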

  5. Adhesion of leukocytes under oscillating stagnation point conditions: a numerical study.

    PubMed

    Walker, P G; Alshorman, A A; Westwood, S; David, T

    2002-01-01

    Leukocyte recruitment from blood to the endothelium plays an important role in atherosclerotic plaque formation. Cells show a primary and secondary adhesive process, with primary bonds responsible for capture and rolling and secondary bonds for arrest. Our objective was to investigate the role played by this process on the adhesion of leukocytes in complex flow. Cells were modelled as rigid spheres with spring-like adhesion molecules which formed bonds with endothelial receptors. Models of bond kinetics and Newton's laws of motion were solved numerically to determine cell motion. Fluid force was obtained from the local shear rate given by a CFD simulation of the flow over a backward-facing step. In stagnation point flow the shear rate near the stagnation point has a large gradient such that adherent cells in this region roll to a high shear region, preventing permanent adhesion. This is enhanced if a small time-dependent perturbation is imposed upon the stagnation point. For lower shear rates the cell rolling velocity may be such that secondary bonds have time to form. These bonds resist the lower fluid forces and consequently there is a relatively large permanent adhesion region.

  6. Efficacy of a Process Improvement Intervention on Delivery of HIV Services to Offenders: A Multisite Trial

    PubMed Central

    Shafer, Michael S.; Dembo, Richard; del Mar Vega-Debién, Graciela; Pankow, Jennifer; Duvall, Jamieson L.; Belenko, Steven; Frisman, Linda K.; Visher, Christy A.; Pich, Michele; Patterson, Yvonne

    2014-01-01

    Objectives. We tested a modified Network for the Improvement of Addiction Treatment (NIATx) process improvement model to implement improved HIV services (prevention, testing, and linkage to treatment) for offenders under correctional supervision. Methods. As part of the Criminal Justice Drug Abuse Treatment Studies, Phase 2, the HIV Services and Treatment Implementation in Corrections study conducted 14 cluster-randomized trials in 2011 to 2013 at 9 US sites, where one correctional facility received training in HIV services and coaching in a modified NIATx model and the other received only HIV training. The outcome measure was the odds of successful delivery of an HIV service. Results. The results were significant at the .05 level, and the point estimate for the odds ratio was 2.14. Although overall the results were heterogeneous, the experiments that focused on implementing HIV prevention interventions had a 95% confidence interval that exceeded the no-difference point. Conclusions. Our results demonstrate that a modified NIATx process improvement model can effectively implement improved rates of delivery of some types of HIV services in correctional environments. PMID:25322311

  7. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
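
    A bare-bones version of the category-weighted idea: prescribe how many sample points each category receives, then undo the prescription with importance weights p/q. Only two categories and a toy process rate are used below, whereas SILHS itself uses eight categories and real microphysics.

```python
# Bare-bones category-weighted importance sampling: partition the subgrid
# domain into categories, prescribe sampling densities, and correct with
# importance weights. The microphysics rate is a toy stand-in.
import numpy as np

rng = np.random.default_rng(11)

# Category probabilities within the grid box and a toy process-rate function.
p_cat = {"cloud": 0.2, "clear": 0.8}
rate = {"cloud": lambda n: rng.gamma(2.0, 1.0, n),   # active microphysics
        "clear": lambda n: np.zeros(n)}              # nothing happens

# Prescribed sampling densities: oversample the important cloudy region.
q_cat = {"cloud": 0.7, "clear": 0.3}
n_total = 200

estimate = 0.0
for cat in p_cat:
    n = int(round(q_cat[cat] * n_total))
    w = p_cat[cat] / q_cat[cat]                      # importance weight
    estimate += w * rate[cat](n).mean()

print(f"grid-box-averaged rate estimate: {estimate:.3f} "
      f"(exact: {p_cat['cloud'] * 2.0:.3f})")
```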

  8. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
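
    The forward image-formation model the simulator implements can be caricatured in a few lines: convolve the true fluorophore distribution with a point-spread function and add shot and read noise. A Gaussian PSF stands in here for the microscope's real PSF.

```python
# Toy forward model of fluorescence image formation: PSF convolution plus
# sensor noise. A Gaussian PSF stands in for the real (e.g. measured) PSF.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

specimen = np.zeros((128, 128))
specimen[40:42, 20:100] = 50.0          # a thin fluorescing filament
specimen[80, 64] = 400.0                # a point-like structure

blurred = gaussian_filter(specimen, sigma=2.5)            # PSF convolution
image = rng.poisson(blurred) + rng.normal(10.0, 2.0, blurred.shape)  # shot + read noise

print("peak true intensity:", specimen.max())
print("peak observed intensity:", round(float(image.max()), 1))
```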

  10. Advanced model for the prediction of the neutron-rich fission product yields

    NASA Astrophysics Data System (ADS)

    Rubchenya, V. A.; Gorelov, D.; Jokinen, A.; Penttilä, H.; Äystö, J.

    2013-12-01

    A consistent model for the description of the independent fission product formation cross sections in spontaneous fission and in neutron- and proton-induced fission at energies up to 100 MeV is developed. The model combines a new version of the two-component exciton model with a time-dependent statistical model of the fusion-fission process, including dynamical effects, for accurate calculation of the nucleon composition and excitation energy of the fissioning nucleus at the scission point. For each member of the compound nucleus ensemble at the scission point, the primary fission fragment characteristics: kinetic and excitation energies and their yields are calculated using the scission-point fission model with inclusion of the nuclear shell and pairing effects, and a multimodal approach. The charge distribution of the primary fragment isobaric chains was considered as a result of the frozen quantal fluctuations of the isovector nuclear matter density at the scission point with the finite neck radius. Model parameters were obtained from the comparison of the predicted independent product fission yields with the experimental results and with the neutron-rich fission product data measured with a Penning trap at the Accelerator Laboratory of the University of Jyväskylä (JYFLTRAP).

  11. Quantum Critical Higgs

    NASA Astrophysics Data System (ADS)

    Bellazzini, Brando; Csáki, Csaba; Hubisz, Jay; Lee, Seung J.; Serra, Javi; Terning, John

    2016-10-01

    The appearance of the light Higgs boson at the LHC is difficult to explain, particularly in light of naturalness arguments in quantum field theory. However, light scalars can appear in condensed matter systems when parameters (like the amount of doping) are tuned to a critical point. At zero temperature these quantum critical points are directly analogous to the finely tuned standard model. In this paper, we explore a class of models with a Higgs near a quantum critical point that exhibits non-mean-field behavior. We discuss the parametrization of the effects of a Higgs emerging from such a critical point in terms of form factors, and present two simple realistic scenarios based on either generalized free fields or a 5D dual in anti-de Sitter space. For both of these models, we consider the processes gg → ZZ and gg → hh, which can be used to gain information about the Higgs scaling dimension and IR transition scale from the experimental data.

  12. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

    Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements with the use of industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points that is characterized by considerable measuring noise, the presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain, from the points contained in the cloud, vector information that describes the reference models, the data obtained during a measurement should be subjected to appropriate processing operations. The present paper analyses the suitability of the methods known as RANdom Sample Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
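
    For reference, RANSAC (the first of the three methods) reduces to a short loop: fit a plane to three random points and keep the candidate with the most inliers. Thresholds and counts below are illustrative.

```python
# Compact RANSAC plane estimator: repeatedly fit a plane to three random
# points and keep the candidate supported by the most inliers.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cloud: a noisy tilted plane plus gross outliers.
n = 1000
xy = rng.uniform(0, 1, (n, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.5 + rng.normal(0, 0.005, n)
pts = np.column_stack([xy, z])
pts[: n // 3, 2] += rng.uniform(-0.5, 0.5, n // 3)   # outliers

best_inliers, best_model = 0, None
for _ in range(300):
    p0, p1, p2 = pts[rng.choice(n, 3, replace=False)]
    normal = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(normal)
    if norm < 1e-12:
        continue                                     # degenerate sample
    normal /= norm
    dist = np.abs((pts - p0) @ normal)               # point-plane distances
    inliers = np.count_nonzero(dist < 0.02)
    if inliers > best_inliers:
        best_inliers, best_model = inliers, (normal, p0)

normal, p0 = best_model
print(f"inliers: {best_inliers}/{n}, plane normal: {np.round(normal, 3)}")
```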

  13. Pedagogical Technology of Improving the Students' Viability Levels in the Process of Mastering Foreign Language

    ERIC Educational Resources Information Center

    Dmitrienko, Nadezhda; Ershova, Svetlana; Konovalenko, Tatiana; Kutsova, Elvira; Yurina, Elena

    2015-01-01

    The article points out that the process of mastering foreign language stimulates students' personal, professional and cultural growth, improving linguistic, communicative competences and viability levels. A proposed pedagogical technology of modeling different communicative situations has a serious synergetic potential for students' self organized…

  14. In Search of a Unified Model of Language Contact

    ERIC Educational Resources Information Center

    Winford, Donald

    2013-01-01

    Much previous research has pointed to the need for a unified framework for language contact phenomena -- one that would include social factors and motivations, structural factors and linguistic constraints, and psycholinguistic factors involved in processes of language processing and production. While Contact Linguistics has devoted a great deal…

  15. Trade-off analysis of modes of data handling for earth resources (ERS), volume 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Data handling requirements are reviewed for earth observation missions along with likely technology advances. Parametric techniques for synthesizing potential systems are developed. Major tasks include: (1) review of the sensors under development and extensions of or improvements in these sensors; (2) development of mission models for missions spanning land, ocean, and atmosphere observations; (3) summary of data handling requirements including the frequency of coverage, timeliness of dissemination, and geographic relationships between points of collection and points of dissemination; (4) review of data routing to establish ways of getting data from the collection point to the user; (5) on-board data processing; (6) communications link; and (7) ground data processing. A detailed synthesis of three specific missions is included.

  16. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line computation.

  17. Formal Process Modeling to Improve Human Decision-Making in Test and Evaluation Acoustic Range Control

    DTIC Science & Technology

    2017-09-01

    Approved for public release; distribution is unlimited. Test and...ambiguities and identify high-value decision points? This thesis explores how formalization of these experience-based decisions as a process model...representing a T&E event may reveal high-value decision nodes where certain decisions carry more weight or potential for impacts to a successful test. The

  18. Matrix Determination of Reflectance of Hidden Object via Indirect Photography

    DTIC Science & Technology

    2012-03-01

    the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and...Distribution Function (BRDF). [Figure 1: Indirect photography with camera field of view dictated by point of illumination.] In an...would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind

  19. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
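
    The simple point count model can be made concrete with a binomial detection sketch: each of N birds present is detected with probability p, so raw counts estimate Np rather than N, and detection noise inflates the count variance. Values are illustrative.

```python
# Small simulation of the incomplete-count model: raw counts estimate N*p,
# not N, and their variance mixes detectability with real variation.
import numpy as np

rng = np.random.default_rng(13)

N_true = 30                 # birds actually present at each point
p_detect = 0.4              # per-bird detection probability during the count
counts = rng.binomial(N_true, p_detect, size=1000)

print("mean raw count:", counts.mean())           # ~ N*p = 12, biased low
print("count variance:", counts.var())            # detection noise alone
print("detection-corrected estimate:", counts.mean() / p_detect)
```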

  20. Wavelength dependence in radio-wave scattering and specular-point theory

    NASA Technical Reports Server (NTRS)

    Tyler, G. L.

    1976-01-01

    Radio-wave scattering from natural surfaces contains a strong quasispecular component that at fixed wavelengths is consistent with specular-point theory, but often has a strong wavelength dependence that is not predicted by physical optics calculations under the usual limitations of specular-point models. Wavelength dependence can be introduced by a physical approximation that preserves the specular-point assumptions with respect to the radii of curvature of a fictitious, effective scattering surface obtained by smoothing the actual surface. A uniform low-pass filter model of the scattering process yields explicit results for the effective surface roughness versus wavelength. Interpretation of experimental results from planetary surfaces indicates that the asymptotic surface height spectral densities fall at least as fast as an inverse cube of spatial frequency. Asymptotic spectral densities for Mars and portions of the lunar surface evidently decrease more rapidly.

  1. Leaving home: how older adults prepare for intensive volunteering.

    PubMed

    Cheek, Cheryl; Piercy, Kathleen W; Grainger, Sarah

    2015-03-01

    Using the concepts in the Fogg Behavioral Model, 37 volunteers aged 50 and older described their preparation for intensive volunteering with faith-based organizations. Their multistage preparation process included decision points where respondents needed to choose whether to drop out or continue preparation. Ability was a stronger determinant of serving than motivation, particularly in terms of health and finances. This model can facilitate understanding of the barriers to volunteering and aid organizations in tailoring support at crucial points for potential older volunteers in intensive service. © The Author(s) 2013.

  2. Exploring a potential energy surface by machine learning for characterizing atomic transport

    NASA Astrophysics Data System (ADS)

    Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro

    2018-03-01

    We propose a machine-learning method for evaluating the potential barrier governing atomic transport based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
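
    A schematic version of the idea, in one dimension: fit a Gaussian process to a few expensive PES evaluations, draw random samples of the whole profile from the posterior, and read off the implied distribution of the barrier height. The kernel, potential, and evaluation points are all stand-ins.

```python
# 1D caricature of GP-based barrier evaluation: sample whole PES profiles from
# a Gaussian process posterior and examine the distribution of barrier heights.
import numpy as np

def rbf(a, b, ell=0.4, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

true_pes = lambda x: 0.8 * np.exp(-((x - 0.5) ** 2) / 0.02) + 0.2 * x

rng = np.random.default_rng(4)
X_obs = np.array([0.0, 0.2, 0.45, 0.7, 1.0])   # a few expensive evaluations
y_obs = true_pes(X_obs)

X_star = np.linspace(0, 1, 200)
K = rbf(X_obs, X_obs) + 1e-8 * np.eye(len(X_obs))
K_s = rbf(X_obs, X_star)
K_ss = rbf(X_star, X_star)

Kinv = np.linalg.inv(K)   # fine at this size; use Cholesky for larger problems
mu = K_s.T @ Kinv @ y_obs
cov = K_ss - K_s.T @ Kinv @ K_s

samples = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(X_star)), 500)
barriers = samples.max(axis=1) - samples[:, 0]   # barrier relative to start
print(f"barrier estimate: {barriers.mean():.3f} +/- {barriers.std():.3f} "
      f"(true: {true_pes(X_star).max() - true_pes(0.0):.3f})")
```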

  3. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.

  4. Spectrum of classes of point emitters of electromagnetic wave fields.

    PubMed

    Castañeda, Román

    2016-09-01

    The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.

  5. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case studies have been conducted using a variety of point densities, terrain types and building densities. The results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011. In addition to the standard geospatial applications and the UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.

  6. Safety Analysis of Soybean Processing for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Hentges, Dawn L.

    1999-01-01

    Soybeans (cv. Hoyt) is one of the crops planned for food production within the Advanced Life Support System Integration Testbed (ALSSIT), a proposed habitat simulation for long duration lunar/Mars missions. Soybeans may be processed into a variety of food products, including soymilk, tofu, and tempeh. Due to the closed environmental system and importance of crew health maintenance, food safety is a primary concern on long duration space missions. Identification of the food safety hazards and critical control points associated with the closed ALSSIT system is essential for the development of safe food processing techniques and equipment. A Hazard Analysis Critical Control Point (HACCP) model was developed to reflect proposed production and processing protocols for ALSSIT soybeans. Soybean processing was placed in the type III risk category. During the processing of ALSSIT-grown soybeans, critical control points were identified to control microbiological hazards, particularly mycotoxins, and chemical hazards from antinutrients. Critical limits were suggested at each CCP. Food safety recommendations regarding the hazards and risks associated with growing, harvesting, and processing soybeans; biomass management; and use of multifunctional equipment were made in consideration of the limitations and restraints of the closed ALSSIT.

  7. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time-consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level through automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
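
    A toy 3D Hough transform for plane detection, the core of the pipeline above: each point votes for the (theta, phi, rho) cells of planes through it, and strong accumulator cells correspond to planes. Resolution and sizes are kept deliberately small.

```python
# Toy 3D Hough transform for plane detection: points vote for plane parameter
# cells (theta, phi, rho); the strongest cell gives the dominant plane.
import numpy as np

rng = np.random.default_rng(14)
pts = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                       np.full(500, 0.4) + rng.normal(0, 0.004, 500)])  # z=0.4 floor
pts = np.vstack([pts, rng.uniform(0, 1, (200, 3))])                     # clutter

thetas = np.linspace(0, np.pi / 2, 10)    # inclination of the plane normal
phis = np.linspace(0, np.pi, 12)          # azimuth of the plane normal
rho_step = 0.02
acc = {}

for theta in thetas:
    for phi in phis:
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi), np.cos(theta)])
        rhos = np.round((pts @ n) / rho_step).astype(int)
        for rho in np.unique(rhos):
            key = (round(theta, 3), round(phi, 3), int(rho))
            acc[key] = acc.get(key, 0) + int(np.sum(rhos == rho))

(theta, phi, rho), votes = max(acc.items(), key=lambda kv: kv[1])
print(f"best plane: theta={theta}, phi={phi}, "
      f"rho={rho * rho_step:.2f}, votes={votes}")
```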

  8. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online method for estimating the cutting error by analyzing internal sensor readings. Internal sensors of the numerical control (NC) machine tool are used to avoid installation problems. A mathematical model is proposed to estimate the cutting error by computing the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on gear cutting theory. To verify the effectiveness of the proposed model, it was evaluated in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele, Fatuzzo; Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci

    Laser scanning is a technology that allows the geometry of objects to be surveyed in a short time with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that allows the object to be reconstructed and studied for design, restoration and/or conservation. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates of high density and accuracy with radiometric RGB tones. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed by closed-source software, whose copyright restricts free use, free and open source software can be highly competitive in performance; it can be used freely and offers the possibility to inspect and even customize the source code. The project started at the Faculty of Engineering in Catania is aimed at finding a valuable free and open source tool, MeshLab (Italian software for data processing), to be compared with reference closed-source software for data processing, i.e. RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.

  10. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    PubMed

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
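
    The flavour of a hard core point process can be conveyed with classical Matérn type-II thinning, used here as a stand-in for the paper's OHCPP: every Poisson point receives a random mark and survives only if no point with a smaller mark lies within the sensing radius.

```python
# Matérn type-II hard-core thinning of a Poisson process, a classical model
# for which transmitters survive carrier sensing (a stand-in for the OHCPP).
import numpy as np

rng = np.random.default_rng(6)
lam, side, r_sense = 200, 1.0, 0.06    # intensity, region size, sensing radius

n = rng.poisson(lam * side * side)
pts = rng.uniform(0, side, (n, 2))
marks = rng.uniform(0, 1, n)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
# keep point i iff every neighbour within r_sense has a larger mark
retained = np.array([np.all(marks[d[i] < r_sense] > marks[i])
                     for i in range(n)])

print(f"Poisson points: {n}, retained after hard-core thinning: "
      f"{int(retained.sum())}")
```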

  11. Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Dorman, J. L.

    1987-01-01

    The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.

  12. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
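
    A minimal NEB iteration on a 2D toy landscape is sketched below: interior images feel the perpendicular component of the true force plus a spring force along the local tangent, and relax toward an optimal path between fixed endpoints. The potential is illustrative, not an ionospheric refractive-index model.

```python
# Minimal NEB relaxation on a 2D toy potential: interior images move under
# the perpendicular true force plus spring forces along the chain tangent.
import numpy as np

def V(p):
    x, y = p[..., 0], p[..., 1]
    return (1 - x) ** 2 + 2.0 * (y - x ** 2) ** 2   # curved-valley landscape

def grad_V(p, eps=1e-6):
    g = np.zeros_like(p)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = eps
        g[..., k] = (V(p + dp) - V(p - dp)) / (2 * eps)
    return g

n_img, k_spring, step = 12, 2.0, 0.02
path = np.linspace([-1.0, 1.0], [1.0, 1.0], n_img)  # initial straight chain

for _ in range(2000):
    tau = path[2:] - path[:-2]                      # central tangents
    tau /= np.linalg.norm(tau, axis=1, keepdims=True)
    g = grad_V(path[1:-1])
    g_perp = g - (g * tau).sum(axis=1, keepdims=True) * tau
    spring = k_spring * (np.linalg.norm(path[2:] - path[1:-1], axis=1)
                         - np.linalg.norm(path[1:-1] - path[:-2], axis=1))
    path[1:-1] += step * (-g_perp + spring[:, None] * tau)

print("relaxed path energies:", np.round(V(path), 3))
```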

  13. Open Pit Mine 3d Mapping by Tls and Digital Photogrammetry: 3d Model Update Thanks to a Slam Based Approach

    NASA Astrophysics Data System (ADS)

    Vassena, G.; Clerici, A.

    2018-05-01

    State-of-the-art 3D surveying technologies, if correctly applied, allow 3D coloured models of large open pit mines to be obtained using different techniques such as terrestrial laser scanning (TLS) with images, combined with UAV-based digital photogrammetry. GNSS and/or total station measurements are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open pit mine located in Botticino, a famous marble extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds combined with RGB images and digital photogrammetry from UAV have been used to map a large part of the quarry. A 3D point cloud and mesh model were obtained by a straightforward, rigorous and well-known procedure. After the description of the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. For this task a SLAM-based approach is described, based on an innovative instrument capable of running an automatic localization process and real-time, in-the-field change detection analysis.

  14. Specifications of a Simulation Model for a Local Area Network Design in Support of a Stock Point Logistics Integrated Communication Environment (SPLICE).

    DTIC Science & Technology

    1983-06-01

    constrained at each step. Use of discrete simulation can be a powerful tool in this process if its role is carefully planned. The gross behavior of the...by projecting: the arrival of units of work at SPLICE processing facilities (workload analysis); the amount of processing resources consumed in

  15. Real time SAR processing

    NASA Technical Reports Server (NTRS)

    Premkumar, A. B.; Purviance, J. E.

    1990-01-01

    A simplified model for the SAR imaging problem is presented. The model is based on the geometry of the SAR system. Using this model an expression for the entire phase history of the received SAR signal is formulated. From the phase history, it is shown that the range and the azimuth coordinates for a point target image can be obtained by processing the phase information during the intrapulse and interpulse periods respectively. An architecture for a VLSI implementation for the SAR signal processor is presented which generates images in real time. The architecture uses a small number of chips, a new correlation processor, and an efficient azimuth correlation process.

  16. Jupiter Europa Orbiter Architecture Definition Process

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Shishko, Robert

    2011-01-01

    The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.

  17. Diffusion of Web Supported Instruction in Higher Education--The Case of Tel-Aviv University

    ERIC Educational Resources Information Center

    Soffer, Tal; Nachmias, Rafi; Ram, Judith

    2010-01-01

    This paper describes a study that focused on long-term web-supported learning diffusion among lecturers at Tel Aviv University (TAU), from an organizational point of view. The theoretical models we used to examine this process are Rogers' model for "Diffusion of Innovation" (1995) and Bass's "Diffusion Model" (1969). The study…
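
    The Bass model invoked in the study has a convenient closed form for the cumulative adoption fraction, F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p)exp(-(p+q)t)), with innovation coefficient p and imitation coefficient q; the coefficient values below are illustrative.

```python
# The Bass (1969) adoption curve in closed form; parameter values are
# illustrative, not estimates from the TAU data.
import numpy as np

def bass_cdf(t, p, q):
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(0, 11)                     # years since the technology launch
adoption = bass_cdf(t, p=0.03, q=0.4)    # cumulative fraction of adopters
print(np.round(adoption, 3))
```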

  18. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building-blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
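
    The interaction model can be sketched with a birth-death Metropolis-Hastings sampler for an (unmarked, for brevity) Strauss process on the unit square, with density proportional to beta^n * gamma^(s_r(x)), where s_r counts r-close pairs; all parameters are illustrative.

```python
# Birth-death Metropolis-Hastings sampler for an unmarked Strauss point
# process on the unit square: f(x) ~ beta^n(x) * gamma^(s_r(x)).
import numpy as np

rng = np.random.default_rng(8)
beta, gamma, r = 100.0, 0.3, 0.05   # intensity, inhibition, interaction radius

def n_close(u, pts):
    """Number of points of pts within distance r of location u."""
    if len(pts) == 0:
        return 0
    return int(np.sum(np.linalg.norm(pts - u, axis=1) < r))

pts = rng.uniform(0, 1, (20, 2)).tolist()
for _ in range(20000):
    if rng.random() < 0.5 or not pts:            # propose a birth
        u = rng.uniform(0, 1, 2)
        accept = beta * gamma ** n_close(u, np.array(pts)) / (len(pts) + 1)
        if rng.random() < accept:
            pts.append(u.tolist())
    else:                                        # propose a death
        i = rng.integers(len(pts))
        u = np.array(pts[i])
        rest = np.array(pts[:i] + pts[i + 1:])
        accept = len(pts) / (beta * gamma ** n_close(u, rest))
        if rng.random() < accept:
            pts.pop(i)

print("points in the sampled Strauss pattern:", len(pts))
```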

  19. Optimization of phase feeding of starter, grower, and finisher diets for male broilers by mixture experimental design: forty-eight-day production period.

    PubMed

    Roush, W B; Boykin, D; Branton, S L

    2004-08-01

    A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., length of time the diets were fed) add up to a unity (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 birds per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.

  20. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
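
    A stylized stand-in for testing the distance effect: each case contributes a tumour site among candidate distances from the source, modelled with a distance-decay likelihood proportional to exp(-theta*d), and theta = 0 (no distance effect) is tested with a likelihood ratio. This ignores the covariate structure and boundary-value subtleties handled in the paper.

```python
# Stylized distance-decay test: conditional likelihood of case locations
# with p(d) ~ exp(-theta*d) over candidate sites, and an LRT of theta = 0.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(9)
d_sites = np.linspace(0.5, 12.0, 40)          # candidate distances [cm]
theta_true, n_cases = 0.35, 150

p_true = np.exp(-theta_true * d_sites)
p_true /= p_true.sum()
cases = rng.choice(d_sites, size=n_cases, p=p_true)  # simulated tumour sites

def neg_loglik(theta):
    w = np.exp(-theta * d_sites)
    return -(np.sum(-theta * cases) - n_cases * np.log(w.sum()))

fit = minimize_scalar(neg_loglik, bounds=(0.0, 5.0), method="bounded")
lr = 2.0 * (neg_loglik(0.0) - fit.fun)
print(f"theta_hat = {fit.x:.3f}, LRT p-value = {chi2.sf(lr, df=1):.2e}")
```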

  1. Phase-plane analysis of the totally asymmetric simple exclusion process with binding kinetics and switching between antiparallel lanes

    PubMed Central

    Kuan, Hui-Shun; Betterton, Meredith D.

    2016-01-01

    Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions. PMID:27627345
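
    A bare-bones random-sequential simulation of the two-antiparallel-lane TASEP with binding kinetics and lane switching is sketched below, using periodic rather than the paper's open boundaries; all rates are illustrative.

```python
# Random-sequential simulation of a two-antiparallel-lane TASEP with
# Langmuir-type binding kinetics and lane switching (periodic boundaries).
import numpy as np

rng = np.random.default_rng(10)
L, steps = 200, 1_000_000
k_on, k_off, k_switch = 0.002, 0.001, 0.01  # bind, unbind, lane-switch rates
lanes = np.zeros((2, L), dtype=np.int8)     # lane 0 hops right, lane 1 left

for _ in range(steps):
    lane, site = rng.integers(2), rng.integers(L)
    if lanes[lane, site] == 0:
        if rng.random() < k_on:                      # binding from solution
            lanes[lane, site] = 1
        continue
    u = rng.random()
    if u < k_off:                                    # unbinding
        lanes[lane, site] = 0
    elif u < k_off + k_switch:                       # switch lanes if empty
        if lanes[1 - lane, site] == 0:
            lanes[lane, site], lanes[1 - lane, site] = 0, 1
    else:                                            # directed hop (exclusion)
        nxt = (site + 1) % L if lane == 0 else (site - 1) % L
        if lanes[lane, nxt] == 0:
            lanes[lane, site], lanes[lane, nxt] = 0, 1

print("steady-state densities (lane 0, lane 1):",
      np.round(lanes.mean(axis=1), 3))
```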

  2. Complexity and multifractal behaviors of multiscale-continuum percolation financial system for Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Zeng, Yayun; Wang, Jun; Xu, Kaixuan

    2017-04-01

    A new financial agent-based time series model is developed and investigated by multiscale-continuum percolation system, which can be viewed as an extended version of continuum percolation system. In this financial model, for different parameters of proportion and density, two Poisson point processes (where the radii of points represent the ability of receiving or transmitting information among investors) are applied to model a random stock price process, in an attempt to investigate the fluctuation dynamics of the financial market. To validate its effectiveness and rationality, we compare the statistical behaviors and the multifractal behaviors of the simulated data derived from the proposed model with those of the real stock markets. Further, the multiscale sample entropy analysis is employed to study the complexity of the returns, and the cross-sample entropy analysis is applied to measure the degree of asynchrony of return autocorrelation time series. The empirical results indicate that the proposed financial model can simulate and reproduce some significant characteristics of the real stock markets to a certain extent.

  3. Assessment of variability in the hydrological cycle of the Loess Plateau, China: examining dependence structures of hydrological processes

    NASA Astrophysics Data System (ADS)

    Guo, A.; Wang, Y.

    2017-12-01

    Investigating variability in the dependence structures of hydrological processes is of critical importance for understanding the mechanisms of hydrological cycles in changing environments. Focusing on this topic, the present work involves the following: (1) identifying and eliminating serial correlation and conditional heteroscedasticity in monthly streamflow (Q), precipitation (P) and potential evapotranspiration (PE) series using the ARMA-GARCH model (ARMA: autoregressive moving average; GARCH: generalized autoregressive conditional heteroscedasticity); (2) describing the dependence structures of hydrological processes using a partial copula coupled with the ARMA-GARCH model and identifying their variability via a copula-based likelihood-ratio test; and (3) determining the conditional probability of annual Q under different climate scenarios on the basis of the above results. This framework enables us to describe hydrological variables in the presence of conditional heteroscedasticity and to examine the dependence structures of hydrological processes while excluding the influence of covariates, by using the partial copula-based ARMA-GARCH model. Eight major catchments across the Loess Plateau (LP) are used as study regions. Results indicate that (1) the occurrence of change points in the dependence structures of Q and P (PE) varies across the LP; (2) the change points of the P-PE dependence structures in all regions correspond almost fully to the initiation of global warming, i.e., the early 1980s; and (3) conditional probabilities of annual Q under various P and PE scenarios are estimated from the 3-dimensional joint distribution of (Q, P, PE) based on the above change points. These findings shed light on the mechanisms of the hydrological cycle and can guide water supply planning and management, particularly in changing environments.
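
    A minimal sketch of step (1), assuming placeholder GARCH(1,1) coefficients: the recursion below produces the conditional variance series used to standardize ARMA residuals before copula fitting. In practice the coefficients are estimated jointly with the ARMA part by maximum likelihood.

    ```python
    import numpy as np

    # GARCH(1,1) variance recursion used to standardize a hydrological series.
    # Coefficients omega/a/b are placeholders, not the study's estimates.
    def garch11_std_residuals(eps, omega=0.1, a=0.1, b=0.8):
        """eps: ARMA residuals; returns residuals standardized by sigma_t."""
        sigma2 = np.empty_like(eps)
        sigma2[0] = eps.var()                  # initialize at sample variance
        for t in range(1, len(eps)):
            sigma2[t] = omega + a * eps[t - 1] ** 2 + b * sigma2[t - 1]
        return eps / np.sqrt(sigma2)

    rng = np.random.default_rng(42)
    eps = rng.standard_normal(500)             # stand-in for ARMA residuals
    z = garch11_std_residuals(eps)
    print(z.std())   # roughly 1 once heteroscedasticity is removed
    ```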

  4. Interpersonal Emotion Regulation Model of Mood and Anxiety Disorders.

    PubMed

    Hofmann, Stefan G

    2014-10-01

    Although social factors are of critical importance in the development and maintenance of emotional disorders, the contemporary view of emotion regulation has been primarily limited to intrapersonal processes. Based on diverse perspectives pointing to the communicative function of emotions, the social processes in self-regulation, and the role of social support, this article presents an interpersonal model of emotion regulation of mood and anxiety disorders. This model provides a theoretical framework to understand and explain how mood and anxiety disorders are regulated and maintained through others. The literature, which provides support for the model, is reviewed and the clinical implications are discussed.

  5. Path-preference cellular-automaton model for traffic flow through transit points and its application to the transcription process in human cells.

    PubMed

    Ohta, Yoshihiro; Nishiyama, Akinobu; Wada, Yoichiro; Ruan, Yijun; Kodama, Tatsuhiko; Tsuboi, Takashi; Tokihiro, Tetsuji; Ihara, Sigeo

    2012-08-01

    We all use path routing every day as we take shortcuts to avoid traffic jams or switch to faster means of transport. Previous models of the traffic flow of RNA polymerase II (RNAPII) during transcription, however, were restricted to one dimension along the DNA template. Here we report the modeling and application of traffic flow in transcription that allows preferential paths of different dimensions, restricted only to visiting certain transit points between the 5' and 3' ends of the gene, as previously introduced. According to its position, an RNAPII protein molecule prefers paths obeying two types of time-evolution rules: one is an asymmetric simple exclusion process (ASEP) along DNA, and the other is a three-dimensional jump between transit points in DNA where RNAPIIs are staying. Simulations based on our model, and comparison with experimental results, reveal how RNAPII molecules are distributed at DNA-loop-formation-related protein binding sites as well as at CTCF insulator proteins (or exons). As time passes after stimulation, the RNAPII density at these sites becomes higher. Apparent far-distance jumps in one dimension are realized by short-range three-dimensional jumps between DNA loops. We confirm the above conjecture by applying our model calculation to the SAMD4A gene and comparing with the experimental results. Our probabilistic model provides possible scenarios for the assembly of RNAPII molecules into transcription factories, where RNAPII and related proteins cooperatively transcribe DNA.

  6. Critical Issues and Key Points from the Survey to the Creation of the Historical Building Information Model: the Case of Santo Stefano Basilica

    NASA Astrophysics Data System (ADS)

    Castagnetti, C.; Dubbini, M.; Ricci, P. C.; Rivola, R.; Giannini, M.; Capra, A.

    2017-05-01

    The new era of design in architecture and civil engineering applications lies in the Building Information Modeling (BIM) approach, based on a 3D geometric model that includes a 3D database. This is easier for new constructions, whereas, when dealing with existing buildings, the creation of the BIM is based on accurate knowledge of the as-built construction. Such knowledge is provided by a 3D survey, often carried out with laser scanning technology or modern photogrammetry, which are able to guarantee an adequate point cloud in terms of resolution and completeness while balancing time consumption and costs against the required final accuracy. The BIM approach for existing buildings, and even more so for historical buildings, is not yet a well-known and deeply discussed process. There are still several choices to be addressed in the process from survey to model, and critical issues to be discussed in the modeling step, particularly when dealing with unconventional elements such as deformed geometries or historical elements. The paper describes a comprehensive workflow that goes through the survey and the modeling, focusing on critical issues and key points for obtaining a reliable BIM of an existing monument. The case study employed to illustrate the workflow is the Basilica of Santo Stefano in Bologna (Italy), a large monumental complex with great religious, historical and architectural assets.

  7. A model-based approach to wildland fire reconstruction using sediment charcoal records

    USGS Publications Warehouse

    Itter, Malcolm S.; Finley, Andrew O.; Hooten, Mevin B.; Higuera, Philip E.; Marlon, Jennifer R.; Kelly, Ryan; McLachlan, Jason S.

    2017-01-01

    Lake sediment charcoal records are used in paleoecological analyses to reconstruct fire history, including the identification of past wildland fires. One challenge of applying sediment charcoal records to infer fire history is the separation of charcoal associated with local fire occurrence and charcoal originating from regional fire activity. Despite a variety of methods to identify local fires from sediment charcoal records, an integrated statistical framework for fire reconstruction is lacking. We develop a Bayesian point process model to estimate the probability of fire associated with charcoal counts from individual-lake sediments and estimate mean fire return intervals. A multivariate extension of the model combines records from multiple lakes to reduce uncertainty in local fire identification and estimate a regional mean fire return interval. The univariate and multivariate models are applied to 13 lakes in the Yukon Flats region of Alaska. Both models resulted in similar mean fire return intervals (100–350 years) with reduced uncertainty under the multivariate model due to improved estimation of regional charcoal deposition. The point process model offers an integrated statistical framework for paleofire reconstruction and extends existing methods to infer regional fire history from multiple lake records with uncertainty following directly from posterior distributions.
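
    The full model couples charcoal counts to fire probabilities, but the core step of turning identified local fires into a mean fire return interval can be illustrated with a conjugate Gamma-Poisson update; all numbers below are invented.

    ```python
    import numpy as np

    # Toy conjugate version of the fire-return-interval estimate: if local
    # fires follow a homogeneous Poisson process with rate lam (fires/yr) and
    # prior lam ~ Gamma(a0, b0), then observing n fires over T years gives
    # posterior lam ~ Gamma(a0 + n, rate b0 + T); the mean fire return
    # interval is summarized by 1/lam.  The paper's model adds the charcoal
    # observation layer and multi-lake pooling, both omitted here.
    rng = np.random.default_rng(0)
    a0, b0 = 1.0, 100.0        # weak prior centred on a 100-yr return interval
    n_fires, T = 42, 8000.0    # hypothetical fires identified in one core
    lam_post = rng.gamma(a0 + n_fires, 1.0 / (b0 + T), size=100_000)
    fri = 1.0 / lam_post
    print(f"posterior mean FRI: {fri.mean():.0f} yr, "
          f"95% CI: ({np.quantile(fri, 0.025):.0f}, {np.quantile(fri, 0.975):.0f})")
    ```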

  8. Current State of the Art Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2017-08-01

    In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.

  9. Determination of Steering Wheel Angles during Car Alignment by Image Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Voegtle, T.

    2016-06-01

    Optical systems for automatic visual inspection are of increasing importance in the field of industrial automation. A new application is the determination of steering wheel angles during wheel track setting in the final inspection of car manufacturing. The camera has to be positioned outside the car to avoid interrupting the processes, and therefore oblique images of the steering wheel must be acquired. Three different computer vision approaches are considered in this paper, i.e. 2D shape-based matching (by means of a plane-to-plane rectification of the oblique images and detection of a shape model with a particular rotation), 3D shape-based matching (by means of a series of different perspectives of the spatial shape of the steering wheel derived from a CAD design model) and point-to-point matching (by means of the extraction of significant elements (e.g. multifunctional buttons) of a steering wheel and a pairwise connection of these points to straight lines). The HALCON system (HALCON, 2016) was used for all software developments and necessary adaptations. As reference, a mechanical balance with an accuracy of 0.1° was used. The quality assessment was based on two different approaches, a laboratory test and a test during the production process. In the laboratory, standard deviations of ±0.035° (2D shape-based matching), ±0.12° (3D approach) and ±0.029° (point-to-point matching) were obtained. The field test of 291 measurements (27 cars with varying poses and angles of the steering wheel) resulted in a detection rate of 100% and standard deviations of ±0.48° (2D matching) and ±0.24° (point-to-point matching). Both methods also fulfil the requirement of real-time processing (three measurements per second).
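
    Point-to-point matching reduces the angle measurement to estimating a rigid 2D rotation between matched point sets. A generic SVD-based (Kabsch) sketch is shown below; it is not HALCON's internal routine, and the synthetic points are invented.

    ```python
    import numpy as np

    # Recover a steering-wheel rotation angle from matched feature points
    # (e.g. multifunction buttons) in the rectified image plane.
    def rotation_angle_deg(P, Q):
        """P, Q: (n, 2) arrays of matched points before/after rotation."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

    # Synthetic check: rotate points by 7.3 degrees and recover the angle.
    rng = np.random.default_rng(3)
    P = rng.uniform(-1, 1, (6, 2))
    t = np.radians(7.3)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    Q = P @ R.T + rng.normal(0, 1e-3, P.shape)      # small measurement noise
    print(rotation_angle_deg(P, Q))                 # ~7.3
    ```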

  10. High Resolution Analysis of Dyke Tips and Segments, Using Drones

    NASA Astrophysics Data System (ADS)

    Dering, G.; Micklethwaite, S.; Cruden, A. R.

    2016-12-01

    We analyse outstanding exposures of dykes from both coastal (Western Australia) and high altitude glacier-polished (Sierra Nevada, California) outcrops, representing intrusion at shallow upper-crustal and mid-crustal conditions, respectively. We covered 10,000 m² of outcrop area, sampling the ground at a scale of 3-5 mm per pixel. Using Structure-from-Motion photogrammetry on ground-based and UAV photographs lacking GPS camera positions (>500 images per study), we generated dense point clouds and calibrated their 3D geometry by selectively using 25-30 ground control points measured by high-precision GPS (40-90 mm error). Ground control points used in the photogrammetric model building process typically yielded a root mean square error (RMSE) of 5 cm. Half the ground control points were withheld from the model building process, and when compared against the model they yielded RMSE values only 6-10% higher than those of the points used for georeferencing, suggesting good internal consistency of the dataset and accuracy relative to the reference frame, at least for the purposes of this study. The structural orientations of the dykes and associated fractures were then extracted digitally using the iterative Random Sample Consensus (RANSAC) method and least-squares plane fitting. Furthermore, fracture intensity relative to the dykes was measured along a series of scanlines, and the running average and variance were calculated. All results were compared against field measurements. Results show that fracture intensity increases toward the dykes in the shallow crustal examples (Western Australia), but no such fractures exist around the mid-crustal (Californian) dykes. Despite this, there is a remarkable uniformity of geometry, and by implication process, between the two dyke sets. In order to extract full value from the big visual data now available to us, the near future requires dedicated research into software solutions for expert-driven, semi-automatic mapping of geology and structure.
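
    A minimal version of the RANSAC-plus-least-squares plane fitting used to extract structural orientations might look as follows; the iteration count and inlier tolerance are illustrative, and production workflows add clustering and orientation quality control.

    ```python
    import numpy as np

    # RANSAC plane fit to a 3D point-cloud patch (e.g. a dyke margin),
    # followed by a least-squares (PCA) refit on the inliers.
    def ransac_plane(pts, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
        best_inliers = None
        for _ in range(n_iter):
            p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-12:
                continue                          # degenerate (collinear) draw
            n = n / np.linalg.norm(n)
            dist = np.abs((pts - p0) @ n)         # point-to-plane distances
            inliers = dist < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        sel = pts[best_inliers]
        c = sel.mean(axis=0)                      # refit: plane normal is the
        n = np.linalg.svd(sel - c, full_matrices=False)[2][-1]  # smallest PC
        return c, n, best_inliers

    rng = np.random.default_rng(1)
    plane = rng.uniform(0, 1, (900, 3))
    plane[:, 2] = 0.3 + rng.normal(0, 0.005, 900)   # noisy horizontal plane
    noise = rng.uniform(0, 1, (100, 3))             # outlier points
    c, n, inl = ransac_plane(np.vstack([plane, noise]))
    print("normal:", np.round(n, 3), "inliers:", inl.sum())
    ```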

  11. A new in vivo animal model to create intervertebral disc degeneration characterized by MRI, radiography, CT/discogram, biochemistry, and histology.

    PubMed

    Zhou, HaoWei; Hou, ShuXun; Shang, WeiLin; Wu, WenWen; Cheng, Yao; Mei, Fang; Peng, BaoGan

    2007-04-15

    A new in vivo sheep model was developed that produces disc degeneration through the injection of 5-bromodeoxyuridine (BrdU) into the intervertebral disc. This process was studied using magnetic resonance imaging (MRI), radiography, CT/discogram, histology, and biochemistry. The objective was to develop a sheep model of intervertebral disc degeneration that more faithfully mimics the pathologic hallmarks of human intervertebral disc degeneration. Recent studies have shown age-related alterations in proteoglycan structure and organization in human intervertebral discs. An animal model based on age-related changes in disc cells can be preferable to more invasive degenerative models that involve directly damaging the matrix of disc tissue. Twelve sheep were injected with BrdU or vehicle (phosphate-buffered saline) into the central region of separate lumbar discs. Intact discs were used as controls. At the 2-, 6-, 10-, and 14-week time points, discs underwent MRI, radiography, histology, and biochemical analyses. A CT/discogram study was performed at the 14-week time point. MRI demonstrated a progressive loss of T2-weighted signal intensity in BrdU-injected discs over the 14-week study period. Radiographic findings included osteophyte formation and disc space narrowing by 10 weeks after BrdU treatment. CT discography demonstrated internal disc disruption in several BrdU-treated discs at the 14-week time point. Histology showed a progressive loss of the normal architecture and cell density of discs from the 2-week to the 14-week time point. A progressive loss of cell proliferation capacity, water content, and proteoglycans was also documented. BrdU injection into the central region of sheep discs resulted in degeneration of the intervertebral discs. This progressive degenerative process was confirmed by MRI and histology, and by observing changes in biochemistry. Degeneration occurred in a manner similar to that observed in human disc degeneration.

  12. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2005-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  13. Automatic 3D Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface accurately and quickly. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3D modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, we aim at the automatic generation of 3D building models from airborne LiDAR data. An approach is proposed that includes automatic point-based classification of the raw LiDAR point cloud using hierarchical rules. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach has been tested in the study area, which contains partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained for the study area verify that 3D building models can be generated successfully from raw LiDAR point cloud data.
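
    As an illustration of point-based hierarchical rules, the sketch below separates ground, vegetation and building returns using height above ground and a neighbourhood planarity score; the thresholds and the rules themselves are hypothetical stand-ins for the TerraScan parameters tuned in the study.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Illustrative hierarchical rules for point-based LiDAR classification.
    def classify(points, ground_z, h_low=0.5, h_high=2.5, planar_min=0.8, k=15):
        """points: (n, 3) returns; ground_z: ground elevation under each point."""
        h = points[:, 2] - ground_z                    # height above ground
        labels = np.zeros(len(points), dtype=int)      # 0 = ground
        labels[h >= h_low] = 1                         # 1 = low vegetation
        high = h >= h_high
        # Planarity from neighbourhood covariance eigenvalues (l1 >= l2 >= l3):
        # (l2 - l3) / l1 is near 1 on planar roofs and low in tree canopy.
        _, idx = cKDTree(points).query(points, k=k)
        planarity = np.empty(len(points))
        for i, nb in enumerate(idx):
            l = np.linalg.eigvalsh(np.cov(points[nb].T))[::-1]  # descending
            planarity[i] = (l[1] - l[2]) / max(l[0], 1e-12)
        labels[high & (planarity >= planar_min)] = 2   # 2 = building roof
        labels[high & (planarity < planar_min)] = 3    # 3 = tree canopy
        return labels

    # Synthetic smoke test on random points above a flat ground plane.
    pts = np.random.default_rng(0).uniform(0, 10, (500, 3))
    print(np.bincount(classify(pts, ground_z=np.zeros(500))))
    ```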

  14. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains if he succeeds, and takes away the points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm, and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
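
    The fair rule can be sketched in a few lines. Here a bet of one reputation point per prediction is assumed; p0 is the probability the reference model assigns to the predicted outcome, and the payoff (1 − p0)/p0 makes the expected gain zero for a forecaster who knows no more than the reference model. The numbers are invented.

    ```python
    # Minimal bookkeeping for the gambling score's fair rule: a success pays
    # (1 - p0) / p0 reputation points per unit bet, a failure costs the bet.
    # Expected gain under the reference model: p0*(1-p0)/p0 - (1-p0) = 0.
    def gambling_score(bets):
        """bets: list of (p0, success) pairs, one per prediction."""
        total = 0.0
        for p0, success in bets:
            total += (1.0 - p0) / p0 if success else -1.0
        return total

    # Three predictions against a reference model that deems them unlikely:
    print(gambling_score([(0.1, True), (0.2, False), (0.05, True)]))  # +27.0
    ```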

  15. Can the History of Science Contribute to Modelling in Physics Teaching? The Case of Galilean Studies and Mario Bunge's Epistemology

    ERIC Educational Resources Information Center

    Machado, Juliana; Braga, Marco Antônio Barbosa

    2016-01-01

    A characterization of the modelling process in science is proposed for science education, based on Mario Bunge's ideas about the construction of models in science. Galileo's "Dialogues" are analysed as a potentially fruitful starting point to implement strategies aimed at modelling in the classroom in the light of that proposal. It is…

  16. Introducing Artificial Neural Networks through a Spreadsheet Model

    ERIC Educational Resources Information Center

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  17. Guiding and Modelling Quality Improvement in Higher Education Institutions

    ERIC Educational Resources Information Center

    Little, Daniel

    2015-01-01

    The article considers the process of creating quality improvement in higher education institutions from the point of view of current organisational theory and social-science modelling techniques. The author considers the higher education institution as a functioning complex of rules, norms and other organisational features and reviews the social…

  18. SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM

    EPA Science Inventory

    A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...

  19. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  1. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
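
    The simplest process-path component can be sketched as a D8-style steepest-descent walk over the DTM; the GPP tool itself also offers random-walk path selection, run-out criteria, sink filling and material deposition, all omitted in this illustration.

    ```python
    import numpy as np

    # D8 steepest-descent walk across a DTM from an initiation cell, stopping
    # at a local sink or flat; a toy stand-in for GPP's process-path step.
    def steepest_descent_path(dtm, start, max_steps=10_000):
        path, (r, c) = [start], start
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
        for _ in range(max_steps):
            cands = [(dtm[r + dr, c + dc], (r + dr, c + dc))
                     for dr, dc in nbrs
                     if 0 <= r + dr < dtm.shape[0] and 0 <= c + dc < dtm.shape[1]]
            z, nxt = min(cands)
            if z >= dtm[r, c]:          # local sink or flat: stop
                break
            path.append(nxt)
            r, c = nxt
        return path

    # Tiny synthetic DTM: an inclined plane with noise.
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:50, 0:50]
    dtm = 0.5 * y + 0.1 * rng.random((50, 50))
    print(steepest_descent_path(dtm, (45, 25))[:5])
    ```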

  2. The effect of response modality on immediate serial recall in dementia of the Alzheimer type.

    PubMed

    Macé, Anne-Laure; Ergis, Anne-Marie; Caza, Nicole

    2012-09-01

    Contrary to traditional models of verbal short-term memory (STM), psycholinguistic accounts assume that temporary retention of verbal materials is an intrinsic property of word processing. Therefore, memory performance will depend on the nature of the STM tasks, which vary according to the linguistic representations they engage. The aim of this study was to explore the effect of response modality on verbal STM performance in individuals with dementia of the Alzheimer Type (DAT), and its relationship with the patients' word-processing deficits. Twenty individuals with mild DAT and 20 controls were tested on an immediate serial recall (ISR) task using the same items across two response modalities (oral and picture pointing) and completed a detailed language assessment. When scoring of ISR performance was based on item memory regardless of item order, a response modality effect was found for all participants, indicating that they recalled more items with picture pointing than with oral response. However, this effect was less marked in patients than in controls, resulting in an interaction. Interestingly, when recall of both item and order was considered, results indicated similar performance between response modalities in controls, whereas performance was worse for pointing than for oral response in patients. Picture-naming performance was also reduced in patients relative to controls. However, in the word-to-picture matching task, a similar pattern of responses was found between groups for incorrectly named pictures of the same items. The finding of a response modality effect in item memory for all participants is compatible with the assumption that semantic influences are greater in picture pointing than in oral response, as predicted by psycholinguistic models. Furthermore, patients' performance was modulated by their word-processing deficits, showing a reduced advantage relative to controls. Overall, the response modality effect observed in this study for item memory suggests that verbal STM performance is intrinsically linked with word processing capacities in both healthy controls and individuals with mild DAT, supporting psycholinguistic models of STM.

  3. The Pseudomonas aeruginosa quorum sensing signal molecule N-(3-oxododecanoyl) homoserine lactone enhances keratinocyte migration and induces Mmp13 gene expression in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paes, Camila, E-mail: camilaquinetti@gmail.com; Nakagami, Gojiro, E-mail: gojiron-tky@umin.ac.jp; Minematsu, Takeo, E-mail: tminematsu-tky@umin.ac.jp

    2012-10-19

    Highlights: • Evidence of the positive effect of AHL on the epithelialization process is provided. • AHL enhances keratinocytes' ability to migrate in an in vitro scratch wound model. • AHL induces the expression of Mmp13. • Topical application of AHL represents a possible strategy to treat chronic wounds. -- Abstract: Re-epithelialization is an essential step of wound healing involving three overlapping keratinocyte functions: migration, proliferation and differentiation. While quorum sensing (QS) is a cell density-dependent signaling system that enables bacteria to regulate the expression of certain genes, the QS molecule N-(3-oxododecanoyl) homoserine lactone (AHL) also exerts effects on mammalian cells in a process called inter-kingdom signaling. Recent studies have shown that AHL improves epithelialization in in vivo wound healing models, but a detailed understanding of the molecular and cellular mechanisms is needed. The present study focused on AHL as a candidate reagent to improve wound healing through direct modulation of keratinocyte activity in the re-epithelialization process. Results indicated that AHL enhances the keratinocytes' ability to migrate in an in vitro scratch wound healing model, probably owing to the high Mmp13 gene expression after AHL treatment revealed by real-time RT-PCR. Inhibition of the activator protein 1 (AP-1) signaling pathway completely prevented the migration of keratinocytes and also resulted in diminished Mmp13 gene expression, suggesting that AP-1 might be essential in AHL-induced migration. Taken together, these results imply that AHL is a promising candidate molecule to improve re-epithelialization through the induction of keratinocyte migration. Further investigation is needed to clarify the mechanism of action and molecular pathway of AHL in the keratinocyte migration process.

  4. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques able to produce 3D point clouds, LIDAR and UAV (unmanned aerial vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs is becoming easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed a remarkable precision according to the geometry of a few control points. We also performed another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry on a set of DSLR camera pictures taken from the ground, together with UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modelled by photogrammetry provide visible light spectral information for each modelled voxel and interpolated vertex, which can be useful attributes for clustering during data treatment. We illustrate such applications at the Rochefort cave by using both sources of 3D information to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), cluster these structures using color information gathered from the UAV's 3D point cloud, and compare these data to structural data surveyed in the field. An additional drone photoscan was also conducted in the surface sinkhole giving access to the surveyed underground cavity, to seek connections between geological bodies.

  5. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.

  6. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of from somewhere, but we don't usually try to calculate it. For some time we have been analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is called the Holling-Tanner model, and it cannot be solved analytically. The Holling-Tanner model is a very common model in population dynamics because it is a simple descriptor of how predators and prey interact. It is a system of two differential equations, not specific to any particular pair of species, so it can describe predator-prey systems ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points. A critical point is a pair of population values that keeps both populations constant; it is important because there the differential equations are equal to zero. For this model there are two critical points: a predator-free critical point and a coexistence critical point. Most of our analysis concerns the coexistence critical point, because the predator-free critical point is always unstable and frankly less interesting. We considered two regimes for the differential equations, large B and small B, where A, B, and C are parameters that control the system: B measures how responsive the predators are to changes in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large-B case we were able to approximate the system of differential equations by a single scalar equation. For the small-B case we were able to predict the limit cycle, a process in which the predator and prey populations grow and shrink periodically; we solved for it numerically. With some assumptions to reduce the differential equations, we were able to create a system of equations and unknowns that predicts the behavior of the limit cycle for small B.
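
    A sketch of the numerical integration is shown below using one common nondimensionalization of the Holling-Tanner model; whether A, B and C here map exactly onto the abstract's parameters is an assumption, and the values are chosen only so that a limit cycle appears at small B.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # One common nondimensional Holling-Tanner form: logistic prey with
    # type-II predation, and a predator that tracks prey abundance.
    A, B, C = 1.0, 0.1, 0.1   # illustrative values with an unstable focus

    def holling_tanner(t, u):
        x, y = u                                  # prey, predator densities
        dx = x * (1 - x) - A * x * y / (x + C)    # prey growth minus predation
        dy = B * y * (1 - y / x)                  # predator responsiveness ~ B
        return [dx, dy]

    sol = solve_ivp(holling_tanner, (0, 500), [0.6, 0.2],
                    rtol=1e-8, max_step=0.5)
    x = sol.y[0]
    # Late-time min/max of prey indicate the limit-cycle amplitude.
    print("late-time prey range:", x[-200:].min(), x[-200:].max())
    ```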

  7. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    NASA Astrophysics Data System (ADS)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

    The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects for the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18% between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  8. Wind and Wave Driven Nearshore Circulation at Cape Hatteras Point

    NASA Astrophysics Data System (ADS)

    Kumar, N.; Voulgaris, G.; Warner, J. C.; List, J. H.

    2012-12-01

    We have used a combined measurement and modeling approach to identify the hydrodynamic processes responsible for the alongshore transport of sediment that can support the maintenance of Diamond Shoals, NC, a large inner-shelf sedimentary convergent feature. As part of the Carolina Coastal Change Processes project, a one-month field experiment was conducted around Cape Hatteras point during February 2010. The instrumentation consisted of 15 acoustic current meters (measuring pressure and velocity profiles) deployed in water depths varying from 3-10 m, and a very high frequency (VHF) beam-forming radar system providing surface waves and currents with a resolution of 150 m and a spatial coverage of 10-15 km². Analysis of the field observations suggests that wind-driven circulation and littoral currents dominate surf-zone and inner-shelf processes, exceeding tidally rectified flows by at least an order of magnitude. However, the data analysis indicated that relevant processes such as non-linear advective acceleration, pressure gradients and the vortex force (due to the interaction between wave-induced drift and mean flow vorticity) may be significant, but were not assessed accurately due to instrument location and accuracy. To obtain a deeper physical understanding of the hydrodynamics at this study site, we applied the three-dimensional Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) numerical model. The COAWST modeling system comprises nested, coupled, three-dimensional ocean-circulation (ROMS) and wave propagation (SWAN) models, configured for the study site to simulate wave height, direction and period, and mean current velocities (both Eulerian and Lagrangian). The nesting follows a two-way grid refinement process for the circulation module, and one-way for the wave model. The coarsest parent grid resolves processes on the spatial and temporal scales of the mid-shelf to inner shelf, and the subsequent child grids resolve inner-shelf and surf-zone scales. Preliminary results show that the model successfully reproduces the wind-driven circulation and littoral currents. Furthermore, the model simulations provide evidence for (a) a circulation pattern suggesting a mechanism for sediment movement from the littoral zone to the Diamond Shoals complex; and (b) the Diamond Shoals complex acting as an independent coastline, which forces the littoral currents to follow the coastline orientation around Cape Hatteras point. As part of this study, the simulated hydrodynamic parameters will be validated against field observations of wave height and direction, Eulerian velocities from the acoustic current meters, and sea surface maps of wave height and Lagrangian flows provided by the VHF radar. Moreover, the model results will be analyzed to (a) identify the significance of the terms in the momentum balance that are not estimated accurately through field observations, and (b) provide a quasi-quantitative estimate of the sediment transport contributing to the shoal-building process.

  9. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to incidence angle and capture date. For the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are obtained. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.

  10. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    Transit route choice models are a key technology for public transit system planning and management. Traditional route choice models are mostly based on expected utility theory, which has the evident shortcoming that it cannot accurately portray travelers' subjective route choice behavior, because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertainty of transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of the CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. Comparing the proposed method with the traditional one shows that the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behaviors, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance for rational transit planning and management, and to some extent remedies the defect that obtaining the reference point has been based solely on qualitative analysis.
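
    For illustration, the sketch below evaluates a route prospect with Tversky-Kahneman style value and probability-weighting functions and a travel-time reference point. The parameter values are textbook defaults, the rank-dependent cumulative weighting of full CPT is collapsed to a separable form for brevity, and the reference-point value is invented rather than calibrated by the paper's method.

    ```python
    import numpy as np

    def value(x, r, alpha=0.88, beta=0.88, lam=2.25):
        """Travel times x are judged as gains/losses relative to reference r."""
        g = r - x        # arriving faster than the reference point is a gain
        return np.where(g >= 0, np.abs(g) ** alpha, -lam * np.abs(g) ** beta)

    def weight(p, gamma=0.61):
        """Inverse-S probability weighting: overweights small probabilities."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    # Prospect of a transit route: 30 min with prob 0.9, 50 min with prob 0.1,
    # judged against a hypothetical 35-min reference travel time.
    times, probs, r = np.array([30.0, 50.0]), np.array([0.9, 0.1]), 35.0
    print(np.sum(weight(probs) * value(times, r)))   # prospect value of route
    ```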

  11. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into parts. This paper mainly focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modelling flow is composed of three parts: first, the large object is separated into two parts; then each part is modelled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels of the two adjacent textures with various algorithms; however, these algorithms have limited effect and the fissure line still remains. The algorithm presented here deals with the uneven gray levels of the two adjacent textures: the fissure line in the overlapping section of the textures is eliminated, and the gray transition in the overlapping section becomes smoother.
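
    The core idea, blending the two textures' gray values with weights that vary linearly across the overlap, can be sketched as follows; the buffer geometry of the actual loop-line method is reduced here to a rectangular strip, and the values are illustrative.

    ```python
    import numpy as np

    # Per-pixel linear interpolation across a texture overlap: each pixel is a
    # distance-weighted blend of the two textures, so the gray transition is
    # smooth instead of showing a fissure line.
    def blend_overlap(tex_a, tex_b):
        """tex_a, tex_b: (h, w) gray images of the same overlapping strip, with
        tex_a valid toward the left edge and tex_b toward the right edge."""
        h, w = tex_a.shape
        t = np.linspace(0.0, 1.0, w)[None, :]    # 0 at left seam, 1 at right
        return (1.0 - t) * tex_a + t * tex_b

    a = np.full((4, 8), 100.0)     # darker texture
    b = np.full((4, 8), 140.0)     # brighter texture
    print(blend_overlap(a, b)[0])  # values ramp smoothly from 100 to 140
    ```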

  12. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
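
    For intuition, an inhomogeneous space-time Poisson process like the one this model segments can be simulated by Lewis-Shedler thinning; the intensity function below is invented for illustration.

    ```python
    import numpy as np

    # Lewis-Shedler thinning on [0, T] x [0, S]: simulate a homogeneous
    # process at the intensity's upper bound, then keep each candidate with
    # probability lambda(t, s) / lambda_max.
    rng = np.random.default_rng(0)
    T, S = 10.0, 1.0                    # time horizon and 1-D "space" extent
    lam = lambda t, s: 5.0 + 4.0 * np.sin(t) * (s > 0.5)  # example intensity
    lam_max = 9.0                       # upper bound on lam over the domain

    n = rng.poisson(lam_max * T * S)    # candidate count on the rectangle
    t = rng.uniform(0, T, n)
    s = rng.uniform(0, S, n)
    keep = rng.random(n) < lam(t, s) / lam_max
    events = np.column_stack([t[keep], s[keep]])
    print(f"{len(events)} events retained of {n} candidates")
    ```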

  13. Spatial land-use inventory, modeling, and projection/Denver metropolitan area, with inputs from existing maps, airphotos, and LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Tom, C.; Miller, L. D.; Christenson, J. W.

    1978-01-01

    A landscape model was constructed with 34 land-use, physiographic, socioeconomic, and transportation maps. A simple Markov land-use trend model was constructed from the rates of change and non-change observed in photointerpreted 1963 and 1970 airphotos. Seven multivariate land-use projection models predicting 1970 spatial land-use changes achieved accuracies from 42 to 57 percent. A final modeling strategy was designed that combines both the Markov trend and the multivariate spatial projection processes. Landsat-1 image preprocessing included geometric rectification/resampling, spectral-band, and band/insolation ratioing operations. A new, systematic grid-sampled point training-set approach proved useful when tested on the four original MSS bands, ten image bands and ratios, and all 48 image and map variables (less land use). Ten-variable accuracy was raised over 15 percentage points, from 38.4 to 53.9 percent, with the use of the 31 ancillary variables. A land-use classification map was produced with an optimal ten-channel subset of four image bands and six ancillary map variables. Point-by-point verification of 331,776 points against a 1972/1973 U.S. Geological Survey (USGS) land-use map prepared from airphotos and the same classification scheme showed average first-, second-, and third-order accuracies of 76.3, 58.4, and 33.0 percent, respectively.
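
    The Markov trend component amounts to propagating land-use shares through a transition matrix estimated from the two airphoto dates; the matrix and shares below are invented for illustration.

    ```python
    import numpy as np

    # Markov land-use trend step: rows of P are observed 1963->1970 rates of
    # change and non-change per class (each row sums to 1).
    P = np.array([[0.92, 0.05, 0.03],     # urban -> urban/agriculture/open
                  [0.10, 0.85, 0.05],     # agriculture
                  [0.08, 0.04, 0.88]])    # open land
    shares_1970 = np.array([0.25, 0.45, 0.30])
    shares_next = shares_1970 @ P          # one transition step forward
    print(shares_next, shares_next.sum())  # still a probability vector
    ```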

  14. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper is to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAVs) into a digital terrain model (DTM), and of implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with elevation differences of up to 11 m. Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made on the basis of seven control points on the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained differences of several centimetres between the control points in the field and those from the model might testify to the usefulness of the described workflow for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of the 3D terrain in real time thanks to the first-person view (FPV) mode. In this mode, the user observes the object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.

  15. Ecological change points: The strength of density dependence and the loss of history.

    PubMed

    Ponciano, José M; Taper, Mark L; Dennis, Brian

    2018-05-01

    Change points in the dynamics of animal abundances have been extensively recorded in historical time series. Little attention has been paid to the theoretical dynamic consequences of such change points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters on the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change point, where it is heading and how long the transition will take, and here this information is stated explicitly. Importantly, our results provide a direct connection between the strength of density dependence and theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model's robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery. Copyright © 2018 Elsevier Inc. All rights reserved.
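
    A minimal simulation of the idea, assuming a stochastic Gompertz model (an AR(1) on log abundance) whose parameters shift at the breakpoint, is given below; the parameter values are invented, and the paper derives the corresponding transition analytically.

    ```python
    import numpy as np

    # Log-abundance X follows X_{t+1} = a + b*X_t + noise, with a regime
    # shift at time tau.  The strength of density dependence |b| sets how
    # fast the memory of the old regime is "lost" after the shift.
    rng = np.random.default_rng(7)
    T, tau = 200, 100
    a1, b1, s1 = 0.5, 0.7, 0.1          # pre-breakpoint regime
    a2, b2, s2 = 0.3, 0.9, 0.1          # post-breakpoint: stronger memory
    x = np.empty(T)
    x[0] = a1 / (1 - b1)                # start at the old stationary mean
    for t in range(T - 1):
        a, b, s = (a1, b1, s1) if t < tau else (a2, b2, s2)
        x[t + 1] = a + b * x[t] + rng.normal(0, s)
    print("old mean:", a1 / (1 - b1), "new mean:", a2 / (1 - b2))
    print("half-life of history after shift:",
          np.log(0.5) / np.log(b2), "steps")
    ```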

  16. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SfM, stereo satellite imagery) natively output unstructured 3D point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of the uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SfM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimation directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open-source software CloudCompare.

  17. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments of the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we are currently carrying out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m) and a number of new high resolution point clouds (10-70 points/m²). Not knowing anything about the terms of a potential update project, we consider multiple scenarios, ranging from business as usual (a new model with the same GSD, but improved precision) to aggressive upscaling (a new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data). Especially in the latter case, speeding up the gridding process is important. Luckily, recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using the local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave-one-out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since an LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream through the processor, individually contributing to the nearest grid posts in a memory mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that do not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process and giving us new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for the integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering level data for the current data sets. This is essential if future generations of DEM users are to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
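
    A minimal sketch of the stream-mode local-mean gridding idea described above, assuming points arrive as an iterable of (x, y, z) tuples; the function name and grid-geometry parameters are hypothetical. Each point contributes only to its nearest grid post, so no TIN is built and memory use scales with the grid, not with the point cloud.

      import numpy as np

      def stream_local_mean(points, x0, y0, gsd, nx, ny):
          # Running sums and counts per grid post; in production these two
          # arrays could live in a memory-mapped file (np.memmap).
          acc = np.zeros((ny, nx))
          cnt = np.zeros((ny, nx), dtype=np.int64)
          for x, y, z in points:  # unstructured stream, one point at a time
              j = int(round((x - x0) / gsd))
              i = int(round((y - y0) / gsd))
              if 0 <= i < ny and 0 <= j < nx:
                  acc[i, j] += z
                  cnt[i, j] += 1
          # Local mean per post; posts that received no points become NaN voids.
          return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)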

  18. Multiresponse modeling of variably saturated flow and isotope tracer transport for a hillslope experiment at the Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter

    2016-10-01

    This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
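
    For reference, the two governing equations can be written in a standard generic form (the paper's exact formulation and boundary treatment may differ):

      \[
      \frac{\partial \theta(\psi)}{\partial t}
        = \nabla \cdot \bigl[ K(\psi)\, \nabla (\psi + z) \bigr] + q_s,
      \qquad
      \frac{\partial (\theta c)}{\partial t}
        = \nabla \cdot \bigl( \theta\, \mathbf{D}\, \nabla c \bigr)
          - \nabla \cdot ( \mathbf{q}\, c ) + f,
      \]

    with \(\psi\) the pressure head, \(\theta\) the water content, \(K\) the hydraulic conductivity, \(c\) the tracer concentration, \(\mathbf{D}\) the dispersion tensor, \(\mathbf{q}\) the Darcy flux, and \(q_s\), \(f\) source/sink terms.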

  19. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
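
    The horizontal construction makes simulation almost trivial: a CPP reconstructed tree on n extant tips is fully specified by the n - 1 i.i.d. node depths between consecutive tips. A hedged sketch; the exponential shown is only a placeholder for the depth distribution that the paper derives for each diversification model.

      import numpy as np

      def simulate_cpp(n_tips, draw_depth, rng=None):
          # A coalescent point process tree is described by the i.i.d. depths
          # of the n_tips - 1 internal nodes between consecutive tips (all
          # tips end at the present time), drawn from a common distribution.
          rng = np.random.default_rng(rng)
          return np.array([draw_depth(rng) for _ in range(n_tips - 1)])

      # Placeholder depth law; substitute the distribution implied by the
      # birth-death model under study.
      depths = simulate_cpp(10, lambda rng: rng.exponential(1.0))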

  20. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    NASA Astrophysics Data System (ADS)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is one of the major manufacturing industries, which are building numerous parts for aerospace, automotive and medical industry. Due to the high demand in vehicle industry and environmental regulations on less fuel consumption on other hand, researchers are innovating new methods to build these parts with energy efficient sheet metal forming process instead of conventionally used punch and die to form the parts to achieve the lightweight parts. One of the most recognized manufacturing process in this category is Single Point Incremental Forming (SPIF). SPIF is the die-less sheet metal forming process in which the single point tool incrementally forces any single point of sheet metal at any process time to plastic deformation zone. In the present work, finite element method (FEM) is applied to analyze the forming limits of high strength low alloy steel formed by single point incremental forming (SPIF) by spiral and concentric tool path. SPIF numerical simulations were model with 24 and 29 mm cup depth, and the results were compare with Nakajima results obtained by experiments and FEM. It was found that the cup formed with Nakajima tool failed at 24 mm while cups formed by SPIF surpassed the limit for both depths with both profiles. It was also notice that the strain achieved in concentric profile are lower than that in spiral profile.

  1. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.

    PubMed

    Hauskrecht, M; Fraser, H

    1998-01-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.
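
    At the core of any POMDP treatment model is Bayesian belief updating over the hidden disease state after each action (test or treatment) and observation. A minimal sketch with hypothetical array shapes, not tied to the paper's specific disease model:

      import numpy as np

      def belief_update(b, a, o, T, Z):
          # b: belief over S hidden states, shape (S,)
          # T[a][s, s'] = P(s' | s, a); Z[a][s', o] = P(o | s', a)
          b_pred = b @ T[a]             # prediction through the transition model
          b_post = Z[a][:, o] * b_pred  # correction by the observation likelihood
          return b_post / b_post.sum()  # renormalize to a probability vector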

  2. State Analysis: A Control Architecture View of Systems Engineering

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D.

    2005-01-01

    A viewgraph presentation on the state analysis process is shown. The topics include: 1) Issues with growing complexity; 2) Limits of common practice; 3) Exploiting a control point of view; 4) A glimpse at the State Analysis process; 5) Synergy with model-based systems engineering; and 6) Bridging the systems to software gap.

  3. The multiple resource inventory decision-making process

    Treesearch

    Victor A. Rudis

    1993-01-01

    A model of the multiple resource inventory decision-making process is presented that identifies steps in conducting inventories, describes the infrastructure, and points out knowledge gaps that are common to many interdisciplinary studies.Successful efforts to date suggest the need to bridge the gaps by sharing elements, maintain dialogue among stakeholders in multiple...

  4. Predicting seizures in untreated temporal lobe epilepsy using point-process nonlinear models of heartbeat dynamics.

    PubMed

    Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R

    2016-08-01

    Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.
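
    Not the authors' Volterra-Laguerre estimator, but for orientation: point-process heartbeat models of this family are fitted by maximizing the generic discrete-time point-process log-likelihood of the beat train given a conditional intensity, sketched below with hypothetical names.

      import numpy as np

      def pp_loglik(events, lam, dt):
          # events: 0/1 indicator of a beat in each time bin of width dt;
          # lam: the model's conditional intensity (beats per unit time) per bin.
          lam = np.clip(lam, 1e-12, None)  # guard against log(0)
          return np.sum(events * np.log(lam * dt) - lam * dt)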

  5. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

    Surprisingly, still very little is known about the mathematical modeling of peaks in the binding affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This result takes place when some obstacles appear in the paratope-epitope reaction, such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface being unreachable by antibodies, leading to heterogeneity of the cell surface repletion. In this case the affinity of cells to bind the antibodies should be described by a more complex process than the pure-Poisson point process. We suggest using a doubly stochastic Poisson process, where the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
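
    A toy illustration of how a doubly stochastic (clustered) Poisson mechanism departs from a pure Poisson one: sampling a Neyman Type A variate as a Poisson number of clusters, each contributing a Poisson count (the binomial-cluster variant discussed in the paper is analogous). Parameter names are hypothetical.

      import numpy as np

      def sample_neyman_a(lam_clusters, nu_per_cluster, size, rng=None):
          # Poisson(lam_clusters) clusters, each adding a Poisson(nu) count;
          # unlike a pure Poisson law, the resulting histogram can be multimodal.
          rng = np.random.default_rng(rng)
          n = rng.poisson(lam_clusters, size=size)
          return np.array([rng.poisson(nu_per_cluster, size=k).sum() for k in n])

      counts = sample_neyman_a(2.0, 8.0, 10_000)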

  6. Tropical Oceanic Precipitation Processes over Warm Pool: 2D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.- K.; Johnson, D.

    1998-01-01

    Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate cloud processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can reasonably well resolve the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems developed during the TOGA COARE IFA - westerly wind burst event (late December, 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysical scheme. The model domain contains 256 x 256 grid points (using 2 km resolution) in the horizontal and 38 grid points (to a height of 22 km) in the vertical. The 2D domain has 1024 grid points. The simulations are performed over a 7 day time period. We will examine (1) the precipitation processes (i.e., condensation/evaporation) and their interaction with the warm pool; (2) the heating and moisture budgets in the convective and stratiform regions; (3) the cloud (upward-downward) mass fluxes in convective and stratiform regions; (4) characteristics of clouds (such as cloud size, updraft intensity and cloud lifetime) and the comparison of clouds with radar observations; and (5) differences and similarities in the organization of convection between simulated 2D and 3D cloud systems. Preliminary results indicate that there are major differences between the 2D and 3D simulated stratiform rainfall amounts and convective updraft and downdraft mass fluxes.

  7. Lee as Critical Thinker: The Example of the Gettysburg Campaign

    DTIC Science & Technology

    2012-05-04

    well as what should have been done if the critical thinking process had been conducted appropriately. Conclusion: Several human and military...of reasoning that make up the cognitive decision making process. The critical thinking elements of the model (Clarify Concern, Point of View...Finally, there are three remaining biases, traps, and errors that can negatively affect the critical thinking process. A confirmation trap describes

  8. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MATLAB software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
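
    A minimal sketch of the automatic-relevance idea using scikit-learn (not the authors' code; the dataset below is synthetic and the descriptor roles are placeholders): fitting one RBF length scale per descriptor lets the learned scales rank feature relevance, with large length scales marking descriptors the model effectively ignores.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Hypothetical descriptor matrix (n_compounds x 4 descriptors, e.g.
      # log P, melting point, H-bond donors, molecular weight) and targets.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 4))
      y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=60)

      # One length scale per descriptor (ARD); WhiteKernel absorbs noise.
      kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel()
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
      print(gpr.kernel_.k1.length_scale)  # large scale -> irrelevant descriptor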

  9. An interacting spin-flip model for one-dimensional proton conduction

    NASA Astrophysics Data System (ADS)

    Chou, Tom

    2002-05-01

    A discrete asymmetric exclusion process (ASEP) is developed to model proton conduction along one-dimensional water wires. Each lattice site represents a water molecule that can be in only one of three states: protonated, left-pointing, or right-pointing. Only a right- (left-) pointing water can accept a proton from its left (right). Results of asymptotic mean field analysis and Monte Carlo simulations for the three-species, open boundary exclusion model are presented and compared. The mean field results for the steady-state proton current suggest a number of regimes analogous to the low and maximal current phases found in the single-species ASEP (Derrida B 1998 Phys. Rep. 301 65-83). We find that the mean field results are accurate (compared with lattice Monte Carlo simulations) only in certain regimes. Refinements and extensions including more elaborate forces and pore defects are also discussed.
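
    For readers unfamiliar with the framework, here is a minimal single-species open-boundary ASEP simulation (the paper's model layers the three water states and proton-orientation rules on top of this skeleton; rates below are illustrative only).

      import numpy as np

      def asep_sweep(lattice, alpha, beta, rng):
          # One random-sequential sweep: alpha injects at the left boundary,
          # beta extracts at the right, bulk particles hop one site rightward.
          L = lattice.size
          for _ in range(L + 1):
              i = rng.integers(-1, L)  # -1 stands for the left reservoir
              if i == -1:
                  if lattice[0] == 0 and rng.random() < alpha:
                      lattice[0] = 1
              elif i == L - 1:
                  if lattice[-1] == 1 and rng.random() < beta:
                      lattice[-1] = 0
              elif lattice[i] == 1 and lattice[i + 1] == 0:
                  lattice[i], lattice[i + 1] = 0, 1

      rng = np.random.default_rng(1)
      lat = np.zeros(100, dtype=int)
      for _ in range(10_000):
          asep_sweep(lat, alpha=0.3, beta=0.7, rng=rng)
      print(lat.mean())  # bulk density ~ alpha in the low-current phase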

  10. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
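
    The simplification rests on the closure of Gaussians under convolution, written here in 1D (the 2D case factorizes):

      \[
      G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^{2} + \sigma_2^{2}}},
      \qquad
      G_{\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^{2}/(2\sigma^{2})},
      \]

    so the blurred image is again a GRBF model with broadened basis functions, and deconvolution reduces to re-estimating the control-point weights.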

  11. Detecting P and S-wave of Mt. Rinjani seismic based on a locally stationary autoregressive (LSAR) model

    NASA Astrophysics Data System (ADS)

    Nurhaida, Subanar, Abdurakhman, Abadi, Agus Maman

    2017-08-01

    Seismic data is usually modelled using autoregressive processes. The aim of this paper is to find the arrival times of the seismic waves of Mt. Rinjani in Indonesia. Kitagawa's algorithm is used to detect the seismic P and S-waves. The Householder transformation used in the algorithm makes it effective in finding the number of change points and the parameters of the autoregressive models. The results show that the use of the Box-Cox transformation at the variable selection level makes the algorithm work well in detecting the change points. Furthermore, when the basic span of the subinterval is set to 200 seconds and the maximum AR order is 20, there are 8 change points, which occur at 1601, 2001, 7401, 7601, 7801, 8001, 8201 and 9601. Finally, the P and S-wave arrival times are detected at times 1671 and 2045, respectively, using a precise detection algorithm.
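
    A hedged sketch of the underlying idea (not Kitagawa's Householder-based implementation): compare the AIC of a single AR fit on an interval against the best split into two locally stationary AR models, using statsmodels. Function names are hypothetical.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      def best_ar_aic(x, max_order=20):
          # Minimum AIC over AR orders 1..max_order for one stationary block.
          return min(AutoReg(x, lags=p).fit().aic
                     for p in range(1, max_order + 1))

      def split_gain(x, t, max_order=20):
          # Positive gain favors a change point at index t: two locally
          # stationary AR models beat a single global one in AIC terms.
          return best_ar_aic(x, max_order) - (best_ar_aic(x[:t], max_order)
                                              + best_ar_aic(x[t:], max_order))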

  12. Evidence for the contribution of a threshold retrieval process to semantic memory.

    PubMed

    Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A

    2017-10-01

    It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
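
    For orientation, the standard dual-process signal detection (DPSD) equations, to which the paper's DP-RR variant adds recollection-based rejection of lures (symbols are the conventional ones):

      \[
      P(\text{``old''} \mid \text{target}) = R + (1 - R)\,\Phi(d' - c),
      \qquad
      P(\text{``old''} \mid \text{lure}) = \Phi(-c),
      \]

    where \(R\) is the threshold recollection probability, \(d'\) the familiarity strength, \(c\) the response criterion, and \(\Phi\) the standard normal CDF.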

  13. Modeling post-wildfire hydrological processes with ParFlow

    NASA Astrophysics Data System (ADS)

    Escobar, I. S.; Lopez, S. R.; Kinoshita, A. M.

    2017-12-01

    Wildfires alter the natural processes within a watershed, such as surface runoff, evapotranspiration rates, and subsurface water storage. Post-fire hydrologic models are typically one-dimensional, empirically based models or two-dimensional, conceptually based models with lumped parameter distributions. These models are useful for modeling and predictions at the watershed outlet; however, they do not provide detailed, distributed hydrologic processes at the point scale within the watershed. This research uses ParFlow, a three-dimensional, distributed hydrologic model, to simulate post-fire hydrologic processes by representing the spatial and temporal variability of soil burn severity (via hydrophobicity) and vegetation recovery. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This work builds upon previous field and remote sensing analysis conducted for the 2003 Old Fire Burn in Devil Canyon, located in southern California (USA). The model is initially developed for a hillslope defined by a 500 m by 1000 m lateral extent. The subsurface reaches 12.4 m and is assigned a variable cell thickness to explicitly consider soil burn severity throughout the stages of recovery and vegetation regrowth. We consider four slope and eight hydrophobic layer configurations. Evapotranspiration is used as a proxy for vegetation regrowth and is represented by the satellite-based Simplified Surface Energy Balance (SSEBOP) product. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated at the point scale. Results will be used as a basis for developing and fine-tuning a watershed-scale model. Long-term simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management. In reference to the presenter, Isabel Escobar: Research is funded by the NASA-DIRECT STEM Program. Travel expenses for this presentation are funded by CSU-LSAMP. CSU-LSAMP is supported by the National Science Foundation under Grant # HRD-1302873 and the CSU Office of the Chancellor.

  14. SEIR Model of Rumor Spreading in Online Social Network with Varying Total Population Size

    NASA Astrophysics Data System (ADS)

    Dong, Suyalatu; Deng, Yan-Bin; Huang, Yong-Chang

    2017-10-01

    Based on an infectious disease model with disease latency, this paper proposes a new model for the rumor spreading process in online social networks. We establish an SEIR rumor spreading model to describe an online social network with a varying total number of users and a user deactivation rate. We calculate the exact equilibrium points and the reproduction number for this model. Furthermore, we simulate the rumor spreading process in an online social network with increasing population size, based on the original real-world Facebook network. The simulation results indicate that the SEIR model of rumor spreading in an online social network with a changing total number of users can accurately reveal the inherent characteristics of the rumor spreading process in online social networks. Supported by National Natural Science Foundation of China under Grant Nos. 11275017 and 11173028
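
    A generic SEIR system with recruitment and removal, of the kind the paper adapts (the transition terms below are the textbook epidemic form, not the paper's exact rumor terms; all parameter values are illustrative):

      import numpy as np
      from scipy.integrate import solve_ivp

      def seir(t, y, beta, sigma, gamma, Lam, mu):
          # S: susceptible, E: exposed (latent), I: spreaders, R: stiflers;
          # Lam is the user recruitment rate, mu the deactivation rate.
          S, E, I, R = y
          N = S + E + I + R
          dS = Lam - beta * S * I / N - mu * S
          dE = beta * S * I / N - (sigma + mu) * E
          dI = sigma * E - (gamma + mu) * I
          dR = gamma * I - mu * R
          return [dS, dE, dI, dR]

      sol = solve_ivp(seir, (0, 200), [9990, 0, 10, 0],
                      args=(0.4, 0.2, 0.1, 50, 0.005), dense_output=True)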

  15. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
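
    The three isotherm models are commonly written as follows (q: stationary-phase concentration at mobile-phase concentration C; symbols as conventionally defined, so the paper's exact parameterization may differ):

      \[
      q_{\mathrm{Langmuir}} = \frac{q_s\, b C}{1 + bC},
      \qquad
      q_{\mathrm{bi\text{-}Langmuir}} = \frac{q_{s,1}\, b_1 C}{1 + b_1 C}
        + \frac{q_{s,2}\, b_2 C}{1 + b_2 C},
      \qquad
      q_{\mathrm{Moreau}} = q_s\, \frac{bC + I\, b^{2} C^{2}}{1 + 2bC + I\, b^{2} C^{2}},
      \]

    where the \(q_s\) are saturation capacities, the \(b\) equilibrium constants, and \(I\) an adsorbate-adsorbate interaction parameter.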

  16. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  17. Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment

    NASA Astrophysics Data System (ADS)

    Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-04-01

    Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling out the gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: A geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area; but in this work, we have used the classification methods to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and prediction of morphodynamics in a natural coastal system in response to climate change" (Steno Grant no. 10-081102) and by the Geocenter Denmark through the project "Closing the gap! - Coherent land-water environmental mapping (LAWA)" (Grant no. 4-2015).
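
    One of the processing steps named above, refraction correction, can be sketched to first order as follows: a simplified flat-water-surface correction, not the authors' procedure, with the refractive index and geometry as stated assumptions.

      import numpy as np

      N_WATER = 1.33  # assumed refractive index of water for green laser light

      def refraction_correct(depth_apparent, incidence_deg):
          # Apparent depth derives from travel time at the speed of light in
          # air along the unrefracted ray; dividing by n fixes the speed and
          # Snell's law fixes the ray direction below a flat water surface.
          theta_a = np.radians(incidence_deg)
          theta_w = np.arcsin(np.sin(theta_a) / N_WATER)
          return depth_apparent * np.cos(theta_w) / (N_WATER * np.cos(theta_a))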

  18. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one, as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with much better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman theatre of Mandeure (France), and the medieval fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.

  19. Ecotoxicology and spatial modeling in population dynamics: an illustration with brown trout.

    PubMed

    Chaumot, Arnaud; Charles, Sandrine; Flammarion, Patrick; Auger, Pierre

    2003-05-01

    We developed a multiregion matrix population model to explore how the demography of a hypothetical brown trout population living in a river network varies in response to different spatial scenarios of cadmium contamination. Age structure, spatial distribution, and demographic and migration processes are taken into account in the model. Chronic or acute cadmium concentrations affect the demographic parameters at the scale of the river range. The outputs of the model constitute population-level end points (the asymptotic population growth rate, the stable age structure, and the asymptotic spatial distribution) that allow comparison of the different spatial scenarios of contamination with regard to the demographic response at the scale of the whole river network. An analysis of the sensitivity of these end points to lower order parameters enables us to link the local effects of cadmium to the global demographic behavior of the brown trout population. Such a link is of broad interest from the point of view of ecotoxicological management.
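
    The population-level end points named above fall out of standard matrix algebra: the asymptotic growth rate is the dominant eigenvalue of the projection matrix and the stable (st)age structure its associated eigenvector. A single-region toy example (the paper's multiregion model blocks such matrices together with migration terms; the numbers are made up):

      import numpy as np

      # Hypothetical 3-age-class Leslie matrix: fecundities on the first row,
      # survival probabilities on the subdiagonal.
      A = np.array([[0.0, 1.2, 3.0],
                    [0.5, 0.0, 0.0],
                    [0.0, 0.7, 0.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      lam = eigvals.real[k]             # asymptotic population growth rate
      w = np.abs(eigvecs[:, k].real)
      print(lam, w / w.sum())           # growth rate and stable age structure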

  20. Source Process of the 2007 Niigata-ken Chuetsu-oki Earthquake Derived from Near-fault Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Sekiguchi, H.; Morikawa, N.; Ozawa, T.; Kunugi, T.; Shirasaka, M.

    2007-12-01

    The 2007 Niigata-ken Chuetsu-oki earthquake occurred on July 16th, 2007, 10:13 JST. We performed a multi-time window linear waveform inversion analysis (Hartzell and Heaton, 1983) to estimate the rupture process from the near-fault strong motion data of 14 stations from K-NET, KiK-net, F-net, JMA, and Niigata prefecture. The fault plane for the mainshock has not been clearly determined yet from the aftershock distribution, so we performed two waveform inversions, one for a north-west dipping fault (Model A) and one for a south-east dipping fault (Model B). Their strike, dip, and rake are set to those of the moment tensor solutions by F-net. A fault plane model of 30 km length by 24 km width is set to cover the aftershock distribution within 24 hours after the mainshock. Theoretical Green's functions were calculated by the discrete wavenumber method (Bouchon, 1981) and the R/T matrix method (Kennett, 1983) with a different stratified medium for each station, based on the velocity structure including information from the reflection survey and borehole logging data. Convolution of a moving dislocation was introduced to represent the rupture propagation within each subfault (Sekiguchi et al., 2002). The observed acceleration records were integrated into velocity, except for the F-net velocity data, and bandpass filtered between 0.1 and 1.0 Hz. We solved a least-squares equation to obtain the slip amount of each time window on each subfault, minimizing the squared residual of the waveform fitting between observed and synthetic waveforms. Both models provide moment magnitudes of 6.7. Regarding Model A, we obtained large slip in the deeper part south-west of the rupture starting point, which is close to Kashiwazaki City. The second or third velocity pulses of the observed velocity waveforms seem to be composed of slip from this asperity. Regarding Model B, we obtained large slip in the shallower part south-west of the rupture starting point, which is also close to Kashiwazaki City. In both models, we found small slip near the rupture starting point, and the largest slip at about ten kilometers south-west of the rupture starting point, with maximum slips of 2.3 and 2.5 m for Models A and B, respectively. The difference in the residual between observed and synthetic waveforms for the two models is not significant; therefore, it is difficult to conclude which fault plane is the appropriate one. The estimated large-slip regions in the inverted source models for Models A and B are located near the cross point of the two fault plane models, which should have similar radiation patterns. This situation may be one of the reasons why judgment of the fault plane orientation is so difficult. We need careful examination of not only strong motion data but also geodetic data to further explore the fault orientation and the source process of this earthquake.

  1. Parametric Accuracy: Building Information Modeling Process Applied to the Cultural Heritage Preservation

    NASA Astrophysics Data System (ADS)

    Garagnani, S.; Manferdini, A. M.

    2013-02-01

    Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), representing in this way the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow improved in order to reach higher quality, reliability and cost reductions all over the design process. Even if BIM was originally intended for new architectures, its ability to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships: however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology destined to process point cloud data in a BIM environment with high accuracy, this paper describes some experiences on monumental sites documentation, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.

  2. Information technology in the foxhole.

    PubMed

    Eyestone, S M

    1995-08-01

    The importance of digital data capture at the point of health care service within the military environment is highlighted. Current paper-based data capture does not allow for efficient data reuse throughout the medical support information domain. A simple, high-level process and data flow model is used to demonstrate the importance of data capture at point of service. The Department of Defense is developing a personal digital assistant, called MEDTAG, that accomplishes point of service data capture in the field using a prototype smart card as a data store in austere environments.

  3. Quadratic band touching points and flat bands in two-dimensional topological Floquet systems

    NASA Astrophysics Data System (ADS)

    Du, Liang; Zhou, Xiaoting; Fiete, Gregory A.

    2017-01-01

    In this paper we theoretically study, using Floquet-Bloch theory, the influence of circularly and linearly polarized light on two-dimensional band structures with Dirac and quadratic band touching points, and flat bands, taking the nearest neighbor hopping model on the kagome lattice as an example. We find circularly polarized light can invert the ordering of this three-band model, while leaving the flat band dispersionless. We find a small gap is also opened at the quadratic band touching point by two-photon and higher order processes. By contrast, linearly polarized light splits the quadratic band touching point (into two Dirac points) by an amount that depends only on the amplitude and polarization direction of the light, independent of the frequency, and generally renders dispersion to the flat band. The splitting is perpendicular to the direction of the polarization of the light. We derive an effective low-energy theory that captures these key results. Finally, we compute the frequency dependence of the optical conductivity for this three-band model and analyze the various interband contributions of the Floquet modes. Our results suggest strategies for optically controlling band structure and interaction strength in real systems.
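
    A compact numerical route to the quasi-energy spectrum at fixed crystal momentum, assuming a user-supplied time-periodic Bloch Hamiltonian h(t); this is a generic Floquet-Bloch sketch, not the authors' code.

      import numpy as np
      from scipy.linalg import expm, logm

      def floquet_hamiltonian(h, T, steps=400):
          # Time-ordered one-period evolution U(T) built from Trotter steps,
          # then H_F = (i/T) log U(T); its eigenvalues are the quasi-energies
          # (defined modulo 2*pi/T, reflecting the branch of the logarithm).
          dim = h(0.0).shape[0]
          U = np.eye(dim, dtype=complex)
          dt = T / steps
          for n in range(steps):
              U = expm(-1j * h((n + 0.5) * dt) * dt) @ U
          return 1j * logm(U) / T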

  4. Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions

    NASA Astrophysics Data System (ADS)

    Nabi, Jameel-Un; Böyükata, Mahmut

    2017-01-01

    The structure and the weak interaction mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with a previous calculation. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr. For the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.

  5. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can heavily enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances between corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. For this purpose, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors two times (once covered with sparse vegetation and once bare soil). Due to the different perspectives, the two data sets show diverse consistency in terms of shadowed areas and thus gaps, so that data merging would provide consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration, and therefore data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. Concluding, given the accuracy demands of high resolution surface reconstruction and the fact that ground control surveys can reach their limits both in the time required and in terrain accessibility, the ICP algorithm represents a great tool to refine a rough initial alignment. Its different registration modules allow for individual application according to the quality of the input data.
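
    To make the sub-steps concrete, here is a minimal point-to-point ICP variant (nearest-neighbour matching, closed-form SVD/Kabsch transform, no weighting or rejection stage); it is illustrative only, not the processing chain used in the study.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_point_to_point(src, dst, iters=50):
          # src, dst: (N, 3) and (M, 3) point arrays. Returns R, t such that
          # src @ R.T + t approximately aligns with dst.
          tree = cKDTree(dst)
          R, t = np.eye(3), np.zeros(3)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)         # point matching step
              q = dst[idx]
              mu_p, mu_q = cur.mean(0), q.mean(0)
              H = (cur - mu_p).T @ (q - mu_q)  # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R_i = Vt.T @ D @ U.T             # optimal rotation (Kabsch)
              t_i = mu_q - R_i @ mu_p
              cur = cur @ R_i.T + t_i
              R, t = R_i @ R, R_i @ t + t_i    # accumulate the transform
          return R, t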

  6. The generation and accumulation of interstitial atoms and vacancies in alloys with L1₂ superstructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantyukhova, Olga, E-mail: Pantyukhova@list.ru; Starenchenko, Vladimir, E-mail: star@tsuab.ru; Starenchenko, Svetlana, E-mail: sve-starenchenko@yandex.ru

    2016-01-15

    The dependences of the point defect concentrations (interstitial atoms and vacancies) on the deformation degree were calculated for L1₂ alloys with high and low antiphase boundary (APB) energies, in terms of a mathematical model of the work and thermal strengthening of alloys with the L1₂ structure; the concentration of the point defects generated and annihilated in the process of deformation was estimated. It was found that the main part of the point defects generated during plastic deformation annihilates; the residual density of the deformation point defects does not exceed 10⁻⁵.

  7. Point Cloud Analysis for Conservation and Enhancement of Modernist Architecture

    NASA Astrophysics Data System (ADS)

    Balzani, M.; Maietti, F.; Mugayar Kühl, B.

    2017-02-01

    Documentation of cultural assets through improved acquisition processes for advanced 3D modelling is one of the main challenges to be faced in order to address, through digital representation, advanced analysis of the shape, appearance and conservation condition of cultural heritage. 3D modelling can open new avenues in the way tangible cultural heritage is studied, visualized, curated, displayed and monitored, improving key features such as the analysis and visualization of material degradation and state of conservation. An applied research project focused on the analysis of surface specifications and material properties by means of a 3D laser scanner survey has been developed within the project of Digital Preservation of the FAUUSP building, Faculdade de Arquitetura e Urbanismo da Universidade de São Paulo, Brazil. The integrated 3D survey has been performed by the DIAPReM Center of the Department of Architecture of the University of Ferrara in cooperation with the FAUUSP. The 3D survey has allowed the realization of a point cloud model of the external surfaces, as the basis to investigate in detail the formal characteristics, geometric textures and surface features. The digital geometric model was also the basis for processing the intensity values acquired by the laser scanning instrument; this method of analysis was an essential complement to the macroscopic investigations in order to manage additional information related to surface characteristics displayable on the point cloud.

  8. Fragmentation approach to the point-island model with hindered aggregation: Accessing the barrier energy.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2017-07-01

    We study the effect of hindered aggregation on the island formation process in a one- (1D) and two-dimensional (2D) point-island model for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by length l_{a}. For l_{a}=0 the islands behave as perfect sinks while for l_{a}→∞ they behave as reflecting boundaries. For intermediate values of l_{a}, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and l_{a}. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model, which is based on an approximate description of nucleation inside of the gaps for 1D and the CZs for 2D. In both cases, the nucleation is described by using two different physically rooted probabilities, which are related with the microscopic parameters of the model (i and l_{a}). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes excellently the statistical behavior of the system for arbitrary values of l_{a} and i=1, 2, and 3.

  9. Effects of LiDAR point density and landscape context on the retrieval of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, K. K.; Chen, G.; McCarter, J. B.; Meentemeyer, R. K.

    2014-12-01

    Light Detection and Ranging (LiDAR), as an alternative to conventional optical remote sensing, is being increasingly used to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and better data accuracies, which however pose challenges to the procurement and processing of LiDAR data for large-area assessments. Reducing point density cuts data acquisition costs and overcomes computational challenges for broad-scale forest management. However, how does that impact the accuracy of biomass estimation in an urban environment containing a great level of anthropogenic disturbances? The main goal of this study is to evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing regions of Charlotte, North Carolina, USA. We used multiple linear regression to establish the statistical relationship between field-measured biomass and predictor variables (PVs) derived from LiDAR point clouds with varying densities. We compared the estimation accuracies between the general Urban Forest models (no discrimination of forest type) and the Forest Type models (evergreen, deciduous, and mixed), which was followed by quantifying the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest models, adjusted R2, was fairly consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of the Forest Type biomass models outperformed the Urban Forest models using two representative point densities (100% and 40%). The Urban Forest biomass model with a development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, signifying the distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest biomass assessment without compromising the accuracy of estimation, which may further be improved using development density.

  10. Using Quasi-Horizontal Alignment in the absence of the actual alignment.

    PubMed

    Banihashemi, Mohamadreza

    2016-10-01

    Horizontal alignment is a major roadway characteristic used in safety and operational evaluations of many facility types. The Highway Safety Manual (HSM) uses this characteristic in crash prediction models for rural two-lane highways, freeway segments, and freeway ramps/C-D roads. Traffic simulation models use this characteristic in their processes on almost all types of facilities. However, a good portion of roadway databases do not include horizontal alignment data; instead, many contain point coordinate data along the roadways. The SHRP 2 Roadway Information Database (RID) is a good example of this type of data. Only about 5% of this geodatabase contains alignment information; for the rest, point data can easily be produced. Even though the point data can be used to extract the actual horizontal alignment, doing so is a cumbersome and costly process, especially for a database of miles and miles of highways. This research introduces a so-called "Quasi-Horizontal Alignment" that can be produced easily and automatically from point coordinate data and can be used in the safety and operational evaluations of highways. The SHRP 2 RID for rural two-lane highways in Washington State is used in this study. This paper presents a process through which Quasi-Horizontal Alignments are produced from point coordinates along highways by using spreadsheet software such as MS EXCEL. It is shown that the safety and operational evaluations of the highways with Quasi-Horizontal Alignments are almost identical to the ones with the actual alignments. In the absence of the actual alignment, the Quasi-Horizontal Alignment can easily be produced from any type of database that contains highway coordinates, such as geodatabases and digital maps. Copyright © 2016 Elsevier Ltd. All rights reserved.
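
    One simple geometric primitive behind such point-based alignment proxies (an illustration, not necessarily the paper's formula): the radius of the circle through three consecutive trace points, which is large on tangents and small on curves.

      import numpy as np

      def circumradius(p1, p2, p3):
          # R = abc / (4K): product of the side lengths over four times the
          # triangle area; collinear points yield an infinite radius (tangent).
          a = np.linalg.norm(p2 - p1)
          b = np.linalg.norm(p3 - p2)
          c = np.linalg.norm(p3 - p1)
          v1, v2 = p2 - p1, p3 - p1
          twice_area = abs(v1[0] * v2[1] - v1[1] * v2[0])
          return np.inf if twice_area == 0 else a * b * c / (2 * twice_area)

      r = circumradius(np.array([0.0, 0.0]),
                       np.array([50.0, 1.0]),
                       np.array([100.0, 0.0]))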

  11. Si amorphization by focused ion beam milling: Point defect model with dynamic BCA simulation and experimental validation.

    PubMed

    Huang, J; Loeffler, M; Muehle, U; Moeller, W; Mulders, J J L; Kwakman, L F Tz; Van Dorp, W F; Zschech, E

    2018-01-01

    A Ga focused ion beam (FIB) is often used in sample preparation for transmission electron microscopy (TEM) analysis. In the case of a crystalline Si sample, an amorphous near-surface layer is formed by the FIB process. In order to optimize the FIB recipe by minimizing the amorphization, it is important to predict the amorphous layer thickness by simulation. Molecular Dynamics (MD) simulation has been used to describe the amorphization; however, it is limited by computational power for realistic FIB process simulation. On the other hand, Binary Collision Approximation (BCA) simulation is able to simulate the ion-solid interaction process at a realistic scale and has been used to do so. In this study, a Point Defect Density approach is introduced into a dynamic BCA simulation that considers dynamic ion-solid interactions. We used this method to predict the c-Si amorphization caused by FIB milling of Si. To validate the method, dedicated TEM studies were performed. They show that the amorphous layer thickness predicted by the numerical simulation is consistent with the experimental data. In summary, the thickness of the near-surface Si amorphization layer caused by FIB milling can be well predicted using the Point Defect Density approach within the dynamic BCA model. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018

  13. Tuition and Memory: mental models and cognitive processing in Japanese children's work on d.c. electrical circuits

    NASA Astrophysics Data System (ADS)

    Asami, Noriaki; King, Julien; Monk, Martin

    2000-02-01

    This paper looks at the familiar problem of students' understanding of elementary electrical circuits from a much-neglected point of view. It is conjectured that the patterning commonly found in students' ideas might have its roots in the cognitive processing with which students operate their mental models of d.c. electrical circuits. The data are new and come from Japanese 10-11-year-olds living in the UK. Progressive analysis of these students' answers to a six-item test shows that the percentage of students operating particular mental models, following tuition, matches the percentages one might expect from a knowledge of their cognitive processing.

  14. Genome-Scale Analysis of Translation Elongation with a Ribosome Flow Model

    PubMed Central

    Meilijson, Isaac; Kupiec, Martin; Ruppin, Eytan

    2011-01-01

    We describe the first large-scale analysis of gene translation that is based on a model that takes into account the physical and dynamical nature of this process. The Ribosomal Flow Model (RFM) predicts fundamental features of the translation process, including translation rates, protein abundance levels, ribosomal densities and the relation between all these variables, better than alternative (‘non-physical’) approaches. In addition, we show that the RFM can be used for accurate inference of various other quantities including genes' initiation rates and translation costs. These quantities could not be inferred by previous predictors. We find that increasing the number of available ribosomes (or equivalently the initiation rate) increases the genomic translation rate and the mean ribosome density only up to a certain point, beyond which both saturate. Strikingly, assuming that the translation system is tuned to work at the pre-saturation point maximizes the predictive power of the model with respect to experimental data. This result suggests that in all organisms that were analyzed (from bacteria to human), the global initiation rate is optimized to attain the pre-saturation point. The fact that similar results were not observed for heterologous genes indicates that this feature is under selection. Remarkably, the gap between the performance of the RFM and alternative predictors is strikingly large in the case of heterologous genes, testifying to the model's promising biotechnological value in predicting the abundance of heterologous proteins before expressing them in the desired host. PMID:21909250
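
    The RFM equations are standard in the literature: site occupancies evolve by mean-field flow between neighboring sites, with a separate initiation rate feeding the first site. The sketch below simulates them and illustrates the saturation of the translation rate as the initiation rate grows; the rate values are placeholders.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rfm_rhs(t, x, lam0, lam):
        """Ribosome Flow Model ODEs for site occupancies x_1..x_n:
            dx_1/dt = lam0*(1-x_1)                  - lam_1*x_1*(1-x_2)
            dx_i/dt = lam_{i-1}*x_{i-1}*(1-x_i)     - lam_i*x_i*(1-x_{i+1})
            dx_n/dt = lam_{n-1}*x_{n-1}*(1-x_n)     - lam_n*x_n
        """
        n = len(x)
        dx = np.empty(n)
        inflow = lam0 * (1 - x[0])
        for i in range(n):
            outflow = lam[i] * x[i] * (1 - x[i + 1]) if i < n - 1 else lam[i] * x[i]
            dx[i] = inflow - outflow
            inflow = outflow
        return dx

    n = 20
    lam = np.full(n, 2.0)                      # elongation rates (placeholders)
    for lam0 in [0.2, 0.5, 1.0, 2.0, 4.0, 8.0]:   # initiation rates
        sol = solve_ivp(rfm_rhs, (0, 500), np.zeros(n), args=(lam0, lam))
        x_ss = sol.y[:, -1]
        print(f"lam0={lam0:4.1f}  translation rate={lam[-1] * x_ss[-1]:.3f}  "
              f"mean density={x_ss.mean():.3f}")   # rate saturates as lam0 grows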

  15. Processing mechanics of alternate twist ply (ATP) yarn technology

    NASA Astrophysics Data System (ADS)

    Elkhamy, Donia Said

    Ply yarns are important in many textile manufacturing processes and various applications. The primary process used for producing ply yarns is cabling, whose speed is limited to about 35 m/min. With the world's increasing demand for ply yarn, cabling is incompatible with today's demand-activated manufacturing strategies. The Alternate Twist Ply (ATP) yarn technology is a relatively new process for producing ply yarns with improved productivity and flexibility. This technology involves self-plying of twisted singles yarn to produce ply yarn, and the ATP process can run more than ten times faster than cabling. Implementing the ATP process to produce ply yarns raises two major quality issues: a uniform twist profile and yarn twist efficiency. The goal of this thesis is to address these issues through process modeling based on understanding the physics and processing mechanics of the ATP yarn system. In our study we determine the main parameters that control the yarn twist profile. Process modeling of the yarn twist across different process zones was carried out, and a computational model was designed to predict the process parameters required to achieve a square-wave twist profile. Twist efficiency, a measure of yarn torsional stability and bulk, is determined by the ratio of ply yarn twist to singles yarn twist. Response Surface Methodology was used to develop the processing window that can reproduce ATP yarns with high twist efficiency. Equilibrium conditions of the tensions and torques acting on the yarns at the self-ply point were analyzed, which determined the pathway for achieving higher twist efficiency, and a mechanistic model relating the equilibrium conditions to the twist efficiency was developed. A static tester was designed to zoom into the self-ply zone of the ATP yarn, and a computer-controlled prototype ATP machine was constructed, which confirmed the mechanistic model results. Optimum parameters achieving maximum twist efficiency were determined in this study. The successful results of this work have led to the filing of a US patent disclosing a method for producing ATP yarns with high twist efficiency using a high convergence angle at the self-ply point together with an applied ply torque.

  16. Monitoring urban subsidence based on SAR interferometric point target analysis

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.

    2009-01-01

    Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It is achieved by analyzing the interferometric phases of individual point targets, which are discrete and exhibit temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets and then addresses two key issues within the IPTA process. First, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated once a global reference point with known height correction and deformation history is chosen. Second, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information for 3,546 point targets spanning 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with published results, which demonstrates that the IPTA technique can be developed into an operational tool for mapping ground subsidence over urban areas.
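
    The spatial-temporal separation step can be sketched compactly: nonlinear deformation is assumed smooth in time, while atmosphere is uncorrelated in time but smooth in space. A minimal illustration with placeholder window sizes (not the paper's tuned values):

    import numpy as np
    from scipy.ndimage import uniform_filter1d, gaussian_filter

    # residual_phase: (n_times, ny, nx) residual interferometric phase after
    # removing the linear deformation and height error terms. A random array
    # stands in for real data here.
    rng = np.random.default_rng(0)
    residual_phase = rng.normal(size=(38, 64, 64))

    defo_nl = uniform_filter1d(residual_phase, size=5, axis=0)     # temporal LP
    high_t = residual_phase - defo_nl                              # temporal HP
    atmosphere = gaussian_filter(high_t, sigma=(0, 8, 8))          # spatial LP
    noise = residual_phase - defo_nl - atmosphere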

  17. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
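
    For reference, ridge regression replaces the least-squares estimate (XᵀX)⁻¹Xᵀy with (XᵀX + λI)⁻¹Xᵀy, trading a small bias for a large reduction in variance when XᵀX is ill-conditioned. A small demonstration on synthetic, deliberately collinear regressors (not the Voyager tracking data):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    az = rng.uniform(0, 2 * np.pi, n)
    # Two nearly collinear regressors, as happens when pointing-model terms
    # overlap (made-up example):
    x1 = np.sin(az)
    x2 = np.sin(az) + 1e-3 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    y = 2.0 * x1 + 1.0 * x2 + 0.01 * rng.normal(size=n)

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    lam = 1e-2                                   # ridge parameter (tuning choice)
    beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

    print("condition number of X'X:", np.linalg.cond(X.T @ X))
    print("OLS coefficients:  ", beta_ols)       # typically wild, unstable
    print("ridge coefficients:", beta_ridge)     # biased but stable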

  18. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    NASA Astrophysics Data System (ADS)

    Gézero, L.; Antunes, C.

    2017-05-01

    The digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTMs in remote areas, due mainly to the safety, precision and speed of acquisition and the detail of the information gathered. However, filtering the point clouds and devising algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
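
    A minimal sketch of the two steps, with invented cell sizes and thresholds in place of the paper's adaptive criteria:

    import numpy as np
    from scipy.spatial import Delaunay

    def reduce_points(xyz, cell=5.0, rough_cell=1.0, z_var_thresh=0.05):
        """Step 1 (sketch): keep a sparse set of terrain points, denser where
        the terrain varies more. Points are binned on a coarse grid; cells
        whose height variance exceeds a threshold are re-binned on a finer
        grid, and the lowest point per bin is kept as the terrain
        representative. All sizes and thresholds are illustrative."""
        def lowest_per_bin(pts, size):
            out = {}
            for key, p in zip(map(tuple, np.floor(pts[:, :2] / size).astype(int)), pts):
                if key not in out or p[2] < out[key][2]:
                    out[key] = p
            return np.array(list(out.values()))

        keys = np.floor(xyz[:, :2] / cell).astype(int)
        kept = []
        for k in set(map(tuple, keys)):
            pts = xyz[(keys == k).all(axis=1)]
            rough = pts[:, 2].var() > z_var_thresh
            kept.append(lowest_per_bin(pts, rough_cell if rough else cell))
        return np.vstack(kept)

    # Step 2: Delaunay triangulation of the reduced points gives the DTM mesh.
    xyz = np.random.default_rng(2).uniform(0, 100, (20000, 3))
    ground = reduce_points(xyz)
    tin = Delaunay(ground[:, :2])
    print(len(ground), "vertices,", len(tin.simplices), "triangles")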

  19. Physical Interpretation of Mixing Diagrams

    NASA Astrophysics Data System (ADS)

    Khain, Alexander; Pinsky, Mark; Magaritz-Ronen, L.

    2018-01-01

    The type of mixing at cloud edges is often determined by means of mixing diagrams showing the dependence of the normalized cube of the mean volume radius on the dilution level. The mixing diagrams correspond to the final equilibrium state of mixing between two air volumes. When interpreting in situ measurements, scattering diagrams are plotted in which the normalized droplet concentration is used instead of the dilution level. Using such scattering diagrams to interpret in situ observations faces significant difficulties and often leads to misinterpretation of the mixing process and to uncertain conclusions concerning the mixing type. In this study we analyze the scattering diagrams obtained by means of a Lagrangian-Eulerian model of a stratocumulus cloud. The model consists of 2,000 interacting Lagrangian parcels which mix with their neighbors during their motion in the atmospheric boundary layer. In the diagram, each parcel is denoted by a point, and changes in the microphysical parameters of the parcel are represented by movements of the point in the scattering diagram. The method of plotting the scattering diagrams using the model is in many respects similar to that used in in situ measurements. It is shown that a scattering diagram shows snapshots of a transient mixing process. The location of points in the scattering diagram largely reflects the history and the origin of the air parcels; it characterizes the intensity of entrainment and different parameters of the droplet size distributions (DSDs), such as concentration, mean volume (or effective) radius, and DSD width.

  20. Outbreak statistics and scaling laws for externally driven epidemics.

    PubMed

    Singh, Sarabjeet; Myers, Christopher R

    2014-04-01

    Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1] ∖ {2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
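
    The critical-point scaling quoted above is easy to check numerically for the simple SIR case. The sketch below simulates outbreak final sizes event by event; the eps argument shows where the paper's constant external force of infection per susceptible would enter the rates, though the run shown uses eps = 0 and all parameter values are illustrative.

    import numpy as np

    def outbreak_size(N, R0, eps=0.0, rng=None):
        """Final size of a stochastic SIR outbreak started by one infective.
        Rates are expressed in units of the recovery rate: infection
        R0*S*I/N + eps*S versus recovery I. Treating I = 0 as the end of
        one outbreak is a simplification for this sketch."""
        rng = rng or np.random.default_rng()
        S, I, size = N - 1, 1, 1
        while I > 0:
            rate_inf = R0 * S * I / N + eps * S
            if rng.random() < rate_inf / (rate_inf + I):
                S -= 1; I += 1; size += 1
            else:
                I -= 1
        return size

    rng = np.random.default_rng(3)
    sizes = np.array([outbreak_size(100_000, R0=1.0, rng=rng)
                      for _ in range(10_000)])
    # crude log-binned slope estimate; expect roughly -1.5 at criticality
    bins = np.unique(np.logspace(0, 4, 25).astype(int))
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    mask = hist > 0
    slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
    print(f"estimated exponent: {slope:.2f} (theory: -1.5)")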

  1. Single- and Dual-Process Models of Biased Contingency Detection

    PubMed Central

    2016-01-01

    Abstract. Decades of research in causal and contingency learning show that people’s estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive. PMID:27025532

  2. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  3. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
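
    The generative core that such a hierarchical model inverts is the basic diffusion process itself, which is simple to simulate (illustrative parameter values):

    import numpy as np

    def simulate_ddm(n_trials, v, a, z, t0, dt=1e-3, sigma=1.0, rng=None):
        """Simulate response times and choices from a simple drift-diffusion
        model: evidence starts at z*a, drifts at rate v with Gaussian noise,
        and a response occurs when it crosses 0 (one response) or a (the
        other). This is the generative process that a hierarchical Bayesian
        fit decomposes into psychologically meaningful parameters."""
        rng = rng or np.random.default_rng()
        rts, choices = np.empty(n_trials), np.empty(n_trials, dtype=int)
        for i in range(n_trials):
            x, t = z * a, 0.0
            while 0.0 < x < a:
                x += v * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            rts[i] = t + t0
            choices[i] = int(x >= a)
        return rts, choices

    rts, choices = simulate_ddm(500, v=1.5, a=1.2, z=0.5, t0=0.3)
    print(f"accuracy={choices.mean():.2f}, median RT={np.median(rts):.3f}s")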

  4. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
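
    Joerding's penalty idea, one of the methods analyzed, can be sketched on a toy fit: penalize any negative slope of the fitted curve on a dense grid. Here a cubic polynomial stands in for the three-layer network, and the data and penalty weight are made up:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    x = np.sort(rng.uniform(0, 1, 30))                # fraction distilled
    y = 300 + 250 * x ** 1.5 + rng.normal(0, 5, 30)   # noisy TBP curve, K

    grid = np.linspace(0, 1, 200)

    def loss(c, penalty=1e3):
        # data misfit plus a penalty on negative slope anywhere on the grid
        fit = np.polyval(c, x)
        slope = np.polyval(np.polyder(c), grid)
        return np.mean((fit - y) ** 2) + penalty * np.mean(np.minimum(slope, 0) ** 2)

    c0 = np.polyfit(x, y, 3)                          # unconstrained start
    res = minimize(loss, c0)
    print("min slope on grid:", np.polyval(np.polyder(res.x), grid).min())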

  5. A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.

    PubMed

    Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul

    2017-07-01

    We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted, weighted versions of prototypical temporal patterns. In particular, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters, with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations with tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between the tic marks and LFP recorded from the centromedian-parafascicular complex of the thalamus than with electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the receiver operating characteristic (ROC) curve.

  6. An open, interoperable, transdisciplinary approach to a point cloud data service using OGC standards and open source software.

    NASA Astrophysics Data System (ADS)

    Steer, Adam; Trenham, Claire; Druken, Kelsey; Evans, Benjamin; Wyborn, Lesley

    2017-04-01

    High resolution point clouds and other topology-free point data sources are widely utilised for research, management and planning activities. A key goal for research and management users is making these data and common derivatives available in a way which is seamlessly interoperable with other observed and modelled data. The Australian National Computational Infrastructure (NCI) stores point data from a range of disciplines, including terrestrial and airborne LiDAR surveys, 3D photogrammetry, airborne and ground-based geophysical observations, bathymetric observations and 4D marine tracers. These data are stored alongside a significant store of Earth systems data including climate and weather, ecology, hydrology, geoscience and satellite observations, and are available from NCI's National Environmental Research Data Interoperability Platform (NERDIP) [1]. Because of the NERDIP requirement for interoperability with gridded datasets, the data models required to store these data may not conform to the LAS/LAZ format - the widely accepted community standard for point data storage and transfer. The goal for NCI is making point data discoverable, accessible and useable in ways which allow seamless integration with earth observation datasets and model outputs - in turn assisting researchers and decision-makers in the often-convoluted process of handling and analyzing massive point datasets. With a use-case of providing a web data service and supporting a derived product workflow, NCI has implemented and tested a web-based point cloud service using the Open Geospatial Consortium (OGC) Web Processing Service [2] as a transaction handler between a web-based client and server-side computing tools based on a native Linux operating system. Using this model, the underlying toolset for driving a data service is flexible and can take advantage of NCI's highly scalable research cloud. Present work focusses on the Point Data Abstraction Library (PDAL) [3] as a logical choice for efficiently handling LAS/LAZ based point workflows, and native HDF5 libraries for handling point data kept in HDF5-based structures (e.g. NetCDF4, SPDlib [4]). Points stored in database tables (e.g. postgres-pointcloud [5]) will be considered as testing continues. Visualising and exploring massive point datasets in a web browser alongside multiple datasets has been demonstrated by the entwine-3D tiles project [6]. This is a powerful interface which enables users to investigate and select appropriate data, and is also being investigated as a potential front-end to a WPS-based point data service. In this work we show preliminary results for a WPS-based point data access system, in preparation for demonstration at FOSS4G 2017, Boston (http://2017.foss4g.org/). [1] http://nci.org.au/data-collections/nerdip/ [2] http://www.opengeospatial.org/standards/wps [3] http://www.pdal.io [4] http://www.spdlib.org/doku.php [5] https://github.com/pgpointcloud/pointcloud [6] http://cesium.entwine.io
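
    As an illustration of the PDAL-based server-side step such a WPS could wrap, the python-pdal bindings let a pipeline be defined as JSON and executed directly; file names, bounds and filter choices below are placeholders:

    import json
    import pdal  # python-pdal bindings

    # Sketch of the kind of step a WPS request could trigger: read a LAZ
    # tile, crop to a client-supplied bounding box, classify and keep only
    # ground points, and write a LAZ subset back.
    pipeline_def = {
        "pipeline": [
            "input_tile.laz",
            {"type": "filters.crop",
             "bounds": "([430000, 431000], [6230000, 6231000])"},
            {"type": "filters.smrf"},                     # ground classification
            {"type": "filters.range", "limits": "Classification[2:2]"},
            "ground_subset.laz",
        ]
    }
    pipeline = pdal.Pipeline(json.dumps(pipeline_def))
    n_points = pipeline.execute()
    print(f"{n_points} points written")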

  7. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
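
    The statement that an estimated value is a weighted average of the true property has a compact expression in linear inverse theory: for a Tikhonov-regularized solution the estimate equals R m_true (plus a noise term), where R = (GᵀG + λI)⁻¹GᵀG is the resolution matrix and its rows are the averaging kernels. A toy demonstration with an invented forward operator, not a groundwater model:

    import numpy as np

    # For d = G m_true (noise-free), the regularized estimate is
    #   m_est = (G'G + lam*I)^-1 G' d = R m_true,
    # so each estimated parameter is the R-row-weighted average of the truth.
    n = 60
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    G = np.exp(-dist / 5.0)[::3]          # 20 "observations" of 60 parameters

    lam = 1e-1
    R = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ G)

    row = R[n // 2]                       # averaging kernel at the midpoint
    print("kernel row sum:", row.sum().round(3))
    print("effective averaging width:",
          (np.abs(row) > 0.1 * np.abs(row).max()).sum(), "cells")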

  8. A New Era in Geodesy and Cartography: Implications for Landing Site Operations

    NASA Technical Reports Server (NTRS)

    Duxbury, T. C.

    2001-01-01

    The Mars Global Surveyor (MGS) Mars Orbiter Laser Altimeter (MOLA) global dataset has ushered in a new era for Mars local and global geodesy and cartography. These data include the global digital terrain model (Digital Terrain Model (DTM) radii), the global digital elevation model (Digital Elevation Model (DEM) elevation with respect to the geoid), and the higher spatial resolution individual MOLA ground tracks. Currently there are about 500,000,000 MOLA points, and this number continues to grow as MOLA continues successful operations in orbit about Mars. The combined processing of radiometric X-band Doppler and ranging tracking of MGS together with millions of MOLA orbital crossover points has produced global geodetic and cartographic control having a spatial (latitude/longitude) accuracy of a few meters and a topographic accuracy of less than 1 meter. This means that the position of an individual MOLA point with respect to the center-of-mass of Mars is known to an absolute accuracy of a few meters. The positional accuracy of this point in inertial space over time is controlled by the spin rate uncertainty of Mars, which amounts to less than 1 km over 10 years and will be improved significantly with the next landed mission.

  9. A Vertically Resolved Planetary Boundary Layer

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1984-01-01

    The vertical resolution of the GLAS Fourth Order General Circulation Model (GCM) near the Earth's surface was increased, and a new package of parameterization schemes for subgrid-scale physical processes was installed, so that the GLAS GCM would predict the resolved vertical structure of the planetary boundary layer (PBL) at all grid points.

  10. Improving the quality of extracting dynamics from interspike intervals via a resampling approach

    NASA Astrophysics Data System (ADS)

    Pavlova, O. N.; Pavlov, A. N.

    2018-04-01

    We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.

  11. The Reconstruction of Three-Dimensional Morphological and Electrical Paraneters from Two-Dimensional Sections of Neurones

    NASA Astrophysics Data System (ADS)

    Brawn, A. D.; Wheal, H. V.

    1986-07-01

    A system is described which can be used to create a three-dimensional model of a neurone from the central nervous system. This model can then be used to obtain quantitative data on the physical and electrical properties of the neurone. Living neurones are either raised in culture, or taken from in vitro preparations of brain tissue, and optically sectioned. These two-dimensional sections are digitised and input to a 68008-based microcomputer. The system reconstructs the three-dimensional structure of the neurone, both geometrically and electrically. The user can a) view the structure from any point at any angle, b) "move through" the structure along any given vector, c) "move through" the structure following a neurone process, d) fire the neurone at any point and "watch" the action potentials propagate, and e) vary the parameters of the electrical model of a process element. The system is targeted at a research programme on epilepsy, which makes frequent use of both geometric and electrical neurone modelling. Current techniques, which may involve crude histology and two-dimensional drawings, have considerable shortcomings.

  12. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
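
    For context, the Bouc-Wen model expresses hysteresis through an auxiliary state z driven by the displacement rate. The sketch below generates the kind of loop data an identification algorithm would be fitted against; all parameter values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    def bouc_wen_z(t, z, A, beta, gamma, n, x_dot):
        """Standard Bouc-Wen hysteresis variable:
           dz/dt = A*xd - beta*|xd|*|z|^(n-1)*z - gamma*xd*|z|^n
        with xd the imposed displacement rate."""
        xd = x_dot(t)
        return [A * xd - beta * abs(xd) * abs(z[0]) ** (n - 1) * z[0]
                - gamma * xd * abs(z[0]) ** n]

    # Sinusoidal displacement input:
    omega = 2 * np.pi
    x = lambda t: 0.5 * np.sin(omega * t)
    x_dot = lambda t: 0.5 * omega * np.cos(omega * t)

    t_eval = np.linspace(0, 3, 1500)
    sol = solve_ivp(bouc_wen_z, (0, 3), [0.0], t_eval=t_eval,
                    args=(1.0, 0.5, 0.5, 1.0, x_dot), max_step=1e-2)

    # Restoring force of a Bouc-Wen oscillator: F = alpha*k*x + (1-alpha)*k*z.
    # (x(t_eval), F) traces the hysteresis loop to be fitted.
    alpha, k = 0.3, 10.0
    F = alpha * k * x(t_eval) + (1 - alpha) * k * sol.y[0]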

  13. Mathematical modeling of transformation process of structurally unstable magnetic configurations into structurally stable ones in two-dimensional and three-dimensional geometry

    NASA Astrophysics Data System (ADS)

    Inovenkov, Igor; Echkina, Eugenia; Ponomarenko, Loubov

    Magnetic reconnection is a fundamental process in astrophysical, space and laboratory plasmas. In essence, it represents a change of topology of the magnetic field caused by a readjustment of the structure of the magnetic field lines. This change leads to a release of the energy accumulated in the field. We consider the transformation process of structurally unstable magnetic configurations into structurally stable ones from the point of view of catastrophe theory. Special attention is paid to modeling the evolution of structurally unstable three-dimensional magnetic fields.

  14. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
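
    The detailed-balance trick described above is easy to sketch: draw the first sample exactly from the target by rejection sampling, then run a Metropolis chain, so every subsequent sample also has the target distribution. A toy one-dimensional density stands in for the linkage-analysis posterior:

    import numpy as np

    rng = np.random.default_rng(6)

    def target_pdf(x):
        """Unnormalized target density (placeholder for the empirical-Bayes
        posterior over model space)."""
        return np.exp(-0.5 * x ** 2) * (1 + 0.8 * np.sin(3 * x) ** 2)

    # Step 1: exact initial draw by rejection sampling. The envelope
    # 1.8*exp(-x^2/2) bounds target_pdf for this toy density.
    def rejection_sample():
        while True:
            x = rng.normal()
            if rng.random() * 1.8 * np.exp(-0.5 * x ** 2) < target_pdf(x):
                return x

    # Step 2: a Metropolis chain with the target as equilibrium distribution.
    # By detailed balance, if x0 ~ target then so is every later point, so no
    # convergence diagnostics are needed.
    x = rejection_sample()
    samples = []
    for _ in range(5000):
        prop = x + 0.5 * rng.normal()
        if rng.random() < target_pdf(prop) / target_pdf(x):
            x = prop
        samples.append(x)
    print(np.mean(samples), np.var(samples))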

  15. Conceptualizing Stakeholders' Perceptions of Ecosystem Services: A Participatory Systems Mapping Approach

    NASA Astrophysics Data System (ADS)

    Lopes, Rita; Videira, Nuno

    2015-12-01

    A participatory system dynamics modelling approach is advanced to support conceptualization of the feedback processes underlying ecosystem services and to foster a shared understanding of leverage intervention points. The process includes a systems mapping workshop and follow-up tasks aimed at the collaborative construction of causal loop diagrams. A case study developed in a natural area in Portugal illustrates how a stakeholder group was actively engaged in the development of a conceptual model depicting policies for sustaining the climate regulation ecosystem service.

  16. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
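
    A minimal sketch of the classification flow: per-point geometric features from the eigenvalues of the local covariance, plus raw color, fed to a random forest. The specific features and all data below are simplified stand-ins for the paper's setup.

    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.ensemble import RandomForestClassifier

    def point_features(xyz, rgb, k=20):
        """Per-point features: eigenvalue shape measures of the k-nearest
        neighbourhood covariance, plus colour."""
        tree = cKDTree(xyz)
        _, idx = tree.query(xyz, k=k)
        feats = []
        for nbrs in idx:
            evals = np.linalg.eigvalsh(np.cov(xyz[nbrs].T))[::-1]  # l1>=l2>=l3
            l1, l2, l3 = np.maximum(evals, 1e-12)
            feats.append([(l1 - l2) / l1,      # linearity
                          (l2 - l3) / l1,      # planarity
                          l3 / l1])            # scattering
        return np.hstack([np.array(feats), rgb / 255.0])

    # Hypothetical training flow (xyz, rgb, labels would come from the
    # photogrammetric datasets):
    rng = np.random.default_rng(7)
    xyz = rng.uniform(0, 10, (2000, 3))
    rgb = rng.integers(0, 256, (2000, 3))
    labels = rng.integers(0, 4, 2000)          # e.g. ground/building/veg/other
    X = point_features(xyz, rgb)
    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    pred = clf.predict(X)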

  17. Telemedicine and distributed medical intelligence.

    PubMed

    Warner, D; Tichenor, J M; Balch, D C

    1996-01-01

    Recent trends in health care informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. The authors present a new model of health care information, distributed medical intelligence, which promotes the development of an integrative medical communication system addressing the process of providing expert medical knowledge to the point of need. The model incorporates audio, video, high-resolution still images, and virtual reality applications into an integrated medical communications network. Three components of the model (care portals, Docking Station, and the bridge) are described. The implementation of this model at the East Carolina University School of Medicine is also outlined.

  18. Interpersonal Emotion Regulation Model of Mood and Anxiety Disorders

    PubMed Central

    Hofmann, Stefan G.

    2014-01-01

    Although social factors are of critical importance in the development and maintenance of emotional disorders, the contemporary view of emotion regulation has been primarily limited to intrapersonal processes. Based on diverse perspectives pointing to the communicative function of emotions, the social processes in self-regulation, and the role of social support, this article presents an interpersonal model of emotion regulation of mood and anxiety disorders. This model provides a theoretical framework to understand and explain how mood and anxiety disorders are regulated and maintained through others. The literature, which provides support for the model, is reviewed and the clinical implications are discussed. PMID:25267867

  19. Text vectorization based on character recognition and character stroke modeling

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao

    2014-03-01

    In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes, but often share same stroke structures. Like many other methods, the proposed algorithm contains two procedures, dominant point determination and data fitting. The first one partitions the outlines into segments and second one fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) are determined offline specifying for each character the number of major and minor dominant points and for each dominant point the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters could be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected as a maximization process as specified by the parameter set. For minor points, an additional step could be performed to test the competing hypothesis and detect degenerated cases.

  20. Comparison of photogrammetric point clouds with BIM building elements for construction progress monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2014-08-01

    For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position that would seem necessary, we use a combination of a structure-from-motion process together with control points to create a scaled point cloud in a consistent coordinate system. Subsequently this point cloud is used for an as-built vs. as-planned comparison. For that, voxels of an octree are marked as occupied, free or unknown by raycasting based on the triangulated points and the camera positions. This makes it possible to identify building parts that do not yet exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.

  1. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

    Given the popularization of GPS technologies, the massive amounts of spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points; GPS points indicating that the motion status of the vehicle has changed from one state to another are regarded as the movement characteristic points. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated through extensive experiments using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
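
    A compact sketch of the consistency-model step: fit x(t) and y(t) for one sub-trajectory with RANSAC and flag points outside the consensus. Quadratic-in-time motion and the threshold are assumptions for illustration, not the paper's settings.

    import numpy as np
    from sklearn.linear_model import RANSACRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    def clean_subtrajectory(t, x, y, residual_threshold=10.0):
        """Fit a movement-consistency model x(t), y(t) for one sub-trajectory
        with RANSAC and keep only points consistent with the consensus fit."""
        T = t.reshape(-1, 1)
        keep = np.ones(len(t), dtype=bool)
        for coord in (x, y):
            model = make_pipeline(
                PolynomialFeatures(2),
                RANSACRegressor(residual_threshold=residual_threshold))
            model.fit(T, coord)
            keep &= model.named_steps["ransacregressor"].inlier_mask_
        return keep

    rng = np.random.default_rng(8)
    t = np.linspace(0, 60, 120)
    x = 5.0 * t + rng.normal(0, 2, t.size)      # smooth eastward drive + noise
    y = 0.1 * t ** 2 + rng.normal(0, 2, t.size)
    x[[30, 70]] += 80                           # two gross GPS errors
    print("flagged:", np.where(~clean_subtrajectory(t, x, y))[0])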

  2. TLSpy: An Open-Source Addition to Terrestrial Lidar Workflows

    NASA Astrophysics Data System (ADS)

    Frechette, J. D.; Weissmann, G. S.; Wawrzyniec, T. F.

    2008-12-01

    Terrestrial lidar scanners (TLS) that capture three-dimensional (3D) geometry with cm-scale precision present many new opportunities in the Earth Sciences and related fields. However, the lack of domain-specific tools impedes full and efficient utilization of the information contained in these datasets. Most processing and analysis is performed using a variety of manufacturing, surveying, airborne lidar, and GIS software. Although much overlap exists, inevitably some needs are not addressed by these applications. TLSpy provides a plugin-driven framework with 3D visualization capabilities that encourages researchers to fill these gaps. The goal is to free researchers from the intellectual overhead imposed by user and data interface design, enabling rapid development of TLS-specific processing and analysis algorithms. We present two plugins as examples of problems that TLSpy is being applied to. The first plugin corrects for the strong influence of target orientation on TLS-measured reflectance intensities. It calculates the distribution of incidence angles and intensities in an input scan and assists the user in fitting a reflectance model to the distribution. The model is then used to normalize input intensities, minimizing the impact of surface orientation and simplifying the extraction of quantitative data from reflectance measurements. Although reasonable default models can be determined, the large number of factors influencing reflectance values requires that the plugin be designed for maximum flexibility, allowing the user to adjust all model parameters and define new reflectance models as needed. The second plugin helps eliminate multipath reflections from water surfaces. Characterized by a lower-intensity mirror image of the subaerial bank appearing below the water surface, these reflections are a common problem in scans containing water. They can be removed by manually selecting points that lie on the waterline, fitting a plane to the points, and deleting points below that plane. This plugin simplifies the process by automatically identifying waterline points using characteristic changes in geometry and intensity. Automatic identification is often faster and more reliable than manual identification; however, manual control is retained as a fallback for degenerate cases.
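
    One reasonable default for the first plugin's reflectance model is a Lambertian-like I = I0 * cos(theta)^k; fitting k in log space and dividing it out normalizes the intensities. The sketch below assumes that model form, which is an illustrative choice rather than the plugin's only option:

    import numpy as np

    def normalize_intensity(intensity, incidence_deg):
        """Fit I = I0 * cos(theta)^k by linear regression in log space and
        return intensities corrected to normal incidence."""
        theta = np.radians(incidence_deg)
        A = np.column_stack([np.ones_like(theta), np.log(np.cos(theta))])
        coef, *_ = np.linalg.lstsq(A, np.log(intensity), rcond=None)
        log_i0, k = coef
        return intensity / np.cos(theta) ** k, k

    rng = np.random.default_rng(9)
    angles = rng.uniform(5, 75, 1000)
    raw = 0.8 * np.cos(np.radians(angles)) ** 1.3 * rng.lognormal(0, 0.05, 1000)
    corrected, k = normalize_intensity(raw, angles)
    print(f"fitted k = {k:.2f}")   # ~1.3; corrected values are angle-independent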

  3. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
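
    A minimal sketch of the back-projection idea for a planar target: back-project detected pixels onto the world plane Z = 0 and minimize the in-plane 3D residuals. Lens distortion terms are omitted and all names are hypothetical:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def backproject_to_plane(params, uv):
        """Back-project pixels onto the world plane Z = 0 for a pinhole
        camera. params = [fx, fy, cx, cy, rotvec(3), t(3)]."""
        fx, fy, cx, cy = params[:4]
        R = Rotation.from_rotvec(params[4:7]).as_matrix()
        t = params[7:10]
        rays_cam = np.column_stack([(uv[:, 0] - cx) / fx,
                                    (uv[:, 1] - cy) / fy,
                                    np.ones(len(uv))])
        center = -R.T @ t                   # camera centre in world frame
        dirs = rays_cam @ R                 # ray directions in world frame
        s = -center[2] / dirs[:, 2]         # scale so that Z = 0
        return center + s[:, None] * dirs

    def residuals(params, uv, xy_ideal):
        # In-plane 3D residuals against the known checkerboard grid: the
        # quantity a BPP-style refinement minimizes.
        return (backproject_to_plane(params, uv)[:, :2] - xy_ideal).ravel()

    # refined = least_squares(residuals, params0, args=(uv_detected, grid_xy))
    # where params0 comes from a conventional forward-projection calibration.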

  4. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Modeling of heat transfer in compacted machining chips during friction consolidation process

    NASA Astrophysics Data System (ADS)

    Abbas, Naseer; Deng, Xiaomin; Li, Xiao; Reynolds, Anthony

    2018-04-01

    The current study aims to provide an understanding of the heat transfer process in compacted aluminum alloy AA6061 machining chips during the friction consolidation process (FCP) through experimental investigation, mathematical modelling and numerical simulation. Compaction and friction consolidation of machining chips is the first stage of the Friction Extrusion Process (FEP), a novel method for recycling machining chips into useful products such as wires. In this study, compacted machining chips are modelled as a continuum whose material properties vary with density during friction consolidation. Based on density- and temperature-dependent thermal properties, the temperature field in the chip material and process chamber caused by frictional heating during the friction consolidation process is predicted. The predicted temperature field is found to compare well with temperature measurements at select points where such measurements can be made using thermocouples.

  6. Application of Non-Kolmogorovian Probability and Quantum Adaptive Dynamics to Unconscious Inference in Visual Perception Process

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    2016-07-01

    Recently a novel quantum information formalism, quantum adaptive dynamics, was developed and applied to modelling of information processing by bio-systems, including cognitive phenomena: from molecular biology (glucose-lactose metabolism for E. coli bacteria, epigenetic evolution) to cognition and psychology. From the foundational point of view, quantum adaptive dynamics describes the mutual adaptation of the information states of two interacting systems (physical or biological), as well as the adaptation of co-observations performed by the systems. In this paper we apply this formalism to model unconscious inference: the process of transition from sensation to perception. The paper combines theory and experiment. Statistical data collected in an experimental study on recognition of a particular ambiguous figure, the Schröder stairs, support the viability of the quantum(-like) model of unconscious inference, including the modelling of biases generated by rotation contexts. From the probabilistic point of view, we study (for concrete experimental data) the problem of contextuality of probability, i.e., its dependence on experimental contexts. Mathematically, contextuality leads to non-Kolmogorovness: probability distributions generated by various rotation contexts cannot be treated within the Kolmogorovian framework. At the same time they can be embedded in a "big Kolmogorov space" as conditional probabilities. However, such a Kolmogorov space has too complex a structure, and the operational quantum formalism in the form of quantum adaptive dynamics simplifies the modelling essentially.

  7. Study on the high-frequency laser measurement of slot surface difference

    NASA Astrophysics Data System (ADS)

    Bing, Jia; Lv, Qiongying; Cao, Guohua

    2017-10-01

    To measure slot surface differences in large-scale mechanical assembly, a double-galvanometer pulsed laser scanning system is designed based on high-frequency laser scanning technology and the laser detection imaging principle. The system architecture consists of three parts: laser ranging, mechanical scanning, and data acquisition and processing. The laser ranging part uses a high-frequency laser range finder to measure distance information on the target's shape, yielding a large amount of point cloud data. The mechanical scanning part includes a high-speed rotary table, a high-speed transit and the related structural design, so that the whole system can perform three-dimensional laser scanning of the target along the designed scanning path. The data processing part, built around an FPGA with LabVIEW software, processes the point cloud data collected by the laser range finder at high speed and fits the point cloud data to establish a three-dimensional model of the target, thereby realizing laser scanning imaging.

  8. Inhomogeneous Point-Processes to Instantaneously Assess Affective Haptic Perception through Heartbeat Dynamics Information

    NASA Astrophysics Data System (ADS)

    Valenza, G.; Greco, A.; Citi, L.; Bianchi, M.; Barbieri, R.; Scilingo, E. P.

    2016-06-01

    This study proposes the application of a comprehensive signal processing framework, based on inhomogeneous point-process models of heartbeat dynamics, to instantaneously assess affective haptic perception using electrocardiogram-derived information exclusively. The framework relies on inverse-Gaussian point-processes with Laguerre expansion of the nonlinear Wiener-Volterra kernels, accounting for the long-term information given by the past heartbeat events. Up to cubic-order nonlinearities allow for an instantaneous estimation of the dynamic spectrum and bispectrum of the considered cardiovascular dynamics, as well as for instantaneous measures of complexity, through Lyapunov exponents and entropy. Short-term caress-like stimuli were administered for 4.3-25 seconds on the forearms of 32 healthy volunteers (16 females) through a wearable haptic device, by selectively superimposing two levels of force, 2 N and 6 N, and two levels of velocity, 9.4 mm/s and 65 mm/s. Results demonstrated that our instantaneous linear and nonlinear features were able to finely characterize the affective haptic perception, with a recognition accuracy of 69.79% along the force dimension, and 81.25% along the velocity dimension.
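
    The core ingredient of such frameworks is an inverse-Gaussian waiting-time density whose mean depends on the beat history. The sketch below uses a simple linear history dependence in place of the paper's Laguerre-expanded Volterra kernels, with made-up RR data:

    import numpy as np
    from scipy.optimize import minimize

    def ig_logpdf(w, mu, lam):
        """Inverse-Gaussian log-density of waiting time w (mean mu, shape lam)."""
        return (0.5 * np.log(lam / (2 * np.pi * w ** 3))
                - lam * (w - mu) ** 2 / (2 * mu ** 2 * w))

    def neg_loglik(params, rr, p=4):
        """History-dependent point-process likelihood: the mean of each RR
        interval is a linear function of the previous p intervals (a linear
        simplification of the nonlinear Wiener-Volterra expansion)."""
        theta, log_lam = params[:p + 1], params[p + 1]
        ll = 0.0
        for k in range(p, len(rr)):
            mu = theta[0] + theta[1:] @ rr[k - p:k]
            if mu <= 0:
                return np.inf
            ll += ig_logpdf(rr[k], mu, np.exp(log_lam))
        return -ll

    rr = 0.8 + 0.05 * np.random.default_rng(10).standard_normal(300)  # fake RR (s)
    p = 4
    x0 = np.r_[rr.mean() * 0.2, np.full(p, 0.2), np.log(100.0)]
    fit = minimize(neg_loglik, x0, args=(rr,), method="Nelder-Mead")
    print("fitted shape parameter:", np.exp(fit.x[-1]))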

  9. Stairs or escalator? Using theories of persuasion and motivation to facilitate healthy decision making.

    PubMed

    Suri, Gaurav; Sheppes, Gal; Leslie, Sara; Gross, James J

    2014-12-01

    To encourage an increase in daily activity, researchers have tried a variety of health-related communications, but with mixed results. In the present research-using the stair escalator choice context-we examined predictions derived from the Heuristic Systematic Model (HSM), Self Determination Theory (SDT), and related theories. Specifically, we tested whether (as predicted by HSM) signs that encourage heuristic processing ("Take the Stairs") would have greatest impact when placed at the stair/escalator point of choice (when processing time is limited), whereas signs that encourage systematic processing ("Will You Take the Stairs?") would have greatest impact when placed at some distance from the point of choice (when processing time is less limited). We also tested whether (as predicted by SDT) messages promoting autonomy would be more likely to result in sustained motivated behavior (i.e., stair taking at subsequent uncued choice points) than messages that use commands. A series of studies involving more than 9,000 pedestrians provided support for these predictions. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  10. Optimization of the transition path of the head hardening with using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Wróbel, Joanna; Kulawik, Adam

    2016-06-01

    An automated method for choosing the transition path of the hardening head in the heat treatment process of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of the effective stresses and the yield point, which depends on the phase composition. The path of the hardening tool and the area of the heat-affected zone are determined on the basis of the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena in the hardening process is implemented in order to verify the presented method. The finite element method (FEM) is used to solve the heat transfer equation and obtain the required temperature fields. The moving heat source is modeled with a Gaussian distribution, and water cooling is also included. A macroscopic model based on the analysis of CCT and CHT diagrams of medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used to solve the equilibrium equations, giving the stress field; the thermal and structural strains are taken into account in the constitutive relations.

  11. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful, whether for knowing the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and its applicability for robust and efficient envelope reconstruction.

  12. A Robust False Matching Points Detection Method for Remote Sensing Image Registration

    NASA Astrophysics Data System (ADS)

    Shan, X. J.; Tang, P.

    2015-04-01

    Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points, but it cannot detect all of them in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust, high-accuracy results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated connecting each matching point to its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points, and the error of each matching point is computed using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than a given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
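
    The KGD loop described above translates almost line by line into code. In this sketch the local transformation model is taken to be affine, and K, L and the threshold are illustrative values:

    import numpy as np
    from scipy.spatial import cKDTree

    def kgd_filter(src, dst, K=8, err_thresh=3.0, L=5):
        """Iteratively drop the L matches with the largest transfer error
        under a local affine model fitted to each match's K nearest matches,
        until all errors fall below the threshold."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        keep = np.arange(len(src))
        while len(keep) > K + 1:
            s, d = src[keep], dst[keep]
            idx = cKDTree(s).query(s, k=K + 1)[1][:, 1:]     # exclude self
            errors = np.empty(len(s))
            for i, nbrs in enumerate(idx):
                A = np.hstack([s[nbrs], np.ones((K, 1))])    # local affine fit
                M, *_ = np.linalg.lstsq(A, d[nbrs], rcond=None)
                pred = np.r_[s[i], 1.0] @ M
                errors[i] = np.linalg.norm(pred - d[i])
            if errors.max() <= err_thresh:
                break
            keep = keep[np.argsort(errors)[:-L]]             # drop L worst
        return keep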

  13. Multimodel assessment of the upper troposphere and lower stratosphere: Tropics and global trends

    NASA Astrophysics Data System (ADS)

    Gettelman, A.; Hegglin, M. I.; Son, S.-W.; Kim, J.; Fujiwara, M.; Birner, T.; Kremser, S.; Rex, M.; Añel, J. A.; Akiyoshi, H.; Austin, J.; Bekki, S.; Braesike, P.; Brühl, C.; Butchart, N.; Chipperfield, M.; Dameris, M.; Dhomse, S.; Garny, H.; Hardiman, S. C.; Jöckel, P.; Kinnison, D. E.; Lamarque, J. F.; Mancini, E.; Marchand, M.; Michou, M.; Morgenstern, O.; Pawson, S.; Pitari, G.; Plummer, D.; Pyle, J. A.; Rozanov, E.; Scinocca, J.; Shepherd, T. G.; Shibata, K.; Smale, D.; Teyssèdre, H.; Tian, W.

    2010-01-01

    The performance of 18 coupled Chemistry Climate Models (CCMs) in the Tropical Tropopause Layer (TTL) is evaluated using qualitative and quantitative diagnostics. Trends in tropopause quantities in the tropics and the extratropical Upper Troposphere and Lower Stratosphere (UTLS) are analyzed. A quantitative grading methodology for evaluating CCMs is extended to include variability and used to develop four different grades for tropical tropopause temperature and pressure, water vapor and ozone. Four of the 18 models and the multi-model mean meet quantitative and qualitative standards for reproducing key processes in the TTL. Several diagnostics are performed on a subset of the models analyzing the Tropopause Inversion Layer (TIL), Lagrangian cold point and TTL transit time. Historical decreases in tropical tropopause pressure and decreases in water vapor are simulated, lending confidence to future projections. The models simulate continued decreases in tropopause pressure in the 21st century, along with ~1 K per century increases in cold point tropopause temperature and 0.5-1 ppmv per century increases in water vapor above the tropical tropopause. TTL water vapor increases below the cold point. In two models, these trends are associated with 35% increases in TTL cloud fraction. These changes indicate significant perturbations to TTL processes, specifically to deep convective heating and humidity transport. Ozone in the extratropical lowermost stratosphere shows significant and hemispherically asymmetric trends. O3 is projected to increase by nearly 30% due to ozone recovery in the Southern Hemisphere (SH) and due to enhancements in the stratospheric circulation. These UTLS ozone trends may have significant effects in the TTL and the troposphere.

  14. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
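
    A short simulation illustrates the claim that a two-state linear model already spans the full range of linear behaviour. The sketch below is illustrative only: it integrates dx/dt = Ax by forward Euler and shows that the eigenvalues of A select stable decay, sustained oscillation, or exponential growth.

```python
import numpy as np

def simulate(A, x0, dt=0.01, steps=2000):
    """Forward-Euler integration of the two-state linear system dx/dt = A @ x."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        x += dt * (A @ x)
        traj.append(x.copy())
    return np.array(traj)

# The eigenvalues of A determine the qualitative behaviour:
decay     = np.array([[-1.0, 0.0], [0.0, -2.0]])  # negative real parts: stable decay
oscillate = np.array([[0.0, 1.0], [-1.0, 0.0]])   # imaginary pair: sustained oscillation
growth    = np.array([[0.5, 0.0], [0.0,  0.3]])   # positive real parts: exponential growth

for name, A in [("decay", decay), ("oscillate", oscillate), ("growth", growth)]:
    final = simulate(A, [1.0, 0.0])[-1]
    print(name, "eigenvalues:", np.linalg.eigvals(A), "final |x|:", np.linalg.norm(final))
```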

  15. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. Our analysis is of specific interest for the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are encoded in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
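
    The model class can be sketched compactly. The update rule, parameter values, and reflecting bounds below are assumptions chosen only to illustrate a stochastic multiplicative process for the interevent time and the resulting counting statistics; they are not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def interevent_times(n=100_000, gamma=0.01, sigma=0.1, mu=0.5,
                     tau_min=1e-3, tau_max=1.0):
    """Multiplicative stochastic update of the interevent time tau (a sketch
    of the model class; exponents and bounds are assumptions)."""
    tau = np.empty(n)
    t = 0.1
    for k in range(n):
        t = t + gamma * t ** (2 * mu - 1) + sigma * t ** mu * rng.normal()
        # reflecting boundaries keep tau in a physically sensible range
        t = min(max(t, tau_min), tau_max)
        tau[k] = t
    return tau

tau = interevent_times()
events = np.cumsum(tau)                  # event (trade) times on the time axis
# counting statistics: number of events per unit time window
counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], 1.0))
print("mean/variance of counts per window:", counts.mean(), counts.var())
```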

  16. Recurrent landsliding of a high bank at Dunaszekcső, Hungary: Geodetic deformation monitoring and finite element modeling

    NASA Astrophysics Data System (ADS)

    Bányai, László; Mentes, Gyula; Újvári, Gábor; Kovács, Miklós; Czap, Zoltán; Gribovszki, Katalin; Papp, Gábor

    2014-04-01

    Five years of geodetic monitoring data at Dunaszekcső, Hungary, are processed to evaluate recurrent landsliding, which is a characteristic geomorphological process affecting the high banks of the Middle Danube valley in Hungary. The integrated geodetic observations provide accurate three-dimensional coordinate time series, and these data are used to calculate the kinematic features of point movements and the rigid-body behavior of point blocks. Additional datasets include borehole tiltmeter data and hydrological recordings of the Danube and of soil water wells. These data, together with two-dimensional finite element analyses, are utilized to gain a better understanding of the physical and soil-mechanical background and the stability features of the high bank. We indicate that the main trigger of movements is changing groundwater levels, whose effect is an order of magnitude greater than that of river water level changes. Varying displacement rates of the sliding blocks are interpreted as having been caused by basal pore water pressure changes originating from shear zone volume changes, floods of the River Danube through lateral seepage, and rain infiltration. Both data and modeling point to the complex nature of bank sliding at Dunaszekcső: some features imply that the movements are rotational, some reveal slumping. All available observational and modeling data point to the retrogressive development of the high bank at Dunaszekcső. Regarding mitigation, the detailed analysis of three basic parameters (the direction of displacement vectors, tilting, and the acceleration component of the kinematic function) is suggested, because these parameters indicate the zone where the largest lateral displacements can be expected and point to the advent of the rapid landsliding phase that affects high banks along the River Danube.

  17. Evaluating the Variations in the Flood Susceptibility Maps Accuracies due to the Alterations in the Type and Extent of the Flood Inventory

    NASA Astrophysics Data System (ADS)

    Tehrany, M. Sh.; Jones, S.

    2017-10-01

    This paper explores the influence of the extent and density of the flood inventory data on the final susceptibility map, using the extreme 2011 Brisbane flood event as the case study. Logistic Regression (LR) was selected for the modelling, as it is a well-known algorithm in natural hazard modelling owing to its ease of interpretation, rapid processing time, and accurate measurement approach. The LR model was applied using polygon and point formats of the inventory data; random subsets of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken with each group. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon data and the 1000, 700, 500, 300, 100 and 50 random points were 63%, 76%, 88%, 80%, 74%, 71% and 65%, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
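
    The modelling-and-evaluation pipeline described here is straightforward to illustrate. The sketch below uses synthetic stand-ins for the flood conditioning factors and scikit-learn's LogisticRegression; the feature construction is hypothetical, but the LR-plus-AUC workflow mirrors the one described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for flood conditioning factors (e.g. elevation, slope,
# distance to river); the real study samples raster layers of the basin.
n = 1000                                     # e.g. 1000 inventory points
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # synthetic flood / non-flood labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]        # susceptibility scores in [0, 1]
print("AUC (prediction rate):", roc_auc_score(y_te, prob))
```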

  18. A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment

    PubMed Central

    Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng

    2015-01-01

    The quick and accurate understanding of the ambient environment, which is composed of road curbs, vehicles, pedestrians, etc., is critical for developing intelligent vehicles. The road elements included in this work are road curbs and the dynamic road obstacles that directly affect the drivable area. A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article. First, ground segmentation is performed via multi-feature extraction of the raw data grabbed by the Velodyne LIDAR to satisfy the requirement of online environment modeling. Curbs and dynamic road obstacles are detected and tracked in different manners: curves are fitted to curb points, while obstacle points are clustered into bundles whose shape and kinematic parameters are calculated. The Kalman filter is used to track dynamic obstacles, whereas the snake model is employed for curbs. Results indicate that the proposed framework is robust under various environments and satisfies the requirements for online processing. PMID:26404290
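
    The Kalman filtering step for dynamic obstacles can be illustrated with a constant-velocity sketch. The motion model, noise covariances, and frame interval below are assumptions; the paper does not publish its filter parameters.

```python
import numpy as np

dt = 0.1  # LIDAR frame interval in seconds (assumption)

# Constant-velocity model for a tracked obstacle: state = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # only cluster centroids are observed
Q = 0.01 * np.eye(4)                  # process noise covariance (assumption)
R = 0.05 * np.eye(2)                  # measurement noise covariance (assumption)

def kalman_step(x, P, z):
    """One predict/update cycle for a single obstacle track."""
    x = F @ x                                 # predict state
    P = F @ P @ F.T + Q                       # predict covariance
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update with centroid measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.10, 0.00]), np.array([0.20, 0.05]), np.array([0.31, 0.09])]:
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x)
```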

  19. Relation of perceived breathiness to laryngeal kinematics and acoustic measures based on computational modeling

    PubMed Central

    Samlan, Robin A.; Story, Brad H.; Bunton, Kate

    2014-01-01

    Purpose To determine 1) how specific vocal fold structural and vibratory features relate to breathy voice quality and 2) the relation of perceived breathiness to four acoustic correlates of breathiness. Method A computational, kinematic model of the vocal fold medial surfaces was used to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: vocal process separation, surface bulging, vibratory nodal point, and epilaryngeal constriction. Twelve naïve listeners rated breathiness of 364 samples relative to a reference. The degree of breathiness was then compared to 1) the underlying kinematic profile and 2) four acoustic measures: cepstral peak prominence (CPP), harmonics-to-noise ratio, and two measures of spectral slope. Results Vocal process separation alone accounted for 61.4% of the variance in perceptual rating. Adding nodal point ratio and bulging to the equation increased the explained variance to 88.7%. The acoustic measure CPP accounted for 86.7% of the variance in perceived breathiness, and explained variance increased to 92.6% with the addition of one spectral slope measure. Conclusions Breathiness ratings were best explained kinematically by the degree of vocal process separation and acoustically by CPP. PMID:23785184

  20. Robust hashing with local models for approximate similarity search.

    PubMed

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods use randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely robust hashing with local models (RHLM), which learns a set of robust hash functions to map high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code using the hash functions and then explores the buckets that have hash codes similar to the query hash code. Extensive experimental results on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
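
    The query stage described above (map the query to a hash code, then explore buckets with similar codes) can be sketched as follows. The learned hash functions of RHLM are replaced here by random hyperplane projections purely for illustration; the bucket-probing logic is the point of the sketch, not the hash-function learning.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

rng = np.random.default_rng(0)
dim, n_bits = 32, 12

# Stand-in hash functions: random hyperplanes (RHLM learns these from local
# structure; random projections are used here only to give codes a shape).
W = rng.normal(size=(dim, n_bits))

def hash_code(x):
    return tuple((x @ W > 0).astype(int))

# Index the database points into buckets keyed by their hash code.
db = rng.normal(size=(10_000, dim))
buckets = defaultdict(list)
for i, x in enumerate(db):
    buckets[hash_code(x)].append(i)

def search(q, max_flips=1):
    """Probe the query's bucket plus all buckets within Hamming distance max_flips."""
    code = np.array(hash_code(q))
    candidates = list(buckets.get(tuple(code), []))
    for r in range(1, max_flips + 1):
        for flips in combinations(range(n_bits), r):
            c = code.copy()
            c[list(flips)] ^= 1                    # flip r bits of the query code
            candidates += buckets.get(tuple(c), [])
    return candidates                              # ids to re-rank by true distance

print("candidates retrieved:", len(search(rng.normal(size=dim))))
```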

  1. Synthetic aperture radar and digital processing: An introduction

    NASA Technical Reports Server (NTRS)

    Dicenzo, A.

    1981-01-01

    A tutorial on synthetic aperture radar (SAR) is presented, with emphasis on digital data collection and processing. Background information on waveform frequency and phase notation, mixing, I/Q conversion, sampling and cross-correlation operations is included for clarity. The fate of a SAR signal from transmission to processed image is traced in detail, using the model of a single bright point target against a dark background. Some of the principal problems connected with SAR processing are also discussed.

  2. RFID in the blood supply chain--increasing productivity, quality and patient safety.

    PubMed

    Briggs, Lynne; Davis, Rodeina; Gutierrez, Alfonso; Kopetsky, Matthew; Young, Kassandra; Veeramani, Raj

    2009-01-01

    As part of the overall design of a new, standardized RFID-enabled blood transfusion medicine supply chain, an assessment was conducted at two hospitals: the University of Iowa Hospital and Clinics (UIHC) and the Mississippi Baptist Health System (MBHS). The main objectives of the study were to assess RFID technological and economic feasibility, along with possible impacts on productivity, quality and patient safety. A step-by-step process analysis focused on the factors contributing to process "pain points" (errors, inefficiency, product losses). A process re-engineering exercise produced blueprints of RFID-enabled processes to alleviate or eliminate those pain points. In addition, an innovative model quantifying the potential reduction in adverse patient effects as a result of RFID implementation was created, allowing improvement initiatives to focus on the process areas with the greatest potential impact on patient safety. The study concluded that it is feasible to implement RFID-enabled processes, with tangible improvements to productivity and safety expected. Based on a comprehensive cost/benefit model, it is estimated that a large hospital (UIHC) would recover the investment from implementation within two to three years, while smaller hospitals may need longer to realize ROI. More importantly, the study estimated that RFID technology could substantially reduce morbidity and mortality among patients receiving transfusions.

  3. The Power of Black and Latina/o Counterstories: Urban Families and College-Going Processes

    ERIC Educational Resources Information Center

    Knight, Michelle G.; Norton, Nadjwa E. L.; Bentley, Courtney C.; Dixon, Iris R.

    2004-01-01

    This article examines the diversity of practices utilized by working-class and poor black and Latina/o families to support their children's college-going processes. We employ the work of feminists, scholars of color, and critical ethnographers to critique the power undergirding the monolithic model establishing one entry point of parental…

  4. Testing Methodology in the Student Learning Process

    ERIC Educational Resources Information Center

    Gorbunova, Tatiana N.

    2017-01-01

    The subject of the research is to build methodologies to evaluate student knowledge by testing. The author points to the importance of feedback about the level of mastery in the learning process. Testing is considered as a tool. The object of the study is to create test system models for defence practice problems. Special attention is paid…

  5. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method that computes the registration from photographs of the patient's mouth. From a set of matched singular points between two photographs and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
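
    The registration criterion, minimizing reprojection errors over a rigid transformation, can be sketched with a generic solver. The projection matrix and point data below are toy values, and the parameterization (rotation vector plus translation) and optimizer are choices of this sketch, not necessarily those of the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
P = np.hstack([np.eye(3), np.zeros((3, 1))])    # toy 3x4 projection matrix (assumption)
X = rng.normal(size=(10, 3)) + [0, 0, 5]        # 3D singular points on the mandible model
uv_obs = X[:, :2] / X[:, 2:3] + rng.normal(size=(10, 2)) * 0.01  # matched 2D points

def reproject(params, X):
    """Apply the rigid transform (rotation vector + translation) and project."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    Xt = X @ R.T + params[3:]
    x = np.hstack([Xt, np.ones((len(Xt), 1))]) @ P.T
    return x[:, :2] / x[:, 2:3]                 # perspective division

def residuals(params):
    return (reproject(params, X) - uv_obs).ravel()

sol = least_squares(residuals, np.zeros(6))     # 6-DoF rigid registration
print("optimised pose:", sol.x)
print("RMS reprojection error:", np.sqrt((sol.fun ** 2).mean()))
```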

  6. A method of computer aided design with self-generative models in NX Siemens environment

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.

    2015-11-01

    Currently, in CAD/CAE/CAM systems it is possible to create 3D virtual design models that capture a certain amount of knowledge. Such models are especially useful in the automation of routine design tasks. They are known as self-generative (or auto-generative) models and can behave in an intelligent way. The main difference between auto-generative and fully parametric models is the auto-generative models' ability to self-organize. Here, self-organizing means that, besides making automatic changes to a model's quantitative features, these models possess knowledge of how those changes should be made; moreover, they are able to change qualitative features according to specific knowledge. Despite their undoubted advantages, self-generative models are not often used in the constructional design process, mainly because of their usually great complexity, which makes their preparation time- and labour-consuming and requires considerable investment outlay. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in the NX Siemens CAD/CAE/CAM software are presented. There are five methods of preparing self-generative models in NX: with a parametric relations model, with part families, with a GRIP language application, with knowledge fusion, and with the OPEN API mechanism. Examples of each type of self-generative model are presented. These methods make the constructional design process much faster, and preparing such models is recommended whenever design variants need to be created. The conducted research on assessing the usefulness of the elaborated models showed that they are highly recommended for the automation of routine tasks. It remains difficult, however, to single out one preferred method of self-generative model preparation; the choice always depends on the complexity of the problem. The easiest way to prepare such a model is with the parametric relations model, whilst the hardest is with the OPEN API mechanism. From the knowledge processing point of view, the best choice is knowledge fusion.

  7. Estimation of Phosphorus Emissions in the Upper Iguazu Basin (brazil) Using GIS and the More Model

    NASA Astrophysics Data System (ADS)

    Acosta Porras, E. A.; Kishi, R. T.; Fuchs, S.; Hilgert, S.

    2016-06-01

    Pollution emissions into a drainage basin have a direct impact on surface water quality. These emissions result from human activities and become pollution loads when they reach the water bodies, as point or diffuse sources. Their pollution potential depends on the characteristics and quantity of the transported materials. The estimation of pollution loads can assist decision-making in basin management: knowledge about the potential pollution sources allows a prioritization of pollution control policies to achieve the desired water quality and helps avoid problems such as eutrophication of water bodies. The focus of the research described in this study is phosphorus emissions into river basins. The study area is the upper Iguazu basin in the northeast region of the State of Paraná, Brazil, covering about 2,965 km2, where around 4 million inhabitants live concentrated on just 16% of the area. The MoRE (Modeling of Regionalized Emissions) model was used to estimate phosphorus emissions. MoRE uses empirical approaches to model processes in analytical units and can use spatially distributed parameters, covering emissions from both point and non-point sources. To model the processes, the basin was divided into 152 analytical units with an average size of 20 km2. The available data were organized in a GIS environment, using, for example, layers of precipitation, a digital terrain model from a 1:10,000-scale map, and soils and land cover derived from remote sensing imagery, together with point pollution discharges and statistical socio-economic data. The model shows that one of the main pollution sources in the upper Iguazu basin is domestic sewage, which enters the river as a point source (effluents of treatment stations) and/or as diffuse pollution caused by failures of sanitary sewer systems or clandestine sewer discharges, accounting for about 56% of the emissions. The second significant share of emissions comes from direct runoff and groundwater, responsible for 32% of the total. Finally, agricultural erosion and industrial pathways represent 12% of emissions. This study shows that MoRE is capable of producing valid emission calculations from a relatively limited input data basis.

  8. Modelling the morphodynamics and co-evolution of coast and estuarine environments

    NASA Astrophysics Data System (ADS)

    Morris, Chloe; Coulthard, Tom; Parsons, Daniel R.; Manson, Susan; Barkwith, Andrew

    2017-04-01

    The morphodynamics of coast and estuarine environments are known to be sensitive to environmental change and sea-level rise. However, whilst these systems have received considerable individual research attention, how they interact and co-evolve is relatively understudied. These systems are intrinsically linked, and it is therefore advantageous to study them holistically in order to build a more comprehensive understanding of their behaviour and to inform sustainable management over the long term. Complex environments such as these are often studied using numerical modelling techniques, but owing to the limited research in this area, existing models are currently not capable of simulating dynamic coast-estuarine interactions. A new model is being developed by coupling the one-line Coastline Evolution Model (CEM) with CAESAR-Lisflood (C-L), a hydrodynamic landscape evolution model. It is intended that the eventual model be used to advance the understanding of these systems and how they may evolve over the mid to long term in response to climate change. In the UK, the Holderness Coast, Humber Estuary and Spurn Point system offers a diverse and complex case study for this research. Holderness is one of the fastest eroding coastlines in Europe, and research suggests that the large volumes of material removed from its cliffs are responsible for the formation of the Spurn Point feature and for the Holocene infilling of the Humber Estuary. Marine, fluvial and coastal processes are continually reshaping this system, and over the next century it is predicted that climate change could lead to increased erosion along the coast and increased supply of material to the Humber Estuary and Spurn Point. How this manifests will be hugely influential on the future morphology of these systems and the existence of Spurn Point. Progress to date includes a new version of the CEM that has been prepared for integration into C-L and features an improved graphical user interface and more complex geomorphological processes. Preliminary results from simulations of the Holderness Coast and Spurn Point support the findings of other authors, who suggest that changes to the wave climate influence sediment transport patterns in the nearshore zone. The angle of wave approach to the Holderness coast is particularly significant compared with wave height, with the volume of transported material peaking at an approach angle of 42 degrees. Further applications and results of this new model will be presented and discussed.

  9. Research on on-line monitoring technology for steel ball's forming process based on load signal analysis method

    NASA Astrophysics Data System (ADS)

    Li, Ying-jun; Ai, Chang-sheng; Men, Xiu-hua; Zhang, Cheng-liang; Zhang, Qi

    2013-04-01

    This paper presents a novel online monitoring technology for assessing forming quality in the steel ball forming process, based on a load signal analysis method, in order to reveal the bottom die's load characteristics in the initial cold heading forging of steel balls. A mechanical model of the cold header production process is established and analyzed using the finite element method, and the maximum cold heading force is calculated. The results show that monitoring the cold heading process through the upsetting force is reasonable and feasible. Forming defects are reflected in three feature points of the bottom die signals: the initial point, the inflection point, and the peak point. A novel PVDF piezoelectric force sensor, simple in construction and convenient to install, is designed; its sensitivity is calculated and its characteristics are analyzed by FEM. The PVDF piezoelectric force sensor is fabricated to acquire the actual load signals in the cold heading process and is calibrated by a special device. A measuring system for online monitoring is built. The characteristics of the actual signals recognized by the learning and identification algorithm are consistent with the simulation results. Identification of the actual signals shows that the timing differences of all feature points for qualified products do not exceed ±6 ms, and the amplitude differences are less than ±3%. The calibration and application experiments show that the PVDF force sensor has good static and dynamic performance and is suited to dynamic measurement of the upsetting force. The system greatly improves the level of automation and machining precision; with the damage identification method, which depends on the steel grade, the equipment capacity factor has been improved to 90%.

  10. Clinical, information and business process modeling to promote development of safe and flexible software.

    PubMed

    Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn

    2006-09-01

    Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.

  11. Determination of end point of primary drying in freeze-drying process control.

    PubMed

    Patel, Sajal M; Doen, Takayuki; Pikal, Michael J

    2010-03-01

    Freeze-drying is a relatively expensive process requiring long processing time, and hence one of the key objectives during freeze-drying process development is to minimize the primary drying time, which is the longest of the three steps in freeze-drying. However, increasing the shelf temperature into secondary drying before all of the ice is removed from the product will likely cause collapse or eutectic melt. Thus, from both a product quality and a process economics standpoint, it is critical to detect the end of primary drying. Experiments were conducted with 5% mannitol and 5% sucrose as model systems. The apparent end point of primary drying was determined by comparative pressure measurement (i.e., Pirani vs. MKS Baratron), dew point, Lyotrack (gas plasma spectroscopy), water concentration from tunable diode laser absorption spectroscopy, condenser pressure, the pressure rise test (manometric temperature measurement or variations of this method), and product thermocouples. Vials were pulled from the drying chamber using a sample thief during late primary and early secondary drying to determine percent residual moisture, either gravimetrically or by Karl Fischer, and the cake structure was inspected visually for melt-back, collapse, and retention of cake structure at the apparent end point of primary drying (i.e., onset, midpoint, and offset). Of the methods tested, the Pirani is by far the best choice for evaluating the end point of primary drying. It is also a batch technique that is cheap, steam sterilizable, and easy to install without requiring any modification to the existing dryer.
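
    The comparative pressure measurement rests on the fact that a Pirani gauge calibrated for nitrogen over-reads when the chamber gas is mostly water vapour, so its reading converges to the capacitance manometer (Baratron) reading as primary drying ends. A minimal sketch of that end-point test follows; the 5% settling tolerance and the synthetic pressure traces are assumptions.

```python
import numpy as np

def primary_drying_endpoint(t, p_pirani, p_baratron, tol=0.05):
    """Comparative pressure measurement: the Pirani gauge over-reads while
    water vapour dominates the chamber gas, so primary drying is judged
    complete when the Pirani/Baratron ratio settles near 1 (tol is an
    assumption, not a published value)."""
    ratio = np.asarray(p_pirani) / np.asarray(p_baratron)
    settled = np.abs(ratio - 1.0) < tol
    return t[np.argmax(settled)] if settled.any() else None

# Synthetic run: the ratio decays from ~1.6 (water vapour) towards 1 (nitrogen).
t = np.linspace(0.0, 30.0, 300)                  # hours of primary drying
ratio = 1.0 + 0.6 * np.exp(-t / 8.0)
endpoint = primary_drying_endpoint(t, ratio * 100.0, np.full_like(t, 100.0))
print("apparent end of primary drying at ~%.1f h" % endpoint)
```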

  12. 3D Building Façade Reconstruction Using Handheld Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Sadeghi, F.; Arefi, H.; Fallah, A.; Hahn, M.

    2015-12-01

    Three-dimensional building modelling has been an interesting topic of research for decades, and it seems that photogrammetric methods provide the only economic means to acquire truly 3D city data. Given the enormous developments in 3D building reconstruction, with applications such as navigation systems, location-based services and urban planning, the need to consider semantic features (such as windows and doors) has become more essential than ever, and a 3D model of buildings as blocks is no longer sufficient. To reconstruct the façade elements completely, we employ high-density point cloud data obtained from a handheld laser scanner. The advantage of the handheld laser scanner, with its capability of directly acquiring very dense 3D point clouds, is that there is no need to derive three-dimensional data from multiple images using structure-from-motion techniques. This paper presents a grammar-based algorithm for façade reconstruction using handheld laser scanner data. The proposed method is a combination of bottom-up (data-driven) and top-down (model-driven) methods: first the basic façade elements are extracted in a bottom-up way, and they then serve as prior knowledge for further processing to complete the models, especially in occluded and incomplete areas. The first step of the data-driven modelling is a conditional RANSAC (RANdom SAmple Consensus) algorithm that detects the façade plane in the point cloud and removes noisy objects such as trees, pedestrians, traffic signs and poles. The façade planes are then divided into three depth layers to detect protrusion, indentation and wall points using a density histogram. Owing to the poor reflection of laser beams from glass, windows appear as holes in the point cloud and can therefore be distinguished and extracted easily, in contrast to the other façade elements. The next step is rasterizing the indentation layer, which holds the window and door information. After rasterization, morphological operators are applied to remove small irrelevant objects. Horizontal splitting lines are then employed to determine floors, and vertical splitting lines to delimit walls, windows and doors. The wall, window and door elements, named terminals, are clustered during a classification process; each terminal carries a width property. Among the terminals, windows and doors define the geometry tiles in the vocabulary of the grammar rules. Higher-order structures inferred by grouping the tiles yield the production rules. The rules, together with the three-dimensionally modelled façade elements, constitute a formal grammar, named the façade grammar, which holds all the information necessary to reconstruct façades in the style of the given building. Thus, it can be used to improve and complete façade reconstruction in areas with no or limited sensor data. Finally, a reconstructed 3D façade model is generated whose geometric accuracy, in size and position, depends on the density of the raw point cloud.
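
    The plane detection step can be illustrated with a plain RANSAC plane detector (the paper's additional conditions on the RANSAC are not reproduced here). The inlier tolerance and the synthetic cloud below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, n_iter=500, inlier_tol=0.02):
    """Detect the dominant plane (the façade) in a point cloud by RANSAC:
    repeatedly fit a plane to 3 random points and keep the model with the
    most inliers. inlier_tol is in the units of the cloud (assumption)."""
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                              # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)     # point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, normal)
    return best_model, best_inliers

# Synthetic façade: a noisy plane plus scattered clutter (trees, poles, ...).
plane = np.c_[rng.uniform(0, 10, 2000), rng.uniform(0, 6, 2000),
              rng.normal(0, 0.005, 2000)]
clutter = rng.uniform([0, 0, -2], [10, 6, 2], (400, 3))
model, inliers = ransac_plane(np.vstack([plane, clutter]))
print("façade points kept:", inliers.sum())
```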

  13. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  14. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points - A Review.

    PubMed

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and strongly influences the accuracy of InSAR products. In this paper, the principles and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie point matching, and measures of interferogram quality.

  16. Analysis and application of opinion model with multiple topic interactions.

    PubMed

    Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng

    2017-08-01

    To reveal the heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by multidimensional vectors. We measure an agent's action towards a specific topic by the product of its opinion and the topic feature. When a pair of agents interacts on a topic, their actions enter the opinion updates under bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension: the critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods, while having much smaller computational complexity. This paper provides a demonstration of possible practical applications of theoretical opinion dynamics.
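
    A minimal sketch of the interaction rule follows, assuming a Deffuant-style pairwise update: two agents compare their actions (opinion-topic products) on a randomly drawn topic and move toward each other along the topic direction only when the actions differ by less than the tolerance threshold. The update form and all parameter values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, EPS = 200, 3, 0.4        # agents, opinion/topic dimension, tolerance threshold

opinions = rng.uniform(-1, 1, (N, D))
topics = rng.uniform(-1, 1, (20, D))          # pool of topic feature vectors

for step in range(50_000):
    i, j = rng.choice(N, 2, replace=False)
    topic = topics[rng.integers(len(topics))]
    # an agent's action toward the topic: product of opinion and topic feature
    a_i, a_j = opinions[i] @ topic, opinions[j] @ topic
    if abs(a_i - a_j) < EPS:                  # bounded confidence on the actions
        # move each opinion toward the other along the topic direction
        opinions[i] += 0.25 * (a_j - a_i) * topic / (topic @ topic)
        opinions[j] += 0.25 * (a_i - a_j) * topic / (topic @ topic)

# opinion spread as a crude order parameter (small spread indicates consensus)
print("opinion spread per dimension:", opinions.std(axis=0))
```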

  17. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method for capturing the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation, and the parameters modeling lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time step and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The result of the tracking process is the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest, and the result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established from the mean displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
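
    The forward ray intersection used to obtain 3-D coordinates from matched triplets has a standard least-squares form: find the point minimizing the summed squared distance to the camera rays. The sketch below assumes known camera centres and ray directions (in practice these come from the orientation and calibration data); the numbers are toy values.

```python
import numpy as np

def forward_ray_intersection(origins, directions):
    """Least-squares intersection of camera rays: solve for the 3D point that
    minimises the summed squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)    # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Three synchronized cameras observing the same matched point (toy numbers).
origins = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
target = np.array([0.4, 0.3, 2.0])
directions = target - origins             # ideal rays through the target point
print(forward_ray_intersection(origins, directions))   # ~ [0.4, 0.3, 2.0]
```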

  18. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    USGS Publications Warehouse

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.

  19. Exploration of Impinging Water Spray Heat Transfer at System Pressures Near the Triple Point

    NASA Technical Reports Server (NTRS)

    Golliher, Eric L.; Yao, Shi-Chune

    2013-01-01

    The heat transfer of a water spray impinging upon a surface in a very low pressure environment is of interest to cooling of space vehicles during launch and re-entry, and to industrial processes where flash evaporation occurs. At very low pressure, the process occurs near the triple point of water, and there exists a transient multiphase transport problem of ice, water and water vapor. At the impingement location, there are three heat transfer mechanisms: evaporation, freezing and sublimation. A preliminary heat transfer model was developed to explore the interaction of these mechanisms at the surface and within the spray.

  20. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
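
    A brief simulation sketch, under stated assumptions: taking a 'Paretian Poisson process' to mean a Poisson process on the positive half-line with power-law intensity λ(x) = c·x^(−1−α), one can sample realisations above a small cutoff and check the Fréchet law for the largest point.

```python
import numpy as np

rng = np.random.default_rng(0)

def paretian_poisson(c=1.0, alpha=1.5, eps=1e-2, size=1):
    """Sample realisations of a Poisson process on (eps, inf) whose intensity
    is the power law lambda(x) = c * x**(-1 - alpha); the lower cutoff eps is
    a simulation necessity, not part of the model."""
    mean_count = (c / alpha) * eps ** (-alpha)   # expected number of points above eps
    realisations = []
    for _ in range(size):
        n = rng.poisson(mean_count)
        # conditional on n, the points above eps are i.i.d. Pareto(alpha, eps)
        realisations.append(eps * (1.0 + rng.pareto(alpha, n)))
    return realisations

# Extreme-value link: the largest point is Frechet-distributed,
# P(max <= x) = exp(-(c/alpha) * x**(-alpha)).
maxima = np.array([np.max(pts, initial=0.0) for pts in paretian_poisson(size=2000)])
print("empirical P(max <= 1):", (maxima <= 1.0).mean(),
      " theory:", np.exp(-1.0 / 1.5))
```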
