Sample records for point process approach

  1. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  2. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, the fission yield can be calculated using two approaches, namely the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound-nucleus central point into two parts, namely the left and right central points. These three points have different Gaussian distribution parameters, such as the means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with randomly changed σL and σR.
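
    The iterative trapping loop described above can be sketched in a few lines (a rough illustration with hypothetical parameter values and update rules, not the authors' code): particles drawn from the compound-nucleus Gaussian are repeatedly re-assigned to the left or right centre until the counts (NL, NR) stop changing.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical toy-model parameters (mean, sigma) for the compound nucleus
        # and the left/right centres produced by "smashing" it.
        mu_cn, sigma_cn = 0.0, 10.0
        mu_l, sigma_l = -5.0, rng.uniform(2.0, 6.0)
        mu_r, sigma_r = 5.0, rng.uniform(2.0, 6.0)

        particles = rng.normal(mu_cn, sigma_cn, size=5000)

        n_l_prev, n_r_prev = -1, -1
        for _ in range(100):  # iterate until (NL, NR) become constant
            # Assign each particle to the centre whose Gaussian gives it more weight.
            w_l = np.exp(-0.5 * ((particles - mu_l) / sigma_l) ** 2) / sigma_l
            w_r = np.exp(-0.5 * ((particles - mu_r) / sigma_r) ** 2) / sigma_r
            trapped_left = w_l > w_r
            n_l, n_r = int(trapped_left.sum()), int((~trapped_left).sum())
            if (n_l, n_r) == (n_l_prev, n_r_prev):
                break
            n_l_prev, n_r_prev = n_l, n_r
            # Update the centres from the particles they currently trap.
            mu_l, mu_r = particles[trapped_left].mean(), particles[~trapped_left].mean()

        print("NL =", n_l, "NR =", n_r)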

  3. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish the fixed-point specification automatically are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
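
    For readers unfamiliar with fixed-point formats, the following sketch (illustrative only, not the authors' methodology; the 16-bit word length and fractional bit choices are arbitrary) shows how a floating-point signal is quantised to a fixed-point format and how the fractional word length drives the accuracy loss.

        import numpy as np

        def to_fixed_point(x, n_frac_bits, n_total_bits=16):
            """Quantise x to a signed fixed-point format with n_frac_bits fractional bits."""
            scale = 2 ** n_frac_bits
            q_min, q_max = -(2 ** (n_total_bits - 1)), 2 ** (n_total_bits - 1) - 1
            q = np.clip(np.round(x * scale), q_min, q_max).astype(np.int32)
            return q / scale  # value actually represented in hardware

        x = np.sin(2 * np.pi * np.linspace(0, 1, 1000))   # floating-point reference signal
        for n_frac in (7, 11, 15):
            err = x - to_fixed_point(x, n_frac)
            sqnr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
            print(f"Q{15 - n_frac}.{n_frac}: SQNR = {sqnr:.1f} dB")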

  4. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood, however, is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  5. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and with the prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
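
    As a minimal sketch of the discrete-time PP-GLM idea (our illustration, not the paper's code; all simulation parameters below are made up), the conditional intensity of a single neuron can be modelled as the exponential of a baseline plus a linear filter of its own recent spike counts, and the history weights recovered by Poisson regression.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        T, dt, n_hist = 20000, 0.001, 5                    # bins, bin width (s), history lags

        # Simulate a self-exciting (Hawkes-like) spike train in discrete time.
        true_base, true_hist = np.log(20.0 * dt), np.array([1.2, 0.8, 0.4, 0.2, 0.1])
        spikes = np.zeros(T, dtype=int)
        for t in range(T):
            h = spikes[max(0, t - n_hist):t][::-1]         # most recent bins first
            lam = np.exp(true_base + np.dot(true_hist[:len(h)], h))
            spikes[t] = rng.poisson(lam)

        # Design matrix of lagged spike counts, then fit the Poisson GLM.
        X = np.column_stack([np.roll(spikes, lag) for lag in range(1, n_hist + 1)])
        X[:n_hist] = 0                                     # discard wrapped-around lags
        glm = sm.GLM(spikes, sm.add_constant(X), family=sm.families.Poisson()).fit()
        print("estimated history filter:", np.round(glm.params[1:], 2))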

  6. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and with the prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  7. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, suggesting a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
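
    A minimal Monte Carlo sketch of the idea (ours, with made-up tie-point ages and Gamma shape): accumulation increments between two absolutely dated tie points are drawn from a Gamma process, and normalising the cumulative increments so that the tie-point ages are honoured yields an ensemble of age-depth profiles, whose marginal ages are Beta-distributed as in the analytical case noted above; empirical confidence intervals follow directly.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical setup: a 100-step section between two dated tie points.
        n_depth, age_top, age_bottom = 100, 1000.0, 2000.0   # ages in years
        shape, n_sim = 4.0, 2000                              # Gamma shape per step, realisations

        # Gamma-distributed accumulation increments, rescaled so that every
        # realisation exactly honours the two tie-point ages.
        increments = rng.gamma(shape, 1.0, size=(n_sim, n_depth))
        cum = np.cumsum(increments, axis=1)
        ages = age_top + (age_bottom - age_top) * cum / cum[:, -1:]

        lower, median, upper = np.percentile(ages, [2.5, 50.0, 97.5], axis=0)
        mid = n_depth // 2
        print(f"age at mid-depth: {median[mid]:.0f} yr (95% CI {lower[mid]:.0f}-{upper[mid]:.0f})")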

  8. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes utilizing neural networks. This strategy involves the training of two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin approach and the nonlinear programming approach. The efficacy of this scheme is illustrated via simulation of a fed-batch fermentation.
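
    As a rough, hypothetical illustration of the two-network strategy (not the authors' implementation; the training data below are synthetic stand-ins for off-line optimisation results), scikit-learn regressors can play the role of the two networks: one maps initial batch conditions to a switching time, the other maps the current state to the singular-arc input.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # Hypothetical training data that a dynamic optimiser would generate off-line:
        # initial conditions -> optimal switching time, and states -> singular input.
        X_init = rng.uniform(0.5, 1.5, size=(500, 2))               # e.g. x0, s0
        t_switch = 2.0 + 0.5 * X_init[:, 0] - 0.3 * X_init[:, 1]    # stand-in "optimal" values
        X_state = rng.uniform(0.0, 2.0, size=(500, 3))              # e.g. x, s, p
        u_singular = 0.1 + 0.2 * X_state[:, 1] / (1.0 + X_state[:, 0])

        net_switch = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        net_input = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        net_switch.fit(X_init, t_switch)
        net_input.fit(X_state, u_singular)

        # On-line use: predict the switching time for a new batch, then the singular input.
        print("switching time:", net_switch.predict([[1.0, 1.0]])[0])
        print("singular input:", net_input.predict([[1.0, 0.8, 0.5]])[0])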

  9. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which proceeds at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as an additional source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
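
    The panoramic representation exploited above can be illustrated with a short sketch (ours, not the authors' code; the angular resolution is an assumed value): each scanner-centred 3D point is mapped to an azimuth/elevation cell, and its range is stored as the pixel value.

        import numpy as np

        def point_cloud_to_panorama(xyz, az_res_deg=0.1, el_res_deg=0.1):
            """Project scanner-centred points onto a spherical range panorama."""
            x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
            r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
            az = np.degrees(np.arctan2(y, x))          # -180..180
            el = np.degrees(np.arcsin(z / r))          # -90..90
            cols = ((az + 180.0) / az_res_deg).astype(int)
            rows = ((el + 90.0) / el_res_deg).astype(int)
            pano = np.full((int(180 / el_res_deg) + 1, int(360 / az_res_deg) + 1), np.nan)
            pano[rows, cols] = r                       # last point wins per cell
            return pano

        pts = np.random.default_rng(4).normal(size=(10000, 3)) * [10, 10, 2] + [0, 0, 1.5]
        panorama = point_cloud_to_panorama(pts)
        print("panorama shape:", panorama.shape)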

  10. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings to the machine the human expert knowledge about the scene, the objects inside it, their representation by the data, and the behavior of algorithms. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take these into account within the processing chain. This not only assists researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement of knowledge technologies within the Semantic Web framework, which has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used in our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topological built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.

  11. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique composed of feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, which is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, which indicates its potential for use in real-time systems.
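
    Because the method falls back on the iterative closest point (ICP) algorithm when linear features are missing, a bare-bones 2D ICP sketch (ours: SVD-based point-to-point alignment on a toy scan, not the paper's implementation) may help clarify that building block.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_2d(src, dst, n_iter=30):
            """Align 2D scan `src` to `dst` by iterating closest-point matching + SVD."""
            R, t = np.eye(2), np.zeros(2)
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(n_iter):
                _, idx = tree.query(cur)                       # closest-point association
                p, q = cur - cur.mean(0), dst[idx] - dst[idx].mean(0)
                U, _, Vt = np.linalg.svd(p.T @ q)
                R_step = Vt.T @ U.T
                if np.linalg.det(R_step) < 0:                  # avoid reflections
                    Vt[-1] *= -1
                    R_step = Vt.T @ U.T
                t_step = dst[idx].mean(0) - cur.mean(0) @ R_step.T
                cur = cur @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step
            return R, t

        # Toy example: recover a known rotation and translation.
        rng = np.random.default_rng(5)
        dst = rng.uniform(-5, 5, size=(400, 2))
        theta = np.radians(10.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        src = (dst - [0.5, -0.3]) @ R_true                     # inverse transform of dst
        R_est, t_est = icp_2d(src, dst)
        print("recovered angle (deg):", np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))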

  12. A new approach to criteria for health risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spickett, Jeffery, E-mail: J.Spickett@curtin.edu.au; Faculty of Health Sciences, School of Public Health, Curtin University, Perth, Western Australia; Katscherian, Dianne

    2012-01-15

    Health Impact Assessment (HIA) is a developing component of the overall impact assessment process and as such needs access to procedures that can enable more consistent approaches to the stepwise process that is now generally accepted in both EIA and HIA. The guidelines developed during this project provide a structured process, based on risk assessment procedures which use consequences and likelihood, as a way of ranking risks to adverse health outcomes from activities subjected to HIA or HIA as part of EIA. The aim is to assess the potential for both acute and chronic health outcomes. The consequences component also identifies a series of consequences for the health care system, depicted as expressions of financial expenditure and the capacity of the health system. These more specific health risk assessment characteristics should provide for a broader consideration of health consequences and a more consistent estimation of the adverse health risks of a proposed development at both the scoping and risk assessment stages of the HIA process. Highlights: a more objective approach to health risk assessment is provided; an objective set of criteria for the consequences for chronic and acute impacts; an objective set of criteria for the consequences on the health care system; an objective set of criteria for event frequency that could impact on health; the approach presented is currently being trialled in Australia.

  13. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection, and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  14. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection, and processing of UAV image data to ensure accurate point cloud generation.

  15. Processing approaches to cognition: the impetus from the levels-of-processing framework.

    PubMed

    Roediger, Henry L; Gallo, David A; Geraci, Lisa

    2002-01-01

    Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.

  16. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique composed of feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, which is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, which indicates its potential for use in real-time systems. PMID:28481285

  17. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
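
    As a hedged illustration of the depth-layer approach mentioned above (our sketch, not the paper's implementation; wavelength, pixel pitch and propagation distance are assumed values), the complex field of one depth layer can be propagated to the hologram plane with the FFT-based angular spectrum method.

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, pitch, z):
            """Propagate a complex field by distance z using the angular spectrum method."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

        # One depth layer of the scene: a sparse set of object points assigned to this
        # layer by the depth map (synthetic stand-in for Kinect data).
        rng = np.random.default_rng(6)
        layer = (rng.random((512, 512)) > 0.999).astype(complex)
        hologram_field = angular_spectrum_propagate(layer, 633e-9, 8e-6, z=0.2)
        hologram = np.angle(hologram_field)                  # keep only the phase
        print(hologram.shape, hologram.dtype)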

  18. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationship among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
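
    The tangent-plane fitting underlying this kind of approach can be sketched as follows (our illustration, with a synthetic cloud and an assumed neighbourhood size k): for each point, the k nearest neighbours are gathered with a k-d tree and the local normal is taken as the eigenvector of the neighbourhood covariance with the smallest eigenvalue.

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=12):
            """Estimate a unit normal per point from the PCA of its k-neighbourhood."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points)
            for i, nb in enumerate(idx):
                nbrs = points[nb] - points[nb].mean(axis=0)
                # Eigenvector with the smallest eigenvalue = local plane normal.
                _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
                normals[i] = vecs[:, 0]
            return normals

        # Toy cloud: a noisy plane z = 0; estimated normals should be close to +/- (0, 0, 1).
        rng = np.random.default_rng(7)
        pts = np.column_stack([rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000),
                               rng.normal(0, 0.01, 2000)])
        n = estimate_normals(pts)
        print("mean |nz|:", np.abs(n[:, 2]).mean())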

  19. Communicational Approach to Study Textbook Discourse on the Derivative

    ERIC Educational Resources Information Center

    Park, Jungeun

    2016-01-01

    This paper investigates how three widely used calculus textbooks in the U.S. realize the derivative as a point-specific object and as a function using Sfard's communicational approach. For this purpose, the study analyzed word-use and visual mediators for the "limit process" through which the derivative at a point was objectified, and…

  20. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  1. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. To date, forecasting earthquake occurrence at a given location remains difficult, so the development of earthquake forecasting methodology continues from both the seismological and the stochastic point of view. To explain such random natural phenomena, both in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence in time, whereas a spatial point process describes the location of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study aims to model a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function by conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimated model parameters show that seismic activity in Sumatra Island is greater than in Java Island.
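
    A hedged sketch of the maximum likelihood step (ours, using simulated event times rather than the USGS catalogue): for a temporal conditional intensity λ(t | Ht), the log-likelihood is Σi log λ(ti) − ∫0..T λ(t) dt, which is maximised numerically below for a simple exponentially decaying self-exciting intensity.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_likelihood(params, times, T):
            """Self-exciting intensity lambda(t) = mu + alpha * sum_i exp(-beta*(t - t_i))."""
            mu, alpha, beta = np.exp(params)               # enforce positivity
            ll, decay, prev = 0.0, 0.0, 0.0
            for t in times:
                decay = decay * np.exp(-beta * (t - prev))  # recursive history sum
                ll += np.log(mu + alpha * decay)
                decay += 1.0
                prev = t
            # Compensator: integral of the intensity over [0, T].
            integral = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
            return -(ll - integral)

        # Simulated event times (placeholder for an earthquake catalogue above magnitude 5).
        rng = np.random.default_rng(8)
        times = np.sort(rng.uniform(0, 1000, size=300))
        res = minimize(neg_log_likelihood, x0=np.log([0.2, 0.5, 1.0]),
                       args=(times, 1000.0), method="Nelder-Mead")
        print("mu, alpha, beta =", np.round(np.exp(res.x), 3))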

  2. A scalable approach for tree segmentation within small-footprint airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-05-01

    This paper presents a distributed approach that scales up to segment tree crowns within a LiDAR point cloud representing an arbitrarily large forested area. The approach uses a single-processor tree segmentation algorithm as a building block in order to process the data, delivered in the shape of tiles, in parallel. The distributed processing is performed in a master-slave manner, in which the master maintains the global map of the tiles and coordinates the slaves that segment tree crowns within and across the boundaries of the tiles. A minimal bias was introduced into the number of detected trees by trees lying across the tile boundaries; this bias was quantified and adjusted for. Theoretical and experimental analyses of the runtime of the approach revealed a near-linear speedup. The estimated number of trees categorized by crown class and the associated error margins, as well as the height distribution of the detected trees, aligned well with field estimates, verifying that the distributed approach works correctly. The approach makes it possible to provide individual tree locations and point cloud segments for a forest-level area in a timely manner, which can be used to create detailed remotely sensed forest inventories. Although the approach was presented for tree segmentation within LiDAR point clouds, the idea can also be generalized to scale up the processing of other big spatial datasets.
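
    A schematic sketch of the master-slave tiling idea (ours, with a dummy per-tile worker in place of the single-processor segmentation algorithm; merging crowns across tile boundaries is omitted): the master splits the point cloud into tiles and a process pool segments them in parallel.

        import numpy as np
        from multiprocessing import Pool

        def segment_tile(args):
            """Placeholder per-tile worker: returns the tile id and a fake tree count."""
            tile_id, points = args
            # A real implementation would run the single-processor crown segmentation here.
            return tile_id, max(1, len(points) // 500)

        def tile_point_cloud(points, tile_size=50.0):
            keys = np.floor(points[:, :2] / tile_size).astype(int)
            tiles = {}
            for key, p in zip(map(tuple, keys), points):
                tiles.setdefault(key, []).append(p)
            return [(k, np.array(v)) for k, v in tiles.items()]

        if __name__ == "__main__":
            pts = np.random.default_rng(9).uniform(0, 200, size=(20000, 3))  # fake LiDAR
            tiles = tile_point_cloud(pts)
            with Pool(processes=4) as pool:
                results = pool.map(segment_tile, tiles)   # slaves process tiles in parallel
            print("detected trees per tile:", dict(results))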

  3. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D.; Godtliebsen, F.; Rue, H.

    2012-04-01

    Detailed knowledge of past climate variations is of high importance for gaining better insight into possible future climate scenarios. The relative shortness of the available high-quality instrumental climate records necessitates the use of various climate proxy archives for making inferences about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, suggesting a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.

  4. An ArcGIS approach to include tectonic structures in point data regionalization.

    PubMed

    Darsow, Andreas; Schafmeister, Maria-Theresia; Hofmann, Thilo

    2009-01-01

    Point data derived from drilling logs must often be regionalized. However, aquifers may show discontinuous surface structures, such as the offset of an aquitard caused by tectonic faults. One main challenge has been to incorporate these structures into the regionalization process of point data. We combined ordinary kriging and inverse distance weighted (IDW) interpolation to account for neotectonic structures in the regionalization process. The study area chosen to test this approach is the largest porous aquifer in Austria. It consists of three basins formed by neotectonic events and delimited by steep faults with a vertical offset of the aquitard up to 70 m within very short distances. First, ordinary kriging was used to incorporate the characteristic spatial variability of the aquitard location by means of a variogram. The tectonic faults could be included into the regionalization process by using breaklines with buffer zones. All data points inside the buffer were deleted. Last, IDW was performed, resulting in an aquitard map representing the discontinuous surface structures. This approach enables one to account for such surfaces using the standard software package ArcGIS; therefore, it could be adopted in many practical applications.
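
    For reference, a minimal inverse distance weighted (IDW) interpolation sketch is given below (our code, with synthetic data; the breakline and buffer handling described above is not reproduced).

        import numpy as np

        def idw(xy_known, z_known, xy_query, power=2.0, n_neighbors=12):
            """Inverse distance weighted interpolation at the query locations."""
            z_hat = np.empty(len(xy_query))
            for i, q in enumerate(xy_query):
                d = np.linalg.norm(xy_known - q, axis=1)
                if np.any(d < 1e-12):                  # query coincides with a data point
                    z_hat[i] = z_known[np.argmin(d)]
                    continue
                nearest = np.argsort(d)[:n_neighbors]
                w = 1.0 / d[nearest] ** power
                z_hat[i] = np.sum(w * z_known[nearest]) / np.sum(w)
            return z_hat

        rng = np.random.default_rng(10)
        xy = rng.uniform(0, 100, size=(200, 2))                    # e.g. drilling-log locations
        z = np.sin(xy[:, 0] / 20) + 0.05 * rng.normal(size=200)    # synthetic aquitard elevation
        grid = np.array([[x, 50.0] for x in np.linspace(0, 100, 11)])
        print(np.round(idw(xy, z, grid), 2))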

  5. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~ n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920

  6. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~ n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  7. Uncertainty management in intelligent design aiding systems

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Gabbert, Paula S.

    1988-01-01

    A novel approach to uncertainty management which is particularly effective in intelligent design aiding systems for large-scale systems is presented. The use of this approach in the materials handling system design domain is discussed. It is noted that, at any point in the design process, a point value can be obtained for the evaluation of feasible designs; however, the techniques described provide unique solutions for these point values using only the current information about the design environment.

  8. Mean-field approaches to the totally asymmetric exclusion process with quenched disorder and large particles

    NASA Astrophysics Data System (ADS)

    Shaw, Leah B.; Sethna, James P.; Lee, Kelvin H.

    2004-08-01

    The process of protein synthesis in biological systems resembles a one-dimensional driven lattice gas in which the particles (ribosomes) have spatial extent, covering more than one lattice site. Realistic, nonuniform gene sequences lead to quenched disorder in the particle hopping rates. We study the totally asymmetric exclusion process with large particles and quenched disorder via several mean-field approaches and compare the mean-field results with Monte Carlo simulations. Mean-field equations obtained from the literature are found to be reasonably effective in describing this system. A numerical technique is developed for computing the particle current rapidly. The mean-field approach is extended to include two-point correlations between adjacent sites. The two-point results are found to match Monte Carlo simulations more closely.
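
    A small Monte Carlo sketch of the model (ours, with arbitrary lattice size, particle length, and quenched rates): particles covering several sites hop to the right on a ring with site-dependent rates, subject to exclusion, and the acceptance rate of attempted hops is recorded as a crude proxy for the current.

        import numpy as np

        rng = np.random.default_rng(11)
        L, ell, n_particles, steps = 200, 3, 30, 200000   # lattice size, particle size, ...

        rates = rng.uniform(0.5, 1.0, size=L)             # quenched disorder in hopping rates
        occupied = np.zeros(L, dtype=bool)                # sites covered by any particle
        heads = np.arange(n_particles) * (L // n_particles)   # left-most site of each particle
        for h in heads:
            occupied[(h + np.arange(ell)) % L] = True

        hops = 0
        for _ in range(steps):
            i = rng.integers(n_particles)
            h = heads[i]
            target = (h + ell) % L                        # site just ahead of the particle
            if not occupied[target] and rng.random() < rates[h]:
                occupied[h] = False                       # vacate the tail site
                occupied[target] = True                   # cover the new leading site
                heads[i] = (h + 1) % L
                hops += 1

        print("fraction of attempted hops accepted:", hops / steps)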

  9. Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.; Chen, L. C.

    2012-07-01

    Airborne LIDAR point clouds, which contain a considerable number of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed approaches to identify structure lines following two main paradigms, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. With the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations exhibit small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines from dense point clouds.
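
    The pre-processing step described above, keeping the highest point per pseudo-grid cell while preserving its original three-dimensional position, can be sketched as follows (our illustration, with a synthetic cloud and an arbitrary cell size).

        import numpy as np

        def pseudo_grid_dsm(points, cell=0.5):
            """Keep, for every grid cell, the index of the highest LIDAR point."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            best = {}                                     # (i, j) -> index of highest point
            for idx, (key, z) in enumerate(zip(map(tuple, ij), points[:, 2])):
                if key not in best or z > points[best[key], 2]:
                    best[key] = idx
            return best                                   # original 3D positions stay accessible

        rng = np.random.default_rng(12)
        cloud = np.column_stack([rng.uniform(0, 20, 5000), rng.uniform(0, 20, 5000),
                                 rng.uniform(0, 10, 5000)])
        dsm_cells = pseudo_grid_dsm(cloud)
        print("number of DSM cells:", len(dsm_cells))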

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
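
    A minimal numerical sketch of the Poisson point process side of this equivalence (ours; a grid-cell approximation rather than the MAXENT software): presence points are reduced to counts per cell, the log-intensity is taken to be linear in a covariate, and a Poisson regression with a log-area offset recovers the intensity parameters.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(13)

        # Simulate a Poisson point process on [0, 1]^2 with log-linear intensity in one covariate.
        n_grid = 50
        xc = (np.arange(n_grid) + 0.5) / n_grid
        gx, gy = np.meshgrid(xc, xc)
        covariate = gx.ravel()                                  # e.g. an environmental gradient
        cell_area = (1.0 / n_grid) ** 2
        true_intensity = np.exp(4.0 + 2.0 * covariate)          # points per unit area
        counts = rng.poisson(true_intensity * cell_area)

        # Poisson regression with log(cell area) as offset approximates the PPP likelihood.
        X = sm.add_constant(covariate)
        fit = sm.GLM(counts, X, family=sm.families.Poisson(),
                     offset=np.full(counts.size, np.log(cell_area))).fit()
        print("intercept, slope:", np.round(fit.params, 2))     # should be near (4.0, 2.0)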

  11. Commercial dissemination approaches for solar home systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terrado, E.

    1997-12-01

    The author discusses the issue of providing solar home systems to primarily rural areas from the perspective of how to commercialize the process. He considers two different approaches, one an open market approach and the other an exclusive market approach. He describes examples of the exclusive market approach which are in process in Argentina and Brazil. Coming from a banking background, the business aspects are discussed in detail. He points out the strengths and weaknesses of both approaches toward developing such systems.

  12. On the interpolation of volumetric water content in research catchments

    NASA Astrophysics Data System (ADS)

    Dlamini, Phesheya; Chaplot, Vincent

    Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation of data points affect prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), the DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa, was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for estimation of θg and ρb. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST), and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). It was found that approach 1 underestimated θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and improved the prediction accuracy by only 1.3% compared to approach 1. Such a benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 thus seems promising and can be further tested for DSM of other secondary variables.

  13. Change in the Embedding Dimension as an Indicator of an Approaching Transition

    PubMed Central

    Neuman, Yair; Marwan, Norbert; Cohen, Yohai

    2014-01-01

    Predicting a transition point in behavioral data should take into account the complexity of the signal as influenced by contextual factors. In this paper, we propose to analyze changes in the embedding dimension as contextual information indicating an approaching transition point, an approach called OPtimal Embedding tRANsition Detection (OPERAND). Three texts were processed and translated into time series of emotional polarity. It was found that changes in the embedding dimension preceded transition points in the data. These preliminary results encourage further research into changes in the embedding dimension as generic markers of an approaching transition point. PMID:24979691

  14. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  15. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  16. Determination of Steering Wheel Angles during CAR Alignment by Image Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Voegtle, T.

    2016-06-01

    Optical systems for automatic visual inspection are of increasing importance in the field of automation in the industrial domain. A new application is the determination of steering wheel angles during the wheel track setting of the final inspection stage of car manufacturing. The camera has to be positioned outside the car to avoid interrupting the process, and therefore oblique images of the steering wheel must be acquired. Three different computer vision approaches are considered in this paper, i.e. 2D shape-based matching (by means of a plane-to-plane rectification of the oblique images and detection of a shape model with a particular rotation), 3D shape-based matching (by means of a series of different perspectives of the spatial shape of the steering wheel derived from a CAD design model) and point-to-point matching (by means of the extraction of significant elements (e.g. multifunctional buttons) of a steering wheel and a pairwise connection of these points to straight lines). The HALCON system (HALCON, 2016) was used for all software developments and necessary adaptations. As a reference, a mechanical balance with an accuracy of 0.1° was used. The quality assessment was based on two different approaches, a laboratory test and a test during the production process. In the laboratory, a standard deviation of ±0.035° (2D shape-based matching), ±0.12° (3D approach) and ±0.029° (point-to-point matching) could be obtained. The field test of 291 measurements (27 cars with varying poses and angles of the steering wheel) results in a detection rate of 100% and ±0.48° (2D matching) and ±0.24° (point-to-point matching). Both methods also fulfil the requirement of real-time processing (three measurements per second).

  17. Estimation of Traffic Variables Using Point Processing Techniques

    DOT National Transportation Integrated Search

    1978-05-01

    An alternative approach to estimating aggregate traffic variables on freeways--spatial mean velocity and density--is presented. Vehicle arrival times at a given location on a roadway, typically a presence detector, are regarded as a point or counting...

  18. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated parts in classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points for envelope alignment and Phase Difference (PD) estimation are probably not the same point, making it difficult to uncouple the coupling term by conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a scattering point can be chosen. The envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the common reference point. The keystone transform is thus smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.

  19. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.

  20. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
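
    For intuition only, here is a much-simplified Python sketch that parallelizes an inverse-distance-weighting interpolator (a cheap stand-in for the kriging step, not the paper's implementation) over rows of the output grid using the standard library's multiprocessing module; all sizes and parameters are illustrative.

        import numpy as np
        from multiprocessing import Pool
        from functools import partial

        def idw_row(y, xs, pts, vals, power=2.0):
            """Interpolate one grid row by inverse distance weighting."""
            row = np.empty(xs.size)
            for j, x in enumerate(xs):
                d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
                w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
                row[j] = np.sum(w * vals) / np.sum(w)
            return row

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pts = rng.random((5000, 2))                  # scattered (x, y) samples
            vals = np.sin(6 * pts[:, 0]) + pts[:, 1]     # synthetic elevations
            xs = np.linspace(0, 1, 256)
            ys = np.linspace(0, 1, 256)

            with Pool() as pool:                         # one task per grid row
                rows = pool.map(partial(idw_row, xs=xs, pts=pts, vals=vals), ys)
            dem = np.vstack(rows)
            print(dem.shape)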

  1. Comparison of infrared and 3D digital image correlation techniques applied for mechanical testing of materials

    NASA Astrophysics Data System (ADS)

    Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko

    2015-11-01

    To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared thermography with three-dimensional digital image correlation has been made. Dynamic tension tests and three-point bending tests of aluminum alloys were performed to evaluate the results obtained by IR thermography and to establish the capabilities and limits of the two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tension and three-point bending tests proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three-point bending test, in contrast to the fluctuation of the plastification front in the tension test, is of great importance for the validation of numerical constitutive models. The research proved the strong performance, robustness and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved to be superior in the low-velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials, where mid-wave infrared thermography was applied.

  2. An Interpretation of the "Science--A Process Approach" Objectives in Terms of Existing Psychological Theory and Experimentation.

    ERIC Educational Resources Information Center

    Cole, Henry P.

    This paper examines the sequence and hierarchy of objectives in the American Association for the Advancement of Science (AAAS) "Science--A Process Approach" curriculum. The work of Piaget and Bruner forms a framework within which the learning objectives and tasks in the AAAS science curriculum are examined. The points of correspondence…

  3. Separation processes using expulsion from dilute supercritical solutions

    DOEpatents

    Cochran, Jr., Henry D.

    1993-01-01

    A process for separating isotopes as well as other mixtures by utilizing the behavior of dilute repulsive or weakly attractive elements of the mixtures as the critical point of the solvent is approached.

  4. SU-E-J-108: Solving the Chinese Postman Problem for Effective Contour Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, J; Zhang, L; Balter, P

    2015-06-15

    Purpose: To develop a practical approach for accurate contour deformation when deformable image registration (DIR) is used for atlas-based segmentation or contour propagation in image-guided radiotherapy. Methods: A contour deformation approach was developed on the basis of 3D mesh operations. The 2D contours represented by a series of points in each slice were first converted to a 3D triangular mesh, which was deformed by the deformation vectors resulting from DIR. A set of parallel 2D planes then cut through the deformed 3D mesh, generating unordered points and line segments, which should be reorganized into a set of 2D contour points. It was realized that the reorganization problem was equivalent to solving the Chinese Postman Problem (CPP) by traversing a graph built from the unordered points with the least cost. Alternatively, deformation could be applied to a binary mask converted from the original contours. The deformed binary mask was then converted back into contours at the CT slice locations. We performed a qualitative comparison to validate the mesh-based approach against the image-based approach. Results: The DIR could considerably change the 3D mesh, making complicated 2D contour representations after deformation. CPP was able to effectively reorganize the points in 2D planes no matter how complicated the 2D contours were. The mesh-based approach did not require a post-processing of the contour, thus accurately showing the actual deformation in DIR. The mesh-based approach could keep some fine details and resulted in smoother contours than the image-based approach did, especially for the lung structure. Image-based approach appeared to over-process contours and suffered from image resolution limits. The mesh-based approach was integrated into in-house DIR software for use in routine clinic and research. Conclusion: We developed a practical approach for accurate contour deformation. The efficiency of this approach was demonstrated in both clinic and research applications. This work was partially supported by Cancer Prevention & Research Institute of Texas (CPRIT) RP110562.
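
    The reorganization step can be pictured with the following Python sketch (not the authors' implementation): unordered segments from a plane-mesh intersection are loaded into a multigraph and an Eulerian circuit recovers an ordered, closed contour; a full Chinese Postman solution would additionally duplicate edges when some vertices have odd degree.

        import networkx as nx

        def segments_to_contour(segments, decimals=6):
            """Order unorganized 2D line segments into a closed contour point list."""
            G = nx.MultiGraph()
            for (p, q) in segments:
                # round endpoints so numerically identical vertices merge into one node
                u = tuple(round(c, decimals) for c in p)
                v = tuple(round(c, decimals) for c in q)
                G.add_edge(u, v)
            if not nx.is_eulerian(G):
                raise ValueError("open or branching contour: a Chinese Postman step "
                                 "(duplicating edges at odd-degree nodes) would be needed")
            start = next(iter(G.nodes))
            return [u for u, _ in nx.eulerian_circuit(G, source=start)]

        # a square described by shuffled, unordered segments
        segs = [((0.0, 0.0), (1.0, 0.0)),
                ((1.0, 1.0), (0.0, 1.0)),
                ((1.0, 0.0), (1.0, 1.0)),
                ((0.0, 1.0), (0.0, 0.0))]
        print(segments_to_contour(segs))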

  5. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, and especially to 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most available software packages for 3D point cloud processing are designed for experts and specialists in this field and are usually large packages containing a wide variety of methods and tools. As a result they tend to be expensive to acquire and difficult to use, with complicated user interfaces required to accommodate long feature lists. The aim of these complex packages is to provide a powerful tool for a specific group of specialists, but most of their features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are often compatible with only one operating system. Many point cloud customers are neither point cloud processing experts nor willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, it focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at once. This simple, user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. We introduce the Pointo family as a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.

  6. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  7. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  8. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923

  9. An Outline of Processability Theory and Its Relationship to Other Approaches to SLA

    ERIC Educational Resources Information Center

    Pienemann, Manfred

    2015-01-01

    In this article I make the point that there has been a continuous focus on second language development in second language acquisition research for over 40 years and that there is clear empirical evidence for generalizable developmental patterns. I will both summarize some of the core assumptions of Processability Theory (PT) as an approach to…

  10. Separation processes using expulsion from dilute supercritical solutions

    DOEpatents

    Cochran, H.D. Jr.

    1993-04-20

    A process is described for separating isotopes as well as other mixtures by utilizing the behavior of dilute repulsive or weakly attractive elements of the mixtures as the critical point of the solvent is approached.

  11. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
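
    One widely used similarity measure between hyperspectral image points is the spectral angle, which ignores a common gain but, consistent with the abstract's observation, still degrades under additive noise; the short sketch below (illustrative only, not the authors' proposed measure) computes it for every pixel of a synthetic cube against a reference spectrum.

        import numpy as np

        def spectral_angle(cube, ref):
            """Angle (radians) between each pixel spectrum of an (H, W, B) cube and a reference."""
            num = np.tensordot(cube, ref, axes=([2], [0]))
            den = np.linalg.norm(cube, axis=2) * np.linalg.norm(ref) + 1e-12
            return np.arccos(np.clip(num / den, -1.0, 1.0))

        rng = np.random.default_rng(0)
        bands = 100
        ref = np.abs(np.sin(np.linspace(0, 3, bands)))            # reference spectrum
        cube = np.tile(ref, (64, 64, 1)) * rng.uniform(0.5, 1.5, (64, 64, 1))
        cube += 0.05 * rng.standard_normal(cube.shape)            # additive noise
        angles = spectral_angle(cube, ref)
        print(angles.mean())   # small but nonzero: the gain is ignored, the noise is not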

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdequin, Marion, E-mail: Marion.Hourdequin@ColoradoCollege.edu; Department of Philosophy, Colorado College, 14 E. Cache La Poudre St., Colorado Springs, CO 80903; Landres, Peter

    Traditional mechanisms for public participation in environmental impact assessment under U.S. federal law have been criticized as ineffective and unable to resolve conflict. As these mechanisms are modified and new approaches developed, we argue that participation should be designed and evaluated not only on practical grounds of cost-effectiveness and efficiency, but also on ethical grounds based on democratic ideals. In this paper, we review and synthesize modern democratic theory to develop and justify four ethical principles for public participation: equal opportunity to participate, equal access to information, genuine deliberation, and shared commitment. We then explore several tensions that are inherent in applying these ethical principles to public participation in EIA. We next examine traditional NEPA processes and newer collaborative approaches in light of these principles. Finally, we explore the circumstances that argue for more in-depth participatory processes. While improved EIA participatory processes do not guarantee improved outcomes in environmental management, processes informed by these four ethical principles derived from democratic theory may lead to increased public engagement and satisfaction with government agency decisions. - Highlights: • Four ethical principles based on democratic theory for public participation in EIA. • NEPA and collaboration offer different strengths in meeting these principles. • We explore tensions inherent in applying these principles. • Improved participatory processes may improve public acceptance of agency decisions.

  13. A New Approach to Create Image Control Networks in ISIS

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Berry, K. L.; Mapel, J. A.; Walldren, J. C.

    2017-06-01

    A new approach was used to create a feature-based control point network that required the development of new tools in the Integrated Software for Imagers and Spectrometers (ISIS3) system to process very large datasets.

  14. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
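
    For orientation, the brute-force baseline that the dual-tree algorithm is compared against can be written in a few lines of Python: enumerate all pairwise distances and histogram them, which is exactly the quadratic cost the paper's analysis improves upon (sizes below are illustrative).

        import numpy as np
        from scipy.spatial.distance import pdist

        def sdh_bruteforce(points, bucket_width, n_buckets):
            """Spatial distance histogram by enumerating all O(N^2) pairwise distances."""
            d = pdist(points)                        # condensed vector of pairwise distances
            edges = np.arange(n_buckets + 1) * bucket_width
            hist, _ = np.histogram(d, bins=edges)
            return hist

        rng = np.random.default_rng(0)
        pts = rng.random((2000, 3))                  # e.g. particle coordinates in a unit cube
        print(sdh_bruteforce(pts, bucket_width=0.1, n_buckets=18))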

  15. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120

  16. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We simulated in noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64mm.

  17. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products that is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures is illustrated. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
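
    A stripped-down sketch of the binning stage, ignoring the per-observation field-of-view modeling and the icosahedral subdivision itself and assuming the grid vertex coordinates are already available: each observation is assigned to its nearest grid vertex with a k-d tree and per-bin means are accumulated with bincount (all names and sizes are hypothetical).

        import numpy as np
        from scipy.spatial import cKDTree

        def bin_to_grid(grid_xyz, obs_xyz, obs_value):
            """Mean observed value per grid vertex (nearest-vertex binning)."""
            idx = cKDTree(grid_xyz).query(obs_xyz)[1]         # nearest vertex per observation
            sums = np.bincount(idx, weights=obs_value, minlength=len(grid_xyz))
            counts = np.bincount(idx, minlength=len(grid_xyz))
            mean = np.full(len(grid_xyz), np.nan)             # empty bins stay NaN
            ok = counts > 0
            mean[ok] = sums[ok] / counts[ok]
            return mean, counts

        # toy example: random unit vectors standing in for grid vertices and boresights
        rng = np.random.default_rng(0)
        grid = rng.standard_normal((500, 3))
        grid /= np.linalg.norm(grid, axis=1, keepdims=True)
        obs = rng.standard_normal((20000, 3))
        obs /= np.linalg.norm(obs, axis=1, keepdims=True)
        tb = 100.0 + 150.0 * rng.random(20000)                # mock brightness temperatures
        mean_tb, n_per_bin = bin_to_grid(grid, obs, tb)
        print(np.nanmean(mean_tb), n_per_bin.max())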

  18. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region growing decision process. We present here three new versions of HSeg that include local edge information in the region growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.

  19. Gridless, pattern-driven point cloud completion and extension

    NASA Astrophysics Data System (ADS)

    Gravey, Mathieu; Mariethoz, Gregoire

    2016-04-01

    While satellites offer Earth observation with a wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data on an area that is limited in extent and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data over a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data are available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling, and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need for a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and the coverage is extended to a larger area (data extension).

  20. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  1. European Scientific Notes. Volume 36, Number 2,

    DTIC Science & Technology

    1982-02-28

    colleagues at the University College of Swansea have con-... ...creases in process complexity and cost of the product. So far, aluminum alloy, steel, and... ...associated with metal spray processing can impart to the solidified product... ...stream of metal impinging on a disk rotating at 3,000 to 5,000 rpm is... ...the point. Indeed, the pilot can often "fly the point" as the approach continues... ...simplicity, economy, stand-alone operability, portability, and

  2. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    NASA Astrophysics Data System (ADS)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique which can be considered as a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point at its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight about damping degree and settlement time of maximum-seeking waveforms. This article shows the transient waveforms in three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
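
    The control loop can be sketched in a few lines of discrete-time Python (illustrative parameters and a static quadratic power curve in place of a real converter model): a small sinusoidal dither is added to the operating voltage, the measured power is synchronously demodulated and low-pass filtered, and the filtered gradient estimate is integrated to move the operating point toward the maximum.

        import numpy as np

        def pv_power(v, v_mpp=17.0, p_max=100.0):
            """Static stand-in for a PV power curve with its maximum at v_mpp."""
            return p_max - 0.5 * (v - v_mpp) ** 2

        dt, f_dither = 1e-3, 50.0            # time step [s] and dither frequency [Hz]
        a, k_int = 0.3, 4.0                  # dither amplitude [V], integrator gain
        alpha = dt / (0.02 + dt)             # low-pass coefficient (time constant about 20 ms)
        beta = dt / (0.05 + dt)              # slow average used to remove the DC level of p

        v_hat = 12.0                         # initial operating voltage
        p_avg = pv_power(v_hat)
        grad_lp = 0.0
        for n in range(int(10.0 / dt)):      # 10 s of simulated operation
            s = np.sin(2 * np.pi * f_dither * n * dt)
            p = pv_power(v_hat + a * s)                      # perturbed power measurement
            p_avg += beta * (p - p_avg)                      # track and subtract the DC level
            grad_lp += alpha * ((p - p_avg) * s - grad_lp)   # demodulate and low-pass filter
            v_hat += k_int * grad_lp * dt                    # integrate the gradient estimate

        print(round(v_hat, 2))   # settles close to the 17 V maximum power point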

  3. Market-Based Approaches to Managing Science Return from Planetary Missions

    NASA Technical Reports Server (NTRS)

    Wessen, Randii R.; Porter, David; Hanson, Robin

    1996-01-01

    A research plan is described for the design and testing of a method for the planning and negotiation of science observations. The research plan is presented in relation to the fact that the current method, which involves a hierarchical process of science working groups, is unsuitable for the planning of the Cassini mission. The research plan involves the market-based approach in which participants are allocated budgets of scheduling points. The points are used to provide an intensity of preference for the observations being scheduled. In this way, the schedulers do not have to limit themselves to solving major conflicts, but try to maximize the number of scheduling points that result in a conflict-free timeline. Incentives are provided for the participants by the fixed budget concerning their tradeoff decisions. A degree of feedback is provided in the process so that the schedulers may rebid based on the current timeline.

  4. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
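
    A bare-bones version of the idea for a single mean change in Gaussian noise, using the closed-form maximum-likelihood estimators mentioned above and a BIC-style penalty as a stand-in for the paper's frequentist information criterion (which is not reproduced here):

        import numpy as np

        def gaussian_ll(x):
            """Maximized Gaussian log-likelihood of a segment (MLE mean and variance)."""
            n = len(x)
            var = max(np.var(x), 1e-12)
            return -0.5 * n * (np.log(2 * np.pi * var) + 1.0)

        def best_change_point(x, min_seg=5):
            """Most likely single change point, accepted only if it beats a BIC-style penalty."""
            n = len(x)
            ll0 = gaussian_ll(x)                               # no-change model (2 parameters)
            best_k, best_ll = None, -np.inf
            for k in range(min_seg, n - min_seg):
                ll = gaussian_ll(x[:k]) + gaussian_ll(x[k:])   # 4 parameters plus the break point
                if ll > best_ll:
                    best_k, best_ll = k, ll
            penalty = 0.5 * 3 * np.log(n)                      # 3 extra parameters
            return best_k if best_ll - ll0 > penalty else None

        rng = np.random.default_rng(0)
        signal = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])
        print(best_change_point(signal))   # expected near index 300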

  5. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar --- Light Detection and Ranging --- is a remote sensing technology that uses a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured; the distance to the object is then easily calculated from the speed of light. In lidar, this laser is moved (primarily rotationally, usually accompanied by a translational movement) and records distances to objects several thousand times per second. From these measurements a three-dimensional structure can be produced in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y, and a z attribute, which together represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target, or the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become a necessity: processing, visualizing, or even simply loading lidar can be computationally intensive because of its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks advanced processing tools, requires high-end computers, and is very costly. Existing open-source lidar tools approach the loading and processing of lidar in an iterative fashion that requires batch coding and processing times that can stretch to months for a standard lidar dataset. This project attempts to build software with an efficient approach to creating, importing and exporting, manipulating, and processing lidar, especially in the environmental field. Development of this software is described in 3 sections - (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing) and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (i.e.: hydrology feature extraction, geomorphological studies, ecology modeling, etc.).

  6. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
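
    The core least-squares step for one direction can be sketched as follows (hypothetical inputs; the cascade Hough grouping and the reference-plane refinement are not shown): each segment's endpoints define a line in homogeneous coordinates via a cross product, and the vanishing point is the right singular vector that best annihilates all of those lines.

        import numpy as np

        def vanishing_point(segments):
            """Least-squares vanishing point of 2D segments assumed parallel in the scene.

            segments: array of shape (N, 2, 2) holding the two (x, y) endpoints of each segment.
            Returns the vanishing point in inhomogeneous image coordinates.
            """
            ones = np.ones((len(segments), 1))
            p1 = np.hstack([segments[:, 0, :], ones])       # homogeneous endpoints
            p2 = np.hstack([segments[:, 1, :], ones])
            lines = np.cross(p1, p2)                        # line through each endpoint pair
            lines /= np.linalg.norm(lines, axis=1, keepdims=True)
            _, _, vt = np.linalg.svd(lines)                 # v minimizing ||lines @ v||
            v = vt[-1]
            return v[:2] / v[2]

        # segments of lines that all pass (noisily) through the point (400, 300)
        rng = np.random.default_rng(0)
        vp = np.array([400.0, 300.0])
        dirs = rng.uniform(-1, 1, (20, 2))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        t0, t1 = rng.uniform(50, 150, (20, 1)), rng.uniform(200, 300, (20, 1))
        segs = np.stack([vp + t0 * dirs, vp + t1 * dirs], axis=1)
        segs += rng.normal(0.0, 0.5, segs.shape)            # endpoint localization noise
        print(vanishing_point(segs))                        # close to (400, 300)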

  7. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.

  8. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.

  9. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly, urban ground objects also change frequently, and conventional photogrammetric methods cannot satisfy the requirement of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid cell. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the points corresponding to these curves are classified accordingly. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31 %. The result can help us quickly understand the distribution of various ground objects.
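
    A compact sketch of this pipeline with made-up parameters and synthetic points (horizontal grid, per-cell vertical density curves, PCA, then K-means into three classes); it shows the data flow only and is not intended to reproduce the reported 86.31 % precision.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def vertical_profiles(points, cell=3.0, layer=1.0, max_h=30.0):
            """Per-grid-cell vertical point-density curves (the 'characteristic curves')."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            cell_id = np.unique(ij, axis=0, return_inverse=True)[1].ravel()
            n_layers = int(max_h / layer)
            layer_id = np.clip((points[:, 2] / layer).astype(int), 0, n_layers - 1)
            curves = np.zeros((cell_id.max() + 1, n_layers))
            np.add.at(curves, (cell_id, layer_id), 1.0)
            return curves / np.maximum(curves.sum(axis=1, keepdims=True), 1.0)

        # synthetic scene: low ground strip, tall scattered vegetation, flat roofs near 8 m
        rng = np.random.default_rng(0)
        ground = np.column_stack([rng.uniform(0, 30, 4000), rng.uniform(0, 90, 4000), rng.uniform(0.0, 0.3, 4000)])
        trees = np.column_stack([rng.uniform(30, 60, 4000), rng.uniform(0, 90, 4000), rng.uniform(2.0, 12.0, 4000)])
        roofs = np.column_stack([rng.uniform(60, 90, 4000), rng.uniform(0, 90, 4000), rng.normal(8.0, 0.2, 4000)])
        cloud = np.vstack([ground, trees, roofs])

        curves = vertical_profiles(cloud)                        # 3 m grid, 1 m layers
        feats = PCA(n_components=11).fit_transform(curves)       # 11 dimensions, as in the abstract
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
        print(np.bincount(labels))                               # grid-cell counts per cluster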

  10. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor, and the results show that the noise-removal effect is improved compared with standard bilateral filtering. In the off-line condition, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed for depth images and the quality of the resulting point cloud.
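
    For reference, the conventional bilateral filter that LBF is compared against can be applied to a Kinect-style depth image with OpenCV in a couple of lines; the authors' local variant is not reproduced here, so this is only the baseline, and the depth values and filter parameters are illustrative.

        import numpy as np
        import cv2

        rng = np.random.default_rng(0)
        depth = np.full((480, 640), 1500.0, dtype=np.float32)      # flat surface at 1.5 m (in mm)
        depth[:, 320:] = 2000.0                                    # depth step: an edge to preserve
        noisy = depth + rng.normal(0.0, 15.0, depth.shape).astype(np.float32)

        # d: neighbourhood diameter; sigmaColor: depth difference (mm) treated as "similar";
        # sigmaSpace: spatial reach in pixels
        smoothed = cv2.bilateralFilter(noisy, d=9, sigmaColor=40.0, sigmaSpace=5.0)

        before = np.std(noisy[:, :300] - depth[:, :300])           # residual noise, away from the edge
        after = np.std(smoothed[:, :300] - depth[:, :300])
        print(float(before), float(after))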

  11. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.
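
    The space-only part of such an analysis is commonly approximated by a use-availability design: telemetry locations are contrasted with background points sampled from the study area, and a logistic regression on habitat covariates approximates the inhomogeneous point process intensity. The sketch below uses a synthetic covariate and hypothetical names rather than the fur seal data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def covariate(xy):
            """Synthetic habitat covariate over a unit-square study area (e.g. scaled depth)."""
            return np.sin(3 * xy[:, 0]) + xy[:, 1]

        rng = np.random.default_rng(0)

        # 1) simulate "telemetry" locations from an intensity proportional to exp(2 * covariate)
        cand = rng.random((20000, 2))
        w = np.exp(2.0 * covariate(cand))
        used = cand[rng.random(20000) < w / w.max()]

        # 2) sample available (background) points uniformly over the study area
        avail = rng.random((len(used) * 5, 2))

        # 3) use-availability logistic regression approximating the point-process fit
        X = covariate(np.vstack([used, avail]))[:, None]
        y = np.concatenate([np.ones(len(used)), np.zeros(len(avail))])
        rsf = LogisticRegression().fit(X, y)
        print(rsf.coef_[0, 0])   # selection coefficient, expected near the true value of 2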

  12. Analysis of Spatial Point Patterns in Nuclear Biology

    PubMed Central

    Weston, David J.; Adams, Niall M.; Russell, Richard A.; Stephens, David A.; Freemont, Paul S.

    2012-01-01

    There is considerable interest in cell biology in determining whether, and to what extent, the spatial arrangement of nuclear objects affects nuclear function. A common approach to address this issue involves analyzing a collection of images produced using some form of fluorescence microscopy. We assume that these images have been successfully pre-processed and a spatial point pattern representation of the objects of interest within the nuclear boundary is available. Typically in these scenarios, the number of objects per nucleus is low, which has consequences on the ability of standard analysis procedures to demonstrate the existence of spatial preference in the pattern. There are broadly two common approaches to look for structure in these spatial point patterns. First a spatial point pattern for each image is analyzed individually, or second a simple normalization is performed and the patterns are aggregated. In this paper we demonstrate using synthetic spatial point patterns drawn from predefined point processes how difficult it is to distinguish a pattern from complete spatial randomness using these techniques and hence how easy it is to miss interesting spatial preferences in the arrangement of nuclear objects. The impact of this problem is also illustrated on data related to the configuration of PML nuclear bodies in mammalian fibroblast cells. PMID:22615822
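
    The difficulty can be explored directly with a small Monte Carlo experiment: for a handful of points per nucleus, a mean nearest-neighbour statistic is compared with its distribution under complete spatial randomness. The sketch below (unit-square "nucleus", illustrative parameters) shows the machinery; with so few points per pattern such a test typically has limited power, which is the issue the abstract highlights.

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_nn_distance(points):
            """Mean nearest-neighbour distance of a 2D point pattern."""
            d, _ = cKDTree(points).query(points, k=2)        # k=2: self plus nearest other point
            return d[:, 1].mean()

        def csr_p_value(points, n_sim=999, rng=None):
            """Monte Carlo p-value against complete spatial randomness (clustered alternative)."""
            if rng is None:
                rng = np.random.default_rng()
            obs = mean_nn_distance(points)
            sims = np.array([mean_nn_distance(rng.random(points.shape)) for _ in range(n_sim)])
            return (1 + np.sum(sims <= obs)) / (n_sim + 1)   # small distances suggest clustering

        rng = np.random.default_rng(0)
        centres = rng.random((2, 2))                         # two cluster centres in a unit square
        pattern = centres[rng.integers(0, 2, 8)] + rng.normal(0.0, 0.08, (8, 2))
        print(csr_p_value(np.clip(pattern, 0.0, 1.0), rng=rng))   # p-value for this 8-point pattern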

  13. Inclusive Assessment at Point-of-Design

    ERIC Educational Resources Information Center

    Keating, Neil; Zybutz, Tanya; Rouse, Karl

    2012-01-01

    Recognising assessment as a driver for learning and a key part of the student learning experience, this case study considers the impact of opening the assessment process to active student engagement as well as placing inclusivity at the heart of the assessment task at point-of-design. This case study presents an approach to assessment that both…

  14. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and the variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with different forms of the missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women's menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  15. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  16. Five Faces of Cognition: Theoretical Influences on Approaches to Learning Disabilities.

    ERIC Educational Resources Information Center

    Hresko, Wayne P.; Reid, D. Kim

    1988-01-01

    The article points out that the label "cognitive" has been used to designate five substantially different approaches to learning disabilities: information processing, metacognition, genetic epistemology, cognitive behavior modification, and the specific-abilities model. Despite the similar label, the instructional interventions of these approaches…

  17. Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System

    PubMed Central

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340

  18. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-09-17

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data.

  19. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has changed considerably in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high-speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). In this paper we focus on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow depends on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time-of-flight or phase-shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution for generating 3D point clouds. Moreover, the captured images can also be used for texturing the models. Several software packages are available, whether web-based, open source or commercial. The main advantage of this photogrammetric or computer-vision-based technology is to obtain at the same time a point cloud (whose resolution depends on the size of the pixel on the object) and therefore an accurately meshed object with its texture. After the matching and processing steps, the resulting data can be used in much the same way as a TLS point cloud, but with the addition of radiometric information for textures. The discussion in this paper reviews recording and important processing steps such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).

  20. Phase II modification of the Water Availability Tool for Environmental Resources (WATER) for Kentucky: The sinkhole-drainage process, point-and-click basin delineation, and results of karst test-basin simulations

    USGS Publications Warehouse

    Taylor, Charles J.; Williamson, Tanja N.; Newson, Jeremy K.; Ulery, Randy L.; Nelson, Hugh L.; Cinotto, Peter J.

    2012-01-01

    This report describes Phase II modifications made to the Water Availability Tool for Environmental Resources (WATER), which applies the process-based TOPMODEL approach to simulate or predict stream discharge in surface basins in the Commonwealth of Kentucky. The previous (Phase I) version of WATER did not provide a means of identifying sinkhole catchments or accounting for the effects of karst (internal) drainage in a TOPMODEL-simulated basin. In the Phase II version of WATER, sinkhole catchments are automatically identified and delineated as internally drained subbasins, and a modified TOPMODEL approach (called the sinkhole drainage process, or SDP-TOPMODEL) is applied that calculates mean daily discharges for the basin based on summed area-weighted contributions from sinkhole drainage (SD) areas and non-karstic topographically drained (TD) areas. Results obtained using the SDP-TOPMODEL approach were evaluated for 12 karst test basins located in each of the major karst terrains in Kentucky. Visual comparison of simulated hydrographs and flow-duration curves, along with statistical measures applied to the simulated discharge data (bias, correlation, root mean square error, and Nash-Sutcliffe efficiency coefficients), indicates that the SDP-TOPMODEL approach provides acceptably accurate estimates of discharge for most flow conditions and typically provides more accurate simulation of stream discharge in karstic basins compared to the standard TOPMODEL approach. Additional programming modifications made to the Phase II version of WATER included implementation of a point-and-click graphical user interface (GUI), which fully automates the delineation of simulation-basin boundaries and improves the speed of input-data processing. The Phase II version of WATER enables the user to select a pour point anywhere on a stream reach of interest, and the program will automatically delineate all upstream areas that contribute drainage to that point. This capability enables automatic delineation of a simulation basin of any size (area) and with any level of stream-network complexity. WATER then automatically identifies the presence of sinkhole catchments within the simulation basin boundaries; extracts and compiles the necessary climatic, topographic, and basin-characteristics datasets; and runs the SDP-TOPMODEL approach to estimate daily mean discharges (streamflow).
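
    A minimal sketch of the area-weighting idea described above and of the Nash-Sutcliffe efficiency used in the evaluation, assuming hypothetical per-unit-area discharge series for the sinkhole-drained and topographically drained areas; this is not the USGS WATER/SDP-TOPMODEL code, and all names and numbers are illustrative.

```python
import numpy as np

def combined_daily_discharge(q_sd, q_td, area_sd, area_td):
    """Area-weighted mean daily discharge for a basin split into sinkhole-drained (SD)
    and topographically drained (TD) areas. q_sd and q_td are per-unit-area discharge
    series (e.g. mm/day); area_sd and area_td are the corresponding areas."""
    total = area_sd + area_td
    return (area_sd / total) * np.asarray(q_sd) + (area_td / total) * np.asarray(q_td)

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations (1 is perfect)."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

if __name__ == "__main__":
    obs = np.array([1.2, 1.0, 3.5, 2.8, 1.9])
    q_sd = np.array([1.0, 0.9, 4.0, 3.0, 2.0])   # hypothetical SD-area response
    q_td = np.array([1.5, 1.2, 2.8, 2.5, 1.7])   # hypothetical TD-area response
    sim = combined_daily_discharge(q_sd, q_td, area_sd=40.0, area_td=60.0)
    print("simulated:", np.round(sim, 2))
    print("NSE:", round(nash_sutcliffe(sim, obs), 3))
```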

  1. A Systems Approach to Identifying and Managing Opportunities and Constraints to Delivering Innovation Policy for Agriculture: An Analysis of the Australian Cooperative Research Centres (CRC) Program

    ERIC Educational Resources Information Center

    Sandall, Jean; Cooksey, Ray; Wright, Vic

    2011-01-01

    In this paper we outline an analytical approach to identifying points in the policy process where management intervention to adjust organizational design could enhance delivery of innovation policy over time. We illustrate this approach using an example from native vegetation policy in the state of Victoria, Australia. We then use this approach to…

  2. Solving multi-objective optimization problems in conservation with the reference point method

    PubMed Central

    Dujardin, Yann; Chadès, Iadine

    2018-01-01

    Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
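
    A toy illustration of the reference point idea, under the assumption that the decision problem has already been reduced to a small set of candidate solutions: an augmented Chebyshev achievement function scores how well each candidate's objective vector meets the decision-maker's reference (aspiration) point, and the best-scoring candidate is returned. The paper solves linear programmes interactively; the enumeration, portfolio values, weights, and function names here are purely hypothetical.

```python
import numpy as np

def achievement_scalarizing(objectives, reference, weights, rho=1e-3):
    """Augmented Chebyshev achievement function for maximization objectives: smaller
    values mean the solution comes closer to (or exceeds) the reference point."""
    gap = weights * (reference - objectives)          # weighted shortfall per objective
    return gap.max(axis=-1) + rho * gap.sum(axis=-1)  # Chebyshev term + small augmentation

def reference_point_choice(candidates, reference, weights):
    """Index of the candidate objective vector that best achieves the reference point."""
    scores = achievement_scalarizing(np.asarray(candidates, dtype=float),
                                     np.asarray(reference, dtype=float),
                                     np.asarray(weights, dtype=float))
    return int(np.argmin(scores))

if __name__ == "__main__":
    # Hypothetical portfolios: (expected species persistence, habitat area protected in ha).
    portfolios = [(0.62, 120.0), (0.71, 80.0), (0.55, 150.0)]
    reference = (0.70, 130.0)                 # decision-maker's aspiration levels
    weights = (1.0 / 0.70, 1.0 / 130.0)       # rough normalization by the reference values
    best = reference_point_choice(portfolios, reference, weights)
    print("preferred portfolio:", portfolios[best])
```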

  3. Learning amongst Norwegian Fire-Fighters

    ERIC Educational Resources Information Center

    Sommer, Morten; Nja, Ove

    2011-01-01

    Purpose: The purpose of this study is to reveal and analyse dominant learning processes in emergency response work from the fire-fighters' point of view, and how fire-fighters develop their competence. Design/methodology/approach: This study adopted an explorative approach using participant observation. The objective of this open-minded approach…

  4. A Method to Assess Climate-Relevant Decisions: Application in the Chesapeake Bay (2010 External Review Draft)

    EPA Science Inventory

    The goal of this study is to formalize an approach to inventory and analyze management decisions in order to produce useful information targeted toward effective adaptation to climate change. The approach uses as its starting point ongoing planning processes and decisions geared ...

  5. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

    The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting, which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud, followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques, (a) voxel-based segmentation and (b) mean shift segmentation, were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both methods had an average detection rate of 85 %, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94 % and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm2. Meanwhile, the mean shift approach showed a comparatively better counting accuracy of 95 % for the early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
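
    A minimal sketch of voxel-based segmentation in the spirit described above: quantize the points to a voxel grid and group occupied voxels into 26-connected components, which could then be classified as ear or non-ear segments by size or shape features. The voxel size and the synthetic clusters are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy import ndimage

def voxel_segments(points, voxel_size=0.01):
    """Segment a plant point cloud via 26-connected components of occupied voxels.
    Returns one integer segment label per input point and the number of segments."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    occupancy = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    occupancy[tuple(idx.T)] = True
    # Label connected occupied voxels (26-connectivity in 3D).
    labels, n_segments = ndimage.label(occupancy, structure=np.ones((3, 3, 3), dtype=int))
    return labels[tuple(idx.T)], n_segments

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ear_a = rng.normal([0.00, 0.0, 0.9], 0.01, (300, 3))   # hypothetical ear-like cluster
    ear_b = rng.normal([0.15, 0.0, 0.8], 0.01, (300, 3))
    pts = np.vstack([ear_a, ear_b])
    seg, n = voxel_segments(pts, voxel_size=0.02)
    print(f"{n} segments found; segment sizes:", np.bincount(seg)[1:])
```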

  6. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT

    PubMed Central

    Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.

    2011-01-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points (the center plus four peripheral locations) in the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approaches with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use. PMID:21336552

  7. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT.

    PubMed

    Bedayat, Arash; Rybicki, Frank J; Kumamaru, Kanako; Powers, Sara L; Signorelli, Jason; Steigner, Michael L; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T

    2012-02-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points (the center plus four peripheral locations) in the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approaches with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use.

  8. Points of View Analysis Revisited: Fitting Multidimensional Structures to Optimal Distance Components with Cluster Restrictions on the Variables.

    ERIC Educational Resources Information Center

    Meulman, Jacqueline J.; Verboon, Peter

    1993-01-01

    Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)

  9. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images is being acquired. Preferably, automatic and robust processing techniques need to be used for data analysis because of the huge amount of acquired data. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.
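
    A heavily simplified sketch of the marked-point-process idea for readers unfamiliar with it: the detector output is treated as a configuration of circles (the paper uses ellipses), an energy combines a data term (edge pixels explained by each circle) with prior terms (a per-object cost and an overlap penalty), and the configuration space is explored with birth/death/perturb proposals accepted by a Metropolis rule under simulated annealing. This is not the authors' reversible jump MCMC implementation; every constant and move here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_coverage(edge_img, cx, cy, r, tol=1.5):
    """Data term: edge pixels lying within `tol` of the circle of radius r centred
    at (cx, cy), normalized by the circle circumference."""
    ys, xs = np.nonzero(edge_img)
    if len(xs) == 0:
        return 0.0
    near = np.abs(np.hypot(xs - cx, ys - cy) - r) < tol
    return near.sum() / max(2 * np.pi * r, 1.0)

def energy(config, edge_img, object_cost=0.3, overlap_penalty=2.0):
    """Energy of a circle configuration: per-object cost (prior), minus the data term,
    plus a pairwise penalty for strongly overlapping circles."""
    e = object_cost * len(config) - sum(ring_coverage(edge_img, *c) for c in config)
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            (x1, y1, r1), (x2, y2, r2) = config[i], config[j]
            if np.hypot(x1 - x2, y1 - y2) < 0.8 * (r1 + r2):
                e += overlap_penalty
    return e

def detect_circles(edge_img, n_iter=4000, t0=1.0, cooling=0.999):
    """Explore configurations with birth/death/perturb proposals accepted by a
    Metropolis rule under a simulated-annealing temperature schedule."""
    h, w = edge_img.shape
    config, e, temp = [], 0.0, t0
    for _ in range(n_iter):
        proposal = list(config)
        move = rng.choice(["birth", "death", "perturb"])
        if move == "birth" or not proposal:
            proposal.append((rng.uniform(0, w), rng.uniform(0, h), rng.uniform(3, 15)))
        elif move == "death":
            proposal.pop(rng.integers(len(proposal)))
        else:  # perturb an existing circle
            k = rng.integers(len(proposal))
            x, y, r = proposal[k]
            proposal[k] = (x + rng.normal(0, 1), y + rng.normal(0, 1), max(2.0, r + rng.normal(0, 0.5)))
        e_new = energy(proposal, edge_img)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / temp):
            config, e = proposal, e_new
        temp *= cooling
    return config

if __name__ == "__main__":
    yy, xx = np.mgrid[0:60, 0:60]
    edges = (np.abs(np.hypot(xx - 20, yy - 20) - 8) < 1) | (np.abs(np.hypot(xx - 42, yy - 40) - 6) < 1)
    print("circles found:", [tuple(round(float(v), 1) for v in c) for c in detect_circles(edges)])
```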

  10. An Integrated Photogrammetric and Photoclinometric Approach for Pixel-Resolution 3d Modelling of Lunar Surface

    NASA Astrophysics Data System (ADS)

    Liu, W. C.; Wu, B.

    2018-04-01

    High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations when the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is combined with stereo image matching to create robust and spatially well distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with the traditional photogrammetric technique. The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
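
    A minimal 1-D illustration of the photoclinometric step, assuming a Lambertian surface and a known sun elevation: the observed intensity constrains the local slope along the profile, and integrating the slopes recovers relative heights. The reflectance model, geometry, and synthetic profile are simplifications for illustration and are not the paper's pipeline.

```python
import numpy as np

def slopes_from_intensity(intensity, sun_elevation_deg, albedo=1.0):
    """1-D photoclinometry under a Lambertian model, I = albedo * cos(incidence).
    The incidence angle is the sun zenith angle minus the local surface slope along
    the profile, so slope = sun_zenith - arccos(I / albedo)."""
    sun_zenith = np.radians(90.0 - sun_elevation_deg)
    incidence = np.arccos(np.clip(intensity / albedo, -1.0, 1.0))
    return sun_zenith - incidence                     # slope angle (radians)

def heights_from_slopes(slope_angles, pixel_size):
    """Integrate slope angles along the profile to relative heights (first point = 0)."""
    steps = np.tan(slope_angles) * pixel_size
    return np.concatenate([[0.0], np.cumsum(steps)[:-1]])

if __name__ == "__main__":
    # Synthetic profile: forward-model intensities from a known mound, then invert them.
    x = np.linspace(0, 50, 51)                        # metres, 1 m pixels
    true_h = 2.0 * np.exp(-((x - 25) / 8.0) ** 2)     # hypothetical 2 m high mound
    true_slope = np.arctan(np.gradient(true_h, x))
    sun_elev = 30.0
    intensity = np.cos(np.radians(90 - sun_elev) - true_slope)
    recovered = heights_from_slopes(slopes_from_intensity(intensity, sun_elev), pixel_size=1.0)
    print("max abs height error (m):", round(float(np.max(np.abs(recovered - true_h))), 3))
```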

  11. Negotiating the Process of Historical Empathy

    ERIC Educational Resources Information Center

    Endacott, Jason L.

    2014-01-01

    Historical empathy scholarship has evolved to the point where further progress necessitates empirical examinations from a variety of perspectives. Prior studies on historical empathy have largely focused on teachers' pedagogical approach and student outcomes. This qualitative study focuses on students as they engage in the process of historical…

  12. 3D Power Line Extraction from Multiple Aerial Images.

    PubMed

    Oh, Jaehong; Lee, Changno

    2017-09-29

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters.
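
    A minimal sketch of the object-to-image idea described above, under an assumed pinhole camera model: candidate 3D grid points are projected into each view and kept only if they land on line-like pixels in every image. The camera parameters, the synthetic image, and the `is_line_pixel` test are hypothetical stand-ins for the paper's actual processing.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole camera: x = K [R | t] X."""
    cam = R @ points_3d.T + t.reshape(3, 1)        # world -> camera coordinates
    uvw = K @ cam
    return (uvw[:2] / uvw[2]).T                    # Nx2 pixel coordinates

def keep_power_line_candidates(grid_points, views, is_line_pixel):
    """Keep grid points whose projections fall on 'line' pixels in every view.
    `views` is a list of (K, R, t, image) tuples; `is_line_pixel(image, uv)` is a
    user-supplied test (e.g. a line-filter response threshold)."""
    keep = np.ones(len(grid_points), dtype=bool)
    for K, R, t, image in views:
        uv = project(grid_points, K, R, t)
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < image.shape[1]) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < image.shape[0])
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = is_line_pixel(image, uv[inside])
        keep &= hit
    return grid_points[keep]

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.0, 0.0, 10.0])   # camera 10 m from the grid plane
    image = np.zeros((480, 640))
    image[200:204, :] = 1.0                        # a synthetic horizontal 'power line'
    grid = np.array([[x, y, 0.0] for x in np.linspace(-2, 2, 9) for y in np.linspace(-1, 1, 9)])
    bright = lambda img, uv: img[uv[:, 1].astype(int), uv[:, 0].astype(int)] > 0.5
    kept = keep_power_line_candidates(grid, [(K, R, t, image)], bright)
    print(f"{len(kept)} of {len(grid)} grid points project onto the synthetic line")
```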

  13. 3D Power Line Extraction from Multiple Aerial Images

    PubMed Central

    Lee, Changno

    2017-01-01

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters. PMID:28961204

  14. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception have traditionally considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  15. Steelmaking process control using remote ultraviolet atomic emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Arnold, Samuel

    Steelmaking in North America is a multi-billion dollar industry that has faced tremendous economic and environmental pressure over the past few decades. Fierce competition has driven steel manufacturers to improve process efficiency through the development of real-time sensors to reduce operating costs. In particular, much attention has been focused on end point detection through furnace off gas analysis. Typically, off-gas analysis is done with extractive sampling and gas analyzers such as Non-dispersive Infrared Sensors (NDIR). Passive emission spectroscopy offers a more attractive approach to end point detection as the equipment can be setup remotely. Using high resolution UV spectroscopy and applying sophisticated emission line detection software, a correlation was observed between metal emissions and the process end point during field trials. This correlation indicates a relationship between the metal emissions and the status of a steelmaking melt which can be used to improve overall process efficiency.
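
    A small sketch of the kind of end-point logic such a correlation suggests, under simplified assumptions: track the background-subtracted intensity of one known metal emission line over successive spectra and flag the end point once it stays below a threshold for a few consecutive frames. The wavelength, thresholds, and synthetic spectra are illustrative, not values from the field trials.

```python
import numpy as np

def line_intensity(wavelengths, spectrum, line_nm, window_nm=0.5):
    """Background-subtracted line intensity: mean inside a narrow window around the
    line minus the mean of the flanking background region."""
    offset = np.abs(wavelengths - line_nm)
    inside = offset <= window_nm
    flank = (offset > window_nm) & (offset <= 3 * window_nm)
    return spectrum[inside].mean() - spectrum[flank].mean()

def detect_end_point(intensity_series, threshold, hold=3):
    """Index of the first frame after which the line intensity stays below `threshold`
    for `hold` consecutive frames, or None if it never does."""
    below = np.asarray(intensity_series) < threshold
    for i in range(len(below) - hold + 1):
        if below[i:i + hold].all():
            return i
    return None

if __name__ == "__main__":
    wl = np.linspace(300.0, 320.0, 2000)                  # nm
    line = 310.0                                          # hypothetical metal emission line
    rng = np.random.default_rng(2)
    series = []
    for frame in range(20):
        strength = max(0.0, 1.0 - frame / 12.0)           # line fades as the blow ends
        spec = 0.05 * rng.random(wl.size) + strength * np.exp(-((wl - line) / 0.2) ** 2)
        series.append(line_intensity(wl, spec, line))
    print("end point detected at frame:", detect_end_point(series, threshold=0.1))
```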

  16. Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
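
    A minimal sketch of the kind of geometric features the abstract refers to, assuming a fixed-size k-nearest-neighbour neighbourhood rather than the individually optimized neighbourhoods of the paper: the eigenvalues of the local covariance yield linearity, planarity, and sphericity descriptors that a classifier can consume.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, k=20):
    """Per-point linearity, planarity and sphericity from the eigenvalues of the
    local covariance of the k nearest neighbours (a common point-cloud feature set)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.zeros((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]           # l1 >= l2 >= l3 >= 0
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]  # linearity, planarity, sphericity
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    plane = np.c_[rng.uniform(0, 1, (300, 2)), rng.normal(0, 0.002, 300)]    # roof-like patch
    line = np.c_[rng.uniform(0, 1, 300), rng.normal(0, 0.002, (300, 2))]     # cable-like patch
    print("plane (lin, plan, sph):", np.round(eigen_features(plane).mean(axis=0), 2))
    print("line  (lin, plan, sph):", np.round(eigen_features(line).mean(axis=0), 2))
```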

  17. Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kim, Noah; Pandya, Shishir A.

    2012-01-01

    Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.

  18. Promoting Conceptual Development in Physics Teacher Education: Cognitive-Historical Reconstruction of Electromagnetic Induction Law

    ERIC Educational Resources Information Center

    Mantyla, Terhi

    2013-01-01

    In teaching physics, the history of physics offers fruitful starting points for designing instruction. I introduce here an approach that uses historical cognitive processes to enhance the conceptual development of pre-service physics teachers' knowledge. It applies a method called cognitive-historical approach, introduced to the cognitive sciences…

  19. Framing the inborn aging process and longevity science.

    PubMed

    Farrelly, Colin

    2010-06-01

    The medical sciences are currently dominated by the "disease-model" approach to health extension, an approach that prioritizes the study of pathological mechanisms with the goal of discovering treatment modalities for specific diseases. This approach has marginalized research on the aging process itself, research that could lead to an intervention that retards aging, thus conferring health dividends that would far exceed what could be expected by eliminating any specific disease of aging. This paper offers a diagnosis of how this sub-optimal approach to health extension arose and some general prescriptions concerning how progress could be made in terms of adopting a more rational approach to health extension. Drawing on empirical findings from psychology and economics, "prospect theory" is applied to the challenges of "framing" the inborn aging process given the cognitive capacities of real (rather than rational) decision-makers under conditions of risk and uncertainty. Prospect theory reveals that preferences are in fact dependent on whether particular outcomes of a choice are regarded as "a loss" or "a gain", relative to a reference point (or "aspiration level for survival"). And this has significant consequences for the way biogerontologists ought to characterise the central aspirations of the field (i.e. to prevent disease versus extend lifespan). Furthermore, it reveals the importance of shifting the existing reference point of the medical sciences to one that is shaped by the findings of evolutionary biology and biodemography.

  20. An imaging anatomical study on percutaneous kyphoplasty for lumbar via a unilateral transverse process-pedicle approach.

    PubMed

    Wang, Song; Wang, Qing; Kang, Jianping; Xiu, Peng; Wang, Gaoju

    2014-04-20

    An imaging anatomical measurement. To investigate the anatomical feasibility of percutaneous kyphoplasty for lumbar osteoporotic vertebral compression fractures via a unilateral transverse process-pedicle approach (TPA). Kyphoplasty via a unilateral approach has been reported and good clinical results have been achieved. However, because of the lack of an anatomical study, these approaches have yet to be popularized. Kyphoplasty was simulated on the computed tomographic scans of 150 lumbar vertebral bodies from 30 patients through the conventional transpedicle approach (CTA) and the TPA, respectively. Anatomical parameters including the distance between the entry point and the midline of the vertebral body, the puncture inclination angle, and the success rate of puncture were measured and compared. The distance between the entry point and the midline from the L1 to L5 lumbar levels varied from 20.6 ± 2.2 mm to 28.6 ± 2.9 mm in the CTA group and from 23.6 ± 2.2 mm to 33.6 ± 2.9 mm in the TPA group. The entry point from L1 to L5 in the TPA group was 3.0 ± 2.1 mm to 5.1 ± 2.7 mm more lateral than that in the CTA group. The medial inclination angles from L1 to L5 were 30.2° ± 6.4° to 47.7° ± 5.4° in the TPA group and 15.3° ± 6.0° to 22.8° ± 8.7° in the CTA group. The inclination angles in the TPA group were greater than those in the CTA group, and the safe range of the puncture angles was also wider. The success rate was 51.7% in the CTA group and 87.7% in the TPA group. The entry point through a TPA was localized at the midline of the transverse process, 3.0 to 5.1 mm outside the lateral margin of the pedicle projection. Compared with the CTA, the puncture inclination angle in the TPA was much larger, with a wider safe puncture range. The TPA allowed an easy puncture to meet or surpass the midline of the lumbar vertebral body.

  1. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    The modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities to gather 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large scale objects is proposed. There are two new contributions in the approach: the automated extraction of the vital points (i.e., the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First, the coordinates of the vital points are automatically extracted from the computer aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of the points, one after another. However, due to the deviations between the designed and the actual location of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move type servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be automatically measured by using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97 % of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to that obtained with manual control, and the reliability of the image processing step of the method was also found to be high in the laboratory experiments.

  2. Perturbation Experiments: Approaches for Metabolic Pathway Analysis in Bioreactors.

    PubMed

    Weiner, Michael; Tröndle, Julia; Albermann, Christoph; Sprenger, Georg A; Weuster-Botz, Dirk

    2016-01-01

    In the last decades, targeted metabolic engineering of microbial cells has become one of the major tools in bioprocess design and optimization. For successful application, detailed knowledge is necessary about the relevant metabolic pathways and their regulation inside the cells. Since in vitro experiments cannot properly reproduce process conditions and behavior, process data about the cells' metabolic state have to be collected in vivo. For this purpose, special techniques and methods are necessary. Most techniques enabling in vivo characterization of metabolic pathways therefore rely on perturbation experiments, which can be divided into dynamic and steady-state approaches. To avoid any process disturbance, approaches which enable perturbation of cell metabolism in parallel to the continuing production process are reasonable. Furthermore, the fast dynamics of microbial production processes amplify the need for parallelized data generation. These points motivate the development of a parallelized approach for multiple metabolic perturbation experiments outside the operating production reactor. An appropriate approach for in vivo characterization of metabolic pathways is presented and applied exemplarily to a microbial L-phenylalanine production process on a 15 L scale.

  3. A graph signal filtering-based approach for detection of different edge types on airborne lidar data

    NASA Astrophysics Data System (ADS)

    Bayram, Eda; Vural, Elif; Alatan, Aydin

    2017-10-01

    Airborne Laser Scanning is a well-known remote sensing technology which provides a dense and highly accurate, yet unorganized, point cloud of the earth's surface. During the last decade, extracting information from the data generated by airborne LiDAR systems has been addressed by many studies in geo-spatial analysis and urban monitoring applications. However, the processing of LiDAR point clouds is challenging due to their irregular structure and 3D geometry. In this study, we propose a novel framework for the detection of the boundaries of an object or scene captured by LiDAR. Our approach is motivated by edge detection techniques in vision research and is built on graph signal filtering, an exciting and promising field of signal processing for irregular data types. Owing to the convenient applicability of graph signal processing tools to unstructured point clouds, we achieve the detection of the edge points directly on the 3D data by using a graph representation that is constructed exclusively to meet the requirements of the application. Moreover, by considering the elevation data as the (graph) signal, we leverage the aerial characteristic of the airborne LiDAR data. The proposed method can be employed both for discovering the jump edges in a segmentation problem and for exploring the crease edges of a LiDAR object in a reconstruction/modeling problem, simply by adjusting the filter characteristics.
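
    A minimal sketch of the underlying idea, with the filter design reduced to its simplest form: build a k-nearest-neighbour graph over the planar point positions, treat elevation as the graph signal, and flag points whose high-pass (Laplacian-like) response is large as candidate jump edges. The neighbourhood size, Gaussian weights, and threshold are illustrative assumptions, not the paper's filters.

```python
import numpy as np
from scipy.spatial import cKDTree

def graph_highpass_edges(points_xyz, k=8, sigma=1.0, threshold=0.5):
    """Candidate jump-edge points on a LiDAR cloud: the graph signal is the elevation z,
    the graph is a k-NN graph on (x, y) with Gaussian weights, and the high-pass
    response is the difference between z and its weighted neighbourhood average."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    dist, idx = cKDTree(xy).query(xy, k=k + 1)          # first neighbour is the point itself
    dist, idx = dist[:, 1:], idx[:, 1:]
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))         # Gaussian edge weights
    neighbour_avg = (w * z[idx]).sum(axis=1) / w.sum(axis=1)
    response = np.abs(z - neighbour_avg)                # Laplacian-like high-pass response
    return response > threshold, response

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 20, (4000, 2))
    z = np.where(xy[:, 0] > 10, 3.0, 0.0) + rng.normal(0, 0.05, 4000)   # 3 m height jump at x = 10
    is_edge, _ = graph_highpass_edges(np.c_[xy, z], k=8, sigma=1.0, threshold=1.0)
    print(f"{is_edge.sum()} candidate edge points; mean x of edge points:",
          round(float(xy[is_edge, 0].mean()), 2))
```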

  4. Network analysis reveals stage-specific changes in zebrafish embryo development using time course whole transcriptome profiling and prior biological knowledge.

    PubMed

    Zhang, Yuji

    2015-01-01

    Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanism of diseases. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly "watch" the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead us to a more complete understanding of how biological entities (e.g., genes and proteins) coordinately perform their biological functions in biological systems. In this paper, we introduce a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to investigate 1α, 25(OH)2D3-altered mechanisms in zebrafish embryo development. We applied the proposed method to a public zebrafish time course mRNA-Seq dataset, containing two different treatments along four time points. We constructed networks between gene ontology biological process categories, which were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1α, 25-Dihydroxyvitamin D3-altered transcriptional changes started from a few genes that were altered initially at earlier stages, to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and generalized stress response. In addition, we also investigated the relationship among biological processes enriched in co-expressed genes under different conditions. The enriched biological processes include translation elongation, nucleosome assembly, and retina development. These network dynamics provide new insights into the impact of 1α, 25-Dihydroxyvitamin D3 treatment on bone and cartilage development. We developed a network-based approach to analyzing the differentially expressed genes at different time points by integrating molecular interactions and gene ontology information. These results demonstrate that the proposed approach can provide insight on the molecular mechanisms taking place in vertebrate embryo development upon treatment with 1α, 25(OH)2D3. Our approach enables the monitoring of biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can be easily extended to any temporal- or condition-dependent genomic data analyses.

  5. Open Pit Mine 3d Mapping by Tls and Digital Photogrammetry: 3d Model Update Thanks to a Slam Based Approach

    NASA Astrophysics Data System (ADS)

    Vassena, G.; Clerici, A.

    2018-05-01

    The state of the art of 3D surveying technologies, if correctly applied, makes it possible to obtain 3D coloured models of large open pit mines using different technologies such as terrestrial laser scanning (TLS) with images, combined with UAV-based digital photogrammetry. GNSS and/or total stations are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open pit mine located in Botticino, a famous marble extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds combined with RGB images and digital photogrammetry from a UAV have been used to map a large part of the quarry. By rigorous and well-known procedures, a 3D point cloud and a mesh model have been obtained using an easy and rigorous approach. After the description of the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. To realize this task, a SLAM-based technology approach is described, using an innovative instrument capable of running an automatic localization process and real-time, in-the-field change detection analysis.

  6. A Multifaceted Approach to Modernizing NASA's Advanced Multi-Mission Operations System (AMMOS) System Architecture

    NASA Technical Reports Server (NTRS)

    Estefan, Jeff A.; Giovannoni, Brian J.

    2014-01-01

    The Advanced Multi-Mission Operations System (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office, the program office that manages the AMMOS for NASA, actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad of modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: 1) information-related issues, and 2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and the supporting software infrastructure used in support of those activities. In this paper, three foundational concepts (Timeline, Closed Loop Control, and Separation of Concerns) collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper. Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS, one that is intended to result in a highly adaptable, highly efficient, and highly cost-effective set of multimission MOS products and services.

  7. The contract process: a methodology for negotiation. Part I.

    PubMed

    Kleinschmidt, W M

    1990-05-01

    This is the first of a three-part series on the contract process for acquiring a hospital information system product. Part One addresses negotiation methodology: points which will facilitate effective negotiation. Part Two will cover contract contents, focusing on those topics which must be included in a good contract. Part Three will discuss contract philosophy and contract management: subjects which are critical to the good rapport buyers and vendors want. The adversarial approach to the contract process is not the best approach. Rather, the process should be treated as a step in the building of a partnership and relationship in which both parties win.

  8. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    PubMed

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that the trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses non-binary pixel intensity differences available in descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while a comparable registration rate and localization accuracy are still maintained.
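
    A small sketch of why binary descriptors are cheap to match: each descriptor is a short bit string (here 32 bytes = 256 bits) and the Hamming distance is just a popcount of the XOR. The brute-force matcher below stands in for the paper's supervised randomized-tree index, which exists precisely to avoid this exhaustive comparison; all sizes and the noise model are illustrative.

```python
import numpy as np

def hamming_match(query_desc, db_desc):
    """Brute-force nearest-neighbour search for binary descriptors stored as uint8
    arrays (e.g. 32 bytes = 256 bits each). Distance = popcount(query XOR database)."""
    xor = query_desc[:, None, :] ^ db_desc[None, :, :]     # (Q, D, 32) byte-wise XOR
    dist = np.unpackbits(xor, axis=2).sum(axis=2)          # Hamming distances, shape (Q, D)
    nearest = dist.argmin(axis=1)
    return nearest, dist[np.arange(len(query_desc)), nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    db = rng.integers(0, 256, size=(1000, 32), dtype=np.uint8)   # 1000 database descriptors
    true_idx = rng.integers(0, 1000, size=5)
    queries = db[true_idx].copy()
    # Flip a few bits per query to simulate viewpoint/illumination noise.
    for q in queries:
        for byte in rng.integers(0, 32, size=3):
            q[byte] ^= np.uint8(1 << rng.integers(0, 8))
    matched, dists = hamming_match(queries, db)
    print("correct matches:", int((matched == true_idx).sum()), "of 5; distances:", dists)
```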

  9. IWRM: What should we teach? A report on curriculum development at the University of the Western Cape, South Africa

    NASA Astrophysics Data System (ADS)

    Jonker, Lewis

    In South Africa, the national government has taken deliberate steps to ensure that tertiary education programmes help to meet societal and economic needs. This article reports on the process of developing a programme in Integrated Water Resources Management at the University of the Western Cape (UWC) that speaks directly to current government policy. It describes two different approaches to curriculum development: an eclectic approach that takes as its starting point courses already on offer, and a framework development approach that takes as its starting point the identification of particular needs and deliberately builds a curriculum around these needs. The article illustrates how seemingly unrelated policy processes in education and water management could impact on curriculum development. The article suggests that, while curriculum development is a first key step, challenges remain in fine-tuning the IWRM M.Sc. programme so that graduates are equipped with skills that may contribute to equitable and sustainable development in the evolving context of 21st century South Africa.

  10. Image enhancement and color constancy for a vehicle-mounted change detection system

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David

    2016-10-01

    Vehicle-mounted change detection systems make it possible to improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, it is necessary to compensate for color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
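
    A minimal sketch combining single-scale center/surround Retinex (log of the image minus log of its Gaussian surround) with Gray World channel gains. The scale, the rescaling, and the way the two outputs are combined are illustrative simplifications; the paper's method additionally uses a color processing function and stacked integral images for efficiency.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0, eps=1.0):
    """Center/surround Retinex per channel: log(I) - log(Gaussian surround of I)."""
    img = image.astype(np.float64) + eps
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        out[..., c] = np.log(img[..., c]) - np.log(gaussian_filter(img[..., c], sigma) + eps)
    return out

def gray_world_gains(image):
    """Gray World hypothesis: per-channel gains that equalize the channel means."""
    means = image.reshape(-1, image.shape[2]).mean(axis=0)
    return means.mean() / means

def enhance(image, sigma=30.0):
    """Retinex output rescaled to [0, 255], then color-corrected with Gray World gains."""
    ret = single_scale_retinex(image, sigma)
    ret = (ret - ret.min()) / (ret.max() - ret.min() + 1e-12) * 255.0
    return np.clip(ret * gray_world_gains(image), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Synthetic scene with a strong blue illumination cast and a dark (shadowed) half.
    base = rng.uniform(60, 200, (120, 160, 3))
    cast = base * np.array([0.7, 0.8, 1.3])
    cast[:, :80] *= 0.35
    out = enhance(cast)
    print("channel means before:", np.round(cast.reshape(-1, 3).mean(axis=0), 1))
    print("channel means after: ", np.round(out.reshape(-1, 3).mean(axis=0), 1))
```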

  11. Nitrogen management in landfill leachate: Application of SHARON, ANAMMOX and combined SHARON-ANAMMOX process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sri Shalini, S., E-mail: srishalini10@gmail.com; Joseph, Kurian, E-mail: kuttiani@gmail.com

    2012-12-15

    Highlights: • Significant research on ammonia removal from leachate by the SHARON and ANAMMOX processes. • Operational parameters, microbiology, biochemistry and application of the processes. • The SHARON-ANAMMOX process for leachate is a new research area, and this paper gives broad facts. • A cost-effective process and an alternative to existing technologies for leachate treatment. • The issues and operational conditions for application in leachate treatment are addressed. - Abstract: In today's context of waste management, landfilling of Municipal Solid Waste (MSW) is considered to be one of the standard practices worldwide. Leachate generated from municipal landfills has become a great threat to the surroundings as it contains high concentrations of organics, ammonia and other toxic pollutants. Emphasis has to be placed on the removal of ammonia nitrogen in particular, which is derived from the nitrogen content of the MSW and is a long-term pollution problem in landfills that determines when the landfill can be considered stable. Several biological processes are available for the removal of ammonia, but novel processes such as the Single Reactor System for High Activity Ammonia Removal over Nitrite (SHARON) and Anaerobic Ammonium Oxidation (ANAMMOX) processes have great potential and several advantages over conventional processes. The combined SHARON-ANAMMOX process for municipal landfill leachate treatment is a new, innovative and significant approach that requires more research to identify and solve critical issues. This review addresses the operational parameters, microbiology, biochemistry and application of both processes to remove ammonia from leachate.

  12. Conceptualizing Stakeholders' Perceptions of Ecosystem Services: A Participatory Systems Mapping Approach

    NASA Astrophysics Data System (ADS)

    Lopes, Rita; Videira, Nuno

    2015-12-01

    A participatory system dynamics modelling approach is advanced to support the conceptualization of feedback processes underlying ecosystem services and to foster a shared understanding of leverage intervention points. The process includes a systems mapping workshop and follow-up tasks aimed at the collaborative construction of causal loop diagrams. A case study developed in a natural area in Portugal illustrates how a stakeholder group was actively engaged in the development of a conceptual model depicting policies for sustaining the climate regulation ecosystem service.

  13. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, for the first time, the coded aperture processing was made independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system was overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was then carried out and provided fairly good results. In the visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  14. New Experiments and a Model-Driven Approach for Interpreting Middle Stone Age Lithic Point Function Using the Edge Damage Distribution Method.

    PubMed

    Schoville, Benjamin J; Brown, Kyle S; Harris, Jacob A; Wilkins, Jayne

    2016-01-01

    The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies is the unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages: Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape, consistent with armature tips at KP1 and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed.

  15. New Experiments and a Model-Driven Approach for Interpreting Middle Stone Age Lithic Point Function Using the Edge Damage Distribution Method

    PubMed Central

    Schoville, Benjamin J.; Brown, Kyle S.; Harris, Jacob A.; Wilkins, Jayne

    2016-01-01

    The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies is the unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages: Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape, consistent with armature tips at KP1 and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed. PMID:27736886

  16. Simulators for Mariner Training and Licensing. Phase 2: Investigation of Simulator-Based Training for Maritime Cadets,

    DTIC Science & Technology

    1981-10-01

    A balance was drawn between experimental considerations (e.g., pretests and posttests) and training process considerations (e.g., available time and ...). [The remainder of this record is list-of-figures residue; the recoverable items are: Instructor's Checkoff List; Port Approach Area; Training Unit Schedule; Pretest/Posttest Comparison: CPA, Kings Point Group A (Day); Pretest/Posttest Comparison: Number of Radar Requests, Kings Point Group A (Day); Input Characteristic Range, Master Notified, Kings Point.]

  17. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  18. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  19. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.

  20. Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2017-12-01

    An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. Analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalty at BERs of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB for a normalized pointing error of 1.4 with 16 users and a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.

  1. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of an algorithm for physical replication of patient-specific human bone and construction of corresponding implant/insert RP models using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone from various CAD modeling techniques developed by using 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting a surface between these curve networks by using swept blend techniques. This can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
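
    The Delaunay tetrahedralization step can be illustrated with a minimal sketch (not the authors' pipeline): tetrahedralize a 3D point cloud with scipy and keep the faces that belong to exactly one tetrahedron as the surface triangulation that would be written to STL. The random cloud is a stand-in for CT-derived points; for a convex cloud this recovers the convex hull, and real bone surfaces would need alpha-shape-style filtering on top.

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

points = np.random.rand(500, 3)          # stand-in for a CT-derived point cloud
tets = Delaunay(points).simplices        # (n_tets, 4) vertex indices

# Each tetrahedron has 4 triangular faces; boundary faces occur exactly once.
faces = Counter()
for a, b, c, d in tets:
    for tri in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
        faces[tuple(sorted(tri))] += 1
surface = [tri for tri, count in faces.items() if count == 1]
print(len(surface), "boundary triangles")   # these could be exported as STL facets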

  2. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
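
    As a minimal sketch of one of the filters named above, Radius Outlier Removal (ROR) can be written directly with a k-d tree: drop points that have too few neighbours inside a given radius. The radius and neighbour count used here are illustrative, not the values of the proposed system.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points, radius=0.05, min_neighbors=5):
    """Keep points with at least `min_neighbors` other points within `radius`."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1 for p in points])
    return points[counts >= min_neighbors]

cloud = np.random.rand(10000, 3)              # stand-in for a fused Kinect cloud
filtered = radius_outlier_removal(cloud)
print(len(cloud), "->", len(filtered), "points after ROR")
```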

  3. SPY: A new scission point model based on microscopic ingredients to predict fission fragments properties

    NASA Astrophysics Data System (ADS)

    Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

    2013-12-01

    Our purpose is to determine fission fragments characteristics in a framework of a scission point model named SPY for Scission Point Yields. This approach can be considered as a theoretical laboratory to study fission mechanism since it gives access to the correlation between the fragments properties and their nuclear structure, such as shell correction, pairing, collective degrees of freedom, odd-even effects. Which ones are dominant in final state? What is the impact of compound nucleus structure? The SPY model consists in a statistical description of the fission process at the scission point where fragments are completely formed and well separated with fixed properties. The most important property of the model relies on the nuclear structure of the fragments which is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei which are inaccessible by most of the fission model available on the market.

  4. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
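
    A minimal sketch of the two quantities discussed above, for a random TSP instance: the number of convex-hull points (global structure) and a greedy nearest-neighbour tour (local-to-global heuristic). This is illustrative only, not the authors' experimental code.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.random((20, 2))
n_hull = len(ConvexHull(pts).vertices)        # points on the convex hull

unvisited = list(range(1, len(pts)))
tour = [0]
while unvisited:                              # greedily link nearest neighbours
    last = pts[tour[-1]]
    nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
    tour.append(nxt)
    unvisited.remove(nxt)
tour_len = sum(np.linalg.norm(pts[tour[i]] - pts[tour[i - 1]]) for i in range(1, len(tour)))
print(f"hull points: {n_hull}, greedy tour length: {tour_len:.3f}")
```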

  5. Genetic interaction analysis of point mutations enables interrogation of gene function at a residue-level resolution

    PubMed Central

    Braberg, Hannes; Moehle, Erica A.; Shales, Michael; Guthrie, Christine; Krogan, Nevan J.

    2014-01-01

    We have achieved a residue-level resolution of genetic interaction mapping – a technique that measures how the function of one gene is affected by the alteration of a second gene – by analyzing point mutations. Here, we describe how to interpret point mutant genetic interactions, and outline key applications for the approach, including interrogation of protein interaction interfaces and active sites, and examination of post-translational modifications. Genetic interaction analysis has proven effective for characterizing cellular processes; however, to date, systematic high-throughput genetic interaction screens have relied on gene deletions or knockdowns, which limits the resolution of gene function analysis and poses problems for multifunctional genes. Our point mutant approach addresses these issues, and further provides a tool for in vivo structure-function analysis that complements traditional biophysical methods. We also discuss the potential for genetic interaction mapping of point mutations in human cells and its application to personalized medicine. PMID:24842270

  6. Comparison of Point Matching Techniques for Road Network Matching

    NASA Astrophysics Data System (ADS)

    Hackeloeer, A.; Klasing, K.; Krisp, J. M.; Meng, L.

    2013-05-01

    Map conflation investigates the unique identification of geographical entities across different maps depicting the same geographic region. It involves a matching process which aims to find commonalities between geographic features. A specific subdomain of conflation called Road Network Matching establishes correspondences between road networks of different maps on multiple layers of abstraction, ranging from elementary point locations to high-level structures such as road segments or even subgraphs derived from the induced graph of a road network. The process of identifying points located on different maps by means of geometrical, topological and semantical information is called point matching. This paper provides an overview of various techniques for point matching, which is a fundamental requirement for subsequent matching steps focusing on complex high-level entities in geospatial networks. Common point matching approaches as well as certain combinations of these are described, classified and evaluated. Furthermore, a novel similarity metric called the Exact Angular Index is introduced, which considers both topological and geometrical aspects. The results offer a basis for further research on a bottom-up matching process for complex map features, which must rely upon findings derived from suitable point matching algorithms. In the context of Road Network Matching, reliable point matches provide an immediate starting point for finding matches between line segments describing the geometry and topology of road networks, which may in turn be used for performing a structural high-level matching on the network level.
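
    A minimal sketch of a purely geometrical point-matching step: mutual nearest neighbours within a distance threshold between two road-network node sets. The Exact Angular Index itself is not reproduced here; the threshold and synthetic offsets are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(nodes_a, nodes_b, max_dist=10.0):
    tree_a, tree_b = cKDTree(nodes_a), cKDTree(nodes_b)
    d_ab, idx_ab = tree_b.query(nodes_a)          # nearest B node for each A node
    _, idx_ba = tree_a.query(nodes_b)             # nearest A node for each B node
    return [(i, j) for i, (j, d) in enumerate(zip(idx_ab, d_ab))
            if d <= max_dist and idx_ba[j] == i]  # keep mutual matches only

a = np.random.rand(100, 2) * 1000
b = a + np.random.normal(scale=2.0, size=a.shape)  # same nodes, small positional offsets
print(len(match_points(a, b)), "mutual matches")
```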

  7. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor is used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
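
    The bit-width exploration can be illustrated with a minimal sketch: quantize a signal to a signed Qm.n fixed-point format and measure the error this would inject into the filter input. The Q-formats and the mock IMU channel are illustrative assumptions, not the paper's chosen word lengths.

```python
import numpy as np

def to_fixed_point(x, int_bits=2, frac_bits=13):
    """Round to Q(int_bits).(frac_bits) and saturate to the signed range."""
    scale = 2.0 ** frac_bits
    q = np.round(x * scale)
    q_max = 2.0 ** (int_bits + frac_bits) - 1
    return np.clip(q, -q_max - 1, q_max) / scale

t = np.linspace(0, 1, 1000)
accel = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)  # mock 9D-IMU channel
for frac_bits in (7, 10, 13, 16):
    err = np.sqrt(np.mean((accel - to_fixed_point(accel, 2, frac_bits)) ** 2))
    print(f"Q2.{frac_bits}: RMS quantization error = {err:.2e}")
```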

  8. Sound Foundations: Organic Approaches to Learning Notation in Beginning Band

    ERIC Educational Resources Information Center

    West, Chad

    2016-01-01

    By starting with a foundation of sound before sight, we can help our students learn notation organically in a way that honors the natural process. This article describes five organic approaches to learning notation in beginning band: (1) iconic notation, (2) point and play, (3) student lead-sheet, (4) modeling, and (5) kid dictation. While…

  9. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which allows the image to be 'cooled down' with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
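
    A minimal sketch of wavelet denoising as a regularizer, assuming the PyWavelets package: soft-threshold the detail coefficients of a noisy image, as could be done between or after deconvolution iterations. The wavelet, level and threshold are placeholder choices.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db2", level=2, thresh=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    denoised = [coeffs[0]]                                 # keep approximation coefficients
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thresh, mode="soft")
                              for c in detail_level))      # soft-threshold details
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)  # stand-in image
clean = wavelet_denoise(noisy)
print(clean.shape)
```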

  10. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
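
    A minimal sketch of the MINE selection rule and the EDE on a toy decay model: with the parameter distribution represented by samples, the next measurement time is the one where the model output variance across those samples is largest, and the EDE is the sample mean of the output. The model and distributions are illustrative assumptions.

```python
import numpy as np

def model(t, k):                      # toy cellular-process model y = exp(-k * t)
    return np.exp(-np.outer(t, k))

t_candidates = np.linspace(0.1, 10, 50)
k_samples = np.random.lognormal(mean=-0.5, sigma=0.4, size=200)  # current belief about k

outputs = model(t_candidates, k_samples)        # (n_times, n_samples)
variance = outputs.var(axis=1)
t_next = t_candidates[np.argmax(variance)]      # MINE: most informative next time point
ede = outputs.mean(axis=1)                      # Expected Dynamics Estimator over time
print(f"next measurement at t = {t_next:.2f}")
```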

  11. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
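
    For orientation, a minimal sketch of the classical (non-Bayesian) survivor-curve calculation that the Bayesian method builds on: the D-value is the negative reciprocal slope of log10(survivors) versus exposure time at a fixed temperature. The counts and times are illustrative, and a Bayesian treatment would instead place priors on slope and intercept and propagate their posterior uncertainty into D, z and F0.

```python
import numpy as np

times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])              # minutes (illustrative)
survivors = np.array([1e6, 1.1e5, 1.3e4, 1.1e3, 1.2e2])  # CFU counts (illustrative)

slope, intercept = np.polyfit(times, np.log10(survivors), 1)
d_value = -1.0 / slope                                    # minutes per 1-log reduction
print(f"D-value ~ {d_value:.2f} min")
```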

  12. Holistic Approach as Viewed by the Basic School Teachers in Latvia

    ERIC Educational Resources Information Center

    Badjanova, Jelena; Iliško, Dzintra

    2015-01-01

    The article points to new competencies required from basic school teachers, reinforced by the reform processes in the educational system in Latvia, the quality assurance of educational process, and modernisation and critical re-evaluation of educational materials and standards. The authors view sustainability as an integral part of reform…

  13. Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods

    PubMed Central

    Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.

    2017-01-01

    The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
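
    A minimal sketch of the doubly-stochastic structure being fitted: simulate a 1D log-Gaussian Cox process on a grid by drawing a Gaussian-process log-intensity and then Poisson counts given that intensity. Kernel and mean are placeholder choices; inference (HMC, INLA or variational Bayes) works backwards from the counts to the latent log-intensity.

```python
import numpy as np

n, length = 200, 10.0
x = np.linspace(0, length, n)
cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)   # squared-exponential GP kernel
log_lambda = np.random.multivariate_normal(np.full(n, 1.0), cov + 1e-8 * np.eye(n))
counts = np.random.poisson(np.exp(log_lambda) * (length / n))     # Poisson counts per grid cell
print("simulated", counts.sum(), "events")
```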

  14. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
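
    The tradeoff logic can be illustrated with a minimal sketch: given operating points scored on several quality criteria (lower is better here), report which are Pareto-optimal, i.e. cannot be improved in one criterion without worsening another. The criteria names and data are placeholders, not the paper's injection-molding measurements.

```python
import numpy as np

def pareto_optimal(scores):
    """Boolean mask of non-dominated rows in an (n_points, n_criteria) array."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominated_by.any():          # some other point is at least as good everywhere
            mask[i] = False
    return mask

quality = np.random.rand(50, 3)         # e.g. shrinkage, warpage, surface defects
print(np.count_nonzero(pareto_optimal(quality)), "operating points are non-dominated")
```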

  15. Drive Innovation Faster.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemon, Douglas K.

    2005-08-01

    Authored article by Doug Lemon. An expert opinion/thought piece on how PNNL approaches R&D projects, incorporating IP protection earlier in the process of innovation to shorten the development timeline. Article cites example of SMART program to illustrate point.

  16. The detection and analysis of point processes in biological signals

    NASA Technical Reports Server (NTRS)

    Anderson, D. J.; Correia, M. J.

    1977-01-01

    A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.
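
    A minimal sketch of the conventional threshold-detector case plus a crude stationarity check on the resulting point process; the threshold, noise level and rate comparison are illustrative, not the tests used in the paper.

```python
import numpy as np

fs = 10_000.0                                          # Hz, sampling rate
t = np.arange(0, 2.0, 1 / fs)
signal = 0.1 * np.random.randn(t.size)
signal[np.random.choice(t.size, 40, replace=False)] += 1.0   # injected spike-like events

above = signal > 0.5                                   # simple threshold detector
event_idx = np.where(above & ~np.roll(above, 1))[0]    # keep rising crossings only
event_times = t[event_idx]

half = t[-1] / 2                                       # compare event rates in each half
rate1 = np.sum(event_times < half) / half
rate2 = np.sum(event_times >= half) / half
print(f"{event_times.size} events; rate {rate1:.1f}/s vs {rate2:.1f}/s")
```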

  17. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, in order to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid-body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
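
    A minimal sketch of the error metric discussed above, the average point-to-surface residual: for each registered point, fit a local plane (via SVD) to its nearest neighbours in the reference cloud and take the distance to that plane. The neighbourhood size and synthetic clouds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def avg_point_to_surface(moved, reference, k=10):
    tree = cKDTree(reference)
    _, idx = tree.query(moved, k=k)
    residuals = []
    for p, nbr_idx in zip(moved, idx):
        nbrs = reference[nbr_idx]
        centroid = nbrs.mean(axis=0)
        normal = np.linalg.svd(nbrs - centroid)[2][-1]   # right singular vector of smallest value
        residuals.append(abs(np.dot(p - centroid, normal)))
    return float(np.mean(residuals))

ref = np.random.rand(2000, 3)
mov = ref + np.random.normal(scale=0.002, size=ref.shape)  # stand-in "registered" cloud
print(f"average point-to-surface residual: {avg_point_to_surface(mov, ref):.4f}")
```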

  18. Implementation of Steiner point of fuzzy set.

    PubMed

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

    This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which seeks the optimal α-cut set approximating the fuzzy set. A stability analysis of the Steiner point of a fuzzy set is also given. Some experiments on image processing are presented, in which the two methods are applied to compute the Steiner point of a fuzzy image, and both strategies show their own advantages in computing the Steiner point of a fuzzy set.
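
    A minimal sketch of the first strategy, under the standard characterization of the Steiner point of a convex polygon as the exterior-angle-weighted average of its vertices: compute the Steiner point of a few crisp α-cuts of a fuzzy 2D point set and combine them linearly. The membership function, α levels and equal weighting are assumptions for illustration, not the paper's definitions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def steiner_point(polygon):
    """Exterior-angle-weighted average of a convex polygon's vertices (CCW order)."""
    n, total = len(polygon), np.zeros(2)
    for i in range(n):
        prev_edge = polygon[i] - polygon[i - 1]
        next_edge = polygon[(i + 1) % n] - polygon[i]
        cos_turn = np.dot(prev_edge, next_edge) / (
            np.linalg.norm(prev_edge) * np.linalg.norm(next_edge))
        exterior = np.arccos(np.clip(cos_turn, -1.0, 1.0))   # turning angle at the vertex
        total += exterior * polygon[i]
    return total / (2 * np.pi)                               # exterior angles sum to 2*pi

pts = np.random.rand(1000, 2)
membership = np.exp(-np.sum((pts - 0.5) ** 2, axis=1) / 0.05)   # assumed fuzzy membership
cuts = [pts[membership >= a] for a in (0.2, 0.5, 0.8)]          # crisp alpha-cuts
steiners = [steiner_point(c[ConvexHull(c).vertices]) for c in cuts]
print("combined Steiner point:", np.mean(steiners, axis=0))     # equal-weight combination
```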

  19. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have been progressing towards efficient use of memory and processing. Both of these factors can be addressed by feasible image-storage techniques that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
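
    A minimal sketch of SIFT keypoint extraction with OpenCV, assuming a build of opencv-python that includes SIFT (available in recent releases); storing only the keypoints and descriptors, rather than the full image, is the memory saving alluded to above, and matching is then done on the descriptors.

```python
import cv2
import numpy as np

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in grayscale image
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)  # keypoints + 128-D descriptors
print(len(keypoints), "keypoints,",
      None if descriptors is None else descriptors.shape, "descriptor array")
```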

  20. Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation

    NASA Astrophysics Data System (ADS)

    Stephenson, Jerry L.; Kapraun, Chris

    Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.

  1. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon for self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
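
    A minimal sketch of a likelihood-ratio scan for a variance change point, written for independent Gaussian increments; the paper's contribution is the extension of this idea to fractional Brownian motion, whose increments are dependent, so this is only the Brownian-motion-style baseline.

```python
import numpy as np

def variance_change_point(increments, min_seg=10):
    """Return (index, statistic) maximizing the Gaussian variance-change LR scan."""
    n = len(increments)
    best_k, best_stat = None, -np.inf
    full_term = -n * np.log(np.var(increments))
    for k in range(min_seg, n - min_seg):
        s1, s2 = np.var(increments[:k]), np.var(increments[k:])
        stat = -k * np.log(s1) - (n - k) * np.log(s2) - full_term   # 2 * log-likelihood ratio
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

x = np.concatenate([np.random.normal(0, 1.0, 300), np.random.normal(0, 2.5, 300)])
print(variance_change_point(x))   # expected change point near index 300
```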

  2. Holographic video at 40 frames per second for 4-million object points.

    PubMed

    Tsang, Peter; Cheung, W-K; Poon, T-C; Zhou, C

    2011-08-01

    We propose a fast method for generating digital Fresnel holograms based on an interpolated wavefront-recording plane (IWRP) approach. Our method can be divided into two stages. First, a small, virtual IWRP is derived in a computation-free manner. Second, the IWRP is expanded into a Fresnel hologram with a pair of fast Fourier transform processes, which are realized with the graphics processing unit (GPU). We demonstrate state-of-the-art experimental results, capable of generating a 2048 x 2048 Fresnel hologram of around 4 × 10^6 object points at a rate of over 40 frames per second.

  3. Extraction of Features from High-resolution 3D LiDaR Point-cloud Data

    NASA Astrophysics Data System (ADS)

    Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.

    2008-12-01

    Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, only a few methods exist that can support the analysis of the data at full resolution and in the natural 3D perspective in which it was collected by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present a method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by a user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also captures objects like buildings and trees, greatly complicating the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivities among points that correspond to regions of high curvature. The result is an explicit description of ridge-lines, for example. We have applied our method to the raw point cloud data collected as part of the GeoEarthScope B-4 project on a section of the San Andreas Fault (Segment SA09). This section provides an excellent test site for our method as it exposes the fault clearly, contains few extraneous structures, and exhibits multiple dry stream-beds that have been off-set by motion on the fault.
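
    A minimal sketch of curvature estimation at a single return: fit a quadratic patch z = ax^2 + bxy + cy^2 + dx + ey + f to a local neighbourhood by least squares and use the quadratic terms as a curvature proxy (equal to (f_xx + f_yy)/2 under a small-slope approximation). The neighbourhood size and synthetic cloud are illustrative; this is not the authors' full MLS pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def curvature_proxy(points, index, k=30):
    # Rebuilding the tree per call keeps the sketch short; reuse it in practice.
    nbrs = points[cKDTree(points).query(points[index], k=k)[1]]
    x, y, z = (nbrs - nbrs.mean(axis=0)).T                 # centre the neighbourhood
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    a, b, c, *_ = np.linalg.lstsq(A, z, rcond=None)[0]
    return a + c                                            # ~ mean curvature up to a factor

cloud = np.random.rand(5000, 3) * [10, 10, 0.2]             # stand-in terrain-like cloud
print(f"curvature proxy at point 0: {curvature_proxy(cloud, 0):.3f}")
```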

  4. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination

    PubMed Central

    Fasano, Giancarmine; Grassi, Michele

    2017-01-01

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651

  5. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-09-24

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.
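
    A minimal sketch of the monocular benchmark side: given 2D-3D correspondences between image features and a known target model, OpenCV's solvePnP returns the camera-frame pose of the target. The model points, intrinsics and "true" pose below are synthetic placeholders used only to make the example self-consistent.

```python
import numpy as np
import cv2

# Placeholder 3D model points of the target (metres) and camera intrinsics.
object_points = np.array([[0, 0, 0], [0.3, 0, 0], [0, 0.3, 0],
                          [0, 0, 0.3], [0.3, 0.3, 0.15], [0.15, 0.3, 0.3]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

# Synthesize image observations from an assumed "true" pose, then recover it.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.1, -0.05, 2.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print("recovered translation:", tvec.ravel())   # should be close to tvec_true
```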

  6. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images with an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
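
    A minimal sketch of landmark-based warping with distance-weighted exponential sums of displacement vectors; the per-landmark factors `s` below are the kind of weighting factors the evolution strategy would optimize. Landmark positions, normalization and the starting value of `s` are illustrative assumptions.

```python
import numpy as np

def warp(points, landmarks, displaced_landmarks, s):
    """Apply a normalized weighted sum of landmark displacement vectors to `points`."""
    disp = displaced_landmarks - landmarks                  # (n_landmarks, dim)
    out = points.copy()
    for i, p in enumerate(points):
        d = np.linalg.norm(landmarks - p, axis=1)
        w = np.exp(-d / s)                                  # s: landmark-specific weighting factors
        out[i] += (w[:, None] * disp).sum(axis=0) / w.sum()
    return out

lm = np.random.rand(10, 3)                                  # landmarks in the source volume
lm_target = lm + np.random.normal(scale=0.02, size=lm.shape)
s = np.full(len(lm), 0.1)                                   # initial weighting factors
grid = np.random.rand(1000, 3)
print(warp(grid, lm, lm_target, s).shape)
```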

  7. Si amorphization by focused ion beam milling: Point defect model with dynamic BCA simulation and experimental validation.

    PubMed

    Huang, J; Loeffler, M; Muehle, U; Moeller, W; Mulders, J J L; Kwakman, L F Tz; Van Dorp, W F; Zschech, E

    2018-01-01

    A Ga focused ion beam (FIB) is often used in transmission electron microscopy (TEM) sample preparation. In the case of a crystalline Si sample, an amorphous near-surface layer is formed by the FIB process. In order to optimize the FIB recipe by minimizing the amorphization, it is important to predict the amorphous layer thickness from simulation. Molecular Dynamics (MD) simulation has been used to describe the amorphization; however, it is limited by computational power for a realistic FIB process simulation. On the other hand, Binary Collision Approximation (BCA) simulation is able to, and has been used to, simulate the ion-solid interaction process at a realistic scale. In this study, a Point Defect Density approach is introduced into a dynamic BCA simulation, considering dynamic ion-solid interactions. We used this method to predict the c-Si amorphization caused by FIB milling of Si. To validate the method, dedicated TEM studies were performed. They show that the amorphous layer thickness predicted by the numerical simulation is consistent with the experimental data. In summary, the thickness of the near-surface Si amorphization layer caused by FIB milling can be well predicted using the Point Defect Density approach within the dynamic BCA model. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
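
    A minimal sketch of the Direct Sampling idea for a 1D series, under simplifying assumptions: to simulate the next value, scan a training series at random positions for a location whose recent past is sufficiently similar to the simulation's recent past, then copy the value that follows it. The acceptance threshold, neighbourhood size and scan fraction are illustrative parameters, not those of the cited implementation.

```python
import numpy as np

def direct_sample_next(training, history, n_neigh=5, threshold=0.1, max_scan=0.3):
    rng = np.random.default_rng()
    pattern = history[-n_neigh:]
    candidates = rng.permutation(np.arange(n_neigh, len(training) - 1))
    best_i, best_d = candidates[0], np.inf
    for i in candidates[: int(max_scan * len(candidates))]:
        d = np.mean(np.abs(training[i - n_neigh:i] - pattern))   # pattern distance
        if d < best_d:
            best_i, best_d = i, d
        if d <= threshold:                      # accept first sufficiently similar match
            return training[i]
    return training[best_i]                     # otherwise copy after the best match found

train = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # training proxy
sim = list(train[:5])
for _ in range(100):
    sim.append(direct_sample_next(train, np.array(sim)))
print(len(sim), "simulated values")
```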

  9. Serotonin Mediates a Learned Increase in Attraction to High Concentrations of Benzaldehyde in Aged "C. elegans"

    ERIC Educational Resources Information Center

    Tsui, David; van der Kooy, Derek

    2008-01-01

    We utilized olfactory-mediated chemotaxis in "Caenorhabditis elegans" to examine the effect of aging on information processing and animal behavior. Wild-type (N2) young adults (day 4) initially approach and eventually avoid a point source of benzaldehyde. Aged adult animals (day 7) showed a stronger initial approach and a delayed avoidance to…

  10. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

    The article presents a new approach for detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of the trajectories, two cutoff values of the critical speed range, at which a UAV collision is possible, are calculated. As the expressions for the meeting points and the cutoff values of the critical speed are given in analytical form, even if an on-board computer system has limited computational capacity, the time for calculation will be far less than the time of receiving data from ADS-B. For this reason, calculations can be updated at each cycle of new data reception, and the trajectory approximation can be bounded by straight lines. Such an approach allows developing a compact collision avoidance algorithm, even for a significant number of UAVs (more than several dozen). To prove the adequacy of the research, modeling was performed using a software system developed specifically for this purpose.
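
    A minimal sketch of the geometric core, not the article's exact formulation: intersect two straight-line tracks built from successive fixes and compute the speed at which the second UAV would reach the meeting point at the same time as the first, i.e. a critical speed to avoid. All positions, velocities and headings are illustrative.

```python
import numpy as np

def meeting_point(p_a, v_a, p_b, dir_b):
    """Intersection of A's track (p_a + t*v_a) with B's track along unit heading dir_b."""
    A = np.column_stack([v_a, -dir_b])
    t_a, t_b = np.linalg.solve(A, p_b - p_a)     # assumes non-parallel tracks
    return p_a + t_a * v_a, t_a

p_a, v_a = np.array([0.0, 0.0]), np.array([30.0, 10.0])        # A: position (m), velocity (m/s)
p_b, dir_b = np.array([500.0, -200.0]), np.array([-0.6, 0.8])  # B: position and unit heading
point, t_hit = meeting_point(p_a, v_a, p_b, dir_b)
critical_speed = np.linalg.norm(point - p_b) / t_hit           # B's speed that causes collision
print(f"meeting point {point}, critical speed ~ {critical_speed:.1f} m/s")
```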

  11. Using Deming To Improve Quality in Colleges and Universities.

    ERIC Educational Resources Information Center

    Cornesky, Robert A.; And Others

    Of all the people known for stressing quality in industry, W. Edwards Deming is the pioneer. He stresses statistical process control (SPC) and a 14-point process for managers to improve quality and productivity. His approach is humanistic and treats people as intelligent human beings who want to do a good job. Twelve administrators in a university…

  12. Interactions between Individual Differences, Treatments, and Structures in SLA

    ERIC Educational Resources Information Center

    DeKeyser, Robert

    2012-01-01

    For decades educational psychologists have bemoaned the black box approach of much research on learning, that is, the focus on product rather than process, and the absence of fine-grained analysis of the learning process in the individual. One way that progress has been made on this point in the last couple of decades is through cognitive…

  13. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely-sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: (i) the minimum number of points within (ii) a maximum distance around each core point. Under this condition, seed points are said to be density-reachable by a core point delimiting a cluster around it. A chain of intermediate seed points can connect contiguous clusters, allowing clusters of arbitrary shape to be defined. The novelty of the proposed approach consists of the implementation of the DBSCAN 3D-module, where the xyz-coordinates identify each point and the density of points within a sphere is considered. This allows detecting volumetric features with a higher accuracy, depending only on the actual sampling resolution. The approach is truly 3D and exploits all TLS measurements without the need for interpolation or data reduction. Using this method, enhanced geomorphological activity during the summer of 2015 with respect to the previous two years was observed. We attribute this result to the exceptionally high temperatures of that summer, which we deem responsible for accelerating the melting process at the rock glacier front and probably also increasing creep velocities. References: Tonini, M. and Abellan, A. (2014). Rockfall detection from terrestrial LiDAR point clouds: A clustering approach using R. Journal of Spatial Information Sciences, 8, pp. 95-110. Hennig, C. Package fpc: Flexible procedures for clustering. https://cran.r-project.org/web/packages/fpc/index.html, 2015. Accessed 2016-01-12.
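
    A minimal sketch of the 3D clustering step: run DBSCAN directly on the xyz coordinates of points flagged as changed between two scans. The eps and min_samples arguments correspond to the two DBSCAN parameters discussed above; their values and the synthetic change points are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

changed_xyz = np.random.rand(5000, 3) * [50, 50, 10]    # stand-in "change" points (m)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(changed_xyz)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} erosion/deposition clusters, {(labels == -1).sum()} noise points")
```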

  14. Management of reforming of housing-and-communal services

    NASA Astrophysics Data System (ADS)

    Skripnik, Oksana

    2017-10-01

    The international experience of reforming housing and communal services is considered. The main scientific and methodological approaches to the systemic transformation of the housing sphere are analyzed in the article. The main reform models are pointed out, the interaction of the participants in the structural change process is characterized from the point of view of their commercial and social importance, advantages and shortcomings are revealed, and the elements of the reform transformations are identified from the point of view of the formation of investment appeal, competitiveness, energy efficiency and the social importance of the actions carried out.

  15. Low-cost manufacturing of the point focus concentrating module and its key component, the Fresnel lens. Final subcontract report, 31 January 1991--6 May 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saifee, T.; Konnerth, A. III

    1991-11-01

    Solar Kinetics, Inc. (SKI) has been developing point-focus concentrating PV modules since 1986. SKI is currently in position to manufacture between 200 to 600 kilowatts annually of the current design by a combination of manual and semi-automated methods. This report reviews the current status of module manufacture and specifies the required approach to achieve a high-volume manufacturing capability and low cost. The approach taken will include process development concurrent with module design for automated manufacturing. The current effort reviews the major manufacturing costs and identifies components and processes whose improvements would produce the greatest effect on manufacturability and cost reduction. The Fresnel lens is one such key component. Investigating specific alternative manufacturing methods and sources has substantially reduced the lens costs and has exceeded the DOE cost-reduction goals. 15 refs.

  16. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform attitude rotation planning process first solves all rotations while ignoring pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is therefore designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied in two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the onboard constrained attitude reorientation planning problem.

  17. Annual Statement of Assurance. Fiscal Year 1991

    DTIC Science & Technology

    1991-01-01

    ... risk assessment rating process. In orientation sessions, we made a pointed effort to emphasize the statutory root of the IMC Program. We undertook ... rating process. Training in the development of the management control plan was also provided using examples and a sample format. To illustrate the ... application of IMC principles to the risk rating process, a case study approach was presented in a workshop mode with questions and answers following

  18. Increased importance of the documented development stage in process validation.

    PubMed

    Mohammed-Ziegler, Ildikó; Medgyesi, Ildikó

    2012-07-01

    Current trends in pharmaceutical quality assurance shifted when the US Food and Drug Administration (FDA) published its new guideline on process validation in 2011. This guidance introduced the lifecycle approach to process validation. In this short communication, some typical changes in the practice of API production are addressed in the light of inspection experience.

  19. Riverbed clogging associated with a California riverbank filtration system: An assessment of mechanisms and monitoring approaches

    USGS Publications Warehouse

    Ulrich, Craig; Hubbard, Susan S.; Florsheim, Joan; Rosenberry, Donald O.; Borglin, Sharon; Trotta, Marcus; Seymour, Donald

    2015-01-01

    An experimental field study was performed to investigate riverbed clogging processes and associated monitoring approaches near a dam-controlled riverbank filtration facility in Northern California. Motivated by previous studies at the site that indicated riverbed clogging plays an important role in the performance of the riverbank filtration system, we investigated the spatiotemporal variability and nature of the clogging. In particular, we investigated whether the clogging was due to abiotic or biotic mechanisms. A secondary aspect of the study was the testing of different methods to monitor riverbed clogging and related processes, such as seepage. Monitoring was conducted using both point-based approaches and spatially extensive geophysical approaches, including: grain-size analysis, temperature sensing, electrical resistivity tomography, seepage meters, microbial analysis, and cryocoring, along two transects. The point monitoring measurements suggested a substantial increase in riverbed biomass (2 orders of magnitude) after the dam was raised compared to the small increase (∼2%) in fine-grained sediment. These changes were concomitant with decreased seepage. The decreased seepage eventually led to the development of an unsaturated zone beneath the riverbed, which further decreased infiltration capacity. Comparison of our time-lapse grain-size and biomass datasets suggested that biotic processes played a greater role in clogging than did abiotic processes. Cryocoring and autonomous temperature loggers were most useful for locally monitoring clogging agents, while electrical resistivity data were useful for interpreting the spatial extent of a pumping-induced unsaturated zone that developed beneath the riverbed after riverbed clogging was initiated. The improved understanding of spatiotemporally variable riverbed clogging and monitoring approaches is expected to be useful for optimizing the riverbank filtration system operations.

  20. Chatter detection in turning using persistent homology

    NASA Astrophysics Data System (ADS)

    Khasawneh, Firas A.; Munch, Elizabeth

    2016-03-01

    This paper describes a new approach for ascertaining the stability of stochastic dynamical systems in their parameter space by examining their time series using topological data analysis (TDA). We illustrate the approach using a nonlinear delayed model that describes the tool oscillations due to self-excited vibrations in turning. Each time series is generated using the Euler-Maruyama method and a corresponding point cloud is obtained using the Takens embedding. The point cloud can then be analyzed using a tool from TDA known as persistent homology. The results of this study show that the described approach can be used for analyzing datasets of delay dynamical systems generated both from numerical simulation and experimental data. The contributions of this paper include presenting for the first time a topological approach for investigating the stability of a class of nonlinear stochastic delay equations, and introducing a new application of TDA to machining processes.
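
    A minimal Python sketch of the embedding step, assuming a 1-D simulated time series stands in for the Euler-Maruyama output; the resulting point cloud is what one would pass to a persistence computation (e.g., the ripser package) to obtain the diagrams used for the stability analysis.

    ```python
    # A minimal sketch: build a Takens delay embedding of a scalar time series,
    # producing the point cloud that a persistence tool would then summarize.
    import numpy as np

    def takens_embedding(x, dim=3, delay=5):
        """Return an (N, dim) point cloud of delay vectors [x_t, x_{t+delay}, ...]."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (dim - 1) * delay
        if n <= 0:
            raise ValueError("time series too short for this (dim, delay)")
        return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

    # Example: a noisy oscillation as a stand-in for simulated tool displacement.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 20 * np.pi, 2000)
    x = np.sin(t) + 0.1 * rng.standard_normal(t.size)
    cloud = takens_embedding(x, dim=3, delay=25)
    # `cloud` would be passed to e.g. ripser(cloud)['dgms'] for persistence diagrams.
    ```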

  1. Systematic identification of an integrative network module during senescence from time-series gene expression.

    PubMed

    Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul

    2017-03-15

    Cellular senescence irreversibly arrests growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies analyzed only the genes, and their functions, representing each senescence phase without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism inferred by affected genes and their interactions underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The most perturbed genes were selected for each time point based on a proposed scoring measure, termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct a time-point-specific network. From these constructed networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference in average perturbation scores of the common network between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified high association with the phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on how topological structure changes over time. Conversely, we focused on the conserved structure whose context changes over time and showed that it can explain the phenotypic changes. We expect that the proposed method will help to elucidate the biological mechanisms unrevealed by existing approaches.
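
    A minimal Python sketch of the common-network idea under illustrative assumptions (toy gene names, a hand-made PPI edge list, and precomputed perturbed-gene sets rather than the paper's perturbation scores): build one network per time point and keep the edges conserved across all time points.

    ```python
    # Sketch only: intersect time-point-specific networks to obtain the common network.
    import networkx as nx

    ppi_edges = [("TP53", "MDM2"), ("CDKN1A", "TP53"), ("MYC", "MAX"), ("RB1", "E2F1")]

    perturbed_by_timepoint = {                     # hypothetical perturbation-score output
        "t1": {"TP53", "MDM2", "CDKN1A", "MYC", "MAX"},
        "t2": {"TP53", "MDM2", "CDKN1A", "RB1", "E2F1"},
    }

    networks = {}
    for tp, genes in perturbed_by_timepoint.items():
        # Keep only PPI edges whose both endpoints are perturbed at this time point.
        networks[tp] = nx.Graph((u, v) for u, v in ppi_edges if u in genes and v in genes)

    # Conserved (common) edges across all time-point-specific networks.
    edge_sets = [set(map(frozenset, g.edges())) for g in networks.values()]
    common_edges = set.intersection(*edge_sets)
    common_network = nx.Graph(tuple(e) for e in common_edges)
    print(sorted(tuple(sorted(e)) for e in common_edges))
    # -> [('CDKN1A', 'TP53'), ('MDM2', 'TP53')]
    ```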

  2. A Process of Multidisciplinary Team Communication to Individualize Stroke Rehabilitation of an 84-Year-Old Stroke Patient.

    PubMed

    Hiragami, Fukumi; Hiragami, Shogo; Suzuki, Yasuo

    Previously, we have used a multidisciplinary team (MDT) approach to individualize rehabilitation of very old stroke patients as a means to establish intervention points for addressing impaired activities of daily living (ADL). However, this previous study was limited because it did not describe the communication process over time. This case study characterized the MDT communication process in the rehabilitation of an 84-year-old patient over the course of 15 weeks. The MDT consisted of 3 nurses, 1 doctor, 6 therapists, and the patient/families. Meetings (15 minutes each) were held at 4, 6, 8, and 15 weeks following the patient's admission. To individualize the rehabilitation, the communication process involved gaining knowledge about ADL impairments, sharing assessments, providing treatment options, and reflecting on desired treatment outcomes, a process termed KATR. The knowledge, assessment, treatment, and reflection (KATR) process established intervention points focusing on specific ADL impairments. The team members focused the interventions on the impaired ADL identified in the KATR process, and individualized rehabilitation was generated from the MDT's shared knowledge. In the initial meeting (Week 4), intervention points derived from the KATR process focused on rehabilitation of self-care impairments. These impairments improved by Week 15. By the last meeting, the MDT intervention points focused on mobility impairments. Having an organized communication process (i.e., KATR) facilitates individualization of rehabilitation without lengthy and frequent MDT meetings and enhances the quality of rehabilitation after a stroke.

  3. Advanced mobility handover for mobile IPv6 based wireless networks.

    PubMed

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) uses a unique home IPv6 address, developed to maintain communication with corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN using the developed MN-ID field as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP and is temporarily saved in a table developed inside the AP. When the AMH scheme is employed, the handover process in the network layer is performed prior to its default time: the handover is triggered by a developed AMH message sent to the next access point, so the mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than existing approaches.

  4. The Impact of Mutation and Gene Conversion on the Local Diversification of Antigen Genes in African Trypanosomes

    PubMed Central

    Gjini, Erida; Haydon, Daniel T.; Barry, J. David; Cobbold, Christina A.

    2012-01-01

    Patterns of genetic diversity in parasite antigen gene families hold important information about their potential to generate antigenic variation within and between hosts. The evolution of such gene families is typically driven by gene duplication, followed by point mutation and gene conversion. There is great interest in estimating the rates of these processes from molecular sequences for understanding the evolution of the pathogen and its significance for infection processes. In this study, a series of models are constructed to investigate hypotheses about the nucleotide diversity patterns between closely related gene sequences from the antigen gene archive of the African trypanosome, the protozoan parasite causative of human sleeping sickness in Equatorial Africa. We use a hidden Markov model approach to identify two scales of diversification: clustering of sequence mismatches, a putative indicator of gene conversion events with other lower-identity donor genes in the archive, and at a sparser scale, isolated mismatches, likely arising from independent point mutations. In addition to quantifying the respective probabilities of occurrence of these two processes, our approach yields estimates for the gene conversion tract length distribution and the average diversity contributed locally by conversion events. Model fitting is conducted using a Bayesian framework. We find that diversifying gene conversion events with lower-identity partners occur at least five times less frequently than point mutations on variant surface glycoprotein (VSG) pairs, and the average imported conversion tract is between 14 and 25 nucleotides long. However, because of the high diversity introduced by gene conversion, the two processes have almost equal impact on the per-nucleotide rate of sequence diversification between VSG subfamily members. We are able to disentangle the most likely locations of point mutations and conversions on each aligned gene pair. PMID:22735079
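
    A compact sketch of the two-scale idea using a plain two-state Viterbi decoder in Python (the paper uses a full Bayesian hidden Markov model; the emission and transition probabilities here are illustrative assumptions): positions in an aligned VSG pair are labelled as belonging to a putative conversion tract, where mismatches cluster, or to the background, where isolated point mutations occur.

    ```python
    # Two-state Viterbi sketch: state 0 = background (isolated mismatches),
    # state 1 = putative conversion tract (clustered mismatches).
    import numpy as np

    def viterbi_two_state(mismatch, p_emit=(0.02, 0.35), p_stay=(0.995, 0.95)):
        """mismatch: 0/1 array per aligned site; returns the most likely state path."""
        mismatch = np.asarray(mismatch, dtype=int)
        n = len(mismatch)
        log_e = np.log(np.array([[1 - p_emit[0], p_emit[0]],
                                 [1 - p_emit[1], p_emit[1]]]))      # [state, observation]
        log_t = np.log(np.array([[p_stay[0], 1 - p_stay[0]],
                                 [1 - p_stay[1], p_stay[1]]]))      # [from, to]
        delta = np.zeros((n, 2))
        psi = np.zeros((n, 2), dtype=int)
        delta[0] = np.log([0.9, 0.1]) + log_e[:, mismatch[0]]
        for i in range(1, n):
            scores = delta[i - 1][:, None] + log_t                  # rows: previous state
            psi[i] = scores.argmax(axis=0)
            delta[i] = scores.max(axis=0) + log_e[:, mismatch[i]]
        states = np.empty(n, dtype=int)
        states[-1] = delta[-1].argmax()
        for i in range(n - 2, -1, -1):
            states[i] = psi[i + 1, states[i + 1]]
        return states   # 1 marks positions assigned to a conversion-like tract
    ```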

  5. Data handling and representation of freeform surfaces

    NASA Astrophysics Data System (ADS)

    Steinkopf, Ralf; Dick, Lars; Kopf, Tino; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona

    2011-10-01

    Freeform surfaces enable innovative optics. They are not limited by axial symmetry and hence are almost unconstrained in design. They are used to reduce the installation space and enhance the performance of optical elements. State-of-the-art optical design tools use powerful algorithms to simulate freeform surfaces, and new mathematical approaches are under development /1/. In consequence, new optical designs /2/ are pushing the development of manufacturing processes, and novel types of datasets have to proceed through the process chain /3/. The complexity of these data is the central challenge for data handling. Because of the asymmetrical, three-dimensional surfaces of freeforms, large data volumes have to be created, trimmed, extended and fitted. All these processes must be performed without losing the accuracy of the original design data. Additionally, the manifold types of geometries result in different kinds of mathematical representations of freeform surfaces, and the CAD/CAM tools used deal with a set of spatial transport formats. These are all reasons why manufacture-oriented approaches to freeform data handling are not yet sufficiently developed. This paper suggests a classification of freeform surfaces based on the manufacturing methods offered by diamond machining. The different manufacturing technologies, ranging from servo turning to shaping, require a differentiated approach to the data handling process. The use of analytical descriptions in the form of splines and polynomials, as well as the application of discrete descriptions such as point clouds, is shown in relation to this classification. Advantages and disadvantages of the freeform representations are discussed. Aspects of data handling between different process steps are pointed out and suitable exchange formats for freeform data are proposed. The described approach enables efficient data handling from optical design to novel optical systems.
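
    As a small illustration of the analytical-description option mentioned above (a sketch under toy assumptions, not a production fitting routine), the following Python snippet fits a low-order bivariate polynomial to a sampled freeform surface by least squares; spline descriptions would be handled analogously, e.g. with scipy.interpolate.

    ```python
    # Fit z ~ sum c_ij * x^i * y^j to (x, y, z) samples of a freeform surface.
    import numpy as np

    def fit_poly_surface(x, y, z, order=3):
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([x**i * y**j for i, j in terms])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return dict(zip(terms, coeffs))

    # Toy data: a gently curved, non-rotationally-symmetric surface with noise.
    rng = np.random.default_rng(1)
    x, y = rng.uniform(-1, 1, (2, 5000))
    z = 0.5 * x**2 - 0.3 * x * y + 0.1 * y**3 + 1e-4 * rng.standard_normal(x.size)
    print(fit_poly_surface(x, y, z, order=3))
    ```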

  6. Designing and Implementing a Constructivist Chemistry Laboratory Program.

    ERIC Educational Resources Information Center

    Blakely, Alan

    2000-01-01

    Describes a constructivist chemistry laboratory approach based on students' personal experiences where students had the opportunity to develop their own experimental processes. Points out both the fruitfulness and difficulties of using a graduate student as a teaching assistant. (YDS)

  7. Geometric identification and damage detection of structural elements by terrestrial laser scanner

    NASA Astrophysics Data System (ADS)

    Hou, Tsung-Chin; Liu, Yu-Wei; Su, Yu-Min

    2016-04-01

    In recent years, three-dimensional (3D) terrestrial laser scanning technologies with higher precision and higher capability have developed rapidly. The growing maturity of laser scanning has gradually approached the precision provided by traditional structural monitoring technologies. Together with widely available fast computation for massive point cloud data processing, 3D laser scanning can serve as an efficient structural monitoring alternative for the civil engineering community. Currently, most research efforts have focused on integrating and processing the measured multi-station point cloud data, as well as modeling and establishing 3D meshes of the scanned objects. Very little attention has been paid to extracting information related to the health conditions and mechanical states of structures. In this study, an automated numerical approach that integrates various existing algorithms for geometric identification and damage detection of structural elements was established. Specifically, adaptive meshes were employed for classifying the point cloud data of the structural elements and detecting the associated damage from the eigenvalues calculated in each area of the structural element. Furthermore, a kd-tree was used to enhance the searching efficiency of the plane fitting that was later used for identifying the boundaries of structural elements. The results of geometric identification were compared with the M3C2 algorithm provided by CloudCompare, as well as validated by LVDT measurements of full-scale reinforced concrete beams tested in the laboratory. The results show that 3D laser scanning, through the established processing approaches for point cloud data, can offer a rapid, nondestructive, remote, and accurate solution for geometric identification and damage detection of structural elements.
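
    A minimal Python sketch of the eigenvalue idea, not the authors' pipeline: for every point, neighbours are gathered with a kd-tree and the smallest normalized eigenvalue of the local covariance is used as a flatness indicator, so that locally non-planar (potentially damaged) regions stand out. The neighbourhood size and toy data are illustrative.

    ```python
    # kd-tree neighbourhood search + local PCA as a planarity/damage indicator.
    import numpy as np
    from scipy.spatial import cKDTree

    def local_planarity(points, k=30):
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        scores = np.empty(len(points))
        for i, nb in enumerate(idx):
            local = points[nb] - points[nb].mean(axis=0)
            cov = local.T @ local / (k - 1)
            eigvals = np.linalg.eigvalsh(cov)        # ascending order
            scores[i] = eigvals[0] / eigvals.sum()   # ~0 for planar patches
        return scores

    # Toy example: a flat slab with a small bump standing in for local damage.
    rng = np.random.default_rng(2)
    pts = np.column_stack([rng.uniform(0, 1, 5000),
                           rng.uniform(0, 1, 5000),
                           1e-3 * rng.standard_normal(5000)])
    pts[:200, 2] += 0.05 * np.exp(-((pts[:200, 0] - 0.5) ** 2) * 50)
    flatness = local_planarity(pts)
    ```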

  8. Firmware Development Improves System Efficiency

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1993-01-01

    Most manufacturing processes require physical pointwise positioning of components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate motion to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
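
    A back-of-the-envelope timing sketch in Python (purely illustrative figures, not measurements from the eddy current system) of why interrupt-triggered acquisition 'on the fly' beats stop-and-go positioning when acquisition can overlap with motion.

    ```python
    # Compare stop-and-go scanning (motion halts for each acquisition) with
    # interrupt-style triggering (acquisition overlaps with the remaining motion).
    n_points = 500
    move_time = 0.02       # s between adjacent scan points at constant feed rate (assumed)
    settle_time = 0.015    # s extra settling per stop in stop-and-go mode (assumed)
    acquire_time = 0.01    # s per acquisition (assumed)

    stop_and_go = n_points * (move_time + settle_time + acquire_time)
    on_the_fly = n_points * max(move_time, acquire_time)   # acquisition overlaps with motion
    print(f"stop-and-go: {stop_and_go:.1f} s, interrupt-triggered: {on_the_fly:.1f} s")
    ```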

  9. A potential-of-mean-force approach for fracture mechanics of heterogeneous materials using the lattice element method

    NASA Astrophysics Data System (ADS)

    Laubie, Hadrien; Radjaï, Farhang; Pellenq, Roland; Ulm, Franz-Josef

    2017-08-01

    Fracture of heterogeneous materials has emerged as a critical issue in many engineering applications, ranging from subsurface energy to biomedical applications, and requires a rational framework that allows linking local fracture processes with global fracture descriptors such as the energy release rate, fracture energy and fracture toughness. This is achieved here by means of a local and a global potential-of-mean-force (PMF) inspired Lattice Element Method (LEM) approach. In the local approach, fracture-strength criteria derived from the effective interaction potentials between mass points are shown to exhibit a scaling commensurable with the energy dissipation of fracture processes. In the global PMF-approach, fracture is considered as a sequence of equilibrium states associated with minimum potential energy states analogous to Griffith's approach. It is found that this global approach has much in common with a Grand Canonical Monte Carlo (GCMC) approach, in which mass points are randomly removed following a maximum dissipation criterion until the energy release rate reaches the fracture energy. The duality of the two approaches is illustrated through the application of the PMF-inspired LEM for fracture propagation in a homogeneous linear elastic solid using different means of evaluating the energy release rate. Finally, by application of the method to a textbook example of fracture propagation in a heterogeneous material, it is shown that the proposed PMF-inspired LEM approach captures some well-known toughening mechanisms related to fracture energy contrast, elasticity contrast and crack deflection in the considered two-phase layered composite material.

  10. Improving the quality of extracting dynamics from interspike intervals via a resampling approach

    NASA Astrophysics Data System (ADS)

    Pavlova, O. N.; Pavlov, A. N.

    2018-04-01

    We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.
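
    A minimal Rosenstein-style sketch in Python of estimating the largest Lyapunov exponent from an ISI sequence (a generic illustration, not the authors' preprocessing or resampling procedure); the embedding parameters, Theiler window and fitting horizon are illustrative choices.

    ```python
    # Embed the ISI sequence, pair each point with its nearest neighbour, and fit the
    # initial slope of the average log-divergence curve.
    import numpy as np

    def largest_lyapunov(isi, dim=4, delay=1, horizon=20, theiler=10):
        x = np.asarray(isi, dtype=float)
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
        m = len(emb) - horizon
        dists = np.linalg.norm(emb[:m, None, :] - emb[None, :m, :], axis=2)
        for i in range(m):                       # exclude temporally close neighbours
            lo, hi = max(0, i - theiler), min(m, i + theiler + 1)
            dists[i, lo:hi] = np.inf
        nn = dists.argmin(axis=1)
        div = np.zeros(horizon)
        for k in range(1, horizon + 1):
            d = np.linalg.norm(emb[np.arange(m) + k] - emb[nn + k], axis=1)
            div[k - 1] = np.mean(np.log(d[d > 0]))
        return np.polyfit(np.arange(1, horizon + 1), div, 1)[0]   # slope ~ largest LE

    # Toy chaotic ISI surrogate from the logistic map, standing in for model spike trains.
    x = np.empty(800); x[0] = 0.4
    for i in range(1, len(x)):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    isi = 0.1 + 0.9 * x
    print(largest_lyapunov(isi))
    ```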

  11. Response to the Point of View of Gregory B. Pauly, David M. Hillis, and David C. Cannatella, by the Anuran Subcommittee of the SSAR/HL/ASIH Scientific and Standard English Names List

    USGS Publications Warehouse

    Frost, Darrel R.; McDiarmid, Roy W.; Mendelson, Joseph R.

    2009-01-01

    The Point of View by Gregory Pauly, David Hillis, and David Cannatella misrepresents the motives and activities of the anuran subcommittee of the Scientific and Standard English Names Committee, contains a number of misleading statements, omits evidence and references to critical literature that have already rejected or superseded their positions, and cloaks the limitations of their nomenclatural approach in ambiguous language. Their Point of View is not about promoting transparency in the process of constructing the English Names list, assuring that its taxonomy is adequately reviewed, or promoting nomenclatural stability in any global sense. Rather, their Point of View focuses in large part on a single publication, The Amphibian Tree of Life, which is formally unrelated to the Standard English Names List, and promotes an approach to nomenclature mistakenly asserted by them to be compatible with both the International Code of Zoological Nomenclature and one of its competitors, the PhyloCode.

  12. Flooding the Zone: A Ten-Point Approach to Assessing Critical Thinking as Part of the AACSB Accreditation Process

    ERIC Educational Resources Information Center

    Cavaliere, Frank; Mayer, Bradley W.

    2012-01-01

    Undergoing the accreditation process of the Association to Advance Collegiate Schools of Business (AACSB) can be quite daunting and stressful. It requires prodigious amounts of planning, record-keeping, and document preparation. It is not something that can be thrown together at the last minute. The same is true of the five-year reaccreditation…

  13. Distributed medical intelligence. A systems approach for developing an integrative health care information distribution infrastructure.

    PubMed

    Warner, D; Sale, J; Viirre, E

    1996-01-01

    Recent trends in healthcare informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. Distributed Medical Intelligence promotes the development of an integrative medical communication system which addresses the process of providing expert medical knowledge to the point of need.

  14. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions compared with field experimentation. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues under particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
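
    A minimal sketch of the point-scale ingredient in Python, assuming ponded conditions and illustrative parameter values: cumulative Green-Ampt infiltration is obtained from its implicit relation by fixed-point iteration, which is the kind of point-scale response that a grid-cell storage model would be calibrated against.

    ```python
    # Cumulative Green-Ampt infiltration F(t) from F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)).
    import numpy as np

    def green_ampt_F(t, K=1.0e-6, psi=0.11, dtheta=0.3, iters=60):
        """Cumulative infiltration [m] at times t [s]; K [m/s], psi [m], dtheta [-]."""
        t = np.asarray(t, dtype=float)
        S = psi * dtheta
        F = K * t + 1e-9                       # initial guess
        for _ in range(iters):                 # fixed-point iteration on the implicit relation
            F = K * t + S * np.log1p(F / S)
        return F

    t = np.linspace(0, 6 * 3600, 200)          # six hours
    F = green_ampt_F(t)
    f = np.gradient(F, t)                      # infiltration rate [m/s]
    ```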

  15. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity.

    PubMed

    Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G

    2015-01-01

    Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.
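
    A minimal Python sketch of the label-propagation ingredient (a generic stand-in, not the AILP algorithm itself): voxels form a graph, a subset is seeded with aggregate module labels, and unlabeled voxels repeatedly adopt the majority label of their neighbours until the labelling stabilizes.

    ```python
    # Generic majority-vote label propagation on a voxel graph.
    from collections import Counter
    import networkx as nx

    def propagate_labels(graph, seed_labels, max_iter=50):
        labels = dict(seed_labels)
        for _ in range(max_iter):
            changed = False
            for node in graph.nodes():
                votes = Counter(labels[nb] for nb in graph.neighbors(node) if nb in labels)
                if votes:
                    best = votes.most_common(1)[0][0]
                    if labels.get(node) != best and node not in seed_labels:
                        labels[node] = best
                        changed = True
            if not changed:
                break
        return labels

    # Toy voxel graph: a chain of voxels seeded with two module labels at its ends.
    g = nx.path_graph(7)
    labels = propagate_labels(g, seed_labels={0: "module_A", 6: "module_B"})
    ```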

  16. Process and apparatus for preparing textured crystalline materials using anisotropy in the paramagnetic susceptibility

    DOEpatents

    Holloway, Aleksey

    1992-01-07

    The present invention discloses a process and apparatus for forming textures in materials. The process comprises heating a material having an anisotropy in the paramagnetic or diamagnetic susceptibility within a magnetic field. The material is heated to a temperature approaching its melting point while a magnetic field of at least 10⁴ Oe is simultaneously applied. The process and apparatus produce highly textured bulk and elongated materials with high current densities below critical superconducting temperatures.

  17. Process and apparatus for preparing textured crystalline materials using anisotropy in the paramagnetic susceptibility

    DOEpatents

    Holloway, A.

    1992-01-07

    The present invention discloses a process and apparatus for forming textures in materials. The process comprises heating a material having an anisotropy in the paramagnetic or diamagnetic susceptibility within a magnetic field. The material is heated to a temperature approaching its melting point while a magnetic field of at least 10⁴ Oe is simultaneously applied. The process and apparatus produce highly textured bulk and elongated materials with high current densities below critical superconducting temperatures. 6 figs.

  18. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
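
    A minimal Python sketch of the underlying operation, assuming a toy point cloud in the target's body frame: a 4x4 homogeneous transformation built from a rotation and a translation maps the cloud into the sensor frame, which is the core step when simulating lidar views of a target in different poses.

    ```python
    # Build a homogeneous transformation and apply it to a point cloud.
    import numpy as np

    def homogeneous_transform(yaw, pitch, roll, translation):
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = translation
        return T

    def transform_points(T, points):
        homo = np.column_stack([points, np.ones(len(points))])   # (N, 4)
        return (homo @ T.T)[:, :3]

    # Toy target: a 1 m cube of surface points, rotated and pushed 10 m down-range.
    rng = np.random.default_rng(3)
    cube = rng.uniform(-0.5, 0.5, (2000, 3))
    T = homogeneous_transform(np.radians(30), np.radians(10), 0.0, [10.0, 0.0, 0.0])
    sensor_frame_pts = transform_points(T, cube)
    ```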

  19. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Oftentimes a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered at the low level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  20. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential beyond that of just simple visualizations in regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
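
    A simplified Python sketch of the voxel labelling idea (a stand-in for the full line-of-sight analysis, with illustrative geometry): voxels containing reconstructed points are marked occupied, voxels traversed by camera-to-point rays are marked free, and everything else remains unsampled, i.e. candidate voids.

    ```python
    # Classify voxels as unsampled (0), free (1), or occupied (2) by walking camera rays.
    import numpy as np

    def classify_voxels(points, camera, bounds_min, voxel_size, grid_shape, n_samples=200):
        state = np.zeros(grid_shape, dtype=np.uint8)
        def to_idx(p):
            idx = np.floor((p - bounds_min) / voxel_size).astype(int)
            return np.clip(idx, 0, np.array(grid_shape) - 1)
        for p in points:
            # Sample positions along the ray from the camera to (just short of) the point.
            ts = np.linspace(0.0, 0.98, n_samples)[:, None]
            for q in camera + ts * (p - camera):
                i, j, k = to_idx(q)
                if state[i, j, k] == 0:
                    state[i, j, k] = 1            # free space along the line of sight
            i, j, k = to_idx(p)
            state[i, j, k] = 2                    # occupied by a reconstructed point
        return state

    # Toy scene: a flat facade observed from one camera; voxels behind it stay unsampled.
    rng = np.random.default_rng(4)
    pts = np.column_stack([rng.uniform(0, 10, 3000), rng.uniform(0, 10, 3000),
                           np.full(3000, 5.0)])
    grid = classify_voxels(pts, camera=np.array([5.0, 5.0, 0.0]),
                           bounds_min=np.array([0.0, 0.0, 0.0]),
                           voxel_size=0.5, grid_shape=(20, 20, 20))
    ```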

  1. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    NASA Astrophysics Data System (ADS)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach overcomes the convergence problem of PPP and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy even over twenty-minute intervals with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that an accuracy of a few centimeters in coseismic displacements is achievable.
Keywords: High-rate GPS; real-time GPS seismology; a single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion;
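
    A minimal numerical illustration in Python (not a GNSS processor) of the integration-and-drift behaviour discussed above: summing epoch-to-epoch position changes recovers the displacement waveform, but a small uncompensated per-epoch bias accumulates into a drift that must be removed by de-trending or avoided by TPP-style refinements. All signal and error magnitudes are invented for the example.

    ```python
    # Variometric-style single integration of noisy, slightly biased position deltas.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0, 300, 1.0)                                   # 1 Hz epochs [s]
    true_disp = 0.2 * np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 150)   # synthetic motion [m]

    true_delta = np.diff(true_disp, prepend=0.0)
    bias = 2e-4                                                  # uncompensated error per epoch [m]
    measured_delta = true_delta + bias + 5e-4 * rng.standard_normal(t.size)

    integrated = np.cumsum(measured_delta)                       # single integration -> drift
    trend = np.polyval(np.polyfit(t, integrated, 1), t)
    detrended = integrated - trend                               # crude de-trending step
    ```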

  2. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are one of the fast and accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity data (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated to generate LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
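
    A minimal Python sketch of the proposed order of operations, with an intentionally trivial classifier standing in for the paper's method (thresholds, class codes and cell size are illustrative): classify the 3D points first, rasterize the class labels, and fill empty cells from the nearest classified point.

    ```python
    # Classify points, rasterize labels, fill gaps from the nearest classified point.
    import numpy as np
    from scipy.spatial import cKDTree

    def classify_points(xyz, intensity):
        labels = np.full(len(xyz), 0)                       # 0 = bare soil / low vegetation
        labels[xyz[:, 2] > 2.0] = 1                         # 1 = elevated (buildings / trees)
        labels[(xyz[:, 2] <= 2.0) & (intensity < 20)] = 2   # 2 = roads / parking (dark returns)
        return labels

    def rasterize(xyz, labels, cell=1.0):
        xy = xyz[:, :2]
        mn = xy.min(axis=0)
        shape = np.ceil((xy.max(axis=0) - mn) / cell).astype(int) + 1
        grid = np.full(tuple(shape), -1)
        ij = np.floor((xy - mn) / cell).astype(int)
        grid[ij[:, 0], ij[:, 1]] = labels                   # last point in a cell wins (sketch only)
        empty = np.argwhere(grid == -1)
        if len(empty):                                      # nearest-neighbour gap filling
            tree = cKDTree(xy)
            centers = mn + (empty + 0.5) * cell
            _, nearest = tree.query(centers)
            grid[empty[:, 0], empty[:, 1]] = labels[nearest]
        return grid

    rng = np.random.default_rng(6)
    xyz = np.column_stack([rng.uniform(0, 50, 4000), rng.uniform(0, 50, 4000),
                           rng.uniform(0, 12, 4000)])
    land_cover = rasterize(xyz, classify_points(xyz, rng.uniform(0, 255, 4000)))
    ```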

  3. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356
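
    A small Python illustration under toy parameter values (not the authors' estimators): a linear birth-and-death process is simulated with Gillespie's algorithm, observed on a discrete time grid, and the net growth rate is recovered from the conditional mean; as noted above, a quasi-likelihood built only on this linear statistic cannot separate the birth and death rates.

    ```python
    # Simulate a linear birth-and-death process, observe it at discrete times, and
    # estimate r = lambda - mu from E[X_{t+d} | X_t] = X_t * exp(r * d).
    import numpy as np

    def simulate_birth_death(lam, mu, x0, t_end, rng):
        t, x, times, states = 0.0, x0, [0.0], [x0]
        while t < t_end and x > 0:
            t += rng.exponential(1.0 / ((lam + mu) * x))
            x += 1 if rng.random() < lam / (lam + mu) else -1
            times.append(t); states.append(x)
        return np.array(times), np.array(states)

    def observe(times, states, grid):
        return states[np.searchsorted(times, grid, side="right") - 1]

    rng = np.random.default_rng(7)
    grid = np.arange(0.0, 10.0, 0.5)
    obs = observe(*simulate_birth_death(lam=0.7, mu=0.4, x0=50, t_end=10.0, rng=rng), grid)

    mask = obs[:-1] > 0
    ratios = obs[1:][mask] / obs[:-1][mask]
    r_hat = np.log(ratios.mean()) / 0.5
    print(r_hat)   # roughly 0.3 for long / replicated trajectories; lambda and mu are not
                   # separately identifiable from this linear statistic alone
    ```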

  4. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a 'next-generation CED': in addition to a point design, the Team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission. This paper reviews the modeling method and its practical implementation in the CED environment. Example results illustrate the benefit of this approach.

  5. Experience and convergence in spiritual direction.

    PubMed

    Evans, Jean

    2015-02-01

    The practice of spiritual direction concerns the human experience of God. As praxis, spiritual direction has a long tradition in Western Christianity. It is a process rooted in spirituality with theology as its foundation. This paper explores the convergences between aspects of philosophy (contemplative awareness), psychology (Rogerian client-centered approach) and phenomenology. There are significant points of convergence between phenomenology and spiritual direction: first, in Ignatius of Loyola's phenomenological approach to his religious experience; second, in the appropriation by spiritual directors of concepts of epochē and empathy; third, in the process of "unpacking" religious experience within a spiritual direction interview.

  6. Temporal Genetic Modifications after Controlled Cortical Impact—Understanding Traumatic Brain Injury through a Systematic Network Approach

    PubMed Central

    Wong, Yung-Hao; Wu, Chia-Chou; Wu, John Chung-Che; Lai, Hsien-Yong; Chen, Kai-Yun; Jheng, Bo-Ren; Chen, Mien-Cheng; Chang, Tzu-Hao; Chen, Bor-Sen

    2016-01-01

    Traumatic brain injury (TBI) is a primary injury caused by external physical force and also a secondary injury caused by biological processes such as metabolic, cellular, and other molecular events that eventually lead to brain cell death, tissue and nerve damage, and atrophy. It is a common disease process (as opposed to an event) that causes disabilities and high death rates. In order to treat all the repercussions of this injury, treatment becomes increasingly complex and difficult throughout the evolution of a TBI. Using high-throughput microarray data, we developed a systems biology approach to explore potential molecular mechanisms at four time points post-TBI (4, 8, 24, and 72 h), using a controlled cortical impact (CCI) model. We identified 27, 50, 48, and 59 significant proteins as network biomarkers at these four time points, respectively. We present their network structures to illustrate the protein–protein interactions (PPIs). We also identified UBC (Ubiquitin C), SUMO1, CDKN1A (cyclin-dependent kinase inhibitor 1A), and MYC as the core network biomarkers at the four time points, respectively. Using the functional analytical tool MetaCore™, we explored regulatory mechanisms and biological processes and conducted a statistical analysis of the four networks. The analytical results support some recent findings regarding TBI and provide additional guidance and directions for future research. PMID:26861311

  7. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
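
    A minimal Python sketch of what a Boolean-Volterra-style model looks like in simulation (illustrative lags and probabilities, not the paper's estimation method): the output spike train is the OR of selected AND terms over lagged input spikes, with an AND_NOT term playing the role of inhibition, plus spurious output spikes as noise.

    ```python
    # Boolean combination of lagged point-process inputs producing a point-process output.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 2000
    x = (rng.random(n) < 0.15).astype(int)          # excitatory input spike train
    z = (rng.random(n) < 0.10).astype(int)          # inhibitory input spike train

    def lag(s, k):
        out = np.zeros_like(s)
        out[k:] = s[:-k] if k else s
        return out

    # First-order term x(t-1); second-order term x(t-2) AND x(t-4); inhibition from z(t-1).
    excite = lag(x, 1) | (lag(x, 2) & lag(x, 4))
    y = excite & (1 - lag(z, 1))                    # AND_NOT: inhibition vetoes a spike
    noise = (rng.random(n) < 0.01).astype(int)      # spurious spikes
    y_observed = y | noise
    ```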

  8. Profitability Analysis of Soybean Oil Processes.

    PubMed

    Cheng, Ming-Hsun; Rosentrater, Kurt A

    2017-10-07

    Soybean oil production is the basic process for soybean applications. Cash flow analysis is used to estimate the profitability of a manufacturing venture. Besides capital investments, operating costs, and revenues, the interest rate is the factor used to estimate the net present value (NPV), break-even points, and payback time, which are benchmarks for profitability evaluation. A positive NPV and a reasonable payback time represent a profitable process and provide an acceptable projection for real operation. Additionally, the capacity of the process is another critical factor. The extruding-expelling process and hexane extraction are the two typical approaches used in industry. When the capacities of annual oil production are larger than 12 and 173 million kg, respectively, these two processes are profitable. The solvent-free approach, known as the enzyme-assisted aqueous extraction process (EAEP), is profitable when the capacity is larger than 17 million kg of annual oil production.
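
    A minimal Python sketch of the cash-flow arithmetic behind these benchmarks, using invented plant figures rather than the paper's data: discount the annual net cash flows to obtain the NPV at a given interest rate and find the payback year where the cumulative cash flow turns positive.

    ```python
    # NPV and (undiscounted) payback time from a stream of annual net cash flows.
    def npv(rate, cashflows):
        """cashflows[0] is the (negative) capital investment in year 0."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def payback_year(cashflows):
        total = 0.0
        for t, cf in enumerate(cashflows):
            total += cf
            if total >= 0:
                return t
        return None   # never pays back within the horizon

    flows = [-25e6] + [3.2e6] * 15   # hypothetical plant: 25 M$ capital, 3.2 M$/yr net revenue
    print(round(npv(0.08, flows) / 1e6, 2), "M$ NPV at 8%")
    print(payback_year(flows), "years to payback")
    ```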

  9. Profitability Analysis of Soybean Oil Processes

    PubMed Central

    2017-01-01

    Soybean oil production is the basic process for soybean applications. Cash flow analysis is used to estimate the profitability of a manufacturing venture. Besides capital investments, operating costs, and revenues, the interest rate is the factor used to estimate the net present value (NPV), break-even points, and payback time, which are benchmarks for profitability evaluation. A positive NPV and a reasonable payback time represent a profitable process and provide an acceptable projection for real operation. Additionally, the capacity of the process is another critical factor. The extruding-expelling process and hexane extraction are the two typical approaches used in industry. When the capacities of annual oil production are larger than 12 and 173 million kg, respectively, these two processes are profitable. The solvent-free approach, known as the enzyme-assisted aqueous extraction process (EAEP), is profitable when the capacity is larger than 17 million kg of annual oil production. PMID:28991168

  10. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
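
    A minimal Python/NumPy sketch of the vectorization idea in modern array terms (illustrative scene, not the original CYBER 205 code): many rays are intersected with a sphere in one batch of array operations instead of looping ray by ray.

    ```python
    # Batch ray-sphere intersection: one vectorized pass over all rays at once.
    import numpy as np

    def ray_sphere_hits(origins, directions, center, radius):
        """Return the smallest positive hit distance per ray, or inf if the ray misses."""
        oc = origins - center                                  # (N, 3)
        b = np.einsum("ij,ij->i", directions, oc)              # assumes unit-length directions
        c = np.einsum("ij,ij->i", oc, oc) - radius**2
        disc = b * b - c
        t = np.full(len(origins), np.inf)
        hit = disc >= 0
        root = np.sqrt(disc[hit])
        t_near, t_far = -b[hit] - root, -b[hit] + root
        t[hit] = np.where(t_near > 0, t_near, np.where(t_far > 0, t_far, np.inf))
        return t

    # One ray per pixel of a small image plane, all fired from a single view point.
    h = w = 64
    view = np.array([0.0, 0.0, -5.0])
    px, py = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    dirs = np.column_stack([px.ravel(), py.ravel(), np.ones(h * w)])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    depths = ray_sphere_hits(np.tile(view, (h * w, 1)), dirs, np.zeros(3), 1.0)
    ```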

  11. Practical Implementation of Semi-Automated As-Built Bim Creation for Complex Indoor Environments

    NASA Astrophysics Data System (ADS)

    Yoon, S.; Jung, J.; Heo, J.

    2015-05-01

    In recent years, for the efficient management and operation of existing buildings, the importance of as-built BIM has been emphasized in the AEC/FM domain. However, fully automated as-built BIM creation remains a tough issue, since newly constructed buildings are becoming more complex. To manage this problem, our research group has developed a semi-automated approach focused on productive 3D as-built BIM creation for complex indoor environments. In order to test its feasibility for a variety of complex indoor environments, we applied the developed approach to model the 'Charlotte stairs' in Lotte World Mall, Korea. The approach includes 4 main phases: data acquisition, data pre-processing, geometric drawing, and as-built BIM creation. In the data acquisition phase, because of the complex structure, we moved the scanner location several times to obtain the entire point cloud of the test site. A data pre-processing phase entailing point-cloud registration, noise removal, and coordinate transformation followed. The 3D geometric drawing was created using RANSAC-based plane detection and boundary tracing methods. Finally, in order to create a semantically rich BIM, the geometric drawing was imported into commercial BIM software. The final as-built BIM confirmed the feasibility of the proposed approach in a complex indoor environment.
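
    A minimal Python sketch of the RANSAC-based plane detection step named above, with illustrative thresholds rather than the authors' settings: repeatedly fit a plane to three random points and keep the candidate supported by the most inliers.

    ```python
    # RANSAC plane detection on a point cloud.
    import numpy as np

    def ransac_plane(points, n_iter=500, threshold=0.02, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        best_inliers, best_model = None, None
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:
                continue                                  # degenerate (collinear) sample
            normal /= norm
            d = -normal @ sample[0]
            inliers = np.abs(points @ normal + d) < threshold
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        return best_model, best_inliers

    # Toy scan: a noisy wall plus clutter points.
    rng = np.random.default_rng(9)
    wall = np.column_stack([rng.uniform(0, 5, 3000), rng.uniform(0, 3, 3000),
                            2.0 + 0.005 * rng.standard_normal(3000)])
    clutter = rng.uniform(0, 5, (600, 3))
    model, inliers = ransac_plane(np.vstack([wall, clutter]))
    ```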

  12. An Evaluation of Alternatives for Processing of Administrative Pay Vouchers: A Simulation Approach.

    DTIC Science & Technology

    1982-09-01

    Keywords: Finance, Travel Voucher, Q-GERT, Productivity, Personnel Forecasts, Simulation Model. ...Finance Office (ACF) has devised a Point System for use in determining the productivity of the ACF Travel Section (ACFTT). This Point System sets values...5 to 5+) to be assigned to incoming travel vouchers based on voucher complexity. This research had set objectives of (1) building an ACFTT model that

  13. Transport phenomena in helical edge state interferometers: A Green's function approach

    NASA Astrophysics Data System (ADS)

    Rizzo, Bruno; Arrachea, Liliana; Moskalets, Michael

    2013-10-01

    We analyze the current and the shot noise of an electron interferometer made of the helical edge states of a two-dimensional topological insulator within the framework of nonequilibrium Green's functions formalism. We study, in detail, setups with a single and with two quantum point contacts inducing scattering between the different edge states. We consider processes preserving the spin as well as the effect of spin-flip scattering. In the case of a single quantum point contact, a simple test based on the shot-noise measurement is proposed to quantify the strength of the spin-flip scattering. In the case of two single point contacts with the additional ingredient of gate voltages applied within a finite-size region at the top and bottom edges of the sample, we identify two types of interference processes in the behavior of the currents and the noise. One such process is analogous to that taking place in a Fabry-Pérot interferometer, while the second one corresponds to a configuration similar to a Mach-Zehnder interferometer. In the helical interferometer, these two processes compete.

  14. A review of blepharochalasis and other causes of the lax, wrinkled eyelid.

    PubMed

    Held, J L; Schneiderman, P

    1990-02-01

    Cosmetically unappealing lax, wrinkled eyelid skin may result from various processes including connective tissue diseases, natural aging, and blepharochalasis. Since the end-stage eyelid changes due to several different processes are similar, the presence or absence of prior chronic or recurrent eyelid edema is an important differentiating point. We review blepharochalasis and provide a logical approach to its differential diagnosis.

  15. Effects of Tasks on BOLD Signal Responses to Sentence Contrasts: Review and Commentary

    PubMed Central

    Caplan, David; Gow, David

    2010-01-01

    Functional neuroimaging studies of syntactic processing have been interpreted as identifying the neural locations of parsing and interpretive operations. However, current behavioral studies of sentence processing indicate that many operations occur simultaneously with parsing and interpretation. In this review, we point to issues that arise in discriminating the effects of these concurrent processes from those of the parser/interpreter in neural measures and to approaches that may help resolve them. PMID:20932562

  16. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as the visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
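
    A minimal Python sketch of the N-mixture ingredient (the general model class, not the authors' spatially explicit point-process formulation): replicate counts at a site are Binomial(N_i, p) with N_i ~ Poisson(lambda); the latent abundance is marginalized and (lambda, p) are estimated by maximum likelihood. Simulation settings loosely echo the sea otter example.

    ```python
    # Binomial N-mixture likelihood with Poisson abundance, maximized numerically.
    import numpy as np
    from scipy import optimize, stats

    def nmix_negloglik(params, counts, K=200):
        lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))   # back to natural scale
        Ns = np.arange(K + 1)
        prior = stats.poisson.pmf(Ns, lam)
        ll = 0.0
        for y in counts:                                           # y: replicate counts at a site
            like_N = np.prod(stats.binom.pmf(np.asarray(y)[:, None], Ns[None, :], p), axis=0)
            ll += np.log(np.sum(like_N * prior) + 1e-300)
        return -ll

    rng = np.random.default_rng(10)
    true_lam, true_p, n_sites, n_images = 12.0, 0.76, 80, 3
    N = rng.poisson(true_lam, n_sites)
    counts = [rng.binomial(N[i], true_p, n_images) for i in range(n_sites)]
    fit = optimize.minimize(nmix_negloglik, x0=[np.log(10.0), 0.0], args=(counts,))
    lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
    ```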

  17. Are there reasons to challenge a symbolic computationalist approach in explaining deductive reasoning?

    PubMed

    Faiciuc, Lucia E

    2008-06-01

    Most existing theories explaining deductive reasoning fall within a classic computationalist approach to cognitive processes. In fact, deductive reasoning can be seen as the pinnacle of symbolic computationalism, its last fortress to be defended in the face of new, dynamic, and ecological perspectives on cognition. But are there weak points in that position regarding deductive reasoning? What reasons might lend credibility to new perspectives? What could be their most important tenets? The answers given to those questions in the paper include two main points. The first is that the present empirical data cannot unambiguously sustain one view over the other, that they are obtained under artificial experimental conditions, and that some data are not easily explained by the traditional computationalist paradigm. The second is that approaching deductive reasoning from dynamic and ecological perspectives could have significant advantages. The most obvious is the possibility of integrating research on deductive reasoning more easily with results obtained in other domains of psychology (especially with respect to lower cognitive processes), in artificial intelligence, and in neurophysiology. The reason is that such perspectives, as sketched in the paper, essentially imply processes of second-order pattern formation and recognition (as is the case for perception), embodied cognition, and dynamic processes like those of the brain.

  18. Process is the point: justice and human rights: priority setting and fair deliberative process.

    PubMed

    Gruskin, Sofia; Daniels, Norman

    2008-09-01

    Most people responsible for setting priorities in health have considerable expertise relevant to deciding how to use resources effectively and the kinds of improvements that should be emphasized. Most are also concerned with distributing improvements equitably. Accordingly, they often invoke human rights or principles of distributive justice to legitimize choices that create winners and losers. We propose an approach that draws on the strengths of both perspectives as a way to add legitimacy to efforts to set priorities in health. Our proposal provides a process for setting priorities but is not a formula or an algorithm for generating particular priorities. Rather than doing away with the process through which priorities are set and decisions made, the approach underscores the value of focusing on the process by which these decisions are legitimized.

  19. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    NASA Astrophysics Data System (ADS)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

    The spatial variability evaluation of the water table of an aquifer provides useful information in water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for optimal method performance. This work compares three different criteria to assess the theoretical variogram that fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, variable distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, and their advantages and drawbacks are compared.
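
    The sketch below illustrates, on synthetic data, two of the ingredients discussed above: an experimental variogram computed under different distance metrics and a power-law model fitted to it by least squares. The synthetic head field, bin counts and fitting choices are illustrative assumptions, not the authors' implementation.

```python
# Experimental variogram under Euclidean and Manhattan metrics, with a
# power-law model gamma(h) = c * h**alpha fitted by least squares.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10_000, (250, 2))                    # 250 hypothetical wells (m)
head = 50 + 0.002 * xy[:, 0] + rng.normal(0, 1, 250)     # synthetic hydraulic heads (m)

def experimental_variogram(xy, z, metric, n_bins=15):
    h = pdist(xy, metric=metric)                         # pairwise separations
    g = 0.5 * pdist(z[:, None], metric="sqeuclidean")    # semivariance per pair
    bins = np.linspace(0, h.max() / 2, n_bins + 1)
    idx = np.digitize(h, bins)
    lags = np.array([h[idx == k].mean() for k in range(1, n_bins + 1) if np.any(idx == k)])
    gamma = np.array([g[idx == k].mean() for k in range(1, n_bins + 1) if np.any(idx == k)])
    return lags, gamma

power_model = lambda h, c, a: c * h**a                   # power-law variogram (0 < a < 2)

for metric in ("euclidean", "cityblock"):                # cityblock = Manhattan
    lags, gamma = experimental_variogram(xy, head, metric)
    (c, a), _ = curve_fit(power_model, lags, gamma, p0=[1e-3, 1.0])
    print(f"{metric:10s}: gamma(h) ~ {c:.3g} * h^{a:.2f}")
```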

  20. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem usually involves the ambiguous task of comparing two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem solution with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of the set of points to the target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the contactless remote measurement of the geometry of large-scale objects.
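
    A minimal sketch of the PCA stage of such a fitting pipeline is given below: both the unorganized cloud and the target surface vertices are expressed in their own principal frames, and the squared-distance criterion is then evaluated with a kd-tree. The synthetic parabolic frame, noise levels and sign convention are assumptions for illustration; the SGM refinement step is not shown.

```python
# Coarse PCA alignment of a measured point cloud to a target surface, followed
# by evaluation of the squared-distance (SD) criterion.
import numpy as np
from scipy.spatial import cKDTree

def pca_frame(points):
    """Return centroid and principal axes (rows) with a simple sign convention."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    signs = np.sign(vt[np.arange(3), np.abs(vt).argmax(axis=1)])  # resolve sign ambiguity
    return c, vt * signs[:, None]

rng = np.random.default_rng(3)
# Hypothetical target: vertices of a parabolic frame z = (x^2 + y^2) / 50
gx, gy = np.meshgrid(np.linspace(-15, 15, 40), np.linspace(-8, 8, 40))
target = np.column_stack([gx.ravel(), gy.ravel(), (gx**2 + gy**2).ravel() / 50])

# Measured cloud: same shape, but rotated about z, shifted and noisy
t = np.deg2rad(25)
R = np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
cloud = target @ R.T + np.array([5.0, -2.0, 1.0]) + rng.normal(0, 0.05, target.shape)

c_t, a_t = pca_frame(target)
c_c, a_c = pca_frame(cloud)
aligned = (cloud - c_c) @ a_c.T @ a_t + c_t      # map cloud frame -> target frame

sq_dist = cKDTree(target).query(aligned)[0] ** 2  # SD criterion against the target
print(f"mean squared distance after PCA alignment: {sq_dist.mean():.4f}")
```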

  1. Treatment of electronic waste to recover metal values using thermal plasma coupled with acid leaching - A response surface modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, Swagat S., E-mail: swagat.rath@gmail.com; Nayak, Pradeep; Mukherjee, P.S.

    2012-03-15

    The global crisis of hazardous electronic waste (E-waste) is on the rise due to the increasing usage and disposal of electronic devices. A process was developed to treat E-waste in an environmentally benign manner. The process consisted of thermal plasma treatment followed by recovery of metal values through mineral acid leaching. In the thermal process, the E-waste was melted to recover the metal values as a metallic mixture. The metallic mixture was subjected to acid leaching in the presence of a depolarizer. The leached liquor mainly contained copper, as the other elements such as Al and Fe were mostly in alloy form according to the XRD and phase diagram studies. A response surface model was used to optimize the leaching conditions. More than 90% leaching efficiency at room temperature was observed for Cu, Ni and Co with HCl as the solvent, whereas Fe and Al showed less than 40% efficiency.

  2. Advanced Mobility Handover for Mobile IPv6 Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    In this paper we propose an Advanced Mobility Handover (AMH) scheme for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN using the developed MN-ID field as a globally permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP and is temporarily stored in a dedicated table inside that AP. When the AMH scheme is employed, the network-layer handover is performed prior to its default time; that is, it is triggered by a developed AMH message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than the existing approaches. PMID:25614890

  3. Specifying process requirements for holistic care.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2013-09-01

    Holistic (health and social) care aims at providing comprehensive care to the community, especially to elderly people and people with multiple illnesses. In turn, this requires using health and social care resources more efficiently through enhanced collaboration and coordination among the corresponding organizations and delivering care closer to patient needs and preferences. This paper takes a patient-centered, process view of holistic care delivery and focuses on requirements elicitation for supporting holistic care processes and enabling authorized users to access integrated patient information at the point of care when needed. To this end, an approach to holistic care process-support requirements elicitation is presented which is based on business process modeling and places particular emphasis on empowering collaboration, coordination and information sharing among health and social care organizations by actively involving users and by providing insights for alternative process designs. The approach provides a means for integrating diverse legacy applications in a process-oriented environment using a service-oriented architecture as an appropriate solution for supporting and automating holistic care processes. The approach is applied in the context of emergency medical care aiming at streamlining and providing support technology to cross-organizational health and social care processes to address global patient needs.

  4. SAR correlation technique - An algorithm for processing data with large range walk

    NASA Technical Reports Server (NTRS)

    Jin, M.; Wu, C.

    1983-01-01

    This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an excessively large range migration effect, which cannot be accommodated by the existing frequency-domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency-domain range migration correction approach for the azimuth compression. The secondary range compression can be performed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.

  5. Managing Contextual Complexity in an Experiential Learning Course: A Dynamic Systems Approach through the Identification of Turning Points in Students' Emotional Trajectories

    PubMed Central

    Nogueiras, Gloria; Kunnen, E. Saskia; Iborra, Alejandro

    2017-01-01

    This study adopts a dynamic systems approach to investigate how individuals successfully manage contextual complexity. To that end, we tracked individuals' emotional trajectories during a challenging training course, seeking qualitative changes, i.e. turning points, and we tested their relationship with the perceived complexity of the training. The research context was a 5-day higher education course based on process-oriented experiential learning, and the sample consisted of 17 students. The students used a five-point Likert scale to rate the intensity of 16 emotions and the complexity of the training at 8 measurement points. Monte Carlo permutation tests made it possible to identify 30 turning points in the 272 emotional trajectories analyzed (17 students * 16 emotions each). 83% of the turning points indicated a change of pattern in the emotional trajectories that consisted of: (a) increasingly intense positive emotions or (b) decreasingly intense negative emotions. These turning points also coincided with particularly complex periods in the training as perceived by the participants (p = 0.003 and p = 0.001, respectively). The relationship between positively-trended turning points in the students' emotional trajectories and the complexity of the training may be interpreted as evidence of successful management of the cognitive conflict arising from the clash between the students' prior ways of meaning-making and the challenging demands of the training. One of the strengths of this study is that it provides a relatively simple procedure for identifying turning points in developmental trajectories, which can be applied to various longitudinal experiences that are very common in educational and developmental contexts. Additionally, the findings support the view that the assumption that complex contextual demands lead unfailingly to individuals' learning is incomplete. Instead, it is how individuals manage complexity that may or may not lead to learning. Finally, this study can also be considered a first step in research on the developmental potential of process-oriented experiential learning training. PMID:28515703
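
    A sketch of a Monte Carlo permutation test for a single candidate turning point in one emotional trajectory is shown below. The test statistic (a shift in mean level) and the synthetic eight-point trajectory are illustrative choices, not necessarily those used in the study.

```python
# Permutation test for a level shift after a candidate turning point in a
# short trajectory of Likert-scale (1-5) emotion ratings.
import numpy as np

rng = np.random.default_rng(42)
trajectory = np.array([2, 2, 1, 2, 4, 4, 5, 5], dtype=float)  # synthetic ratings

def turning_point_pvalue(y, split, n_perm=10_000, rng=rng):
    """Two-sided p-value for a shift in mean level after index `split`."""
    observed = y[split:].mean() - y[:split].mean()
    perm_stats = np.empty(n_perm)
    for i in range(n_perm):
        yp = rng.permutation(y)                  # break the temporal order
        perm_stats[i] = yp[split:].mean() - yp[:split].mean()
    return (np.sum(np.abs(perm_stats) >= abs(observed)) + 1) / (n_perm + 1)

for split in range(2, len(trajectory) - 1):      # candidate turning points
    p = turning_point_pvalue(trajectory, split)
    print(f"turning point after t={split}: p = {p:.4f}")
```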

  6. Ephemeral active regions and coronal bright points: A solar maximum Mission 2 guest investigator study

    NASA Technical Reports Server (NTRS)

    Harvey, K. L.; Tang, F. Y. C.; Gaizauskas, V.; Poland, A. I.

    1986-01-01

    A dominant association of coronal bright points (as seen in the He wavelength 10830) was confirmed with the approach and subsequent disappearance of opposite-polarity magnetic network. While coronal bright points do occur with ephemeral regions, this association is a factor of 2 to 4 less than with sites of disappearing magnetic flux. The intensity variations seen in He I wavelength 10830 are intermittent and often rapid, varying over the 3 minute time resolution of the data; their bright point counterparts in the C IV wavelength 1548 and 20 cm wavelength show similar, though not always coincident, time variations. Ejecta are associated with about 1/3 of the dark points and are evident in the C IV and H alpha data. These results support the idea that the anti-correlation of X-ray bright points with the solar cycle can be explained by the correlation of these coronal emission structures with sites of cancelling flux, indicating that, in some cases, the process of magnetic flux removal results in the release of energy. That the intensity variations are rapid and variable suggests that this process works intermittently.

  7. A probabilistic approach for channel initiation

    Treesearch

    Erkan Istanbulluoglu; David G. Tarboton; Robert T. Pack; Charles H. Luce

    2002-01-01

    The channel head represents an important transition point from hillslope to fluvial processes. There is a nonlinear threshold transition across the channel head with sediment transport much larger in channels than on hillslopes. Deterministic specific catchment area, a, thresholds for channel initiation, sometimes dependent on slope, S...

  8. Point-of-care testing.

    PubMed

    O'Brien, J A

    2000-12-01

    Is POCT worth integrating into a facility? Despite its promise of speed and convenience, this technology requires careful evaluation of potential benefits, disadvantages, and challenges to the existing system. If the pros outweigh the cons, a step-by-step approach can ease the process of implementing a POCT program.

  9. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origin often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered a state variable are pointed out.

  10. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey.
Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  11. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers with the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be on developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions.
Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
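
    The following sketch illustrates the nearest-neighbour order-statistic idea on two synthetic clouds: the directed Hausdorff distance appears as the maximum order statistic, while lower percentiles and a simple threshold give a more robust picture of potential changes. All data and thresholds are hypothetical.

```python
# Order statistics of nearest-neighbour distances between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud_a = rng.uniform(0, 100, (20_000, 3))                      # reference epoch
cloud_b = cloud_a + rng.normal(0, 0.05, cloud_a.shape)          # same scene, sensor noise
cloud_b = np.vstack([cloud_b, rng.uniform(40, 60, (500, 3))])   # plus a "new object"

d, _ = cKDTree(cloud_a).query(cloud_b)      # NN distance in A for every point of B
order_stats = np.sort(d)

print("directed Hausdorff (max order statistic):", round(float(order_stats[-1]), 3))
print("95th / 99th percentiles:", np.percentile(d, [95, 99]).round(3))
changed = cloud_b[d > 5 * np.median(d)]     # crude, hypothetical change threshold
print("points flagged as potential change:", len(changed))
```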

  12. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
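
    A simplified sketch of the assignment and update sweep that dominates ISODATA's cost is given below. Note that the paper's speed-up stores the data points themselves in a kd-tree (a filtering approach); for brevity this sketch builds the tree over the cluster centres instead, which already replaces the brute-force all-pairs distance computation.

```python
# One assignment + centre-update + dispersion sweep of an ISODATA-style iteration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
pixels = rng.normal(0, 1, (100_000, 6))          # e.g. 6-band image samples
centres = rng.normal(0, 1, (16, 6))              # current cluster centres

def assign_and_update(pixels, centres):
    _, labels = cKDTree(centres).query(pixels)   # nearest centre per pixel
    new_centres = np.array([pixels[labels == k].mean(axis=0)
                            if np.any(labels == k) else centres[k]
                            for k in range(len(centres))])
    # per-cluster dispersion (mean distance to centre), used by the split/merge rules
    disp = np.array([np.linalg.norm(pixels[labels == k] - new_centres[k], axis=1).mean()
                     if np.any(labels == k) else 0.0
                     for k in range(len(centres))])
    return labels, new_centres, disp

labels, centres, disp = assign_and_update(pixels, centres)
print(disp.round(3))
```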

  13. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  14. Do High Dynamic Range treatments improve the results of Structure from Motion approaches in Geomorphology?

    NASA Astrophysics Data System (ADS)

    Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando

    2015-04-01

    In this work, the hypothesis of improving 3D models obtained with Structure from Motion (SfM) approaches using images pre-processed by High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm) and under different weather conditions for the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced using the different EV steps within Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed images and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced and compared. Results indicated that, for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not significant from a statistical viewpoint. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the 5 studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages can be improved by HDR techniques (e.g. ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM).

  15. When a new technological product launching fails: A multi-method approach of facial recognition and E-WOM sentiment analysis.

    PubMed

    Hernández-Fernández, Dra Asunción; Mora, Elísabet; Vizcaíno Hernández, María Isabel

    2018-04-17

    The dual aim of this research is, firstly, to analyze the physiological and unconscious emotional response of consumers to a new technological product and, secondly, to link this emotional response to consumers' conscious verbal reports of positive and negative product perceptions. In order to do this, biometric and self-reported measures of emotional response are combined. On the one hand, a neuromarketing experiment based on the facial recognition of emotions of 10 subjects, when physical attributes and economic information of a technological product are exposed, shows the prevalence of the ambivalent emotion of surprise. On the other hand, a netnographic qualitative approach based on sentiment analysis of 67 users' online comments characterizes the valence of this emotion as mainly negative in the case and context studied. Theoretical, practical and methodological contributions are anticipated from this paper. From a theoretical point of view, this proposal contributes valuable information to the product design process, to an effective development of the marketing mix variables of price and promotion, and to a successful selection of the target market. From a practical point of view, the approach employed in the case study on the product Google Glass provides empirical evidence useful in the decision-making process for this and other technological enterprises launching a new product. And from a methodological point of view, the usefulness of integrated neuromarketing-eWOM analysis could contribute to the proliferation of this tandem in marketing research. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Network based approaches reveal clustering in protein point patterns

    NASA Astrophysics Data System (ADS)

    Parker, Joshua; Barr, Valarie; Aldridge, Joshua; Samelson, Lawrence E.; Losert, Wolfgang

    2014-03-01

    Recent advances in super-resolution imaging have allowed for the sub-diffraction measurement of the spatial location of proteins on the surfaces of T-cells. The challenge is to connect these complex point patterns to the internal processes and interactions, both protein-protein and protein-membrane. We begin analyzing these patterns by forming a geometric network amongst the proteins and looking at network measures, such as the degree distribution. This allows us to compare experimentally observed patterns to models. Specifically, we find that the experimental patterns differ from heterogeneous Poisson processes, highlighting an internal clustering structure. Further work will be to compare our results to simulated protein-protein interactions to determine clustering mechanisms.
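
    The sketch below illustrates the comparison described above on synthetic data: a geometric network is formed by linking points closer than a radius r, and the degree distribution of a clustered pattern is contrasted with that of a matched homogeneous Poisson pattern. The radius and point patterns are illustrative assumptions.

```python
# Degree statistics of geometric networks built on a clustered pattern and on
# a matched homogeneous Poisson (CSR) pattern.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
r = 0.05                                          # linking radius (unit square)

# "Observed" pattern with built-in clustering (parent-offspring construction)
parents = rng.uniform(0, 1, (30, 2))
observed = np.vstack([p + rng.normal(0, 0.02, (10, 2)) for p in parents])

# Matched Poisson pattern with the same number of points
poisson = rng.uniform(0, 1, (len(observed), 2))

def degrees(points, r):
    tree = cKDTree(points)
    # number of neighbours within r, excluding the point itself
    return np.array([len(tree.query_ball_point(p, r)) - 1 for p in points])

for name, pts in (("observed", observed), ("Poisson", poisson)):
    d = degrees(pts, r)
    print(f"{name:9s}: mean degree = {d.mean():.2f}, variance = {d.var():.2f}")
```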

  17. Mechanical flip-chip for ultra-high electron mobility devices

    DOE PAGES

    Bennaceur, Keyan; Schmidt, Benjamin A.; Gaucher, Samuel; ...

    2015-09-22

    In this study, electrostatic gates are of paramount importance for the physics of devices based on a high-mobility two-dimensional electron gas (2DEG) since they allow depletion of electrons in selected areas. This field-effect gating enables the fabrication of a wide range of devices such as, for example, quantum point contacts (QPC), electron interferometers and quantum dots. To fabricate these gates, processing is usually performed on the 2DEG material, which is in many cases detrimental to its electron mobility. Here we propose an alternative process which does not require any processing of the 2DEG material other than for the ohmic contacts. This approach relies on processing a separate wafer that is then mechanically mounted on the 2DEG material in a flip-chip fashion. This technique proved successful in fabricating quantum point contacts on GaAs/AlGaAs materials with both moderate and ultra-high electron mobility.

  18. [The history of development of evolutionary methods in St. Petersburg school of computer simulation in biology].

    PubMed

    Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F

    2010-01-01

    The history of the rise and development of evolutionary methods in the Saint Petersburg school of biological modelling is traced and analyzed. Some pioneering works in the simulation of ecological and evolutionary processes performed in the St. Petersburg school became exemplary for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school as an adequate instrument for the construction of models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used for solving an inverse problem, i.e., for estimating uncertain life-history parameters of a population. Evolutionary computation is one more aspect of the application of this approach in a great many fields. The problems and vistas of ecological and evolutionary modelling in general are discussed.

  19. Meshless Local Petrov-Galerkin Method for Solving Contact, Impact and Penetration Problems

    DTIC Science & Technology

    2006-11-30

    Crack Growth ... point of view, this approach makes full use of the existing FE models to avoid any model regeneration, which is extremely high in ... process, at point C, the pressure reduces to zero, but the volumetric strain does not go to zero due to the collapsed void volume. 2.2 Damage ... release rate to go beyond the critical strain energy release rate. Thus, the micro-cracks begin to grow inside these areas. At 10 micro-seconds, these

  20. A Hybrid Approach for Characterizing Linear and Nonlinear Electromagnetic Scattering: Theory and Applications

    DTIC Science & Technology

    2012-11-01

    ... axis at a 2-m height above the ground and the observation point is at a 1.7-m height along a radial line at ϕ = 30°. Ground properties: εr' = 4 ... fields of a horizontal electric dipole as a function of range. The dipole is buried in the ground at a 10-cm depth and the observation point is at ... would necessitate the evaluation of a triple integral. To expedite the matrix filling process, different common schemes are available in efficiently ...

  1. Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images

    NASA Astrophysics Data System (ADS)

    Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2001-05-01

    We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, in the unfiltered back-projection, the darkest pixel is assumed to be the center of the vessel. The computer replaces this point with the newly computer-calculated point. A second back-projection is calculated around the original point orthogonal to a vector connecting the newly-calculated first point and the user-determined second point. The darkest pixel within the reconstruction is determined. The computer then replaces the second point with the XYZ coordinates of the darkest pixel within this second reconstruction. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full-width half-max algorithm. On all subsequent vessels, the process works the same way except that at each point, distances between the current point and all previously determined points along different vessels are determined. If the distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.

  2. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.

  3. Marital Processes around Depression: A Gendered and Relational Perspective.

    PubMed

    Thomeer, Mieke Beth; Umberson, Debra; Pudrovska, Tetyanna

    2013-11-01

    Despite extensive evidence of the importance of marriage and marital processes for mental health, little is known about the interpersonal processes around depression within marriage and the extent to which these processes are gendered. We use a mixed methods approach to explore the importance of gender in shaping processes around depression within marriage; we approach this in two ways. First, using quantitative longitudinal analysis of 2,601 couples from the Health and Retirement Study (HRS), we address whether depressive symptoms in one spouse shape the other spouse's depressive symptoms and whether men or women are more influential in this process. We find that a wife's depressive symptoms influence her husband's future depressive symptoms but a husband's depressive symptoms do not influence his wife's future symptoms. Second, we conduct a qualitative analysis of in-depth interviews with 29 couples wherein one or both spouses experienced depression to provide additional insight into how gender impacts depression and reactions to depression within marriage. Our study points to the importance of cultural scripts of masculinity and femininity in shaping depression and emotional processes within marriage and highlights the importance of applying a gendered couple-level approach to better understand the mental health effects of marital processes.

  4. Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.; Alis, C.

    2016-06-01

    In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
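
    A minimal PySpark sketch of this ingestion pattern is shown below: whole files are distributed with binaryFiles and each one is parsed inside a map by a conventional single-CPU reader (here the laspy library, used as an assumed example). The paths and cluster configuration are placeholders, not the authors' setup.

```python
# Distributed ingestion of LAS point clouds: each file is parsed by one task.
import io
import laspy                                   # assumed available on all executors
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pointcloud-ingestion").getOrCreate()
sc = spark.sparkContext

def parse_las(name_and_bytes):
    """Parse one LAS file held in memory and emit (filename, x, y, z) rows."""
    name, raw = name_and_bytes
    las = laspy.read(io.BytesIO(raw))
    for x, y, z in zip(las.x, las.y, las.z):
        yield (name, float(x), float(y), float(z))

# binaryFiles yields (path, file_content) pairs, one per file, so many files
# translate into many parallel parsing tasks across the cluster.
points = sc.binaryFiles("hdfs:///data/pointclouds/*.las").flatMap(parse_las)

print(points.count())                          # total number of ingested points
```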

  5. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

    The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process with mean square displacement which is proportional to time is generally labeled as Fickian dispersion. With a number of counter examples we show why this definition is fraught with difficulty. In a related discussion, we show an infinite second moment does not necessarily imply the process is super dispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function for a dispersive process, the distribution for the first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.
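
    In the Lagrangian convention referred to above (stated here in standard textbook form rather than the authors' more rigorous definition), dispersion is conventionally called Fickian when the mean square displacement grows linearly in time and anomalous otherwise:

```latex
\begin{equation*}
  \bigl\langle \lvert X(t) - X(0) \rvert^{2} \bigr\rangle \;\propto\; t^{\alpha},
  \qquad
  \alpha = 1 \ \text{(Fickian)}, \qquad
  \alpha \neq 1 \ \text{(anomalous: sub- or super-dispersive)}.
\end{equation*}
```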

  6. Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN): evolution of a content management system for point-of-care clinical decision support.

    PubMed

    Barwise, Amelia; Garcia-Arguello, Lisbeth; Dong, Yue; Hulyalkar, Manasi; Vukoja, Marija; Schultz, Marcus J; Adhikari, Neill K J; Bonneton, Benjamin; Kilickaya, Oguz; Kashyap, Rahul; Gajic, Ognjen; Schmickl, Christopher N

    2016-10-03

    The Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN) is an international collaborative project with the overall objective of standardizing the approach to the evaluation and treatment of critically ill patients world-wide, in accordance with best-practice principles. One of CERTAIN's key features is clinical decision support providing point-of-care information about common acute illness syndromes, procedures, and medications in an index card format. This paper describes 1) the process of developing and validating the content for point-of-care decision support, and 2) the content management system that facilitates frequent peer-review and allows rapid updates of content across different platforms (CERTAIN software, mobile apps, pdf-booklet) and different languages. Content was created based on survey results of acute care providers and validated using an open peer-review process. Over a 3-year period, CERTAIN content expanded to include 67 syndrome cards, 30 procedure cards, and 117 medication cards. 127 (59 %) cards have been peer-reviewed so far. Initially MS Word® and Dropbox® were used to create, store, and share content for peer-review. Recently Google Docs® was used to make the peer-review process more efficient. However, neither of these approaches met our security requirements or had the capacity to instantly update the different CERTAIN platforms. Although we were able to successfully develop and validate a large inventory of clinical decision support cards in a short period of time, commercially available software solutions for content management are suboptimal. Novel custom solutions are necessary for efficient global point-of-care content system management.

  7. Positive contraction mappings for classical and quantum Schrödinger systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Tryphon T.; Pavon, Michele

    2015-03-01

    The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
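
    For the classical Markov-chain case, the fixed-point iteration alluded to above can be sketched in a few lines: the two scaling vectors of the Schrödinger system are updated alternately until the prescribed marginals are matched. The prior kernel and marginals below are arbitrary examples, and no claim is made that this reproduces the paper's quantum construction.

```python
# Fixed-point (Fortet/Sinkhorn-like) iteration for a one-step Markov-chain bridge.
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # prior transition kernel
nu0 = np.full(n, 1 / n)                                     # prescribed initial marginal
nu1 = rng.random(n); nu1 /= nu1.sum()                       # prescribed final marginal

phi = np.ones(n)
for _ in range(500):                                        # contractive map iteration
    phi_hat = nu0 / (P @ phi)                               # enforce the initial marginal
    phi = nu1 / (P.T @ phi_hat)                             # enforce the final marginal

bridge = (phi_hat[:, None] * P) * phi[None, :]              # new joint law on endpoint pairs
print("marginal errors:",
      np.abs(bridge.sum(axis=1) - nu0).max(),
      np.abs(bridge.sum(axis=0) - nu1).max())
```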

  8. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a highly challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as the lack of ground control information and of informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper presents a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme was adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. Thanks to the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  9. Revisiting the diffusion approximation to estimate evolutionary rates of gene family diversification.

    PubMed

    Gjini, Erida; Haydon, Daniel T; David Barry, J; Cobbold, Christina A

    2014-01-21

    Genetic diversity in multigene families is shaped by multiple processes, including gene conversion and point mutation. Because multi-gene families are involved in crucial traits of organisms, quantifying the rates of their genetic diversification is important. With increasing availability of genomic data, there is a growing need for quantitative approaches that integrate the molecular evolution of gene families with their higher-scale function. In this study, we integrate a stochastic simulation framework with population genetics theory, namely the diffusion approximation, to investigate the dynamics of genetic diversification in a gene family. Duplicated genes can diverge and encode new functions as a result of point mutation, and become more similar through gene conversion. To model the evolution of pairwise identity in a multigene family, we first consider all conversion and mutation events in a discrete manner, keeping track of their details and times of occurrence; second we consider only the infinitesimal effect of these processes on pairwise identity accounting for random sampling of genes and positions. The purely stochastic approach is closer to biological reality and is based on many explicit parameters, such as conversion tract length and family size, but is more challenging analytically. The population genetics approach is an approximation accounting implicitly for point mutation and gene conversion, only in terms of per-site average probabilities. Comparison of these two approaches across a range of parameter combinations reveals that they are not entirely equivalent, but that for certain relevant regimes they do match. As an application of this modelling framework, we consider the distribution of nucleotide identity among VSG genes of African trypanosomes, representing the most prominent example of a multi-gene family mediating parasite antigenic variation and within-host immune evasion. © 2013 Published by Elsevier Ltd. All rights reserved.
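
    A toy stochastic sketch of the competing effects discussed above is given below: point mutation erodes pairwise identity between two duplicated genes while gene conversion restores it over copied tracts. Rates, tract length and sequence length are arbitrary illustrative values, not parameters of the authors' model.

```python
# Toy simulation of pairwise identity between two duplicated genes under
# point mutation (divergence) and gene conversion (homogenization).
import numpy as np

rng = np.random.default_rng(1)
L, generations = 1_000, 5_000
mu, conv_rate, tract = 1e-3, 5e-4, 50     # per-gene event probabilities, tract length

genes = np.zeros((2, L), dtype=np.int8)   # start as identical duplicates
identity = np.empty(generations)

for t in range(generations):
    for g in range(2):
        if rng.random() < mu:             # point mutation: flip one random site
            pos = rng.integers(L)
            genes[g, pos] = 1 - genes[g, pos]
    if rng.random() < conv_rate:          # gene conversion: copy a tract donor -> recipient
        donor = rng.integers(2)
        start = rng.integers(L - tract)
        genes[1 - donor, start:start + tract] = genes[donor, start:start + tract]
    identity[t] = np.mean(genes[0] == genes[1])

print(f"final pairwise identity after {generations} generations: {identity[-1]:.3f}")
```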

  10. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  11. Philosophizing about Teacher Dissatisfaction: A Multidisciplinary Hermeneutic Approach

    ERIC Educational Resources Information Center

    Santoro, Doris A.

    2015-01-01

    In this methodological reflection, I describe the multidisciplinary hermeneutic process of philosophizing about teacher dissatisfaction. I discuss how philosophy serves as a starting point for interpretive work based on interviews with former teachers and readings of qualitative and quantitative research on teacher attrition and dissatisfaction.…

  12. Non-uniform sampling: post-Fourier era of NMR data collection and processing.

    PubMed

    Kazimierczuk, Krzysztof; Orekhov, Vladislav

    2015-11-01

    The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
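
    The point spread function mentioned above can be illustrated in a few lines: the Fourier transform of the sampling mask shows how a given sparse schedule spreads artefacts from every true peak. Grid size and sampling fraction below are arbitrary illustrative values.

```python
# Point spread function (PSF) of a random non-uniform sampling schedule.
import numpy as np

rng = np.random.default_rng(0)
n, fraction = 256, 0.25                       # indirect-dimension grid, 25% sampling

mask = np.zeros(n)
picked = rng.choice(n, size=int(fraction * n), replace=False)  # random NUS schedule
mask[picked] = 1.0

psf = np.fft.fftshift(np.fft.fft(mask))       # PSF of the schedule
psf /= np.abs(psf).max()                      # normalise to the main lobe

sidelobe = np.sort(np.abs(psf))[-2]           # largest artefact relative to main peak
print(f"largest PSF sidelobe: {sidelobe:.3f} of the main lobe")
```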

  13. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931

  14. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for the orientation; they are evaluated with respect to the correctness of tie-point detection, computation time, and difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms that allow the photogrammetric and CV approaches to be combined are also presented.

  15. Single Point vs. Mapping Approach for Spectral Cytopathology (SCP)

    PubMed Central

    Schubert, Jennifer M.; Mazur, Antonella I.; Bird, Benjamin; Miljković, Miloš; Diem, Max

    2011-01-01

    In this paper we describe the advantages of collecting infrared microspectral data in imaging mode as opposed to point mode. Imaging data are processed using the PapMap algorithm, which co-adds pixel spectra that have been scrutinized for R-Mie scattering effects as well as other constraints. The signal-to-noise quality of PapMap spectra will be compared to that of point spectra for oral mucosa cells deposited onto low-e slides. The effects of software-based atmospheric correction will also be discussed. Combined with the PapMap algorithm, data collection in imaging mode proves to be a superior method for spectral cytopathology. PMID:20449833

  16. Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.

    2012-07-01

    The paper presents a comprehensive method for automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points. They provide information about building location and thus greatly reduce task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation, and contour refinement. The algorithm is tested on a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison to reference cadastral data.

  17. MeetingVis: Visual Narratives to Assist in Recalling Meeting Context and Content.

    PubMed

    Shi, Yang; Bryan, Chris; Bhamidipati, Sridatt; Zhao, Ying; Zhang, Yaoxue; Ma, Kwan-Liu

    2018-06-01

    In team-based workplaces, reviewing and reflecting on the content from a previously held meeting can lead to better planning and preparation. However, ineffective meeting summaries can impair this process, especially when participants have difficulty remembering what was said and what its context was. To assist with this process, we introduce MeetingVis, a visual narrative-based approach to meeting summarization. MeetingVis is composed of two primary components: (1) a data pipeline that processes the spoken audio from a group discussion, and (2) a visual-based interface that efficiently displays the summarized content. To design MeetingVis, we create a taxonomy of relevant meeting data points, identifying salient elements to promote recall and reflection. These are mapped to an augmented storyline visualization, which combines the display of participant activities, topic evolutions, and task assignments. For evaluation, we conduct a qualitative user study with five groups. Feedback from the study indicates that MeetingVis effectively triggers the recall of subtle details from prior meetings: all study participants were able to remember new details, points, and tasks compared to an unaided, memory-only baseline. This visual-based approach can also potentially enhance the productivity of both individuals and the whole team.

  18. Application of System Operational Effectiveness Methodology to Space Launch Vehicle Development and Operations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Kelley, Gary W.

    2012-01-01

    The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.

  19. Achieving performance breakthroughs in an HMO business process through quality planning.

    PubMed

    Hanan, K B

    1993-01-01

    Kaiser Permanente's Georgia Region commissioned a quality planning team to design a new process to improve payments to its suppliers and vendors. The result of the team's effort was a 73 percent reduction in cycle time. This team's experiences point to the advantages of process redesign as a quality planning model, as well as some general guidelines for its most effective use in teams. If quality planning project teams are carefully configured, sufficiently expert in the existing process, and properly supported by management, organizations can achieve potentially dramatic improvements in process performance using this approach.

  20. Improving calibration and validation of cosmic-ray neutron sensors in the light of spatial sensitivity

    NASA Astrophysics Data System (ADS)

    Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen

    2017-10-01

    In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal may still contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, it is suggested that the data points of calibration and validation datasets be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the conventional simple exponential in its pronounced sensitivity to the first few meters around the probe and in its dependence on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
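
    A sensitivity-weighted average of point measurements is the core operation described; the sketch below is a schematic illustration in which the weighting function `w_demo` is a placeholder assumption, not the published revised sensitivity function (which also depends on air pressure, humidity, soil moisture, and vegetation).

    ```python
    import numpy as np

    def weighted_footprint_average(theta, r, w):
        """Average point soil-moisture samples theta (at radii r from the probe)
        using a horizontal sensitivity function w(r). Sketch only."""
        weights = w(np.asarray(r, dtype=float))
        return np.sum(weights * np.asarray(theta, dtype=float)) / np.sum(weights)

    # Placeholder weight, strongly biased towards the first few metres around the
    # probe (an assumption for illustration, not the published function).
    w_demo = lambda r: np.exp(-r / 50.0) + 5.0 * np.exp(-r / 5.0)
    theta = np.array([0.22, 0.25, 0.30, 0.28])   # volumetric soil moisture samples
    r = np.array([2.0, 10.0, 75.0, 150.0])       # sample distances in metres
    print(weighted_footprint_average(theta, r, w_demo))
    ```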

  1. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  2. Instantaneous Transfer Entropy for the Study of Cardiovascular and Cardiorespiratory Nonstationary Dynamics.

    PubMed

    Valenza, Gaetano; Faes, Luca; Citi, Luca; Orini, Michele; Barbieri, Riccardo

    2018-05-01

    Measures of transfer entropy (TE) quantify the direction and strength of coupling between two complex systems. Standard approaches assume stationarity of the observations, and therefore are unable to track time-varying changes in nonlinear information transfer with high temporal resolution. In this study, we aim to define and validate novel instantaneous measures of TE to provide an improved assessment of complex nonstationary cardiorespiratory interactions. We here propose a novel instantaneous point-process TE (ipTE) and validate its assessment as applied to cardiovascular and cardiorespiratory dynamics. In particular, heartbeat and respiratory dynamics are characterized through discrete time series, and modeled with probability density functions predicting the time of the next physiological event as a function of the past history. Likewise, nonstationary interactions between heartbeat and blood pressure dynamics are characterized as well. Furthermore, we propose a new measure of information transfer, the instantaneous point-process information transfer (ipInfTr), which is directly derived from point-process-based definitions of the Kolmogorov-Smirnov distance. Analysis of synthetic data, as well as of experimental data gathered from healthy subjects undergoing postural changes, confirms that both the ipTE and ipInfTr measures are able to dynamically track changes in physiological system coupling. This novel approach opens new avenues in the study of hidden, transient, nonstationary physiological states involving multivariate autonomic dynamics in cardiovascular health and disease. The proposed method can also be tailored for the study of complex multisystem physiology (e.g., brain-heart or, more generally, brain-body interactions).
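
    For orientation, the stationary transfer entropy that the instantaneous variants generalize can be written as below; the notation (embedding lengths k and l, discrete time index n) is generic and not taken from the paper.

    ```latex
    \[
      \mathrm{TE}_{X \to Y}
      = \sum p\!\left(y_{n+1}, \mathbf{y}_n^{(k)}, \mathbf{x}_n^{(l)}\right)
        \log \frac{p\!\left(y_{n+1} \mid \mathbf{y}_n^{(k)}, \mathbf{x}_n^{(l)}\right)}
                  {p\!\left(y_{n+1} \mid \mathbf{y}_n^{(k)}\right)}
    \]
    ```

    The instantaneous, point-process-based measures replace these stationary densities with time-varying conditional distributions of the next event time, so the quantity can be tracked continuously.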

  3. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    NASA Astrophysics Data System (ADS)

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 1970s. Many approaches have been presented, dealing with various sensor resolutions, the nature of the scene, or the desired accuracy of the extracted objects. This topic remains challenging today as the need for accurate and up-to-date data is becoming more and more important. In this context, we study the road network in this paper from a particular point of view, focusing on road marks, and in particular on dashed lines. Indeed, they are very useful clues, both as evidence of a road and for higher-level tasks. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed-line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, and in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term measuring the consistency of the image with respect to the objects and a regularising term managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented, showing that our approach is relevant and accurate, as it can handle the most frequent layouts of dashed lines. Some issues, such as the relative weighting of the two terms of the energy, are also discussed in the conclusion.
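
    The energy-minimisation idea can be sketched as follows; this is a simplified illustration (a fixed number of objects and single-object perturbations rather than the full birth/death moves of a marked point process sampler), and the function names and cooling schedule are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def energy(config, data_term, prior_term):
        """Total energy of a configuration of dash objects: a per-object data
        attachment term plus a pairwise regularising (prior) term."""
        e = sum(data_term(obj) for obj in config)
        e += sum(prior_term(a, b) for i, a in enumerate(config) for b in config[i + 1:])
        return e

    def anneal(config, data_term, prior_term, propose, t0=1.0, cooling=0.999, steps=10000):
        """Simulated annealing over configurations; `propose` returns a perturbed
        copy of the configuration (e.g. one dash moved or rotated)."""
        e = energy(config, data_term, prior_term)
        t = t0
        for _ in range(steps):
            cand = propose(config)
            e_cand = energy(cand, data_term, prior_term)
            # Metropolis acceptance rule for minimisation at temperature t.
            if e_cand < e or rng.random() < np.exp((e - e_cand) / t):
                config, e = cand, e_cand
            t *= cooling
        return config, e
    ```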

  4. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    PubMed Central

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
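
    The decoding step follows the generic Bayes-filter form shown below, where the likelihood is built from the marked point process intensity rather than from sorted units; the symbols (position x_k, spike counts ΔN_k, waveform marks m_k) are illustrative notation, not the paper's.

    ```latex
    \[
      p\!\left(x_k \mid \Delta N_{1:k}, m_{1:k}\right)
      \;\propto\;
      p\!\left(\Delta N_k, m_k \mid x_k\right)
      \int p\!\left(x_k \mid x_{k-1}\right)\,
           p\!\left(x_{k-1} \mid \Delta N_{1:k-1}, m_{1:k-1}\right)\, dx_{k-1}
    \]
    ```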

  5. The interprocess NIR sampling as an alternative approach to multivariate statistical process control for identifying sources of product-quality variability.

    PubMed

    Marković, Snežana; Kerč, Janez; Horvat, Matej

    2017-03-01

    We present a new approach for identifying sources of variability within a manufacturing process by NIR measurements of samples of intermediate material after each consecutive unit operation (the interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches are shown to lead to the same conclusion, identifying parameters of extrusion/spheronization and characteristics of lactose that have the greatest influence on the end-product's enteric coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially in cases where historical process data are not readily available. In the presented case, changes in lactose characteristics influence the performance of the extrusion/spheronization process step. The pellet cores produced using one lactose source (considered less suitable) were on average larger and more fragile, leading to breakage of the cores during subsequent fluid bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet coating process, which leads to compromised film coating.

  6. Fluctuations in Tat copy number when it counts the most: a possible mechanism to battle the HIV latency

    PubMed Central

    2013-01-01

    The HIV-1 virus can enter a dormant state and become inactive, which reduces accessibility by antiviral drugs. We approach this latency problem from an unconventional point of view, with the focus on understanding how intrinsic chemical noise (copy number fluctuations of the Tat protein) can be used to assist the activation process of the latent virus. Several phase diagrams have been constructed in order to visualize in which regions of the parameter space noise can drive the activation process. Essential to the study is the use of a hyperbolic coordinate system, which greatly facilitates quantification of how the various reaction rate combinations shape the noise behavior of the Tat protein feedback system. We have designed a mathematical manual of how to approach the problem of activation quantitatively, and introduce the notion of an “operating point” of the virus. For both noise-free and noise-based strategies we show how operating point off-sets induce changes in the number of Tat molecules. The major result of the analysis is that for every noise-free strategy there is a noise-based strategy that requires lower dosage, but achieves the same anti-latency effect. It appears that the noise-based activation is advantageous for every operating point. PMID:23497153

  7. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential through a high door detection rate and its applicability to robust and efficient envelope reconstruction.

  8. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in the absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
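
    As a concrete reference point, a least-squares change-point (stump) estimator of the kind analyzed here minimizes a criterion of the following form; the notation is generic and the specific regression setup of the paper may differ.

    ```latex
    \[
      \hat d_n \;=\; \arg\min_{d}\;
      \sum_{i=1}^{n} \left( Y_i - \alpha\,\mathbf{1}\{X_i \le d\} - \beta\,\mathbf{1}\{X_i > d\} \right)^2 ,
    \]
    ```

    with the levels α and β profiled out by the sample means on either side of the candidate change point d.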

  9. a Conceptual Framework for Indoor Mapping by Using Grammars

    NASA Astrophysics Data System (ADS)

    Hu, X.; Fan, H.; Zipf, A.; Shang, J.; Gu, F.

    2017-09-01

    Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely heavily on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework to represent the layout principle of research buildings by using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of sensor data required by traditional reconstruction approaches. In addition, we present further details of the core modules of the framework. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research on the development of an approach for reconstructing semantic maps.

  10. Modelling the crystallization of the globular proteins

    NASA Astrophysics Data System (ADS)

    Shiryayev, Andrey S.

    Crystallization of globular proteins has become a very important subject in recent years. However, there is still no clear understanding of the particular conditions that lead to crystallization. Since nucleation of a crystalline droplet is the critical step toward the formation of the solid phase from the supersaturated solution, this is the focus of current studies. In this work we use different approaches to investigate the collective behavior of a system of globular proteins. In particular, we focus on models which have a metastable critical point, because this reflects the properties of solutions of globular proteins. The first approach is a continuum model of globular proteins. This model was first presented by Talanquer and Oxtoby and is based on the van der Waals theory. The model can have either a stable or a metastable critical point. For the system with the metastable critical point we studied the behavior of the free energy barrier to nucleation; we found that along particular pathways the barrier to nucleation has a minimum around the critical point. In addition, the number of molecules in the critical cluster was found to diverge as one approaches the critical point, though most of the molecules are in the fluid tail of the droplet. Our results are an extension of earlier work [17, 7]. The properties of the solvent affect the behavior of the solution. In our second approach, we proposed a model that takes into account the contribution of the solvent free energy to the free energy of the globular proteins. We show that one can map the phase diagram of a repulsive hard core plus attractive square well interacting system to that of the same system of particles in the solvent environment. In particular we show that this leads to phase diagrams with upper critical points, lower critical points and even closed loops with both upper and lower critical points, similar to the one found before [10]. For systems with interactions different from the square well, this mapping procedure can serve, in the presence of the solvent, as a first approximation for understanding the phase diagram. The final part of this work is dedicated to the behavior of sickle hemoglobin. While the fluid behavior of the HbS molecules can be approximately explained by a uniform interparticle potential, this model fails to describe the polymerization process and the particular structure of the fibers. We develop an anisotropic "patchy" model to describe some features of the HbS polymerization process. To determine the degree of polymerization of the system, a "patchy" order parameter was defined. Monte Carlo simulations of the simple two-patch model were performed and reveal the possibility of obtaining chains that can be considered one-dimensional crystals.

  11. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  12. Dynamical Interplay between Awareness and Epidemic Spreading in Multiplex Networks

    NASA Astrophysics Data System (ADS)

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2013-09-01

    We present the analysis of the interrelation between two processes accounting for the spreading of an epidemic, and the information awareness to prevent its infection, on top of multiplex networks. This scenario is representative of an epidemic process spreading on a network of persistent real contacts, and a cyclic information awareness process diffusing in the network of virtual social contacts between the same individuals. The topology corresponds to a multiplex network where the two diffusive processes interact and affect each other. The analysis using a microscopic Markov chain approach reveals the phase diagram of the incidence of the epidemics and allows us to capture the evolution of the epidemic threshold depending on the topological structure of the multiplex and the interrelation with the awareness process. Interestingly, the critical point for the onset of the epidemics has a critical value (metacritical point) defined by the awareness dynamics and the topology of the virtual network, from which the onset increases and the epidemics incidence decreases.

  13. Dynamical interplay between awareness and epidemic spreading in multiplex networks.

    PubMed

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2013-09-20

    We present the analysis of the interrelation between two processes accounting for the spreading of an epidemic, and the information awareness to prevent its infection, on top of multiplex networks. This scenario is representative of an epidemic process spreading on a network of persistent real contacts, and a cyclic information awareness process diffusing in the network of virtual social contacts between the same individuals. The topology corresponds to a multiplex network where the two diffusive processes interact and affect each other. The analysis using a microscopic Markov chain approach reveals the phase diagram of the incidence of the epidemics and allows us to capture the evolution of the epidemic threshold depending on the topological structure of the multiplex and the interrelation with the awareness process. Interestingly, the critical point for the onset of the epidemics has a critical value (metacritical point) defined by the awareness dynamics and the topology of the virtual network, from which the onset increases and the epidemics incidence decreases.
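
    As a building block, the microscopic Markov chain approach on a single (contact) layer tracks, for each node i, the probability q_i(t) of not being infected by any neighbour in one time step; the simplified update below ignores the awareness layer and same-step reinfection, so it only illustrates the machinery rather than reproducing the paper's coupled two-layer equations.

    ```latex
    \[
      q_i(t) = \prod_{j} \bigl( 1 - \beta\, A_{ij}\, p_j(t) \bigr),
      \qquad
      p_i(t+1) = \bigl(1 - p_i(t)\bigr)\bigl(1 - q_i(t)\bigr) + (1 - \mu)\, p_i(t),
    \]
    ```

    where A is the adjacency matrix of the contact layer, β the per-contact infection probability, and μ the recovery probability; the full model couples these equations to an analogous awareness dynamics on the virtual layer.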

  14. Anomaly diffuse and dielectric relaxation in strontium doped lanthanum molybdate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiao; Fan, Huiqing, E-mail: hqfan3@163.com; Shi, Jing

    2011-12-15

    Highlights: • The anomalous diffuse and dielectric relaxation behaviors are fitted by the Cole-Cole approach. • The peak in the LSMO corresponds to a different oxygen ion diffusion process. • We give, for the first time, a better explanation of the unusual conductivity change caused by doping. • The oxygen ion diffusion is due to a combination of the dipolar relaxation and the motion of ions. -- Abstract: The dielectric properties of the La2-xSrxMo2O9-δ (x = 0-0.2) ceramics were investigated in the temperature range of 300-800 K. Dielectric measurements reveal that two dielectric anomalies, associated with the oxygen ion diffusion, exist in the frequency spectrum with x = 0.5. The broad dielectric peaks in tan δ(ω) can be well fitted by a modified Cole-Cole approach. When x = 0.1, only one dielectric relaxation peak is observed, corresponding to a different oxygen ion diffusion process, as distinct from the single relaxation peak in pure La2Mo2O9. The relaxation parameters τ0, the dielectric relaxation strength Δ, and the activation energy Ea were obtained. The result of this work shows that the conductivity change caused by doping between the two phases is due to the combination of dipolar effects and the motion of ions.
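
    For reference, the standard Cole-Cole form that such fits build on is shown below; the paper uses a modified version fitted to tan δ(ω), so this expression is indicative only.

    ```latex
    \[
      \varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty}
      + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\omega\tau_{0})^{1-\alpha}},
      \qquad 0 \le \alpha < 1,
    \]
    ```

    where τ0 is the characteristic relaxation time and α broadens the relaxation peak relative to the Debye case (α = 0).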

  15. Health impact assessment: A comparison of 45 local, national, and international guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebert, Katherine A., E-mail: jsx3@cdc.gov; Wendel, Arthur M., E-mail: dvq6@cdc.gov; Kennedy, Sarah K., E-mail: heaton.sarah@gmail.com

    2012-04-15

    This article provides a comparison of health impact assessment (HIA) guidelines from around the world and for multiple geographic scales. We identify commonalities and differences within HIA guides to discuss the plausibility of consensus guidelines and to inform guideline development. The practice of HIA has grown over the last two decades with a concurrent growth of HIA guides. This study expands on earlier review work and includes guides published since 2007 (Mindell, Boltong and Forde, 2008). From April 2010 to October 2011, 45 HIA guides were identified through an internet search and review of previous research. Common characteristics, key features, and the HIA process were analyzed. The 45 documents recommended similar but not identical processes for conducting HIAs. These analyses suggest that guidelines for HIAs are similar in many areas of the world and that new HIA practitioners can use these findings to inform their approach. Further discussion is needed to determine if the approaches established in these guidelines are followed and if one set of common guidelines could be written for use in numerous countries and regions. - Highlights: • We analyze 45 health impact assessment (HIA) guidelines worldwide. • We examine similarities and unique attributes of each guideline. • We discuss the advantages and disadvantages of developing consensus guidelines. • Identifying additional guidelines aids in future HIA work and evaluation.

  16. NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION

    EPA Science Inventory

    Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...

  17. Introduction to a New Approach to Experiential Learning.

    ERIC Educational Resources Information Center

    Jackson, Lewis; MacIsaac, Doug

    1994-01-01

    A process model for experiential learning (EL) in adult education begins with the characteristics and needs of adult learners and conceptual foundations of EL. It includes methods and techniques for in-class and field-based experiences, building a folio (point-in-time performance assessment), and portfolio construction (assessing transitional…

  18. Think Quality! The Deming Approach Does Work in Libraries.

    ERIC Educational Resources Information Center

    Mackey, Terry; Mackey, Kitty

    1992-01-01

    Presents W. Edwards Deming's Total Quality Management method and advocates its adoption in libraries. The 14 points that form the basis of Deming's philosophy are discussed in the context of the library setting. A flow chart of the reference process and user survey questions are included. (MES)

  19. Making Sense of Energy

    ERIC Educational Resources Information Center

    Boohan, Richard

    2014-01-01

    This article describes an approach to teaching about the energy concept that aims to be accessible to students starting in early secondary school, while being scientifically rigorous and forming the foundation for later work. It discusses how exploring thermal processes is a good starting point for a more general consideration of the ways that…

  20. A Mixed Methods Approach for Identifying Influence on Public Policy

    ERIC Educational Resources Information Center

    Weaver-Hightower, Marcus B.

    2014-01-01

    Fields from political science to critical education policy studies have long explored power relations in policy processes, showing who influences policy agendas, policy creation, and policy implementation. Yet showing particular actors' influence on specific points in a policy text remains a methodological challenge. This article presents a…

  1. Arrows: A Special Case of Graphic Communication.

    ERIC Educational Resources Information Center

    Hardin, Pris

    The purpose of this paper is to examine arrow design in relation to the type of pointing, connecting, or processing involved. Three possible approaches to the investigation of arrows as graphic communication include research: by arrow function, relating message structure to arrow design, and linking user expectations to arrow design. The following…

  2. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  3. A Narrative Metaphor to Facilitate Educational Game Authoring

    ERIC Educational Resources Information Center

    Marchiori, Eugenio J.; Torrente, Javier; del Blanco, Angel; Moreno-Ger, Pablo; Sancho, Pilar; Fernandez-Manjon, Baltasar

    2012-01-01

    In this paper we present WEEV (Writing Environment for Educational Video games), a methodology for educational "point-and-click" adventure game authoring. Our approach aims to allow educators to actively collaborate in the educational game development process, using a narrative-based representation. WEEV is based on a pragmatic reinterpretation of…

  4. The Internet and Technical Services: A Point Break Approach.

    ERIC Educational Resources Information Center

    McCombs, Gillian M.

    1994-01-01

    Discusses implications of using the Internet for library technical services. Topics addressed include creative uses of the Internet; three basic applications on the Internet, i.e., electronic mail, remote log-in to another computer, and file transfer; electronic processing of information; electronic access to information; and electronic processing…

  5. Shhhh! Don't Tell: Advertising Design Impacts Sales.

    ERIC Educational Resources Information Center

    Schaub, Laura; Kelsey, Roy

    2000-01-01

    Discusses the creation of an advertisement to catch the attention of the target audience: student readers. Notes the consideration of several important factors including: the product, the audience, the positioning, the principles, and the ingredients. Describes ways to get started and several points in approaching the design process. (SC)

  6. Nurturing and Testing Translation Competence for Text-Translating

    ERIC Educational Resources Information Center

    Aubakirova, Karlygash Adilkhanovna

    2016-01-01

    The article analyzes the problems of contemporary professional education. As its instance, we examine the developmental scheme for training professional translators. Optimal ways of organizing the learning process are suggested from the point of view of the competence approach, which is widely recognized for training a modern specialist. The…

  7. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. This algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair, a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points. Using the same rotation and translation values, the original scan pair is aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated. Otherwise, the above process is continued with cubes of smaller and smaller sizes until this condition is met. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
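
    The construction of the surrogate scan (cube centroids) is the step that drives the coarse-to-fine scheme; the sketch below shows one way to compute it with NumPy and is an illustration only, not the authors' implementation (the curvature comparison and rigid-alignment steps are omitted).

    ```python
    import numpy as np

    def surrogate_scan(points, cube_size):
        """Replace an (N, 3) scan by the centroids of the occupied cubes of a given
        size (the 'surrogate scan' of the multiscale approach; sketch only)."""
        idx = np.floor(points / cube_size).astype(np.int64)       # cube index per point
        _, inverse = np.unique(idx, axis=0, return_inverse=True)  # group points by cube
        n_cubes = inverse.max() + 1
        sums = np.zeros((n_cubes, 3))
        counts = np.zeros(n_cubes)
        np.add.at(sums, inverse, points)                          # accumulate per cube
        np.add.at(counts, inverse, 1.0)
        return sums / counts[:, None]                             # centroid per cube

    # Coarse-to-fine loop (schematic): halve the cube size until the registration
    # error computed after aligning the surrogate scans is acceptably small.
    ```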

  8. Geological nominations at UNESCO World Heritage, an upstream struggle

    NASA Astrophysics Data System (ADS)

    Olive-Garcia, Cécile; van Wyk de Vries, Benjamin

    2017-04-01

    Using my 10 years' experience in setting up and defending a UNESCO World Heritage geological nomination, this presentation aims to give a personal insight into this international process and the differential use of science, subjective perception (aesthetic and 'naturality'), and politics. At this point in the process, new protocols have been tested in order to improve the dialogue, accountability and transparency between the different stakeholders: the State Parties, the IUCN, the scientific community, and UNESCO itself. Our proposal is the Chaîne des Puys-Limagne fault ensemble, which combines tectonics, geomorphological evolution and volcanology. The project's essence is a conjunction of inseparable geological features and processes, set in the context of plate tectonics. It is this very unicity of diverse forms and processes that creates the value of the site. However, it is also just this that has caused a problem, as the advisory body has a categorical approach to nominations that separates items and assesses them in an unconnected manner. From the start we proposed a combined approach, where a property is seen in its entirety and the constituent elements are seen as interlinked, reflecting the joint underlying phenomena. At this point, our project has received the first ever open review by an independent technical mission (jointly set up by IUCN, UNESCO and the State party). The subsequent report was broadly supportive of the project's approach and of the value of the ensemble of features. In 2016 the UNESCO committee re-referred the nomination, acknowledging the potential Outstanding Universal Value of the site and requesting the parties to continue the upstream process (e.g. collaborative work), notably on the recommendations and conclusions of the independent technical mission report. Meetings are continuing, and I shall provide you with the hot-off-the-press news as this ground-breaking nomination progresses.

  9. Brushing Your Spacecraft's Teeth: A Review of Biological Reduction Processes for Planetary Protection Missions

    NASA Technical Reports Server (NTRS)

    Pugel, D. E. (Betsy); Rummel, J. D.; Conley, Catharine

    2017-01-01

    Much like keeping your teeth clean, where you brush away biofilms that your dentist calls "plaque," there are various methods to clean spaceflight hardware of biological contamination, known as biological reduction processes. Different approaches clean your hardware's "teeth" in different ways and with different levels of effectiveness. We know that brushing at home with a simple toothbrush is convenient and has a different level of impact vs. getting your teeth cleaned at the dentist. In the same way, there are some approaches to biological reduction that may require simple tools or more complex implementation approaches (think about sonicating or just soaking your dentures, vs. brushing them). There are also some that are more effective for different degrees of cleanliness and still some that have materials compatibility concerns. In this article, we review known and NASA-certified approaches for biological reduction, pointing out materials compatibility concerns and areas where additional research is needed.

  10. Brushing Your Spacecraft's Teeth: A Review of Biological Reduction Processes for Planetary Protection Missions

    NASA Technical Reports Server (NTRS)

    Pugel, D.E. (Betsy); Rummel, J. D.; Conley, C. A.

    2017-01-01

    Much like keeping your teeth clean, where you brush away biofilms that your dentist calls "plaque," there are various methods to clean spaceflight hardware of biological contamination, known as biological reduction processes. Different approaches clean your hardware's "teeth" in different ways and with different levels of effectiveness. We know that brushing at home with a simple toothbrush is convenient and has a different level of impact vs. getting your teeth cleaned at the dentist. In the same way, there are some approaches to biological reduction that may require simple tools or more complex implementation approaches (think about sonicating or just soaking your dentures, vs. brushing them). There are also some that are more effective for different degrees of cleanliness and still some that have materials compatibility concerns. In this article, we review known and NASA-certified approaches for biological reduction, pointing out materials compatibility concerns and areas where additional research is needed.

  11. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a fully automated safety verification technique for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing Mathworks Simulink Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.

  12. Modeling Electronic-Nuclear Interactions for Excitation Energy Transfer Processes in Light-Harvesting Complexes.

    PubMed

    Lee, Mi Kyung; Coker, David F

    2016-08-18

    An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
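
    One commonly used prescription for turning a classical energy-gap autocorrelation function obtained from MD into a spectral density is shown below; this standard relation is given for context and is not necessarily the exact expression employed in the paper.

    ```latex
    \[
      J(\omega) \;=\; \frac{\beta\omega}{2}
      \int_{-\infty}^{\infty} dt\; e^{i\omega t}\, C_{\mathrm{cl}}(t),
      \qquad
      C_{\mathrm{cl}}(t) = \langle \delta E(t)\, \delta E(0) \rangle_{\mathrm{cl}},
    \]
    ```

    where δE is the fluctuation of the excitation-energy gap and β = 1/k_BT; the intrachromophore vibrational contributions described in the paper are instead built from a harmonic analysis and single-point gradient calculations.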

  13. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    PubMed

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
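
    Viewing the coalescent times as a point process makes the estimation target the (conditional) intensity of an inhomogeneous point process, whose log-likelihood has the familiar Poisson-process form below; the notation is generic rather than the coalescent-specific likelihood of the paper.

    ```latex
    \[
      \log L(\lambda)
      \;=\; \sum_{i=1}^{n} \log \lambda(t_i)
      \;-\; \int_{0}^{T} \lambda(t)\, dt,
    \]
    ```

    where t_1, ..., t_n are the observed event (coalescent) times on [0, T]; the Gaussian process prior is then placed on a transformation of λ.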

  14. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing, based on a practical test-field survey and its data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example using the test-field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs were used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures are computed, such as the RMSE, the median, the normalized median absolute deviation with its confidence interval, and quantiles. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in terms of detailed representation of the terrain as well as good height accuracy.
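
    The accuracy measures mentioned (RMSE, median, normalized median absolute deviation, quantiles) can be computed from DTM-minus-reference height differences as in the sketch below; the chosen quantile levels are conventional choices, not necessarily those used by the authors.

    ```python
    import numpy as np

    def dtm_accuracy_measures(dz):
        """Robust and non-robust accuracy measures for DTM height differences dz
        (DTM minus reference height at check points)."""
        dz = np.asarray(dz, dtype=float)
        rmse = np.sqrt(np.mean(dz ** 2))
        med = np.median(dz)
        nmad = 1.4826 * np.median(np.abs(dz - med))   # normalized median absolute deviation
        q68, q95 = np.quantile(np.abs(dz), [0.683, 0.95])   # absolute-error quantiles
        return {"RMSE": rmse, "median": med, "NMAD": nmad, "Q68": q68, "Q95": q95}

    # Example: a handful of hypothetical check-point differences in metres.
    print(dtm_accuracy_measures([0.03, -0.05, 0.10, -0.02, 0.07, 0.01]))
    ```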

  15. The business process management software for successful quality management and organization: A case study from the University of Split School of Medicine.

    PubMed

    Sapunar, Damir; Grković, Ivica; Lukšić, Davor; Marušić, Matko

    2016-05-01

    Our aim was to describe a comprehensive model of internal quality management (QM) at a medical school, founded on a business process analysis (BPA) software tool. The BPA software tool was used as the core element for describing all working processes in our medical school, and the resulting system subsequently served as the comprehensive model of internal QM. The quality management system at the University of Split School of Medicine included the documentation and analysis of all business processes within the School. The analysis revealed 80 weak points related to one or several business processes. A precise analysis of medical school business processes allows the identification of unfinished, unclear, and inadequate points in these processes; this in turn enables targeted improvements, an increase in the QM level, and ultimately a rationalization of the institution's work. Our approach offers a potential reference model for the development of a common QM framework allowing continuous quality control, i.e. adjustment and adaptation to the contemporary educational needs of medical students. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.

  16. A COMPARISON OF TRANSIENT INFINITE ELEMENTS AND TRANSIENT KIRCHHOFF INTEGRAL METHODS FOR FAR FIELD ACOUSTIC ANALYSIS

    DOE PAGES

    WALSH, TIMOTHY F.; JONES, ANDREA; BHARDWAJ, MANOJ; ...

    2013-04-01

    Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.

  17. Leveraging ITIL/ITSM into Network Operations

    DTIC Science & Technology

    2011-06-01

    the true ITIL success stories. Indeed, Disney made an investment in the ITIL process; the return on that investment was increased revenue and...the earliest point in their careers. Disney took this approach in the early 2000s and trained 250 of their personnel. Of the 250 trained personnel, 50...mitment and buy-in. For example, the Internal Revenue Service (IRS) committed to the ITIL process and implemented a five-year strategic plan from

  18. Application of Bayesian techniques to model the burden of human salmonellosis attributable to U.S. food commodities at the point of processing: adaptation of a Danish model.

    PubMed

    Guo, Chuanfa; Hoekstra, Robert M; Schroeder, Carl M; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J; Cole, Dana

    2011-04-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. © Mary Ann Liebert, Inc.

  19. Application of Bayesian Techniques to Model the Burden of Human Salmonellosis Attributable to U.S. Food Commodities at the Point of Processing: Adaptation of a Danish Model

    PubMed Central

    Guo, Chuanfa; Hoekstra, Robert M.; Schroeder, Carl M.; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G.; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J.

    2011-01-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. PMID:21235394

  20. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    PubMed

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape feature classification from 3D lidar data is highly relevant because it is widely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of the voxel-based neighborhood approach.
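
    As a rough illustration of the voxel-based neighborhood idea (not the authors' implementation), the sketch below bins a 3D point cloud into a regular voxel grid with NumPy and derives eigenvalue-based shape descriptors (linearity, planarity, scattering) per voxel; the voxel size and the exact descriptor definitions are assumptions made for the example.

```python
import numpy as np

def voxel_shape_features(points, voxel_size=0.3):
    """Group 3D points into voxels and compute PCA-based shape descriptors.

    Returns a dict mapping voxel index -> (linearity, planarity, scattering),
    a common eigenvalue-based parameterization (assumed here for illustration).
    """
    keys = np.floor(points / voxel_size).astype(int)
    order = np.lexsort(keys.T)                     # sort point indices by voxel key
    sorted_keys = keys[order]
    change = np.any(np.diff(sorted_keys, axis=0) != 0, axis=1)
    groups = np.split(order, np.where(change)[0] + 1)
    features = {}
    for g in groups:
        if len(g) < 3:                             # need enough points for a covariance
            continue
        cov = np.cov(points[g].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(w, 1e-12)
        features[tuple(keys[g[0]])] = (
            (l1 - l2) / l1,    # linearity  (tubular)
            (l2 - l3) / l1,    # planarity
            l3 / l1,           # scattering
        )
    return features
```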

  1. Determination of welding residual stresses by inverse approach with eigenstrain formulations of boundary integral equation

    NASA Astrophysics Data System (ADS)

    Ma, Hang; Wang, Ying; Qin, Qing-Hua

    2011-04-01

    Based on the concept of eigenstrain, a straightforward computational model of the inverse approach is proposed for determining the residual stress field induced by welding using the eigenstrain formulations of boundary integral equations. The eigenstrains are approximately expressed in terms of low-order polynomials in the local area around welded zones. The domain integrals with polynomial eigenstrains are transformed into boundary integrals to preserve the favourable features of the boundary-only discretization in the process of numerical solution. The sensitivity matrices in the inverse approach for evaluating the eigenstrain fields are constructed from the measured deformations (displacements) on the boundary, from the measured stresses in the domain after welding over a number of selected measuring points, or from both types of measured information. The numerical examples show that the residual stresses recovered from deformation measurements are always better than those recovered from stress measurements, but they are sensitive to experimental noise. The results from stress measurements can be improved by introducing a few deformation measuring points while reducing the number of stress measuring points to lower the cost, since the measurement of deformation is easier than that of stresses in practice.
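
    The inverse step described above reduces to a linear least-squares problem: a sensitivity matrix maps the unknown polynomial eigenstrain coefficients to measurable boundary displacements and/or interior stresses. A minimal, generic sketch of that solve is shown below; the sensitivity matrix itself would come from the boundary-integral model, and the Tikhonov regularization term used to cope with measurement noise is an illustrative assumption rather than the authors' procedure.

```python
import numpy as np

def estimate_eigenstrain(S, measurements, reg=1e-6):
    """Solve S @ c ~= measurements for eigenstrain coefficients c.

    S            : (n_measurements, n_coefficients) sensitivity matrix
    measurements : measured boundary displacements and/or interior stresses
    reg          : small Tikhonov term to stabilize noisy measurements
    """
    n = S.shape[1]
    lhs = S.T @ S + reg * np.eye(n)   # regularized normal equations
    rhs = S.T @ measurements
    return np.linalg.solve(lhs, rhs)
```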

  2. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are mentioned: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibrating process.
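
    The two-stage workflow described (a quick estimate from a few points followed by iterative refinement against many points) can be mimicked generically with a nonlinear least-squares fit. The sketch below uses SciPy with a placeholder rate model; it is not the actual EZM/GLO implementation, and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_kinetics(model, initial_params, times, observed):
    """Fine-tune kinetics parameters against multiple experimental points.

    model          : callable model(times, params) -> predicted signal (placeholder)
    initial_params : rough estimate (e.g. from a three-point fit)
    times, observed: experimental data to match
    """
    residuals = lambda p: model(times, p) - observed
    result = least_squares(residuals, initial_params)  # iterative refinement
    return result.x
```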

  3. Supervised restoration of degraded medical images using multiple-point geostatistics.

    PubMed

    Pham, Tuan D

    2012-06-01

    Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to the acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover its original version as closely as possible. This paper presents a statistically supervised approach for medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Intelligent Process Abnormal Patterns Recognition and Diagnosis Based on Fuzzy Logic.

    PubMed

    Hou, Shi-Wang; Feng, Shunxiao; Wang, Hui

    2016-01-01

    Locating assignable causes by use of the abnormal patterns of a control chart is a widely used technology for manufacturing quality control. If there is uncertainty about the occurrence degree of abnormal patterns, the diagnosis process cannot be carried out. Considering four common abnormal control chart patterns, this paper proposes a point-by-point recognition method based on characteristic numbers to quantify the occurrence degree of abnormal patterns under uncertain conditions, and a fuzzy inference system based on fuzzy logic to calculate the contribution degree of assignable causes for fuzzy abnormal patterns. Application case results show that the proposed approach can give a ranked list of causes under fuzzy control chart abnormal patterns and support the elimination of abnormalities.

  5. Antennal pointing at a looming object in the cricket Acheta domesticus.

    PubMed

    Yamawaki, Yoshifumi; Ishibashi, Wakako

    2014-01-01

    Antennal pointing responses to approaching objects were observed in the house cricket Acheta domesticus. In response to a ball approaching from the lateral side, crickets oriented the antenna ipsilateral to the ball towards it. In response to a ball approaching from the front, crickets oriented both antennae forward. Response rates of antennal pointing were higher when the ball was approaching from the front than from behind. The antennal angle ipsilateral to the approaching ball was positively correlated with approaching angle of the ball. Obstructing the cricket's sight decreased the response rate of antennal pointing, suggesting that this response was elicited mainly by visual stimuli. Although the response rates of antennal pointing decreased when the object ceased its approach at a great distance from the cricket, antennal pointing appeared to be resistant to habituation and was not substantially affected by the velocity, size and trajectory of an approaching ball. When presented with computer-generated visual stimuli, crickets frequently showed the antennal pointing response to a darkening stimulus as well as looming and linearly-expanding stimuli. Drifting gratings rarely elicited the antennal pointing. These results suggest that luminance change is sufficient to elicit antennal pointing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Augmented Citizen Science for Environmental Monitoring and Education

    NASA Astrophysics Data System (ADS)

    Albers, B.; de Lange, N.; Xu, S.

    2017-09-01

    Environmental monitoring and ecological studies detect and visualize changes of the environment over time. Some agencies are committed to documenting the development of conservation and the status of geotopes and geosites, which is time-consuming and cost-intensive. Citizen science and crowd sourcing are modern approaches to collect data and, at the same time, to raise user awareness of environmental changes. Citizen scientists can take photographs of points of interest (POIs) with smartphones and the PAN App, which is presented in this article. The user is navigated to a specific point and is then guided with an augmented reality approach to take a photo in a specific direction. The collected photographs are processed into time-lapse videos to visualize environmental changes. Users and experts in environmental agencies can use these data for long-term documentation.

  7. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.

  8. Current algebras, measures quasi-invariant under diffeomorphism groups, and infinite quantum systems with accumulation points

    NASA Astrophysics Data System (ADS)

    Sakuraba, Takao

    The approach to quantum physics via current algebra and unitary representations of the diffeomorphism group is established. This thesis studies possible infinite Bose gas systems using this approach. Systems of locally finite configurations and systems of configurations with accumulation points are considered, with the main emphasis on the latter. In Chapter 2, canonical quantization, quantization via current algebra and unitary representations of the diffeomorphism group are reviewed. In Chapter 3, a new definition of the space of configurations is proposed and an axiom for general configuration spaces is abstracted. Various subsets of the configuration space, including those specifying the number of points in a Borel set and those specifying the number of accumulation points in a Borel set are proved to be measurable using this axiom. In Chapter 4, known results on the space of locally finite configurations and Poisson measure are reviewed in the light of the approach developed in Chapter 3, including the approach to current algebra in the Poisson space by Albeverio, Kondratiev, and Rockner. Goldin and Moschella considered unitary representations of the group of diffeomorphisms of the line based on self-similar random processes, which may describe infinite quantum gas systems with clusters. In Chapter 5, the Goldin-Moschella theory is developed further. Their construction of measures quasi-invariant under diffeomorphisms is reviewed, and a rigorous proof of their conjectures is given. It is proved that their measures with distinct correlation parameters are mutually singular. A quasi-invariant measure constructed by Ismagilov on the space of configurations with accumulation points on the circle is proved to be singular with respect to the Goldin-Moschella measures. Finally a generalization of the Goldin-Moschella measures to the higher-dimensional case is studied, where the notion of covariance matrix and the notion of condition number play important roles. A rigorous construction of measures quasi-invariant under the group of diffeomorphisms of d-dimensional space stabilizing a point is given.

  9. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
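
    For reference, ordinary Kriging at a single prediction point solves one dense linear system assembled from a covariance (or variogram) model over the source locations, which is where the O(n³) cost arises. The sketch below shows this baseline, non-incremental computation with an assumed exponential covariance; the incremental and recursive strategies of the paper work by reusing parts of this system between iterations.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, length=1.0):
    """Baseline ordinary Kriging estimate at a single point xy0.

    xy : (n, 2) source locations, z : (n,) observations.
    Uses an exponential covariance model (an assumption for illustration).
    """
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / length)

    n = len(z)
    # Kriging system with a Lagrange multiplier enforcing unbiased weights
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(xy, xy0[None, :])[:, 0]
    weights = np.linalg.solve(K, k)        # the O(n^3) step
    return weights[:n] @ z
```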

  10. Intensity of geomorphological processes in NW sector of Pacific rim marginal mountain belts

    NASA Astrophysics Data System (ADS)

    Lebedeva, Ekaterina; Shvarev, Sergey; Gotvansky, Veniamin

    2014-05-01

    Continental marginal mountains, including the mountain belts of the Russian Far East, are characterized by extreme terrain contrast, a mosaic structure of surface and crust, and a rich complex of modern endogenous processes - volcanism, seismicity, and vertical movements. The unstable state of geomorphological systems and the activity of relief-forming processes here are also caused by deeply dissected topography and the type and amount of precipitation. Human activities further stimulate natural processes and increase the risk of local disasters, so these territories have a high intensity (or tension) of geomorphological processes. Intensity, in the authors' understanding, is the readiness of a geomorphological system to be driven out of balance - the risk of disaster under external and internal agents, both natural and human. Mapping with quantitative accounting of the intensity of natural processes and potential human impact is necessary to indicate the areal distribution trends of geomorphological process intensity and the zones of potential disaster risk. The method of map compilation is based on analyzing several criteria: 1) the total set of terrain-forming processes and their potential to become hazardous, 2) the existence, peculiarities and zoning of external agents which could drive the base processes within the territory to an extreme character, and 3) the peculiarities of terrain morphology which could make terrain-forming processes hazardous. Seismic activity is one of the most important factors causing the activation of geomorphological processes and contributing to the risk of dangerous situations. Even an earthquake of small force can provoke many catastrophic processes: landslides, mudslides, avalanches, mudflows, tsunami and others. Seismically induced gravitational phenomena of different scale accompany almost all earthquakes of intensity 7-8 points and above, and some processes, such as avalanches, are activated by seismic shocks of intensity of only about 1-3 points. In this regard, we consider it important to delineate zones of high seismic intensity in marginal-continental mountain systems and propose to assign them extra tension points, the number of which increases with the strength of the shock. Such an approach allows us to clearly identify the most potentially hazardous areas where various, sometimes unpredictably large, catastrophic processes may be provoked by intense underground tremors. We also consider the impact of the depth of topographic dissection and the total amount of precipitation. Marginal-continental mountain systems often have radically different moistening of coastal and inland slopes, and this difference can be 500, 1000 mm or more, which undoubtedly affects the course and intensity of geomorphological processes on slopes of different exposures. A total evaluation of the intensity of geomorphologic processes exceeding 15 points is considered potentially catastrophic. At 10-15 points the tension of geomorphologic processes is extremely high, at 5-10 points it is high, and at less than 5 points it is low. Maps of the key areas of the Russian Far East - Kamchatka and the north of the Kuril Islands, Sakhalin and the Western Okhotsk region - were compiled. These areas differ in geodynamic regimes, landscape-climatic and anthropogenic conditions and are highly significant for the differentiated estimation of geomorphologic tension. A growth of the intensity of geomorphological processes toward the Pacific Ocean was recorded: from 7-10 points in the Western Okhotsk region to 10-13 at Sakhalin and 13-15 points for Kamchatka.
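
    As a small worked illustration of the scoring thresholds quoted above, the helper below maps a total intensity score to the corresponding class; the class names follow the abstract, while the function itself is merely a convenience for the reader.

```python
def intensity_class(total_points):
    """Map a total geomorphological-intensity score to its class (per the abstract)."""
    if total_points > 15:
        return "potentially catastrophic"
    if total_points >= 10:
        return "extremely high"
    if total_points >= 5:
        return "high"
    return "low"
```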

  11. Dispersion of Nanomaterials in Aqueous Media: Towards Protocol Optimization.

    PubMed

    Kaur, Inder; Ellis, Laura-Jayne; Romer, Isabella; Tantra, Ratna; Carriere, Marie; Allard, Soline; Mayne-L'Hermite, Martine; Minelli, Caterina; Unger, Wolfgang; Potthoff, Annegret; Rades, Steffi; Valsami-Jones, Eugenia

    2017-12-25

    The sonication process is commonly used for de-agglomerating and dispersing nanomaterials in aqueous based media, necessary to improve homogeneity and stability of the suspension. In this study, a systematic step-wise approach is carried out to identify optimal sonication conditions in order to achieve a stable dispersion. This approach has been adopted and shown to be suitable for several nanomaterials (cerium oxide, zinc oxide, and carbon nanotubes) dispersed in deionized (DI) water. However, with any change in either the nanomaterial type or dispersing medium, there needs to be optimization of the basic protocol by adjusting various factors such as sonication time, power, and sonicator type as well as temperature rise during the process. The approach records the dispersion process in detail. This is necessary to identify the time points as well as other above-mentioned conditions during the sonication process in which there may be undesirable changes, such as damage to the particle surface thus affecting surface properties. Our goal is to offer a harmonized approach that can control the quality of the final, produced dispersion. Such a guideline is instrumental in ensuring dispersion quality repeatability in the nanoscience community, particularly in the field of nanotoxicology.

  12. Dispersion of Nanomaterials in Aqueous Media: Towards Protocol Optimization

    PubMed Central

    Kaur, Inder; Ellis, Laura-Jayne; Romer, Isabella; Tantra, Ratna; Carriere, Marie; Allard, Soline; Mayne-L'Hermite, Martine; Minelli, Caterina; Unger, Wolfgang; Potthoff, Annegret; Rades, Steffi; Valsami-Jones, Eugenia

    2017-01-01

    The sonication process is commonly used for de-agglomerating and dispersing nanomaterials in aqueous based media, necessary to improve homogeneity and stability of the suspension. In this study, a systematic step-wise approach is carried out to identify optimal sonication conditions in order to achieve a stable dispersion. This approach has been adopted and shown to be suitable for several nanomaterials (cerium oxide, zinc oxide, and carbon nanotubes) dispersed in deionized (DI) water. However, with any change in either the nanomaterial type or dispersing medium, there needs to be optimization of the basic protocol by adjusting various factors such as sonication time, power, and sonicator type as well as temperature rise during the process. The approach records the dispersion process in detail. This is necessary to identify the time points as well as other above-mentioned conditions during the sonication process in which there may be undesirable changes, such as damage to the particle surface thus affecting surface properties. Our goal is to offer a harmonized approach that can control the quality of the final, produced dispersion. Such a guideline is instrumental in ensuring dispersion quality repeatability in the nanoscience community, particularly in the field of nanotoxicology. PMID:29364209

  13. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (in terms of normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean squared error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore the use of spike timing methodologies and estimation of appropriate tuning curves need to be undertaken for better EMG decoding in motor BMIs.
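
    The accuracy figures above are reported as normalized mean squared error (NMSE). A minimal definition of that metric as it is commonly computed is given below; normalization by the variance of the recorded signal is a common convention and an assumption here, since the exact normalization used by the authors is not stated in the abstract.

```python
import numpy as np

def nmse(predicted, actual):
    """Normalized mean squared error between decoded and recorded EMG envelopes."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    # mean squared error scaled by the variance of the actual signal (assumed convention)
    return np.mean((predicted - actual) ** 2) / np.var(actual)
```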

  14. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is thus determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacements of all the trajectories inside its region. The tracked key points lead to a final result comparable to conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
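
    The forward ray intersection step mentioned above can be illustrated with a standard linear least-squares triangulation: each camera contributes a ray (origin plus direction), and the 3-D point closest to all rays is recovered. The sketch below is a generic version of that computation, not the authors' photogrammetric implementation.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of several 3-D rays.

    origins, directions : (m, 3) arrays; directions need not be normalized.
    Returns the point minimizing the sum of squared distances to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray direction
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```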

  15. Assessing Saudi medical students learning approach using the revised two-factor study process questionnaire.

    PubMed

    Shaik, Shaffi Ahamed; Almarzuqi, Ahmed; Almogheer, Rakan; Alharbi, Omar; Jalal, Abdulaziz; Alorainy, Majed

    2017-08-17

    To assess the learning approaches of 1st-, 2nd-, and 3rd-year medical students by using the revised two-factor study process questionnaire, and to assess the reliability and validity of the questionnaire. This cross-sectional study was conducted at the College of Medicine, Riyadh, Saudi Arabia in 2014. The revised two-factor study process questionnaire (R-SPQ-2F) was completed by 610 medical students of both genders, from the foundation (first year), central nervous system (second year), and medicine and surgery (third year) courses. The study process was evaluated by computing mean scores of the two study approaches (deep and surface) using Student's t-test and one-way analysis of variance. The internal consistency and construct validity of the questionnaire were assessed using Cronbach's α and factor analysis. The mean score of the deep approach was significantly higher than that of the surface approach among participants (t(770) = 7.83, p < 0.001) for the four courses. The mean scores of the deep approach were significantly higher among participants with a higher grade point average (F(2,768) = 13.31, p = 0.001) and among participants reporting more study hours (F(2,768) = 20.08, p = 0.001). The Cronbach's α values at 0.70 indicate good internal consistency of the questionnaire used. Factor analysis confirms the two factors (deep and surface approaches) of the R-SPQ-2F. The deep approach to learning was the primary approach among 1st-, 2nd- and 3rd-year King Saud University medical students. This study confirms the reliability and validity of the revised two-factor study process questionnaire. Medical educators could use the results of such studies to make required changes in the curriculum.
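
    For orientation, the internal-consistency statistic reported above, Cronbach's α, can be computed from an item-score matrix as sketched below; this is the textbook formula, not code from the study, and the item data are of course not reproduced here.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)
```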

  16. On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.

    2017-05-01

    We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems and our numerical tests show that the algorithm proves to be flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
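
    A bare-bones proximal-gradient iteration with a backtracking linesearch is sketched below for orientation. The paper's algorithm additionally uses a variable metric and linesearch conditions tailored to the nonconvex setting, so this fixed-metric version is only a simplified illustration; the function and parameter names are assumptions.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, f, x0, step=1.0, beta=0.5, iters=100):
    """Minimize f(x) + g(x) with smooth f and 'proximable' g.

    grad_f : gradient of the smooth part
    prox_g : prox_g(v, t) = argmin_u g(u) + ||u - v||^2 / (2 t)
    f      : smooth part, used by the backtracking sufficient-decrease test
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        t = step
        gx = grad_f(x)
        while True:
            z = prox_g(x - t * gx, t)
            # standard quadratic upper-bound test; shrink the step until it holds
            if f(z) <= f(x) + gx @ (z - x) + np.dot(z - x, z - x) / (2 * t):
                break
            t *= beta
        x = z
    return x
```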

  17. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  18. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  19. Possibility Study of Scale Invariant Feature Transform (SIFT) Algorithm Application to Spine Magnetic Resonance Imaging

    PubMed Central

    Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo

    2016-01-01

    The purpose of this study is to apply the scale invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images in order to provide a view of the entire spine in a single image. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR images were performed and implemented in a graphical user interface (GUI) configuration. Moreover, the stitching process was performed in two categories: manual point-to-point (mPTP) selection, performed using user-specified corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The images stitched using the SIFT algorithm showed finely registered results, and the quantitatively acquired values also indicated small errors compared with the stitching algorithms commercially provided in MRI systems. Our study presents a preliminary validation of the application of the SIFT algorithm to spine MR images, and the results indicate that the proposed approach can perform well for the improvement of diagnosis. We believe that our approach can be helpful for clinical application and can be extended to other medical imaging modalities for image stitching. PMID:27064404

  20. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating a self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This approach to detecting key points is characterized by a small amount of computation, high positioning accuracy and strong anti-noise ability. (2) PCA-SIFT is utilized to describe the key points. A 128-dimensional vector is formed based on the SIFT method for each extracted key point. A low-dimensional feature space is established from the eigenvectors of all the key points, and each descriptor is projected onto this feature space to form a low-dimensional vector. The key points are re-described by these dimension-reduced vectors. After reducing the dimension by PCA, the descriptor is reduced from the original 128 dimensions to 20. This reduces the dimensionality of the approximate nearest-neighbour search, thereby increasing overall speed. (3) The distance ratio between the nearest and second-nearest neighbours is used as the criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs. (4) An affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, resulting in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and the ability to handle rotation. The results show the effectiveness of the approach.
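
    The nearest/second-nearest distance-ratio test mentioned in step (3) is easy to state in code. The brute-force sketch below (plain NumPy, not the paper's PCA-SIFT pipeline) keeps a tentative match only when the best descriptor distance is clearly smaller than the second best; the ratio threshold of 0.8 is an assumed, commonly used value.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptor sets by the nearest/second-nearest distance ratio.

    desc_a : (m, d), desc_b : (n, d) descriptor arrays.
    Returns a list of (index_in_a, index_in_b) tentative matches.
    """
    if len(desc_b) < 2:
        return []
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]          # nearest and second-nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```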

  1. Modules, theories, or islands of expertise? Domain specificity in socialization.

    PubMed

    Gelman, Susan A

    2010-01-01

    The domain-specific approach to socialization processes presented by J. E. Grusec and M. Davidov (this issue) provides a compelling framework for integrating and interpreting a large and disparate body of research findings, and it generates a wealth of testable new hypotheses. At the same time, it introduces core theoretical questions regarding the nature of social interactions, from the perspective of both children and their caregivers. This commentary draws on the literature regarding domain specificity in cognitive development, applauds what is innovative and exciting about applying a domain-specific approach to socialization processes, and points to questions for future research. Foremost among these is what is meant by "domain specificity."

  2. Donkey-assisted rehabilitation program for children: a pilot study.

    PubMed

    De Rose, Paola; Cannas, Elisabetta; Reinger Cantiello, Patrizia

    2011-01-01

    Bonding with animals grants access to the sphere of affectivity and facilitates therapeutic engagement. The methodological approach of donkey-assisted programs is based on mediation, which is characterized by multidirectional relationships (patient-donkey-therapist). The donkey is an excellent facilitator in the motivation-building process, being able to stimulate the child's development by way of active and positive forces that foster psycho-affective and psycho-cognitive development processes. Results of this study, which focused on the child's approach to the donkey, indicate that while communicating with the animal, children rely more on physical expressions than on verbal language. Donkey-assisted rehabilitative sessions can help in identifying children's strong points, on which motivation could be built.

  3. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding on the search characteristics of threshold algorithms, namely the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
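
    For context, a plain finite-time simulated-annealing loop with geometric cooling is sketched below. The adaptive scheme in the paper differs in that the effective temperature is predicted from the system's behaviour in a heating run starting from a quenched solution rather than fixed in advance, so this should be read only as the baseline that the adaptive approach modifies; all schedule parameters are assumptions.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t_start=1.0, t_end=1e-3,
                        cooling=0.995, steps_per_temp=100):
    """Baseline SA with a geometric cooling schedule (not the adaptive schedule)."""
    x, t = x0, t_start
    cx = cost(x)
    best, best_cost = x, cx
    while t > t_end:
        for _ in range(steps_per_temp):
            y = neighbour(x)
            cy = cost(y)
            delta = cy - cx
            # accept downhill moves always, uphill moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, cx = y, cy
                if cx < best_cost:
                    best, best_cost = x, cx
        t *= cooling
    return best
```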

  4. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. Point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue of any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported by HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.

  5. Singularity detection by wavelet approach: application to electrocardiogram signal

    NASA Astrophysics Data System (ADS)

    Jalil, Bushra; Beya, Ouadi; Fauvet, Eric; Laligant, Olivier

    2010-01-01

    In signal processing, regions of abrupt change contain most of the useful information about the nature of the signal. The regions or points where these changes occur are often termed singular points or singular regions. Singularity is considered an important character of a signal, as it refers to discontinuities and interruptions present in the signal, and the main purpose of detecting such singular points is to identify the existence, location and size of those singularities. The electrocardiogram (ECG) signal is used to analyze cardiovascular activity in the human body. However, the presence of noise, due to several causes, limits the doctor's decision and prevents accurate identification of different pathologies. In this work we analyze the ECG signal with an energy-based approach and some heuristic methods to segment and identify different signatures inside the signal. The ECG signal is initially denoised by an empirical wavelet shrinkage approach based on Stein's Unbiased Risk Estimate (SURE). In the second stage, the ECG signal is analyzed with the Mallat approach based on modulus maxima and Lipschitz exponent computation. The results from both approaches are discussed and important aspects are highlighted. In order to evaluate the algorithm, the analysis has been done on the MIT-BIH Arrhythmia database, a set of ECG data records sampled at a rate of 360 Hz with 11-bit resolution over a 10 mV range. The results have been examined and approved by medical doctors.
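
    A minimal wavelet-shrinkage denoising step in the spirit of the first stage above can be written with PyWavelets. The sketch uses soft thresholding with a simple universal threshold rather than the SURE-based rule of the paper, and the wavelet and decomposition level are assumptions, so it only approximates the described approach.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Soft-threshold wavelet shrinkage (universal threshold, not SURE)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise scale estimated from the finest-level detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]
```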

  6. Investigating an approach to the alliance based on interpersonal defense theory.

    PubMed

    Westerman, Michael A; Muran, J Christopher

    2017-09-01

    Notwithstanding consistent findings of significant relationships between the alliance and outcome, questions remain to be answered about the relatively small magnitude of those correlations, the mechanisms underlying the association, and how to conceptualize the alliance construct. We conducted a preliminary study of an approach to the alliance based on interpersonal defense theory, which is an interpersonal reconceptualization of defense processes, to investigate the promise of this alternative approach as a way to address the outstanding issues. We employed qualitative, theory-building case study methodology, closely examining alliance processes at four time points in the treatment of a case in terms of a case formulation based on interpersonal defense theory. The results suggested that our approach made it possible to recognize key processes in the alliance and that it helps explain how the alliance influences outcome. Our analyses also provided a rich set of concrete illustrations of the alliance phenomena identified by the theory. The findings suggest that an approach to the alliance based on interpersonal defense theory holds promise. However, although the qualitative method we employed has advantages, it also has limitations. We offer suggestions about how future qualitative and quantitative investigations could build on this study.

  7. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing using single-core CPUs. However, considering parallel processing using multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluating on multi-core CPUs, we consistently outperform the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
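
    As a reference for what a separable lifting step looks like, the sketch below implements one level of the CDF 5/3 (LeGall) lifting transform on a 1-D signal with periodic boundary handling. The paper's contribution is a rearrangement of such steps to reduce synchronization points on multi-core CPUs, which this plain single-threaded version does not attempt.

```python
import numpy as np

def cdf53_forward(x):
    """One level of the CDF 5/3 lifting wavelet transform (1-D, even length)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail coefficients d[n] = odd[n] - (even[n] + even[n+1]) / 2
    odd -= 0.5 * (even + np.roll(even, -1))
    # update step: approximation coefficients s[n] = even[n] + (d[n] + d[n-1]) / 4
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd
```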

  8. Migration and HIV risk: Life histories of Mexican-born men living with HIV in North Carolina

    PubMed Central

    Mann, Lilli; Valera, Erik; Hightow-Weidman, Lisa B.; Barrington, Clare

    2015-01-01

    Latino men in the Southeastern USA are disproportionately affected by HIV, but little is known about how the migration process influences HIV-related risk. In North Carolina (NC), a relatively new immigrant destination, Latino men are predominantly young and from Mexico. We conducted 31 iterative life history interviews with 15 Mexican-born men living with HIV. We used holistic content narrative analysis methods to examine HIV vulnerability in the context of migration and to identify important turning points. Major themes included the prominence of traumatic early life experiences, migration as an ongoing process rather than a finite event, and HIV diagnosis as a final turning point in migration trajectories. Findings provide a nuanced understanding of HIV vulnerability throughout the migration process and have implications including the need for bi-national HIV prevention approaches, improved outreach around early testing and linkage to care, and attention to mental health. PMID:24866206

  9. Restructure Staff Development for Systemic Change

    ERIC Educational Resources Information Center

    Kelly, Thomas F.

    2012-01-01

    This paper presents a systems approach based on the work of W. Edwards Deming to system wide, high impact staff development. Deming has pointed out the significance of structure in systems. By restructuring the process of staff development we can bring about cost effective improvement of the whole system. We can improve student achievement while…

  10. The Origins of Understanding Self and Other: James Mark Baldwin's Theory.

    ERIC Educational Resources Information Center

    Muller, Ulrich; Runions, Kevin

    2003-01-01

    Offers psychologist Baldwin's theory as an approach to the development of social understanding emphasizing gradual differentiation of self and other and rooting this process in embodied activity within a social context. Argues that Baldwin's account provides a stronger starting point for conceptualizing the infant's discovery of other, self, and…

  11. Adaptive management and the Northwest Forest Plan: rhetoric and reality.

    Treesearch

    G.H. Stankey; B.T. Bormann; C. Ryan; B. Shindler; V. Sturtevant; R.N. Clark; C. Philpot

    2003-01-01

    Adaptive management represents a process to use management policies as a source of learning, which in turn can inform subsequent actions. However, despite its appealing and apparently straightforward objectives, examples of successful implementation remain elusive, and a review of efforts to implement an adaptive approach in the Northwest Forest Plan proves the point....

  12. Objective Truths and the Leading of Children: A Response to Rheta DeVries, Betty Zan, and Carolyn Hildebrandt.

    ERIC Educational Resources Information Center

    Goodman, Joan F.

    2002-01-01

    Presents points of agreement with DeVries, Zan, and Hildebrandt: overall educational philosophy, the developmental path from egocentrism to reciprocity, and educational approaches when fundamental ethical principles are at stake. Examines the substantial differences in perspectives regarding the substance of morality, the process of teaching…

  13. The Oral Tradition: Springboard for Teaching Black Literature.

    ERIC Educational Resources Information Center

    Wilson, Velez H.

    Black Americans have inherited a rich store of oral literature (folk music, folk tales, folk customs, and superstitions). When approached from the point of view of the students' experience, this oral tradition can be an effective springboard for discussion, reading, and writing in the classroom. Brass band funeral processions can be viewed as a…

  14. Performance Evaluation of Indian Technical Institutions Using PROMETHEE-GAIA Approach

    ERIC Educational Resources Information Center

    Ranjan, Rajeev; Chakraborty, Shankar

    2015-01-01

    It has now become an important issue to evaluate the performance of technical institutions to develop better research and enrich the existing teaching processes. The results of such performance appraisal would serve as a reference point for decisions to choose a particular institution, hire manpower, and provide financial support for the…

  15. Total Quality Management and Organizational Behavior Management: An Integration for Continual Improvement.

    ERIC Educational Resources Information Center

    Mawhinney, Thomas C.

    1992-01-01

    The history and main features of organizational behavior management (OBM) are compared and integrated with those of total quality management (TQM), with emphasis on W.E. Deming's 14 points and OBM's operant-based approach to performance management. Interventions combining OBM, TQM, and statistical process control are recommended. (DB)

  16. Visualization and Image Analysis of Yeast Cells.

    PubMed

    Bagley, Steve

    2016-01-01

    When converting real-life data via visualization to numbers and then on to statistics, the whole system needs to be considered so that the conversion from the analogue to the digital is accurate and repeatable. Here we describe the points to consider when approaching yeast cell analysis: visualization, processing, and analysis of a population by screening techniques.

  17. NASA System Engineering Design Process

    NASA Technical Reports Server (NTRS)

    Roman, Jose

    2011-01-01

    This slide presentation reviews NASA's use of systems engineering for the complete life cycle of a project. Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. Each phase of a NASA project is terminated with a Key Decision Point (KDP), which is supported by major reviews.

  18. The European Union and the Comprehensive Civil-Military Approach in Euro-Atlantic Security: Matching Reality to Rhetoric

    DTIC Science & Technology

    2010-01-01

    aspects below the political PSC level. This practically guaranteed incoherence and disunity as an institutional inheritance. Second, and equally as...central point of emphasis that “defense and diplomacy are no longer discrete choices . . . but must complement one another throughout the messy process

  19. Infant Stimulation and the Etiology of Cognitive Processes.

    ERIC Educational Resources Information Center

    Fowler, William

    What data, problems, and concepts are most relevant in determining the role of stimulation in human development? A critical analysis of the relationships between long term stimulation, behavior, and cognitive functioning and development points up biases and gaps in past as well as contemporary approaches. Each of the four sections of this paper…

  20. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing rapidly. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploit three features, two height components and one reflectance value, and achieve an overall accuracy of 73 %, which is encouraging for further refinement of our approach.
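
    The abstract does not state which classifier is used, so the following is only a minimal sketch of how per-point height and reflectance features could feed a supervised MLS classification; the feature definitions and the RandomForestClassifier choice are illustrative assumptions, not the authors' method.

    ```python
    # Minimal sketch (not the authors' code): classify MLS points from two height
    # components and reflectance with a standard supervised classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def height_features(points, ground_z):
        """points: (N, 4) array of x, y, z, reflectance; ground_z: per-point ground height."""
        z = points[:, 2]
        dz_ground = z - ground_z            # height above local ground (assumed available)
        dz_local = z - np.median(z)         # height relative to a local reference
        return np.column_stack([dz_ground, dz_local, points[:, 3]])

    # Hypothetical usage with labelled training tiles:
    # clf = RandomForestClassifier(n_estimators=100).fit(height_features(train_pts, g0), y_train)
    # labels = clf.predict(height_features(test_pts, g1))
    ```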

  1. From global circulation to flood loss: Coupling models across the scales

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Gomez-Navarro, Juan Jose; Bozhinova, Denica; Zischg, Andreas; Raible, Christoph C.; Ole, Roessler; Martius, Olivia; Weingartner, Rolf

    2017-04-01

    The prediction and the prevention of flood losses require an extensive understanding of the underlying meteorological, hydrological, hydraulic and damage processes. Coupled models help to improve the understanding of such underlying processes and therefore contribute to the understanding of flood risk. Using such a modelling approach to determine potentially flood-affected areas and damages requires a complex coupling between several models operating at different spatial and temporal scales. Although the individual modelling components are well established and commonly used in the literature, a full coupling including a mesoscale meteorological model driven by a global circulation model, a hydrologic model, a hydrodynamic model and a flood impact and loss model has not been reported so far. In the present study, we tackle the application of such a coupled model chain in terms of computational resources, scale effects, and model performance. From a technical point of view, the results show the general applicability of such a coupled model, as well as good model performance. From a practical point of view, such an approach enables the prediction of flood-induced damages, although some future challenges have been identified.

  2. Welding parameter optimization of alloy material by friction stir welding using Taguchi approach and design of experiments

    NASA Astrophysics Data System (ADS)

    Karwande, Amit H.; Rao, Seeram Srinivasa

    2018-04-01

    Friction stir welding (FSW) is a welding process in which metals are joined in the solid state, without melting. In engineering areas such as civil, mechanical, naval and aeronautical engineering, magnesium alloy beams are widely used for different applications and are conventionally joined by inert gas welding. Magnesium has low density and a low melting point, so the large heat input of conventional welding processes makes it necessary to adopt a new welding process. The FSW process improves weld quality, as observed under various mechanical tests carried out with different tool sizes.

  3. Shear viscosity in monatomic liquids: a simple mode-coupling approach

    NASA Astrophysics Data System (ADS)

    Balucani, Umberto

    The value of the shear-viscosity coefficient in fluids is controlled by the dynamical processes affecting the time decay of the associated Green-Kubo integrand, the stress autocorrelation function (SACF). These processes are investigated in monatomic liquids by means of a microscopic approach with a minimum use of phenomenological assumptions. In particular, mode-coupling effects (responsible for the presence in the SACF of a long-lasting 'tail') are accounted for by a simplified approach where the only requirement is knowledge of the structural properties. The theory readily yields quantitative predictions in its domain of validity, which comprises ordinary and moderately supercooled 'simple' liquids. The framework is applied to liquid Ar and Rb near their melting points, and quite satisfactory agreement with the simulation data is found for both the details of the SACF and the value of the shear-viscosity coefficient.
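
    As a side note to the Green-Kubo route mentioned above, the shear viscosity follows from the time integral of the SACF, η = V/(k_B T) ∫₀^∞ ⟨σ_xy(0)σ_xy(t)⟩ dt. The snippet below is only a minimal numerical sketch of that integral, assuming the SACF has already been obtained (e.g. from a simulation); it is not part of the paper's mode-coupling theory.

    ```python
    # Minimal numerical sketch of the Green-Kubo route to shear viscosity:
    # integrate the stress autocorrelation function (SACF) over time.
    import numpy as np

    def shear_viscosity(sacf, dt, volume, temperature, kB=1.380649e-23):
        """sacf: samples of <sigma_xy(0) sigma_xy(t)> at spacing dt (SI units assumed)."""
        integral = np.trapz(sacf, dx=dt)      # time integral of the SACF
        return volume / (kB * temperature) * integral
    ```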

  4. Relevance of deterministic chaos theory to studies in functioning of dynamical systems

    NASA Astrophysics Data System (ADS)

    Glagolev, S. N.; Bukhonova, S. M.; Chikina, E. D.

    2018-03-01

    The paper considers the chaotic behavior of dynamical systems typical for social and economic processes. Approaches to the analysis and evaluation of system development processes are studied from the point of view of controllability and determinateness. Explanations are given for the necessity of applying non-standard mathematical tools, based on fractal theory, to describe the states of dynamical social and economic systems. Features of fractal structures, such as non-regularity, self-similarity, dimensionality and fractionality, are considered.

  5. Acoustical levitation for space processing. [weightless molten material manipulation

    NASA Technical Reports Server (NTRS)

    Wang, T. G.; Saffren, M. M.; Elleman, D. D.

    1974-01-01

    It is pointed out that many space-manufacturing processes will require the manipulation of weightless molten material within a container in such a way that the material does not touch the container wall. A description is given of an acoustical method which can be used for the positioning and shaping of any molten material including nonconductors such as glasses. The new approach makes use of an acoustical standing wave which is excited within an enclosure or resonator.

  6. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
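
    The semantic variant trains one inlier/outlier classifier per semantic class; below is a hedged sketch of that idea with scikit-learn, where the feature extraction, class identifiers and Random Forest settings are assumptions rather than the authors' configuration.

    ```python
    # Hedged sketch: per-semantic-class inlier/outlier classifiers.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_per_class(features, semantic_labels, inlier_labels):
        """One classifier per semantic class, trained only on that class's points."""
        models = {}
        for cls in np.unique(semantic_labels):
            mask = semantic_labels == cls
            models[cls] = RandomForestClassifier(n_estimators=200).fit(
                features[mask], inlier_labels[mask])
        return models

    def filter_points(features, semantic_labels, models):
        keep = np.zeros(len(features), dtype=bool)
        for cls, clf in models.items():
            mask = semantic_labels == cls
            keep[mask] = clf.predict(features[mask]) == 1   # 1 = inlier (assumed encoding)
        return keep
    ```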

  7. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
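
    As an illustration of the low-degree polynomial idea, and not the paper's exact formulation, the sketch below fits a per-pixel polynomial that maps each pixel's raw response to a common reference response and then corrects frames with plain arithmetic; the choice of reference and the polynomial degree are assumptions.

    ```python
    # Illustrative per-pixel polynomial FPN calibration and correction.
    import numpy as np

    def calibrate_fpn(stack, degree=1):
        """stack: (n_exposures, n_pixels) raw responses under varied illumination."""
        reference = stack.mean(axis=1)                         # mean response per exposure (assumed reference)
        coeffs = [np.polyfit(stack[:, p], reference, degree)   # per-pixel low-degree polynomial
                  for p in range(stack.shape[1])]
        return np.array(coeffs)

    def correct_frame(frame, coeffs):
        # Correction is plain polynomial evaluation per pixel.
        return np.array([np.polyval(c, v) for c, v in zip(coeffs, frame)])
    ```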

  8. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a single dataset, whose particularities could influence the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated. The analysis of the results provides an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
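
    The plane-segmentation phase is not detailed in the abstract; as one plausible building block, the sketch below shows a generic RANSAC plane fit on a point cloud, with the distance tolerance and iteration count as illustrative assumptions rather than the authors' parameters.

    ```python
    # Generic RANSAC plane fit, a common building block for plane segmentation.
    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                         # degenerate (collinear) sample
                continue
            normal /= norm
            dist = np.abs((points - p0) @ normal)   # point-to-plane distance
            inliers = dist < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers
    ```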

  9. Systemic approaches to biodegradation.

    PubMed

    Trigo, Almudena; Valencia, Alfonso; Cases, Ildefonso

    2009-01-01

    Biodegradation, the ability of microorganisms to remove complex chemicals from the environment, is a multifaceted process in which many biotic and abiotic factors are implicated. The recent accumulation of knowledge about the biochemistry and genetics of the biodegradation process, and its categorization and formalization in structured databases, has recently opened the door to systems biology approaches, where the interactions of the involved parts are the main subject of study, and the system is analysed as a whole. The global analysis of the biodegradation metabolic network is beginning to produce knowledge about its structure, behaviour and evolution, such as its scale-free structure or its intrinsic robustness. Moreover, these approaches are also developing into useful tools such as predictors for compounds' degradability or the assisted design of artificial pathways. However, it is the environmental application of high-throughput technologies from genomics, metagenomics, proteomics and metabolomics that harbours the most promising opportunities to understand the biodegradation process, and at the same time poses tremendous challenges from the data management and data mining point of view.

  10. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. With only an 8% pass rate, tens or hundreds of thousands of reps may be needed to be confident that the best repetition is at least close to the global optima. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions are used to generate a seed point, the graph method may inherit some inaccuracies as it chains DOE cases from the non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.
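
    As a hedged sketch of the stated goal, improving seed point generation with support vector machines, the snippet below trains an SVM on previously attempted POST repetitions (inputs plus a converged/failed label) and ranks new candidate repetitions by predicted convergence probability; the feature set, kernel and use of predicted probabilities are assumptions, not the paper's configuration.

    ```python
    # Hedged sketch: learn which input settings tend to converge, then rank candidates.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def train_convergence_model(X, y):
        """X: (n_reps, n_inputs) trajectory inputs; y: 1 if the repetition converged, else 0."""
        return make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)

    def rank_candidates(model, candidates):
        p = model.predict_proba(candidates)[:, 1]   # estimated probability of convergence
        return candidates[np.argsort(p)[::-1]]      # most promising seed candidates first
    ```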

  11. Interdisciplinary evaluation of dysphagia: clinical swallowing evaluation and videoendoscopy of swallowing.

    PubMed

    Sordi, Marina de; Mourão, Lucia Figueiredo; Silva, Ariovaldo Armando da; Flosi, Luciana Claudia Leite

    2009-01-01

    Patients with dysphagia have impairments in many aspects, and an interdisciplinary approach is fundamental to define diagnosis and treatment. A joint approach in the clinical and videoendoscopic evaluation is paramount. The aim was to study the correlation between the clinical assessment (ACD) and the videoendoscopic (VED) assessment of swallowing by classifying the degree of severity and by qualitative/descriptive analyses of the procedures. The study was cross-sectional, descriptive and comparative, held from March to December 2006 at the Otolaryngology/Dysphagia ward of a hospital in the countryside of São Paulo. Thirty dysphagic patients with different disorders were assessed by ACD and VED. The data were classified by means of severity scales and qualitative/descriptive analysis. The correlation between the ACD and VED severity scales pointed to a statistically significant low agreement (kappa = 0.4) (p = 0.006). The correlation between the qualitative/descriptive analyses pointed to an excellent and statistically significant agreement (kappa = 0.962) (p < 0.001) for the entire sample. The low agreement between the severity scales points to a need to perform both procedures, reinforcing VED as a feasible procedure. The descriptive qualitative analysis pointed to an excellent agreement, and such data reinforce the need to understand swallowing as a process.
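
    For readers unfamiliar with the agreement statistic quoted above, the fragment below computes Cohen's kappa for two raters' severity ratings; the rating values are placeholders, not the study's data.

    ```python
    # Cohen's kappa for inter-method agreement on ordinal severity ratings.
    from sklearn.metrics import cohen_kappa_score

    acd = [2, 3, 1, 2, 4]   # hypothetical clinical severity ratings
    ved = [2, 2, 1, 3, 4]   # hypothetical videoendoscopic ratings
    print(cohen_kappa_score(acd, ved))
    ```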

  12. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously and reduces the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.

  13. Sustainable intensification: a multifaceted, systemic approach to international development.

    PubMed

    Himmelstein, Jennifer; Ares, Adrian; van Houweling, Emily

    2016-12-01

    Sustainable intensification (SI) is a term increasingly used to describe a type of approach applied to international agricultural projects. Despite its widespread use, there is still little understanding or knowledge of the various facets of this composite paradigm. A review of the literature has led to the formalization of three principles that convey the current characterization of SI, comprising a whole system, participatory, agroecological approach. Specific examples of potential bottlenecks to the SI approach are cited, in addition to various technologies and techniques that can be applied to overcome these obstacles. Models of similar, successful approaches to agricultural development are examined, along with higher level processes. Additionally, this review explores the desired end points of SI and argues for the inclusion of gender and nutrition throughout the process. To properly apply the SI approach, its various aspects need to be understood and adapted to different cultural and geographic situations. New modeling systems and examples of the effective execution of SI strategies can assist with the successful application of the SI paradigm within complex developing communities. © 2016 Society of Chemical Industry.

  14. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items (buildings, trees, roads, linear objects and soil) using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationship and height variation analysis is adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower part of each building in an urban scene, needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationship and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification results for the building, vegetation and road classes.

  15. Can we use the q-Gaussian of ambient noise fluctuations as a vulnerability index? A case study in Cultural Heritage buildings

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Koutalonis, Ioannis; Moisidi, Margarita; Chatzopoulos, Georgios

    2018-05-01

    In this work we study, in terms of Tsallis statistical mechanics, the properties of microtremor fluctuations in two church bell towers, which are monuments of cultural heritage, in the city of Chania (Crete, Greece). We show that fluctuations of ambient vibration recordings in the church bell towers follow a q-Gaussian distribution. The behavior of the Tsallis q parameter with the height of the measuring point within the tower and the amplification factors at those points, as extracted from horizontal-to-vertical spectral ratios (HVSR), is presented and discussed. Since q decreases as the amplification factor increases, we suggest q as a vulnerability index: as q decreases, approaching unity, the structural system becomes more vulnerable. The latter approach suggests that introducing ideas of Tsallis statistics could be useful in characterizing extremely complex processes such as those governing the estimation of seismic vulnerability, in which a multidisciplinary approach is required.
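
    As an illustration of the distribution named above, and not the authors' code, the sketch below defines the standard q-Gaussian form and fits it to an empirical density of normalised fluctuations with SciPy; the variable names and starting values are assumptions.

    ```python
    # Illustrative q-Gaussian fit to an empirical fluctuation density.
    import numpy as np
    from scipy.optimize import curve_fit

    def q_gaussian(x, a, beta, q):
        # Standard Tsallis q-Gaussian form, clipped where the base would be negative.
        base = np.maximum(1.0 - (1.0 - q) * beta * x**2, 0.0)
        return a * base ** (1.0 / (1.0 - q))

    # Hypothetical usage: x = bin centres, y = empirical density of normalised fluctuations.
    # popt, _ = curve_fit(q_gaussian, x, y, p0=[y.max(), 1.0, 1.5])
    # q_estimate = popt[2]
    ```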

  16. Global GNSS processing based on the raw observation approach

    NASA Astrophysics Data System (ADS)

    Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten

    2017-04-01

    Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are on a similar level of quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing. Some areas requiring further work have been identified, enabling future improvements of the method.

  17. Statistical process control: A feasibility study of the application of time-series measurement in early neurorehabilitation after acquired brain injury.

    PubMed

    Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias

    2017-01-31

    Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, and thus the evaluation of rehabilitation is complex. The use of time-series measurements is susceptible to statistical change due to process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
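
    A minimal sketch of the statistical process control idea, assuming an individuals chart with conventional 3-sigma limits estimated from a baseline phase; the baseline adjustments described in the study are not reproduced here.

    ```python
    # Individuals control chart: estimate limits from a baseline, flag out-of-control points.
    import numpy as np

    def control_limits(baseline):
        mr = np.abs(np.diff(baseline))        # moving ranges of consecutive measurements
        sigma_hat = mr.mean() / 1.128         # d2 constant for subgroups of size 2
        centre = baseline.mean()
        return centre - 3 * sigma_hat, centre, centre + 3 * sigma_hat

    def signals(series, lcl, ucl):
        # Indices of measurement points outside the control limits (a change signal).
        return [i for i, x in enumerate(series) if x < lcl or x > ucl]
    ```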

  18. Interactive contour delineation and refinement in treatment planning of image‐guided radiation therapy

    PubMed Central

    Zhou, Wu

    2014-01-01

    The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image‐guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them can fulfill the needs of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROIs) and adjust contours manually during IGRT. An interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach for interactive contour delineation is proposed, which allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. The optimized curve can then be revised by moving its control points interactively. Several curve-fitting methods are presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement process based on the maximum gradient magnitude is proposed: all points on the curve are revised automatically towards the positions with maximum gradient magnitude. Experimental results show that Hermite cubic curves and the maximum-gradient-magnitude curve refinement offer superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
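
    To make the curve-fitting step concrete, the sketch below interpolates user-selected control points with cubic Hermite curves, one spline per coordinate over a chord-length parameter; the finite-difference tangents and the SciPy routine are illustrative assumptions, not necessarily the paper's formulation.

    ```python
    # Hermite cubic interpolation of 2D control points along a chord-length parameter.
    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    def hermite_contour(ctrl, n_samples=200):
        ctrl = np.asarray(ctrl, dtype=float)            # (k, 2) control points
        t = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(ctrl, axis=0), axis=1))]
        tangents = np.gradient(ctrl, t, axis=0)         # finite-difference tangents (assumption)
        sx = CubicHermiteSpline(t, ctrl[:, 0], tangents[:, 0])
        sy = CubicHermiteSpline(t, ctrl[:, 1], tangents[:, 1])
        ts = np.linspace(t[0], t[-1], n_samples)
        return np.column_stack([sx(ts), sy(ts)])        # densely sampled curve points
    ```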

  19. A Typology for Modeling Processes in Clinical Guidelines and Protocols

    NASA Astrophysics Data System (ADS)

    Tu, Samson W.; Musen, Mark A.

    We analyzed the graphical representations that are used by various guideline-modeling methods to express process information embodied in clinical guidelines and protocols. From this analysis, we distilled four modeling formalisms and the processes they typically model: (1) flowcharts for capturing problem-solving processes, (2) disease-state maps that link decision points in managing patient problems over time, (3) plans that specify sequences of activities that contribute toward a goal, (4) workflow specifications that model care processes in an organization. We characterized the four approaches and showed that each captures some aspect of what a guideline may specify. We believe that a general guideline-modeling system must provide explicit representation for each type of process.

  20. Annual low-cost monitoring of a coastal site in Greece by an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Hoffmeister, Dirk; Bareth, Georg

    2016-04-01

    Coastal areas are under permanent change and are also the result of past processes, for example sediment transport, accumulation and erosion by normal and extreme waves (storms or tsunamis). As about 23% of the world's population lives within 100 km of a coast, knowledge about coastal processes is important, in particular with respect to possible changes in the near future. Past devastating tsunami events demonstrated profoundly the high vulnerability of coastal areas. In order to estimate the different effects, coastal monitoring approaches are of interest, and several different methods exist for determining changes in the sedimentary budget and coastline configuration. In order to estimate annual changes, we applied terrestrial laser scanning (TLS) in an annual monitoring approach (2009-2011). In 2014, we changed to an approach based on dense imaging and structure-from-motion, applying an unmanned aerial vehicle (UAV) to conduct annual monitoring of a coastal site in western Greece. For this purpose, a GoPro Hero 3+ and a Canon PowerShot S110 mounted on a DJI-Phantom 2 were used. All surveys were conducted with a manually structured image acquisition with large overlap. Ground control points (GCP) were measured by tachymetric surveying. This successful approach was repeated in 2015 with the Canon camera. The measurements of 2014 were checked against an additional TLS survey, which confirmed the high accuracy and the more suitable coverage of the UAV-based data. Likewise, the large image datasets were artificially reduced in order to estimate the most efficient number of images for dense point cloud processing. In addition, the number of GCPs was also decreased for one dataset. Overall, high-resolution digital elevation models with a ground resolution of 10 mm and an equal accuracy were achieved with this low-cost equipment. The data reveal the slight changes at this selected site.

  1. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data

    PubMed Central

    Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-01-01

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565

  2. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data.

    PubMed

    Navarro, Pedro J; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-12-23

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%).
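
    A hedged sketch of the projection step described in both records above: points inside a candidate cube are rasterised onto the XY, XZ and YZ planes and the stacked occupancy images feed a classifier; the grid size and SVM settings are illustrative assumptions, not the paper's configuration.

    ```python
    # Rasterise a candidate cube's points onto three planes and classify with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    def cube_projections(points, bins=32):
        """points: (N, 3) points inside one candidate cube, already centred on the cube."""
        imgs = []
        for i, j in [(0, 1), (0, 2), (1, 2)]:          # XY, XZ, YZ planes
            img, _, _ = np.histogram2d(points[:, i], points[:, j], bins=bins)
            imgs.append(img / max(img.max(), 1.0))     # normalised occupancy image
        return np.concatenate([im.ravel() for im in imgs])

    # Hypothetical usage with labelled candidate cubes:
    # X = np.stack([cube_projections(c) for c in candidate_cubes]); y = labels
    # clf = SVC(kernel="rbf").fit(X, y)
    ```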

  3. A Comparative Study Using Geometric and Vertical Profile Features Derived from Airborne LIDAR for Classifying Tree Genera

    NASA Astrophysics Data System (ADS)

    Ko, C.; Sohn, G.; Remmel, T. K.

    2012-07-01

    We present a comparative study between two different approaches for tree genera classification using descriptors derived from tree geometry and those derived from the vertical profile analysis of LiDAR point data. The different methods provide two perspectives for processing LiDAR point clouds for tree genera identification. The geometric perspective analyzes individual tree crowns, using characteristics of clusters and line segments derived within crowns together with overall tree shapes, to highlight the spatial distribution of LiDAR points within the crown. Conversely, analyzing vertical profiles retrieves information about the point distributions with respect to height percentiles; this perspective emphasizes the importance of point distributions at specific heights, accommodating the decreased point density with respect to depth of canopy penetration by LiDAR pulses. The targeted species include white birch, maple, oak, poplar, white pine and jack pine at a study site northeast of Sault Ste. Marie, Ontario, Canada.

  4. Point process statistics in atom probe tomography.

    PubMed

    Philippe, T; Duguay, S; Grancher, G; Blavette, D

    2013-09-01

    We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive approach to exhibit a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection is also illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
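
    To illustrate the nearest-neighbour idea in a simplified form (not the paper's χ² test), the sketch below compares the observed mean first-nearest-neighbour distance with the value obtained for simulated uniform random point sets of the same density; the Monte Carlo comparison and the box-shaped domain are assumptions.

    ```python
    # Mean 1NN distance of the data versus simulated complete spatial randomness.
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_nn_distance(points):
        d, _ = cKDTree(points).query(points, k=2)   # k=2: the first hit is the point itself
        return d[:, 1].mean()

    def randomness_gap(points, n_sim=100, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = points.min(axis=0), points.max(axis=0)
        sims = [mean_nn_distance(rng.uniform(lo, hi, points.shape)) for _ in range(n_sim)]
        # A markedly smaller observed value suggests clustering (non-random distribution).
        return mean_nn_distance(points), float(np.mean(sims))
    ```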

  5. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used in order to avoid installation problems. A mathematical model for estimating the cutting error is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

  6. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  7. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
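
    As a sketch of the coarse pre-registration stage only (SIFT matching plus a robustly estimated affine model), the snippet below uses standard OpenCV calls; the ratio-test threshold and the RANSAC estimator are common defaults rather than the paper's exact implementation, and the fine piecewise-linear TIN stage is not shown.

    ```python
    # Coarse SIFT + affine pre-registration sketch with OpenCV.
    import cv2
    import numpy as np

    def coarse_affine(input_img, reference_img, ratio=0.75):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(input_img, None)
        k2, d2 = sift.detectAndCompute(reference_img, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good])
        dst = np.float32([k2[m.trainIdx].pt for m in good])
        A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        return A   # 2x3 affine matrix mapping the input image onto the reference image
    ```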

  8. Finding Out Critical Points For Real-Time Path Planning

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    1989-03-01

    Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity for building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph; the idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles, and a path is then found for the reference point through this truly free space, although dealing with rotations turns out to be a major difficulty with the approach, requiring complex geometric algorithms which are computationally expensive; 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]; 3) the combination of techniques 1 and 2 [6], by which the space is divided into the primary convex region, overlap region and obstacle region, obstacle boundaries with attribute values are represented by the vertices of a hypergraph, the primary convex region and overlap region are represented by hyperedges, and the centroids of the overlaps form the critical points; the difficulty here lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world for path planning, their computational cost is not low, and they are not usable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment; easily update the world model to spell out the global-path map and to reflect changes in the robot environment; and decide where the robot must go and which direction the range sensor should point to, in real time with limited resources. The method presented here assumes that the data from the range sensors have been processed by a signal processing unit.
The path planner guides the scan of the range sensor, finds critical points, decides where the robot should go and which point is a potential critical point, generates the path map and monitors the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.

  9. Preliminary Findings of Inflight Icing Field Test to Support Icing Remote Sensing Technology Assessment

    NASA Technical Reports Server (NTRS)

    King, Michael; Reehorst, Andrew; Serke, Dave

    2015-01-01

    NASA and the National Center for Atmospheric Research have developed an icing remote sensing technology that has demonstrated skill at detecting and classifying icing hazards in a vertical column above an instrumented ground station. This technology has recently been extended to provide volumetric coverage surrounding an airport. Building on the existing vertical pointing system, the new method for providing volumetric coverage will utilize a vertical pointing cloud radar, a multifrequency microwave radiometer with azimuth and elevation pointing, and a NEXRAD radar. The new terminal area icing remote sensing system processes the data streams from these instruments to derive temperature, liquid water content, and cloud droplet size for each examined point in space. These data are then combined to ultimately provide icing hazard classification along defined approach paths into an airport.

  10. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonable high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximative, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further experimented on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  11. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    PubMed Central

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-01-01

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood. PMID:28294963
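
    As a hedged sketch of the voxel-based neighbourhood, the snippet below groups points by a regular voxel grid and computes normalised covariance eigenvalues per voxel as scatter/planar/tubular shape cues; the voxel size and the exact feature definitions are assumptions, not the paper's five published feature-vector variants.

    ```python
    # Voxel-based neighbourhood: one PCA-style shape descriptor per occupied voxel.
    import numpy as np

    def voxel_pca_features(points, voxel=0.3):
        index = {}
        for p in points:
            key = tuple(np.floor(p / voxel).astype(int))   # voxel id of each point
            index.setdefault(key, []).append(p)
        feats = {}
        for key, pts in index.items():
            pts = np.asarray(pts)
            if len(pts) < 3:                               # too few points for a covariance
                continue
            evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
            feats[key] = evals / evals.sum()               # normalised eigenvalues as shape cues
        return feats
    ```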

  12. Computing biological functions using BioΨ, a formal description of biological processes based on elementary bricks of actions

    PubMed Central

    Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck

    2010-01-01

    Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make it difficult to handle them for computation. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to calculate on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of systems of simulation. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138

  13. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups. This method partitions the data so that items with the same characteristics are grouped into the same cluster and items with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two initial centroids too close to each other. Therefore, the entropy method is used to determine the starting points for K Means; this method can be used to determine weights and to take a decision from a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets, and the criterion with the highest variation receives the highest weight. The entropy method thus assists the K Means process in determining the starting points, which are usually chosen at random, so that the clustering converges more quickly than the standard K Means process. Using the postoperative patient dataset from the UCI Machine Learning Repository, with only 12 records as a worked example, the entropy-seeded K Means reaches the desired end result in only two iterations.
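
    One plausible reading of the idea, heavily hedged: entropy weights rank the features, a weighted score ranks the data points, and the top-scoring points seed k-means in place of random initial centroids. The scoring rule and the non-negative-feature assumption are illustrative, not necessarily the authors' procedure.

    ```python
    # Entropy-weighted seeding of k-means (assumes non-negative feature values).
    import numpy as np
    from sklearn.cluster import KMeans

    def entropy_weights(X):
        P = X / X.sum(axis=0)                            # column-wise proportions
        P = np.where(P > 0, P, 1e-12)                    # avoid log(0)
        e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
        return (1 - e) / (1 - e).sum()                   # higher weight for more discriminative features

    def entropy_seeded_kmeans(X, k):
        scores = X @ entropy_weights(X)
        seeds = X[np.argsort(scores)[-k:]]               # top-k scored points as initial centroids
        return KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
    ```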

  14. Classification of rice grain varieties arranged in scattered and heap fashion using image processing

    NASA Astrophysics Data System (ADS)

    Bhat, Sudhanva; Panat, Sreedath; N, Arunachalam

    2017-03-01

    Inspection and classification of food grains is a manual process in many food grain processing industries. Automation of such a process would be beneficial for industries facing a shortage of skilled workers. Machine vision techniques are among the popular approaches for developing such automation. Most existing works on the topic deal with identification of the rice variety by analyzing images of well-separated, isolated rice grains, from which many geometrical features can be extracted. This paper proposes techniques to estimate geometrical parameters from images of scattered as well as heaped rice grains, where the grain boundaries are not clearly identifiable. A methodology based on convexity is proposed to separate touching rice grains in the scattered rice grain images and obtain their geometrical parameters. In the case of the heaped arrangement, a Pixel-Distance Contribution Function is defined and used to obtain points inside rice grains and then to find their boundary points. These points are fitted with the equation of an ellipse to estimate the grains' lengths and breadths. The proposed techniques are applied to images of scattered and heaped rice grains of different varieties. It is shown that each variety gives a unique set of results.

  15. Yield strength mapping in the cross section of ERW pipes considering kinematic hardening and residual stress

    NASA Astrophysics Data System (ADS)

    Kim, Dongwook; Quagliato, Luca; Lee, Wontaek; Kim, Naksoo

    2017-09-01

    In ERW (electric resistance welding) pipe manufacturing, material properties, process conditions and settings strongly influence the mechanical performance of the final product and can make it non-uniform, varying from point to point in the pipe. The present research work proposes an integrated numerical model for the study of the whole ERW process, considering the roll forming, welding and sizing stations, which makes it possible to infer the influence of the process parameters on the final quality of the pipe, in terms of final shape and residual stress. The developed numerical model has been initially validated by comparing the dimensions of the pipe derived from the simulation results with those of industrial production, proving the reliability of the approach. Afterwards, by varying the process parameters in the numerical simulation, namely the roll speed, the sizing ratio and the friction factor, the influence on the residual stress in the pipe, at the end of the process and after each station, is studied and discussed throughout the paper.

  16. A study on using pre-forming blank in single point incremental forming process by finite element analysis

    NASA Astrophysics Data System (ADS)

    Abass, K. I.

    2016-11-01

    The Single Point Incremental Forming (SPIF) process is a forming technique for sheet material based on layered manufacturing principles. The edges of the sheet material are clamped while the forming tool is moved along the tool path, and a CNC milling machine is used to manufacture the product. SPIF involves extensive plastic deformation, and the description of the process is further complicated by highly nonlinear boundary conditions, namely contact and frictional effects. Due to the complex nature of these models, numerical approaches dominated by Finite Element Analysis (FEA) are now in widespread use. The paper presents the data and main results of a study, performed through FEA, on the effect of using a pre-formed blank in SPIF. The SPIF process has been studied under certain process conditions relating to the test workpiece, tool, etc., applying ANSYS 11. The results show that the simulation model can predict an ideal profile of the processing track, the behaviour of the tool-workpiece contact, and the product accuracy, by evaluating its thickness, surface strain and the stress distribution along the deformed blank section during the deformation stages.

  17. Do we need a voxel-based approach for LiDAR data in geomorphology?

    NASA Astrophysics Data System (ADS)

    Székely, Balázs; Dorninger, Peter; Faber, Robert; Nothegger, Clemens

    2010-05-01

    Generations of geomorphologists have developed a multi-faceted approach to modelling the Earth's (and planetary) surface and the corresponding processes. This set of models is built on data, and on a conspicuously increasing amount of it: geomorphologists naturally want ever more accurate and higher-resolution data on, or related to, the Earth's surface. This evolution also means that the studied boundary is no longer a single surface; it is mostly treated as a 2.5D object, and sometimes as a true 3D object. LiDAR technology can meet this challenge, as it fulfils the accuracy and resolution requirements. Although it is still somewhat expensive, more and more areas will be scanned, and in some regions the topographic point clouds are already multitemporal (which of course raises further processing and evaluation problems). It is clear that, for geomorphologically interesting areas, very dense and repeatedly acquired multitemporal LiDAR data will be available in the near future. These data sets will differ in point density, accuracy, acquisition technique (conventional or full-waveform) and, perhaps most importantly, in the actual state of the surface at the time of acquisition. As with satellite imagery, we will soon have to face a LiDAR data integration problem. What type of surface or surfaces can be derived from this multitude of data sources with acceptable ambiguity? What conclusions can be drawn from data originally acquired for other purposes with different acquisition concepts? Will a coverage of 100-200 points/m² actually be advantageous for geomorphic use? Once collected, these data are too expensive not to be integrated into further analyses. Consequently, we need a data reduction concept that effectively decreases the computing capacity needed to store, process, and visualize the results. To reduce the amount of raw data for further applications, continuous model surfaces are generally derived from the point clouds using interpolation, typically grid-based or triangulation models. Typical products are Digital Surface Models (DSM), representing the whole topography including all natural (e.g. vegetation) and artificial (e.g. buildings) objects, and Digital Terrain Models (DTM), representing the bare topography only. In the visual computing industry, voxel-based approaches are quite common, and although they are straightforward from a data reduction point of view, they are hardly ever applied in a geomorphic context. One argument against them is that we are mostly interested in a surface, not a volume. Strictly speaking this is true; however, considering how the data, and especially the ground points, are actually derived, what we sample is in fact a volume with a certain accuracy. The position of this "relatively thin" volume also varies, especially in mountainous areas, where the accuracy depends on the slope angle, particularly for integrated data sets that mix sources, e.g. conventional (first echo/last echo) and full-waveform data.
These point clouds also carry attributes that could be very valuable, but during integration their meaning may be lost or may not carry over into the merged data set. Large-scale application of such approaches is mainly prevented by the sheer volume of data, which makes on-the-fly processing a challenging task. To overcome these restrictions and to exploit the new possibilities offered by waveform analysis, we propose a voxel-based data representation. Its multichannel/multilayer design, with an a priori unlimited number of layers, allows an unlimited number of additional parameters to be stored per point. We expect such a voxel structure to make it possible to represent and analyze huge datasets covering large areas (e.g., connected regions that are geologically relevant to analyze at once) within practical processing times, bridging the gap between the original point cloud and the user and interpretation level. The challenge will be to reduce the amount of data significantly by means of the proposed structure while preserving the content of the original data.
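
    As an illustration of the kind of multichannel voxel structure described above (not the authors' implementation), the sketch below bins LiDAR points into a sparse voxel grid and keeps an open-ended set of per-voxel attribute layers; the voxel size and layer names are arbitrary choices for the example.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, attributes, voxel_size=1.0):
    """Bin a point cloud into a sparse voxel grid with arbitrary attribute layers.

    points     : (N, 3) array of x, y, z coordinates.
    attributes : dict mapping a layer name (e.g. 'intensity', 'echo_width')
                 to an (N,) array of per-point values.
    Returns a dict keyed by integer voxel index (ix, iy, iz); each voxel stores
    the point count and the running mean of every attribute layer.
    """
    idx = np.floor(np.asarray(points) / voxel_size).astype(int)
    grid = defaultdict(lambda: {"count": 0, **{k: 0.0 for k in attributes}})
    for i, key in enumerate(map(tuple, idx)):
        cell = grid[key]
        cell["count"] += 1
        for name, values in attributes.items():
            # running mean keeps memory per voxel constant, whatever the point density
            cell[name] += (values[i] - cell[name]) / cell["count"]
    return dict(grid)

# Toy usage: 1000 random points with two attribute layers, 2 m voxels
pts = np.random.rand(1000, 3) * 50
layers = {"intensity": np.random.rand(1000), "echo_width": np.random.rand(1000)}
voxels = voxelize(pts, layers, voxel_size=2.0)
print(len(voxels), "occupied voxels")
```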

  18. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular class of template matching method. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the search. The method consists of offline model construction and online matching. During model construction, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour extracted from the template image; the three points are chosen so that they divide the template contour into three equal parts. The distance image is obtained by a distance transform, so that each pixel of the distance image holds the distance from that pixel to the nearest point on the template contour. During matching, triples of the searching image are created with the same rule as the model triples. Using a triangle similarity measure that is invariant to rotation, translation, and scaling, the triples corresponding to the model triples are found, yielding initial RST (rotation, translation, and scaling) parameters that map the searching contour onto the template contour. To speed up the search, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly with simple additions and multiplications. In the fine searching stage, the initial RST parameters are discretized and refined to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient and can be used in many real-time applications.
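
    To make the verification step concrete, the sketch below (an illustration, not the paper's code) builds a distance image with scipy's Euclidean distance transform and scores a candidate contour by the mean distance of its transformed points; the RST parameters are assumed to come from the triple-matching stage.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_image(template_contour, shape):
    """Distance image: each pixel holds the distance to the nearest contour pixel."""
    mask = np.ones(shape, dtype=bool)
    rows, cols = template_contour[:, 1], template_contour[:, 0]
    mask[rows, cols] = False                 # zeros on the contour
    return distance_transform_edt(mask)

def mean_contour_distance(dist_img, contour, angle, scale, tx, ty):
    """Apply RST parameters to a candidate contour and average the distance image."""
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])
    pts = np.round(contour @ R.T + np.array([tx, ty])).astype(int)
    h, w = dist_img.shape
    pts[:, 0] = np.clip(pts[:, 0], 0, w - 1)   # x -> column
    pts[:, 1] = np.clip(pts[:, 1], 0, h - 1)   # y -> row
    return dist_img[pts[:, 1], pts[:, 0]].mean()

# Toy usage: a square template contour and a slightly shifted candidate
t = np.arange(20, 80)
square = np.vstack([np.column_stack([t, np.full_like(t, 20)]),
                    np.column_stack([np.full_like(t, 80), t]),
                    np.column_stack([t, np.full_like(t, 80)]),
                    np.column_stack([np.full_like(t, 20), t])])
dist_img = build_distance_image(square, (100, 100))
print(mean_contour_distance(dist_img, square, 0.0, 1.0, 2.0, 0.0))  # small mean distance
```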

  19. Enhancing Services to the Rural Elderly through Primary Care Centers.

    ERIC Educational Resources Information Center

    Leighton, Jeannette; Sprague, Patricia

    This paper describes a systematic, coordinated approach to the delivery of health and social services to the rural elderly of Maine provided by the Kennebec Valley Regional Health Agency. Four points of the model are described which distinguish it from other models of coordination: (1) a strong medical orientation in the assessment process; (2)…

  20. SWIFT Intensive Technical Assistance Process. Technical Assistance Brief #1

    ERIC Educational Resources Information Center

    Sailor, Wayne; McCart, Amy; McSheehan, Michael; Mitchiner, Melinda; Quirk, Carol

    2014-01-01

    The national center on Schoolwide Integrated Framework for Transformation (SWIFT Center) is now approaching the halfway point in its first full year of providing intensive technical assistance (TA) to 68 schools in 20 local educational agencies across five states. The purpose of this brief is to provide a thumbnail sketch of how this TA process…

  1. Resource impact factor (RIF) approach to optimal use of energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.R.

    1976-10-01

    A concept called the Resource Impact Factor (RIF) is presented as a means to quantify the social value of energy resources for buildings. The flow of various raw resources from the point of extraction to the building project boundary is shown, and a flow chart indicating the decision making process is given. (PMA)

  2. A/Political Education: A Survey of Quebec Students' Perceptions of Their Citizenship Education

    ERIC Educational Resources Information Center

    Fournier-Sylvester, Nicole

    2014-01-01

    This survey of 370 recent high school graduates reveals that history and citizenship courses in Quebec focus on cultural and religious viewpoints, favour a transmission approach to learning, and fail to connect the political process to students' concerns and interests. Without a clear conception of citizenship as a reference point, this curriculum…

  3. On the Nature of Emotion Regulation

    ERIC Educational Resources Information Center

    Campos, Joseph J.; Frankel, Carl B.; Camras, Linda

    2004-01-01

    This paper presents a unitary approach to emotion and emotion regulation, building on the excellent points in the lead article by Cole, Martin, and Dennis (this issue), as well as the fine commentaries that follow it. It begins by stressing how, in the real world, the processes underlying emotion and emotion regulation appear to be largely one and…

  4. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images at different scales and with different land cover types. The accuracy evaluation is based on comparing the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) with those derived from the POS.

  5. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error, and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimal objective function corresponds to optimal performance of the system. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate results.
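
    As a toy illustration of tuning membership-function parameters with a numerical optimizer (not the authors' controller, plant, or constraints), the sketch below parameterizes a three-rule fuzzy controller for a first-order plant and minimizes a quadratic cost with scipy; all names, the plant model, and the cost weights are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def tri(x, center, width):
    """Triangular membership function with unit height."""
    return np.maximum(0.0, 1.0 - abs(x - center) / width)

def fuzzy_controller(error, params):
    """Three-rule controller: NEG / ZERO / POS error mapped to output singletons."""
    width, u_max = params
    mu = np.array([tri(error, -width, width), tri(error, 0.0, width), tri(error, width, width)])
    singletons = np.array([u_max, 0.0, -u_max])          # push the state back toward zero
    return float(mu @ singletons / (mu.sum() + 1e-9))    # weighted-average defuzzification

def closed_loop_cost(params, x0=1.0, dt=0.05, steps=200):
    """Quadratic cost of regulating a toy first-order plant x' = -x + u to zero."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = fuzzy_controller(x, params)   # error = x - 0
        x += dt * (-x + u)
        cost += dt * (x**2 + 0.01 * u**2)
    return cost

# The design vector [width, u_max] is iteratively changed to minimize the cost
result = minimize(closed_loop_cost, x0=[1.0, 1.0],
                  bounds=[(0.1, 5.0), (0.0, 5.0)], method="L-BFGS-B")
print("tuned parameters:", result.x, "cost:", result.fun)
```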

  6. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as that of the proverbial "department of reformatting". The notion of a coverage can accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The Holy Grail envisioned is that these approaches can be merged into a single service interface at some time. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.

  7. Reconstruction and analysis of hybrid composite shells using meshless methods

    NASA Astrophysics Data System (ADS)

    Bernardo, G. M. S.; Loja, M. A. R.

    2017-06-01

    The importance of developing viable models to predict the behaviour of structures, which in some cases possess complex geometries, is growing in scientific areas ranging from civil and mechanical engineering to architecture and biomedical devices. In these fields, the research effort devoted to finding efficient approaches for fitting laser-scanning point clouds to a desired surface has been increasing, making it possible to model the features of as-built/as-is structures and components. However, combining surface reconstruction with the implementation of a structural analysis model is not a trivial task. Although there are works that address these phases separately, there is still a need for approaches able to interconnect them efficiently. Achieving a representative geometric model that can subsequently be submitted to a structural analysis within a similar platform is therefore a fundamental step towards an expeditious processing workflow. This work presents an integrated methodology based on meshless approaches to reconstruct shells described by point clouds and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they impose no specific spatial or geometric requirements and depend only on the distances between points. Details of the formulation are given, together with a set of illustrative examples covering the reconstruction of cylindrical and double-curvature shells and their subsequent analysis.

  8. 3D sensing for machine guidance in meat cutting applications

    NASA Astrophysics Data System (ADS)

    Daley, Wayne; Britton, Doug; Usher, Colin; Diao, Mamadou; Ruffin, Kevin

    2005-11-01

    Most cutting and deboning operations in meat processing require accurate cuts be made to obtain maximum yield and ensure food safety. This is a significant concern for purveyors of deboned product. This task is made more difficult by the variability that is present in most natural products. The specific application of interest in this paper is the production of deboned poultry breast. This is typically obtained from a cut of the broiler called a 'front half' that includes the breast and the wings. The deboning operation typically consists of a cut that starts at the shoulder joint and then continues along the scapula. Attentive humans with training do a very good job of making this cut. The breast meat is then removed by pulling on the wings. Inaccurate cuts lead to poor yield (amount of boneless meat obtained relative to the weight of the whole carcass) and increase the probability that bone fragments might end up in the product. As equipment designers seek to automate the deboning operation, the cutting task has been a significant obstacle to developing automation that maximizes yield without generating unacceptable levels of bone fragments. The current solution is to sort the bone-in product into different weight ranges and then to adjust the deboning machines to the average of these weight ranges. We propose an approach for obtaining key cut points by extrapolation from external reference points based on the anatomy of the bird. We show that this approach can be implemented using a stereo imaging system, and the accuracy in locating the cut points of interest is significantly improved. This should result in more accurate cuts and with this concomitantly improved yield while reducing the incidence of bones. We also believe the approach could be extended to the processing of other species.

  9. Fabrication of cross-shaped Cu-nanowire resistive memory devices using a rapid, scalable, and designable inorganic-nanowire-digital-alignment technique (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Wentao; Lee, Yeongjun; Min, Sung-Yong; Park, Cheolmin; Lee, Tae-Woo

    2016-09-01

    Resistive random-access memory (RRAM) is a candidate next-generation nonvolatile memory due to its high access speed, high density, and ease of fabrication. In particular, cross-point access allows cross-bar arrays that lead to high-density cells in a two-dimensional planar structure. Such designs could be compatible with the aggressive scaling down of memory devices, but existing methods such as optical or e-beam lithographic approaches are too complicated. One-dimensional inorganic nanowires (i-NWs) are regarded as ideal components of nanoelectronics to circumvent the limitations of conventional lithographic approaches. However, post-growth alignment of these i-NWs precisely over a large area with individual control is still a difficult challenge. Here, we report a simple, inexpensive, and rapid method to fabricate two-dimensional arrays of perpendicularly-aligned, individually-conductive Cu-NWs with a nanometer-scale CuxO layer sandwiched at each cross point, by using an inorganic-nanowire-digital-alignment technique (INDAT) and a one-step reduction process. In this approach, the oxide layer is self-formed and self-patterned, so conventional deposition and lithography are not necessary. INDAT eliminates the difficulties of alignment and scalable fabrication that are encountered when using currently available inorganic-nanowire techniques. This simple process facilitates fabrication of cross-point nonvolatile memristor arrays. Fabricated arrays had reproducible resistive switching behavior, a high on/off current ratio (Ion/Ioff) of ~10^6, and extensive cycling endurance. This is the first report of memristors in which the resistive switching oxide layer is self-formed, self-patterned, and self-positioned; we envision that the new features of the technique will provide great opportunities for future nano-electronic circuits.

  10. A strongly goal-directed close-range vision system for spacecraft docking

    NASA Technical Reports Server (NTRS)

    Boyer, Kim L.; Goddard, Ralph E.

    1991-01-01

    In this presentation, we will propose a strongly goal-oriented stereo vision system to establish proper docking approach motions for automated rendezvous and capture (AR&C). From an input sequence of stereo video image pairs, the system produces a current best estimate of: contact position; contact vector; contact velocity; and contact orientation. The processing demands imposed by this particular problem and its environment dictate a special case solution; such a system should necessarily be, in some sense, minimalist. By this we mean the system should construct a scene description just sufficiently rich to solve the problem at hand and should do no more processing than is absolutely necessary. In addition, the imaging resolution should be just sufficient. Extracting additional information and constructing higher level scene representations wastes energy and computational resources and injects an unnecessary degree of complexity, increasing the likelihood of malfunction. We therefore take a departure from most prior stereopsis work, including our own, and propose a system based on associative memory. The purpose of the memory is to immediately associate a set of motor commands with a set of input visual patterns in the two cameras. That is, rather than explicitly computing point correspondences and object positions in world coordinates and trying to reason forward from this information to a plan of action, we are trying to capture the essence of reflex behavior through the action of associative memory. The explicit construction of point correspondences and 3D scene descriptions, followed by online velocity and point of impact calculations, is prohibitively expensive from a computational point of view for the problem at hand. Learned patterns on the four image planes, left and right at two discrete but closely spaced instants in time, will be bused directly to infer the spacecraft reaction. This will be a continuing online process as the docking collar approaches.

  11. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, as well as information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design and at the event times. A method is proposed for fitting a mixed Poisson point process model for the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
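
    As a hedged sketch of the intensity structure described above (the notation is chosen here and is not necessarily the authors'), a mixed-Poisson intensity with a unit-mean gamma frailty and time-varying covariates, together with the Poisson-type estimating equation whose integral is replaced in practice by a design-unbiased estimate built from the probability-sampled assessment times, can be written as:

```latex
% Subject-level intensity with gamma frailty Z_i and covariates x_i(t)
\lambda_i(t) = Z_i\,\lambda_0 \exp\{\beta^{\top} x_i(t)\},
\qquad Z_i \sim \mathrm{Gamma}(\alpha,\alpha),\; \mathbb{E}[Z_i]=1.

% Poisson-type estimating equation over event times t_{ij} and follow-up [0, T_i];
% the integral is the term replaced by a design-unbiased estimator
U(\beta) = \sum_i \Biggl[ \sum_{j} x_i(t_{ij})
        - \int_0^{T_i} x_i(t)\,\lambda_0\exp\{\beta^{\top} x_i(t)\}\,dt \Biggr] = 0.
```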

  12. Optimization of thermal processing of canned mussels.

    PubMed

    Ansorena, M R; Salvadori, V O

    2011-10-01

    The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for predicting the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were obtained experimentally, and prediction equations were fitted to account for other can dimensions and sampling intervals. This model was coupled with an optimization routine in order to search for retort temperature profiles that maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in the surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its use in industry.
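
    To illustrate the discrete transfer function idea (a generic sketch, not the authors' fitted model), the code below predicts the slowest-heating-point temperature as a discrete convolution of the retort temperature history with transfer-function coefficients and evaluates a standard lethality integral; the coefficients, reference temperature, and z-value are placeholders.

```python
import numpy as np

def predict_core_temperature(retort_T, g, T0=20.0):
    """Discrete transfer-function prediction of the slowest-heating-point temperature.

    retort_T : retort temperature at each sampling interval (deg C)
    g        : transfer-function (impulse-response) coefficients, assumed to sum to ~1
    T0       : initial product temperature
    The core temperature is modelled as T0 plus the convolution of the
    retort-temperature rise with the coefficients g.
    """
    rise = np.asarray(retort_T) - T0
    return T0 + np.convolve(rise, g)[: len(rise)]

def f0_lethality(core_T, dt_min, T_ref=121.1, z=10.0):
    """Standard F0 integral: sum of 10**((T - Tref)/z) over time, in minutes."""
    return float(np.sum(10.0 ** ((np.asarray(core_T) - T_ref) / z)) * dt_min)

# Constant retort temperature (CRT) profile at 135 deg C, 1-min sampling, 60 min
dt = 1.0
retort = np.full(60, 135.0)
g = np.full(30, 1.0 / 30.0)   # placeholder response spread over 30 min, not a fitted model
core = predict_core_temperature(retort, g)
print("final core T: %.1f C, F0: %.1f min" % (core[-1], f0_lethality(core, dt)))
```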

  13. Point process methods in epidemiology: application to the analysis of human immunodeficiency virus/acquired immunodeficiency syndrome mortality in urban areas.

    PubMed

    Quesada, Jose Antonio; Melchor, Inmaculada; Nolasco, Andreu

    2017-05-26

    The analysis of spatio-temporal patterns of disease or death in urban areas has mainly been carried out with an ecological-study approach. Such designs have limitations, such as the ecological fallacy and instability when cases are few. The objective of this study was to apply the point process methodology, as a complement to that of aggregated data, to study HIV/AIDS mortality in men in the city of Alicante (Spain). A case-control study of residents of the city during the period 2004-2011 was designed. Cases were men who died from HIV/AIDS, and controls represented the general population, matched by age to the cases. The risk surfaces of death over the city were estimated using the log-risk function of intensities, and we contrasted their temporal variation between the two periods. Significant high-risk areas of death from HIV/AIDS, coinciding with the most deprived areas of the city, were detected. No significant spatial change in the areas at risk was detected between the periods studied. The point process methodology is a useful tool to analyse the patterns of death by HIV/AIDS in urban areas.
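
    A common way to estimate such a log-risk surface is the log ratio of case and control intensities; the sketch below uses scipy's Gaussian kernel density estimator with toy data, so the bandwidths and coordinates are illustrative and do not come from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_relative_risk(cases_xy, controls_xy, grid_x, grid_y):
    """Log ratio of case and control kernel density estimates on a grid.

    cases_xy, controls_xy : (2, N) arrays of x, y coordinates.
    Returns a (len(grid_y), len(grid_x)) surface; positive values indicate
    areas where the case intensity exceeds the control intensity.
    """
    kde_cases = gaussian_kde(cases_xy)
    kde_controls = gaussian_kde(controls_xy)
    xx, yy = np.meshgrid(grid_x, grid_y)
    pts = np.vstack([xx.ravel(), yy.ravel()])
    eps = 1e-12                                  # guard against log(0)
    rr = np.log(kde_cases(pts) + eps) - np.log(kde_controls(pts) + eps)
    return rr.reshape(xx.shape)

# Toy data: cases clustered near (2, 2), controls spread over the whole area
rng = np.random.default_rng(0)
cases = rng.normal([2, 2], 0.5, size=(200, 2)).T
controls = rng.uniform(0, 5, size=(1000, 2)).T
surface = log_relative_risk(cases, controls, np.linspace(0, 5, 50), np.linspace(0, 5, 50))
print("max log relative risk:", surface.max().round(2))
```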

  14. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many point spread functions, calibrated in advance at different depths, can be used to restore images in a short time; this strategy can be applied to non-blind deconvolution methods in general to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method in simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained from CODE V can be used for processing real images from a single-lens camera, which provides a convenient and accurate alternative for obtaining the point spread functions of single-lens cameras.
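
    As a minimal sketch of non-blind deconvolution with a calibrated PSF, the code below uses a frequency-domain Wiener filter as a stand-in for the paper's sparse-representation method; the PSF, image, and noise level are made up for the example.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Small normalized Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def pad_psf(psf, shape):
    """Embed the PSF into a full-size array with its center at the origin."""
    padded = np.zeros(shape)
    k = psf.shape[0]
    padded[:k, :k] = psf
    return np.roll(padded, (-(k // 2), -(k // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf_full, nsr=1e-2):
    """Frequency-domain Wiener filter with a known (calibrated) PSF."""
    H = np.fft.fft2(psf_full)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy usage: blur a random "scene" with the PSF, then restore it
rng = np.random.default_rng(1)
scene = rng.random((128, 128))
psf_full = pad_psf(gaussian_psf(15, 2.0), scene.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf_full)))
restored = wiener_deconvolve(blurred, psf_full, nsr=1e-3)
print("mean restoration error:", np.abs(restored - scene).mean().round(4))
```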

  15. Taguchi Experimental Design for Cleaning PWAs with Ball Grid Arrays

    NASA Technical Reports Server (NTRS)

    Bonner, J. K.; Mehta, A.; Walton, S.

    1997-01-01

    Ball grid arrays (BGAs), and other area array packages, are becoming more prominent as a way to increase component pin count while avoiding the manufacturing difficulties inherent in processing quad flat packs (QFPs)...Cleaning printed wiring assemblies (PWAs) with BGA components mounted on the surface is problematic...Currently, a low flash point semi-aqueous material, in conjunction with a batch cleaning unit, is being used to clean PWAs. The approach taken at JPL was to investigate the use of (1) semi-aqueous materials having a high flash point and (2) aqueous cleaning involving a saponifier.

  16. Applying the Tuple Space-Based Approach to the Simulation of the Caspases, an Essential Signalling Pathway.

    PubMed

    Cárdenas-García, Maura; González-Pérez, Pedro Pablo

    2013-03-01

    Apoptotic cell death plays a crucial role in development and homeostasis. This process is driven by mitochondrial permeabilization and activation of caspases. In this paper we adopt a tuple spaces-based modelling and simulation approach and show how it can be applied to the simulation of this intracellular signalling pathway. Specifically, we are working to explore and understand the complex interaction patterns of the apoptotic caspases and the role of the mitochondria. As a first approximation, using the tuple spaces-based in silico approach, we model and simulate both the extrinsic and intrinsic apoptotic signalling pathways and the interactions between them. During apoptosis, mitochondrial proteins released from the mitochondria into the cytosol are decisively involved in the process. Once the decision to die is made, there is normally no return from this point; cancer cells, however, offer resistance to this mitochondrial induction.

  18. Using ontology network structure in text mining.

    PubMed

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
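
    To make the hybrid idea concrete, the sketch below (illustrative only; the graph, terms, and weighting scheme are invented, and networkx's PageRank stands in for the paper's adapted algorithms) computes term importance over a small ontology graph and uses the scores to re-weight term frequencies.

```python
import networkx as nx
from collections import Counter

# A tiny, made-up ontology fragment: edges link related terms
ontology = nx.DiGraph([
    ("nicotine", "smoking"), ("tobacco", "smoking"),
    ("smoking", "cessation"), ("smoking", "lung_disease"),
    ("cessation", "patch"),
])
importance = nx.pagerank(ontology)           # graph-based term importance

def weighted_term_vector(tokens, importance, default=0.1):
    """Bag-of-words counts scaled by ontology-derived term importance."""
    counts = Counter(tokens)
    return {t: c * importance.get(t, default) for t, c in counts.items()}

doc = "patient reports smoking two packs daily considering cessation patch".split()
print(weighted_term_vector(doc, importance))
```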

  19. eLaunch Hypersonics: An Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley

    2010-01-01

    This presentation describes a new space launch system that NASA can and should develop. This approach can significantly reduce ground processing and launch costs, improve reliability, and broaden the scope of what we do in near earth orbit. The concept (not new) is to launch a re-usable air-breathing hypersonic vehicle from a ground based electric track. This vehicle launches a final rocket stage at high altitude/velocity for the final leg to orbit. The proposal here differs from past studies in that we will launch above Mach 1.5 (above transonic pinch point) which further improves the efficiency of air breathing, horizontal take-off launch systems. The approach described here significantly reduces cost per kilogram to orbit, increases safety and reliability of the boost systems, and reduces ground costs due to horizontal-processing. Finally, this approach provides significant technology transfer benefits for our national infrastructure.

  20. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

    Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
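
    For readers unfamiliar with the sigma-point idea, the sketch below propagates a Gaussian parameter distribution through a nonlinear model response using standard unscented-transform weights; the formulas are generic textbook ones, and the model and numbers are illustrative rather than the paper's networks.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights for a Gaussian."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # columns scale the spread
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc

def propagate(f, mean, cov):
    """Approximate mean and variance of f(theta) for theta ~ N(mean, cov)."""
    pts, wm, wc = sigma_points(mean, cov)
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    y_var = wc @ (ys - y_mean) ** 2
    return y_mean, y_var

# Toy 'model response': steady state of a saturating reaction, y = k1*s/(k2 + s)
response = lambda theta, s=2.0: theta[0] * s / (theta[1] + s)
mu = np.array([1.0, 0.5])
Sigma = np.diag([0.04, 0.01])
print(propagate(response, mu, Sigma))   # approximate mean and variance of the response
```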

  1. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral LiDAR system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are combined with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne LiDAR sensor, the Optech Titan system collects point cloud data in three channels: 532 nm visible (green), 1064 nm near-infrared (NIR), and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral LiDAR point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral LiDAR point clouds for 3D land cover classification.

  2. Advanced Decontamination Technologies: High Hydrostatic Pressure on Meat Products

    NASA Astrophysics Data System (ADS)

    Garriga, Margarita; Aymerich, Teresa

    The increasing demand for "natural" foodstuffs, free from chemical additives and preservatives, has triggered novel approaches in food technology developments. In the last decade, practical use of high-pressure processing (HPP) made this emerging non-thermal technology very attractive from a commercial point of view. Despite the fact that the investment is still high, the resulting value-added products, with an extended and safe shelf-life, will fulfil the wishes of consumers who prefer preservative-free minimally processed foods, retaining sensorial characteristics of freshness. Moreover, unlike thermal treatment, pressure treatment is not time/mass dependent, thus reducing the time of processing.

  3. Incorporating DSA in multipatterning semiconductor manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet

    2015-03-01

    Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers; but this increases the cost of the new processes especially for low volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV or E-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to come up with decomposition approaches that increase the design flexibility, allowing different size holes or bar structures by independently changing the process for every patterning step. A simple approach to integrate multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence whether it is: grouping-then-decomposition or decomposition-then-grouping; and each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance and we highlight the need for custom DSA-aware MP algorithms.

  4. Evaluating the substantive effectiveness of SEA: Towards a better understanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doren, D. van; Driessen, P.P.J., E-mail: p.driessen@uu.nl; Schijf, B.

    Evaluating the substantive effectiveness of strategic environmental assessment (SEA) is vital in order to know to what extent the tool fulfills its purposes and produces expected results. However, the studies that have evaluated the substantive effectiveness of SEA produce varying outcomes as regards the tool's contribution to decision-making and have used a variety of approaches to appraise its effectiveness. The aim of this article is to discuss the theoretical concept of SEA substantive effectiveness and to present a new approach that can be applied for evaluation studies. The SEA effectiveness evaluation framework that will be presented is composed of concepts of, and approaches to, SEA effectiveness derived from SEA literature and planning theory. Lessons for evaluation can be learned from planning theory in particular, given its long history of analyzing and understanding how sources of information and decisions affect (subsequent) decision-making. Key concepts of this new approach are 'conformance' and 'performance'. In addition, this article presents a systematic overview of process and context factors that can explain SEA effectiveness, derived from SEA literature. To illustrate the practical value of our framework for the assessment and understanding of substantive effectiveness of SEA, three Dutch SEA case studies are examined. The case studies have confirmed the usefulness of the SEA effectiveness assessment framework. The framework proved helpful in order to describe the cumulative influence of the three SEAs on decision-making and the ultimate plan. Highlights: A new framework to evaluate the substantive effectiveness of SEA is presented. The framework is based on two key concepts: 'conformance' and 'performance'. The practical applicability of the framework is demonstrated by three Dutch cases. The framework allows for a more systematic understanding of SEA effectiveness. Finally, this paper presents explanations for SEA effectiveness.

  5. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  7. Controlling organic chemical hazards in food manufacturing: a hazard analysis critical control points (HACCP) approach.

    PubMed

    Ropkins, K; Beck, A J

    2002-08-01

    Hazard analysis by critical control points (HACCP) is a systematic approach to the identification, assessment, and control of hazards. Effective HACCP requires the consideration of all hazards, i.e., chemical, microbiological, and physical. However, to date most 'in-place' HACCP procedures have tended to focus on the control of microbiological and physical food hazards. In general, the chemical component of HACCP procedures is either ignored or limited to applied chemicals, e.g., food additives and pesticides. In this paper we discuss the application of HACCP to a broader range of chemical hazards, using organic chemical contaminants as examples, and the problems that are likely to arise in the food manufacturing sector. Chemical HACCP procedures are likely to offer many of the advantages previously identified for microbiological HACCP procedures: they are more effective, efficient, and economical than conventional end-point testing methods. However, the high costs of analytical monitoring of chemical contaminants and a limited understanding of formulation and process optimisation as means of controlling chemical contamination of foods are likely to prevent chemical HACCP from becoming as effective as microbiological HACCP.

  8. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
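
    To make the process capability idea concrete, the sketch below applies the textbook Cp/Cpk formulas to made-up measurement data with hypothetical specification limits; it is not actual shuttle processing data.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Textbook process capability indices for a measured characteristic.

    Cp  compares the specification width to the process spread (6 sigma);
    Cpk additionally penalizes off-center processes.
    """
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical checkpoint measurements with specification limits 9.0 .. 11.0
rng = np.random.default_rng(42)
data = rng.normal(loc=10.2, scale=0.25, size=50)
cp, cpk = process_capability(data, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cpk < Cp signals an off-center process
```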

  9. Biological tooth replacement

    PubMed Central

    Sartaj, Rachel; Sharpe, Paul

    2006-01-01

    Teeth develop from a series of reciprocal interactions that take place between epithelium and mesenchyme during development of the mouth that begin early in mammalian embryogenesis. The molecular control of key processes in tooth development such as initiation, morphogenesis and cytodifferentiation are being increasingly better understood, to the point where this information can be used as the basis for approaches to produce biological replacement teeth (BioTeeth). This review outlines the current approaches, ideas and progress towards the production of BioTeeth that could form an alternative method for replacing lost or damaged teeth. PMID:17005022

  10. Formation Flying With Decentralized Control in Libration Point Orbits

    NASA Technical Reports Server (NTRS)

    Folta, David; Carpenter, J. Russell; Wagner, Christoph

    2000-01-01

    A decentralized control framework is investigated for applicability of formation flying control in libration orbits. The decentralized approach, being non-hierarchical, processes only direct measurement data, in parallel with the other spacecraft. Control is accomplished via linearization about a reference libration orbit with standard control using a Linear Quadratic Regulator (LQR) or the GSFC control algorithm. Both are linearized about the current state estimate as with the extended Kalman filter. Based on this preliminary work, the decentralized approach appears to be feasible for upcoming libration missions using distributed spacecraft.
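
    As a generic reminder of how an LQR gain is obtained for a linearized model (standard continuous-time formulas via scipy; the matrices here are an arbitrary double-integrator stand-in, not the libration-orbit dynamics or the GSFC algorithm):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the Riccati equation and return K, with u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Double-integrator stand-in for one axis of linearized relative motion
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])        # penalize position error more than velocity
R = np.array([[0.1]])           # control-effort weight
K = lqr_gain(A, B, Q, R)
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```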

  11. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularities of urban land use and the increase in impervious surface area make urban non-point source pollution different from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the practices commonly applied to control urban non-point source pollution, mainly local remediation measures that control the pollutants carried in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant removal processes, in order to define the relationships between the landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into the planning so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  12. Determination of dew point conditions for CO2 with impurities using microfluidics.

    PubMed

    Song, Wen; Fadaei, Hossein; Sinton, David

    2014-03-18

    Impurities can greatly modify the phase behavior of carbon dioxide (CO2), with significant implications on the safety and cost of transport in pipelines. In this paper we demonstrate a microfluidic approach to measure the dew point of such mixtures, specifically the point at which water in supercritical CO2 mixtures condenses to a liquid state. The method enables direct visualization of dew formation (∼ 1-2 μm diameter droplets) at industrially relevant concentrations, pressures, and temperatures. Dew point measurements for the well-studied case of pure CO2-water agreed well with previous theoretical and experimental data over the range of pressure (up to 13.17 MPa), temperature (up to 50 °C), and water content (down to 0.00229 mol fraction) studied. The microfluidic approach showed a nearly 3-fold reduction in error as compared to previous methods. When applied to a mixture with nitrogen (2.5%) and oxygen (5.8%) impurities--typical of flue gas from natural gas oxy-fuel combustion processes--the measured dew point pressure increased on average 17.55 ± 5.4%, indicating a more stringent minimum pressure for pipeline transport. In addition to increased precision, the microfluidic method offers a direct measurement of dew formation, requires very small volumes (∼ 10 μL), and is applicable to ultralow water contents (<0.005 mol fractions), circumventing the limits of previous methods.

  13. Qumquad: a UML-based approach for remodeling of legacy systems in health care.

    PubMed

    Garde, Sebastian; Knaup, Petra; Herold, Ralf

    2003-07-01

    Health care information systems still comprise legacy systems to a certain extent. For reengineering legacy systems, a thorough remodeling is indispensable. Current modeling techniques like the Unified Modeling Language (UML) do not offer a systematic and comprehensive process-oriented method for remodeling activities. We developed a systematic method for remodeling legacy systems in health care called Qumquad. Qumquad consists of three major steps: (i) modeling the actual state of the application system, (ii) systematic identification of weak points in this model, and (iii) development of a target concept for the reimplementation that addresses the identified weak points. We applied Qumquad to the remodeling of a documentation and therapy planning system for pediatric oncology (DOSPO). As a result of our remodeling activities we regained an abstract model of the system, an analysis of the current weak points of DOSPO, and possible (partly alternative) solutions to overcome the weak points. Qumquad proved to be very helpful in the reengineering process of DOSPO, since we now have at our disposal a comprehensive model for the reimplementation of DOSPO that current users of the system agree on. Qumquad can easily be applied to other reengineering projects in health care.

  14. Classification of building infrastructure and automatic building footprint delineation using airborne laser swath mapping data

    NASA Astrophysics Data System (ADS)

    Caceres, Jhon

    Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date for extracting building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating building footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high-dimensional feature space that yields good class separability. Features used for classification are based on the generation of local mappings from three-dimensional space to two-dimensional space, known as "spin images", for each ALSM point to be classified. The method discriminates ALSM returns into compact classes even where the classes are very close together or overlapping spatially. A modified Hough Transform algorithm is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and the class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. Supported by the spin image analysis for obtaining an initial classification, an automatic approach for delineating accurate building footprints is presented. The physical fact that laser pulses that happen to strike building edges can produce very different first and last return elevations has long been recognized. However, in older generation ALSM systems (<50 kHz pulse rates) such points were too few and far between to delineate building footprints precisely. Furthermore, without the robust separation of nearby trees and vegetation from the buildings, simply extracting ALSM shots where the elevation of the first return was much higher than the elevation of the last return was not a reliable means of identifying building footprints. However, with the advent of ALSM systems with pulse rates in excess of 100 kHz, and by using spin-image-based segmentation, it is now possible to extract building edges from the point cloud. A refined classification resulting from incorporating "on-edge" information is developed for obtaining quadrangular footprints. The footprint fitting process involves line generalization, least-squares-based clustering, and dominant-point finding for segmenting individual building edges. In addition, an algorithm for fitting complex footprints using the segmented edges and the data inside footprints is also proposed.
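
    For readers unfamiliar with spin images, the sketch below computes a basic spin image for a single point in a generic Johnson-style formulation with arbitrary bin sizes; it does not reproduce the dissertation's Hough-based orientation or mutual-information parameter tuning.

```python
import numpy as np

def spin_image(center, normal, neighbors, bin_size=0.5, image_width=10):
    """Accumulate neighbors of a point into a 2D spin-image histogram.

    alpha: radial distance from the oriented axis through 'center'
    beta : signed height along the axis (the surface normal)
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.asarray(neighbors, dtype=float) - np.asarray(center, dtype=float)
    beta = d @ n                                          # height along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    img = np.zeros((image_width, image_width))
    i = (beta / bin_size + image_width / 2).astype(int)   # signed beta -> row
    j = (alpha / bin_size).astype(int)                    # alpha >= 0  -> column
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1)
    return img

# Toy usage: points on a horizontal plane, viewed from a point with a vertical normal
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-2, 2, 500), rng.uniform(-2, 2, 500), np.zeros(500)])
img = spin_image(center=[0, 0, 0], normal=[0, 0, 1], neighbors=pts, bin_size=0.5)
print(img.astype(int))   # mass concentrated in the beta ~ 0 row
```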

  15. Pilot estimates of glidepath and aim point during simulated landing approaches

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    1981-01-01

    Pilot perceptions of glidepath angle and aim point were measured during simulated landings. A fixed-base cockpit simulator was used with video recordings of simulated landing approaches shown on a video projector. Pilots estimated the magnitudes of approach errors during observation without attempting to make corrections. Pilots estimated glidepath angular errors well, but had difficulty estimating aim-point errors. The data make plausible the hypothesis that pilots are little concerned with aim point during most of an approach, concentrating instead on keeping close to the nominal glidepath and trusting this technique to guide them to the proper touchdown point.

  16. Spatial distribution of child pedestrian injuries along census tract boundaries: Implications for identifying area-based correlates.

    PubMed

    Curtis, Jacqueline W

    2017-01-01

    Census tracts are often used to investigate area-based correlates of a variety of health outcomes. This approach has been shown to be valuable in understanding the ways that health is shaped by place and in designing appropriate interventions that account for community-level processes. Following this line of inquiry, it is common in the study of pedestrian injuries to aggregate the point level locations of these injuries to the census tracts in which they occur. Such aggregation enables investigation of the relationships between a range of socioeconomic variables and areas of notably high or low incidence. This study reports on the spatial distribution of child pedestrian injuries in a mid-sized U.S. city over a three-year period. A combination of geospatial approaches (Near Analysis, Kernel Density Estimation, and Local Moran's I) enables identification, visualization, and quantification of close proximity between incidents and tract boundaries. Specifically, results reveal that nearly half of the 100 incidents occur on roads that are also census tract boundaries. Results also uncover incidents that occur on tract boundaries, not merely near them. This geographic pattern raises the question of the utility of associating area-based census data from any one tract with the injuries occurring in these border zones. Furthermore, using a standard spatial join technique in a Geographic Information System (GIS), these points located on the border are counted as falling into census tracts on both sides of the boundary, which introduces uncertainty in any subsequent analysis. Therefore, two additional approaches to aggregating points to polygons were tested in this study. Results differ with each approach, but the GIS user receives no alert of such differences. This finding raises a fundamental concern about techniques through which points are aggregated to polygons in any study using point level incidents and their surrounding census tract socioeconomic data to understand health and place. This study concludes with a suggested protocol to test for this source of uncertainty in analysis and an approach that may remove it.
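
    The boundary ambiguity described above is easy to reproduce with standard GIS tooling. The sketch below is a minimal example assuming GeoPandas; the file names and the projected CRS are hypothetical, and it simply flags incidents that match more than one tract under an intersects join or that lie within a small distance of a tract boundary. It is not the study's own protocol.

```python
import geopandas as gpd

# Hypothetical inputs: point-level injury incidents and census tract polygons,
# both reprojected to a metric CRS so distances are in metres.
incidents = gpd.read_file("incidents.shp").to_crs(epsg=26917)
tracts = gpd.read_file("tracts.shp").to_crs(epsg=26917)

# A standard spatial join: a point lying exactly on a shared boundary
# intersects the tracts on both sides, so it appears twice in the result.
joined = gpd.sjoin(incidents, tracts, how="left", predicate="intersects")
match_counts = joined.index.value_counts()
print(f"{(match_counts > 1).sum()} incidents matched to more than one tract")

# Flag border-zone cases explicitly: distance to the nearest tract boundary.
boundaries = tracts.boundary.unary_union
incidents["dist_to_boundary_m"] = incidents.distance(boundaries)
ambiguous = incidents[incidents["dist_to_boundary_m"] < 1.0]
```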

  17. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.; Niethammer, U.

    2017-03-01

    Structure-from-motion (SfM) algorithms greatly facilitate the production of detailed topographic models from photographs collected using unmanned aerial vehicles (UAVs). However, the survey quality achieved in published geomorphological studies is highly variable, and sufficient processing details are rarely provided to understand fully the causes of variability. To address this, we show how survey quality and consistency can be improved through a deeper consideration of the underlying photogrammetric methods. We demonstrate the sensitivity of digital elevation models (DEMs) to processing settings that have not been discussed in the geomorphological literature, yet are a critical part of survey georeferencing, and are responsible for balancing the contributions of tie and control points. We provide a Monte Carlo approach to enable geomorphologists to (1) carefully consider sources of survey error and hence increase the accuracy of SfM-based DEMs and (2) minimise the associated field effort by robust determination of suitable lower-density deployments of ground control. By identifying appropriate processing settings and highlighting photogrammetric issues such as over-parameterisation during camera self-calibration, processing artefacts are reduced and the spatial variability of error minimised. We demonstrate such DEM improvements with commonly used SfM-based software (PhotoScan), which we augment with semi-automated and automated identification of ground control points (GCPs) in images, and apply to two contrasting case studies - an erosion gully survey (Taroudant, Morocco) and an active landslide survey (Super-Sauze, France). In the gully survey, refined processing settings eliminated step-like artefacts of up to 50 mm in amplitude, and overall DEM variability with GCP selection improved from 37 to 16 mm. In the much more challenging landslide case study, our processing halved planimetric error to 0.1 m, effectively doubling the frequency at which changes in landslide velocity could be detected. In both case studies, the Monte Carlo approach provided a robust demonstration that field effort could be substantially reduced by only deploying approximately half the number of GCPs, with minimal effect on the survey quality. To reduce processing artefacts and promote confidence in SfM-based geomorphological surveys, published results should include processing details such as the image residuals for both tie points and GCPs, and ensure that these are considered appropriately within the workflow.
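
    The Monte Carlo idea can be expressed compactly. The sketch below is schematic only, assuming NumPy and a user-supplied process_survey callable (hypothetical) that re-runs the SfM bundle adjustment with a given control set and reports check-point error; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_gcp_trials(gcp_ids, n_keep, n_trials, process_survey):
    """Repeatedly re-process the survey with random GCP subsets.

    gcp_ids        : list of all surveyed ground control point identifiers
    n_keep         : how many GCPs to use as control in each trial
    n_trials       : number of Monte Carlo realisations
    process_survey : hypothetical callable that runs the SfM/bundle adjustment
                     with the given control set and returns the RMS error (m)
                     at the withheld check points
    """
    rms_errors = []
    for _ in range(n_trials):
        control = rng.choice(gcp_ids, size=n_keep, replace=False)
        check = [g for g in gcp_ids if g not in set(control)]
        rms_errors.append(process_survey(control=control, check=check))
    return np.asarray(rms_errors)

# The spread of rms_errors across trials indicates whether n_keep control
# points are sufficient, or whether survey quality depends strongly on
# which particular GCPs happen to be selected.
```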

  18. Delamination detection by Multi-Level Wavelet Processing of Continuous Scanning Laser Doppler Vibrometry data

    NASA Astrophysics Data System (ADS)

    Chiariotti, P.; Martarelli, M.; Revel, G. M.

    2017-12-01

    A novel non-destructive testing procedure for delamination detection based on the exploitation of the simultaneous time and spatial sampling provided by Continuous Scanning Laser Doppler Vibrometry (CSLDV) and the feature extraction capability of Multi-Level wavelet-based processing is presented in this paper. The processing procedure consists of a multi-step approach. Once the optimal mother wavelet is selected as the one maximizing the Energy to Shannon Entropy Ratio criterion over the candidate mother-wavelet set, a pruning operation is performed to identify the best combination of nodes inside the full binary tree given by the Wavelet Packet Decomposition (WPD). The pruning algorithm combines, in a two-step manner, a measure of the randomness of the point-pattern distribution over the damage-map space with an analysis of the energy concentration of the wavelet coefficients in the nodes retained by the first pruning step. Combining the point-pattern distributions provided by each node of the ensemble node set from the pruning algorithm allows a Damage Reliability Index to be associated with the final damage map. The effectiveness of the whole approach is proven on both simulated and real test cases. A sensitivity analysis related to the influence of noise on the CSLDV signal provided to the algorithm is also discussed, showing that the developed processing is sufficiently robust to measurement noise. The method is promising: damage is well identified on different materials and for different damage-structure combinations.
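
    The mother-wavelet selection criterion can be illustrated in a few lines. The sketch below is a minimal example assuming PyWavelets; the candidate wavelet list and the decomposition level are assumptions, and the subsequent pruning and damage-map steps of the procedure are not reproduced here.

```python
import numpy as np
import pywt

def energy_entropy_ratio(signal, wavelet, level=4):
    """Energy-to-Shannon-entropy ratio of the wavelet packet coefficients."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    coeffs = np.concatenate([node.data for node in wp.get_level(level, "natural")])
    energy = np.sum(coeffs**2)
    p = coeffs**2 / energy                 # relative energy per coefficient
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return energy / entropy

def select_mother_wavelet(signal, candidates=("db4", "sym5", "coif3", "bior3.5")):
    # Pick the candidate that maximises the ratio, as in the criterion above.
    return max(candidates, key=lambda w: energy_entropy_ratio(signal, w))
```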

  19. Understanding Semiotic Technology in University Classrooms: A Social Semiotic Approach to PowerPoint-Assisted Cultural Studies Lectures

    ERIC Educational Resources Information Center

    Zhao, Sumin; van Leeuwen, Theo

    2014-01-01

    In this paper, we propose a social semiotic approach to studying PowerPoint in university classrooms. Our approach is centred on two premises: (1) PowerPoint is a semiotic technology that can be integrated into the pedagogical discourse of classrooms, and (2) PowerPoint technology encompasses three interrelated dimensions of social semiotic…

  20. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which an initial trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points, which gives a discrete representation of the radio wave ray, is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
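
    To make the mechanics of the method concrete, the following toy sketch (assuming NumPy; the refractive-index field, spring constant, step size and iteration count are illustrative, not the authors' settings) relaxes a discretised ray between fixed endpoints by combining the transverse part of the true force with spring forces acting along the band. It only finds minima, i.e. low-ray-type solutions; locating the saddle-point high rays discussed above would require a climbing-image-type extension not shown here.

```python
import numpy as np

def optical_path(path, n):
    """Discretised optical path length: sum of n(midpoint) * segment length."""
    seg = np.diff(path, axis=0)
    mid = 0.5 * (path[1:] + path[:-1])
    return np.sum(n(mid) * np.linalg.norm(seg, axis=1))

def neb_ray(start, end, n, n_images=20, k_spring=1.0, step=1e-3, iters=500):
    """Toy nudged-elastic-band relaxation of a ray between fixed endpoints."""
    path = np.linspace(start, end, n_images)           # initial straight-line guess
    eps = 1e-4
    for _ in range(iters):
        # Numerical gradient of the optical path w.r.t. the interior points.
        grad = np.zeros_like(path)
        for i in range(1, n_images - 1):
            for d in range(path.shape[1]):
                pp, pm = path.copy(), path.copy()
                pp[i, d] += eps
                pm[i, d] -= eps
                grad[i, d] = (optical_path(pp, n) - optical_path(pm, n)) / (2 * eps)

        tangents = path[2:] - path[:-2]
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        # Keep only the transverse part of the true force ...
        f_true = -grad[1:-1]
        f_perp = f_true - np.sum(f_true * tangents, axis=1, keepdims=True) * tangents
        # ... and add spring forces along the band to keep the points spread out.
        d_fwd = np.linalg.norm(path[2:] - path[1:-1], axis=1)
        d_bwd = np.linalg.norm(path[1:-1] - path[:-2], axis=1)
        f_spring = (k_spring * (d_fwd - d_bwd))[:, None] * tangents
        path[1:-1] += step * (f_perp + f_spring)        # endpoints stay fixed
    return path

# Hypothetical medium: a low-index layer around z = 200 (a crude stand-in for
# an ionospheric layer, not an IRI profile).
n_field = lambda p: 1.0 - 0.3 * np.exp(-((p[:, 1] - 200.0) / 100.0) ** 2)
ray = neb_ray(np.array([0.0, 0.0]), np.array([1000.0, 0.0]), n_field)
```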

  1. Representation and display of vector field topology in fluid flow data sets

    NASA Technical Reports Server (NTRS)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
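
    A minimal numerical sketch of the critical-point step is given below, assuming NumPy and SciPy; the analytic test field and the seed points are hypothetical, and the original work operates on gridded flow data rather than an analytic velocity function. It locates zeros of a 2-D velocity field and classifies them from the eigenvalues of the velocity Jacobian.

```python
import numpy as np
from scipy.optimize import fsolve

def classify_critical_point(jac):
    """Classify a critical point from the eigenvalues of the velocity Jacobian."""
    ev = np.linalg.eigvals(jac)
    if np.all(np.isreal(ev)):
        return "saddle" if ev.real[0] * ev.real[1] < 0 else \
               ("repelling node" if ev.real.min() > 0 else "attracting node")
    return "repelling focus" if ev.real.mean() > 0 else "attracting focus"

def find_critical_points(velocity, seeds, tol=1e-8):
    """Refine candidate locations where the velocity vector vanishes."""
    found = []
    for seed in seeds:
        p, info, ok, _ = fsolve(velocity, seed, full_output=True)
        if ok == 1 and np.linalg.norm(velocity(p)) < tol:
            # Numerical Jacobian at the critical point (forward differences).
            eps = 1e-6
            jac = np.column_stack([
                (np.asarray(velocity(p + eps * e)) - np.asarray(velocity(p))) / eps
                for e in np.eye(2)])
            found.append((p, classify_critical_point(jac)))
    return found

# Example: a hypothetical analytic test field with a saddle and a node.
v = lambda p: np.array([p[0]**2 - 1.0, -p[1]])
print(find_critical_points(v, seeds=[(-2.0, 0.5), (2.0, -0.5)]))
```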

  2. Using a multi-scale approach to identify and quantify oil and gas emissions: a case study for GHG emissions verification

    NASA Astrophysics Data System (ADS)

    Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.

    2015-12-01

    Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as the net radiation balance. This multi-institution effort, funded by both governmental and non-governmental agencies, has provided a case study for identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.

  3. Quantum description of light propagation in generalized media

    NASA Astrophysics Data System (ADS)

    Häyrynen, Teppo; Oksanen, Jani

    2016-02-01

    Linear quantum input-output relation based models are widely applied to describe the light propagation in a lossy medium. The details of the interaction and the associated added noise depend on whether the device is configured to operate as an amplifier or an attenuator. Using the traveling wave (TW) approach, we generalize the linear material model to simultaneously account for both the emission and absorption processes and to have point-wise defined noise field statistics and intensity dependent interaction strengths. Thus, our approach describes the quantum input-output relations of linear media with net attenuation, amplification or transparency without pre-selection of the operation point. The TW approach is then applied to investigate materials at thermal equilibrium, inverted materials, the transparency limit where losses are compensated, and the saturating amplifiers. We also apply the approach to investigate media in nonuniform states which can be e.g. consequences of a temperature gradient over the medium or a position dependent inversion of the amplifier. Furthermore, by using the generalized model we investigate devices with intensity dependent interactions and show how an initial thermal field transforms to a field having coherent statistics due to gain saturation.

  4. Robust interval-based regulation for anaerobic digestion processes.

    PubMed

    Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C

    2005-01-01

    A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m³) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) by using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, driving the output COD towards its set-point and keeping it inside a pre-specified interval.

  5. Topological data analysis of contagion maps for examining spreading processes on networks.

    PubMed

    Taylor, Dane; Klimm, Florian; Harrington, Heather A; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A; Mucha, Peter J

    2015-07-21

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges (for example, due to airline transportation or communication media) allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.
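
    The construction can be sketched with a toy threshold contagion. The example below is a minimal sketch assuming NetworkX and NumPy; the network, the threshold value and the seed choices are stand-ins, and the topological and geometric analysis of the resulting point cloud performed in the study is not reproduced here.

```python
import numpy as np
import networkx as nx

def threshold_contagion(G, seeds, threshold=0.3, max_steps=100):
    """Watts-style threshold contagion; returns each node's activation time."""
    active = {s: 0 for s in seeds}
    for t in range(1, max_steps + 1):
        newly = []
        for v in G:
            if v in active:
                continue
            nbrs = list(G[v])
            frac = sum(u in active for u in nbrs) / max(len(nbrs), 1)
            if frac >= threshold:
                newly.append(v)
        if not newly:
            break
        active.update({v: t for v in newly})
    return {v: active.get(v, max_steps + 1) for v in G}

def contagion_map(G, seed_sets, threshold=0.3):
    """Each node gets one coordinate per contagion: its activation time."""
    nodes = list(G)
    cols = []
    for seeds in seed_sets:
        times = threshold_contagion(G, seeds, threshold)
        cols.append([times[v] for v in nodes])
    return nodes, np.array(cols).T        # point cloud: one row per node

# Toy example: a ring lattice ("geometric" edges) with a few long-range edges.
G = nx.watts_strogatz_graph(200, k=6, p=0.05, seed=1)
seed_sets = [set(G[v]) | {v} for v in range(0, 200, 20)]
nodes, cloud = contagion_map(G, seed_sets)
```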

  6. Topological data analysis of contagion maps for examining spreading processes on networks

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-07-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges--for example, due to airline transportation or communication media--allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct `contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.

  7. Design and Synthesis of Multigraft Copolymer Thermoplastic Elastomers: Superelastomers

    DOE PAGES

    Wang, Huiqun; Lu, Wei; Wang, Weiyu; ...

    2017-09-28

    Thermoplastic elastomers (TPEs) have been widely studied because of their recyclability, good processibility, low production cost, and unique performance. The building of graft-type architectures can greatly improve mechanical properties of TPEs. This review focuses on the advances in different approaches to synthesize multigraft copolymer TPEs. Anionic polymerization techniques allow for the synthesis of well-defined macromolecular structures and compositions, with great control over the molecular weight, polydispersity, branch spacing, number of branch points, and branch point functionality. Progress in emulsion polymerization offers potential approaches to commercialize these types of materials with low production cost via simple operations. Moreover, the use of multigraft architectures provides a solution to the limited elongational properties of all-acrylic TPEs, which can greatly expand their potential application range. The combination of different polymerization techniques, the introduction of new chemical compositions, and the incorporation of sustainable sources are expected to be further investigated in this area in coming years.

  8. Modular microfluidics for point-of-care protein purifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millet, L. J.; Lucheon, J. D.; Standaert, R. F.

    Biochemical separations are the heart of diagnostic assays and purification methods for biologics. On-chip miniaturization and modularization of separation procedures will enable the development of customized, portable devices for personalized health-care diagnostics and point-of-use production of treatments. In this report, we describe the design and fabrication of miniature ion exchange, size exclusion and affinity chromatography modules for on-chip clean-up of recombinantly-produced proteins. Our results demonstrate that these common separations techniques can be implemented in microfluidic modules with performance comparable to conventional approaches. We introduce embedded 3-D microfluidic interconnects for integrating micro-scale separation modules that can be arranged and reconfigured to suit a variety of fluidic operations or biochemical processes. In conclusion, we demonstrate the utility of the modular approach with a platform for the enrichment of enhanced green fluorescent protein (eGFP) from Escherichia coli lysate through integrated affinity and size-exclusion chromatography modules.

  9. Modular microfluidics for point-of-care protein purifications.

    PubMed

    Millet, L J; Lucheon, J D; Standaert, R F; Retterer, S T; Doktycz, M J

    2015-04-21

    Biochemical separations are the heart of diagnostic assays and purification methods for biologics. On-chip miniaturization and modularization of separation procedures will enable the development of customized, portable devices for personalized health-care diagnostics and point-of-use production of treatments. In this report, we describe the design and fabrication of miniature ion exchange, size exclusion and affinity chromatography modules for on-chip clean-up of recombinantly-produced proteins. Our results demonstrate that these common separations techniques can be implemented in microfluidic modules with performance comparable to conventional approaches. We introduce embedded 3-D microfluidic interconnects for integrating micro-scale separation modules that can be arranged and reconfigured to suit a variety of fluidic operations or biochemical processes. We demonstrate the utility of the modular approach with a platform for the enrichment of enhanced green fluorescent protein (eGFP) from Escherichia coli lysate through integrated affinity and size-exclusion chromatography modules.

  10. Design and Synthesis of Multigraft Copolymer Thermoplastic Elastomers: Superelastomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Huiqun; Lu, Wei; Wang, Weiyu

    Thermoplastic elastomers (TPEs) have been widely studied because of their recyclability, good processibility, low production cost, and unique performance. The building of graft-type architectures can greatly improve mechanical properties of TPEs. This review focuses on the advances in different approaches to synthesize multigraft copolymer TPEs. Anionic polymerization techniques allow for the synthesis of well-defined macromolecular structures and compositions, with great control over the molecular weight, polydispersity, branch spacing, number of branch points, and branch point functionality. Progress in emulsion polymerization offers potential approaches to commercialize these types of materials with low production cost via simple operations. Moreover, the use of multigraft architectures provides a solution to the limited elongational properties of all-acrylic TPEs, which can greatly expand their potential application range. The combination of different polymerization techniques, the introduction of new chemical compositions, and the incorporation of sustainable sources are expected to be further investigated in this area in coming years.

  11. Modular microfluidics for point-of-care protein purifications

    DOE PAGES

    Millet, L. J.; Lucheon, J. D.; Standaert, R. F.; ...

    2015-01-01

    Biochemical separations are the heart of diagnostic assays and purification methods for biologics. On-chip miniaturization and modularization of separation procedures will enable the development of customized, portable devices for personalized health-care diagnostics and point-of-use production of treatments. In this report, we describe the design and fabrication of miniature ion exchange, size exclusion and affinity chromatography modules for on-chip clean-up of recombinantly-produced proteins. Our results demonstrate that these common separations techniques can be implemented in microfluidic modules with performance comparable to conventional approaches. We introduce embedded 3-D microfluidic interconnects for integrating micro-scale separation modules that can be arranged and reconfigured to suit a variety of fluidic operations or biochemical processes. In conclusion, we demonstrate the utility of the modular approach with a platform for the enrichment of enhanced green fluorescent protein (eGFP) from Escherichia coli lysate through integrated affinity and size-exclusion chromatography modules.

  12. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
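
    The orthonormal-basis idea can be illustrated with a plain SVD-based principal component analysis of a PSF stack. The sketch below assumes NumPy; the array shapes and the number of retained components are assumptions, and the paper's strategy for reducing the number of PSFs entering the PCA is not shown.

```python
import numpy as np

def psf_pca_basis(psf_stack, n_components=4):
    """Orthonormal basis for depth-variant PSFs via principal component analysis.

    psf_stack    : (n_depths, ny, nx) array of PSFs sampled at different depths
    n_components : number of principal components to keep (assumed small)
    """
    n_depths = psf_stack.shape[0]
    flat = psf_stack.reshape(n_depths, -1)
    mean = flat.mean(axis=0)
    # SVD of the mean-subtracted PSFs; rows of vt are the orthonormal basis images.
    _, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = vt[:n_components]
    coeffs = (flat - mean) @ basis.T       # depth-dependent expansion coefficients
    return mean, basis, coeffs, s

def reconstruct_psf(mean, basis, coeffs, depth_index, shape):
    """Approximate the PSF at one depth from the truncated expansion."""
    return (mean + coeffs[depth_index] @ basis).reshape(shape)
```

    Coefficients at intermediate depths can then be interpolated, which is what makes such a truncated expansion convenient for depth-variant restoration.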

  13. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  14. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
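
    A minimal sketch of the approximately-linear per-pixel calibration idea follows, assuming NumPy; the calibration data layout, the choice of the frame-mean as the reference response, and the floating-point arithmetic are assumptions, whereas the paper derives an optimal fixed-point implementation and a spline-based photometric correction that are not shown here.

```python
import numpy as np

def calibrate_fpn(responses):
    """Per-pixel linear FPN calibration against the mean (monotonic) response.

    responses : (n_stimuli, n_pixels) pixel outputs for a set of uniform scenes.
    Each pixel is modelled as  y_p ~ a_p + b_p * y_ref, where y_ref is the
    frame-mean response, exploiting the monotonicity described above.
    """
    y_ref = responses.mean(axis=1)                      # common reference response
    A = np.column_stack([np.ones_like(y_ref), y_ref])   # design matrix [1, y_ref]
    coeffs, *_ = np.linalg.lstsq(A, responses, rcond=None)
    a, b = coeffs                                       # per-pixel offsets and gains
    return a, b

def correct_fpn(frame, a, b):
    """Map each pixel back onto the reference response (floating point here;
    a fixed-point version would quantise a, b and the arithmetic)."""
    return (frame - a) / b
```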

  15. A Simple Approach to Characterize Gas-Aqueous Liquid Two-phase Flow Configuration Based on Discrete Solid-Liquid Contact Electrification

    PubMed Central

    Choi, Dongwhi; Lee, Donghyeon; Sung Kim, Dong

    2015-01-01

    In this study, we first suggest a simple approach to characterize the configuration of gas-aqueous liquid two-phase flow based on discrete solid-liquid contact electrification, a newly defined concept describing the sequential process of solid-liquid contact and successive detachment of the contact liquid from the solid surface. This approach exhibits several advantages such as simple operation, precise measurement, and cost-effectiveness. By using the electric potential that is spontaneously generated by discrete solid-liquid contact electrification, the configurations of the gas-aqueous liquid two-phase flow, such as the size of a gas slug and the flow rate, are precisely characterized. According to the experimental and numerical analyses on parameters that affect electric potential, gas slugs have been verified to behave similarly to point electric charges when the measuring point of the electric potential is far enough from the gas slug. In addition, the configuration of the gas-aqueous liquid two-phase microfluidic system with multiple gas slugs is also characterized by using the presented approach. For a proof-of-concept demonstration of using the proposed approach in a self-triggered sensor, a gas slug detector with a counter system is developed to show its practicality and applicability. PMID:26462437

  16. A Simple Approach to Characterize Gas-Aqueous Liquid Two-phase Flow Configuration Based on Discrete Solid-Liquid Contact Electrification.

    PubMed

    Choi, Dongwhi; Lee, Donghyeon; Kim, Dong Sung

    2015-10-14

    In this study, we first suggest a simple approach to characterize configuration of gas-aqueous liquid two-phase flow based on discrete solid-liquid contact electrification, which is a newly defined concept as a sequential process of solid-liquid contact and successive detachment of the contact liquid from the solid surface. This approach exhibits several advantages such as simple operation, precise measurement, and cost-effectiveness. By using electric potential that is spontaneously generated by discrete solid-liquid contact electrification, the configurations of the gas-aqueous liquid two-phase flow such as size of a gas slug and flow rate are precisely characterized. According to the experimental and numerical analyses on parameters that affect electric potential, gas slugs have been verified to behave similarly to point electric charges when the measuring point of the electric potential is far enough from the gas slug. In addition, the configuration of the gas-aqueous liquid two-phase microfluidic system with multiple gas slugs is also characterized by using the presented approach. For a proof-of-concept demonstration of using the proposed approach in a self-triggered sensor, a gas slug detector with a counter system is developed to show its practicality and applicability.

  17. Interdisciplinary Learning for Chemical Engineering Students from Organic Chemistry Synthesis Lab to Reactor Design to Separation

    ERIC Educational Resources Information Center

    Armstrong, Matt; Comitz, Richard L.; Biaglow, Andrew; Lachance, Russ; Sloop, Joseph

    2008-01-01

    A novel approach to the Chemical Engineering curriculum sequence of courses at West Point enabled our students to experience a much more realistic design process, which more closely replicated a real world scenario. Students conduct the synthesis in the organic chemistry lab, then conduct computer modeling of the reaction with ChemCad and…

  18. Non-Lipschitz Dynamics Approach to Discrete Event Systems

    NASA Technical Reports Server (NTRS)

    Zak, M.; Meyers, R.

    1995-01-01

    This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED) - a special type of 'man-made' system designed to aid specific areas of information processing. A main objective is to demonstrate that the mathematical formalism for DED can be based upon the terminal model of Newtonian dynamics, which allows one to relax Lipschitz conditions at some discrete points.

  19. Access to and Accessibility of Education: An Analytic and Conceptual Approach to a Multidimensional Issue

    ERIC Educational Resources Information Center

    Stauber, Barbara; Parreira do Amaral, Marcelo

    2015-01-01

    This article presents analytical considerations for the discussion of issues of access to education and inequality. It first sharpens the concept of access and inequality by pointing to the interplay of structure and agency as well as to processes of social differentiation in which differences are constructed. This implies a critical view on…

  20. Computational Approaches to Image Understanding.

    DTIC Science & Technology

    1981-10-01

    representing points, edges, surfaces, and volumes to facilitate display. The geometry of perspective and parallel (or orthographic) projection has...of making the image forming process explicit. This in turn leads to a concern with geometry, such as the properties of the gradient, stereographic, and...dual spaces. Combining geometry and smoothness leads naturally to multi-variate vector analysis, and to differential geometry. For the most part, a

  1. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  2. Spatial patterns of modern period human-caused fire occurrence in the Missouri Ozark Highlands

    Treesearch

    Jian Yang; Hong S. Healy; Stephen R. Shifley; Eric J. Gustafson

    2007-01-01

    The spatial pattern of forest fire locations is important in the study of the dynamics of fire disturbance. In this article we used a spatial point process modeling approach to quantitatively study the effects of land cover, topography, roads, municipalities, ownership, and population density on fire occurrence reported between 1970 and 2002 in the Missouri Ozark...

  3. Variational algorithms for nonlinear smoothing applications

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.

    1977-01-01

    A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.

  4. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taroudant, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.

  5. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
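
    As one concrete example of the kind of objective metric whose performance is assessed against such subjective scores, a point-to-point geometric error can be computed as below (a minimal sketch assuming SciPy; the specific set of metrics benchmarked in the paper is not listed in the abstract and is not reproduced here).

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_rmse(reference, degraded):
    """Symmetric point-to-point (D1-style) RMSE between two (N, 3) point clouds."""
    d_deg_to_ref = cKDTree(reference).query(degraded)[0]   # degraded -> nearest reference
    d_ref_to_deg = cKDTree(degraded).query(reference)[0]   # reference -> nearest degraded
    return max(np.sqrt(np.mean(d_deg_to_ref**2)),
               np.sqrt(np.mean(d_ref_to_deg**2)))
```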

  6. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topography mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.

  7. Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures

    PubMed Central

    Bayer, Jason D.

    2014-01-01

    We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911
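
    For illustration, once a one-to-one parameterisation is available, a C2 fit can be obtained with standard cubic smoothing splines. The sketch below assumes SciPy; the synthetic data, the smoothing factor and the use of a single height coordinate are assumptions, and the paper's mesh projection and regularisation steps, which are its actual contribution, are not reproduced.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical scattered data points already assigned (u, v) parametric
# coordinates; obtaining such a one-to-one parameterisation from a polygonal
# mesh is precisely the hard part addressed in the work above.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 500)
v = rng.uniform(0, 1, 500)
z = np.sin(2 * np.pi * u) * np.cos(2 * np.pi * v) + 0.05 * rng.normal(size=500)

# Cubic (kx = ky = 3) smoothing spline: piecewise cubic, hence C2 continuous.
surf = SmoothBivariateSpline(u, v, z, kx=3, ky=3, s=1.0)

# Evaluate the fitted surface on a regular parametric grid.
uu = np.linspace(0, 1, 50)
vv = np.linspace(0, 1, 50)
zz = surf(uu, vv)          # (50, 50) array of surface heights
```

    For a full parametric surface, each spatial coordinate x(u, v), y(u, v), z(u, v) would be fitted as such a spline over the same parameterisation.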

  8. A QRS Detection and R Point Recognition Method for Wearable Single-Lead ECG Devices.

    PubMed

    Chen, Chieh-Li; Chuang, Chun-Te

    2017-08-26

    In new-generation wearable electrocardiogram (ECG) systems, signal processing with low power consumption is required to transmit data when detecting dangerous rhythms and to record signals when detecting abnormal rhythms. The QRS complex is a combination of three of the graphic deflections seen on a typical ECG. This study proposes a real-time QRS detection and R point recognition method with low computational complexity while maintaining a high accuracy. The enhancement of QRS segments and the suppression of P and T waves are carried out by the proposed ECG signal transformation, which also leads to the elimination of baseline wandering. In this study, the QRS fiducial point is determined based on the detected crests and troughs of the transformed signal. Subsequently, the R point can be recognized based on four QRS waveform templates, and preliminary heart rhythm classification can also be achieved at the same time. The performance of the proposed approach is demonstrated using the benchmark of the MIT-BIH Arrhythmia Database, where the QRS detection sensitivity (Se) and positive predictivity (+P) are 99.82% and 99.81%, respectively. The result reveals the approach's advantage of low computational complexity, as well as the feasibility of real-time application on a mobile phone and an embedded system.
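
    For orientation, a generic single-lead QRS detector can be sketched as below (a minimal example assuming SciPy; the band edges, thresholds and refinement window are assumptions, and the paper's specific signal transformation and four-template R-point recognition are not reproduced here).

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Illustrative single-lead QRS detector (not the paper's exact transform).

    ecg : 1-D array of ECG samples;  fs : sampling rate in Hz.
    """
    # Band-pass around the QRS energy band to suppress baseline wander,
    # P/T waves and high-frequency noise.
    b, a = butter(3, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)

    # Emphasise the QRS complexes, then look for prominent peaks that are
    # at least 200 ms apart (physiological refractory period).
    energy = filtered ** 2
    peaks, _ = find_peaks(energy,
                          distance=int(0.2 * fs),
                          height=0.3 * energy.max())

    # Refine each detection to the local maximum of the raw signal (R point).
    half = int(0.05 * fs)
    r_points = [max(range(max(p - half, 0), min(p + half, len(ecg))),
                    key=lambda i: ecg[i]) for p in peaks]
    return np.asarray(r_points)
```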

  9. Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)

    NASA Astrophysics Data System (ADS)

    Benaben, Frederick; Mu, Wenxin; Boissel-Dallier, Nicolas; Barthe-Delanoe, Anne-Marie; Zribi, Sarah; Pingaud, Herve

    2015-08-01

    The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform, dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. This approach is structured in three layers, each providing its own key innovative points: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalisation), (ii) deployment of a mediation information system able to computerise the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services), and (iii) the management of the agility of the obtained collaborative network of organisations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).

  10. Experimental and numerical study on optimization of the single point incremental forming of AINSI 304L stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Saidi, B.; Giraud-Moreau, L.; Cherouat, A.; Nasri, R.

    2017-09-01

    AISI 304L stainless steel sheets are commonly formed into a variety of shapes for applications in the industrial, architectural, transportation and automobile fields; they are also used for the manufacture of denture bases. In the field of dentistry, there is a need for personalized devices that are custom made for the patient. The single point incremental forming process is highly promising in this area for the manufacture of denture bases. The single point incremental forming process (ISF) is an emerging process based on the use of a spherical tool, which is moved along a CNC-controlled tool path. One of the major advantages of this process is the ability to program several punch trajectories on the same machine in order to obtain different shapes. Several applications of this process exist in the medical field for the manufacturing of personalized titanium prostheses (cranial plates, knee prostheses...) due to the need to customize the product to each patient. The objective of this paper is to study the incremental forming of AISI 304L stainless steel sheets for future applications in the dentistry field. During the incremental forming process, considerable forces can occur. The control of the forming force is particularly important to ensure the safe use of the CNC milling machine and preserve the tooling and machinery. In this paper, the effect of four different process parameters on the maximum force is studied. The proposed approach consists of using an experimental design based on experimental results. An analysis of variance (ANOVA) was conducted to find the input parameters that minimize the maximum forming force. A numerical simulation of the incremental forming process is performed with the optimal input process parameters. Numerical results are compared with the experimental ones.
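
    The screening step can be illustrated with a main-effects ANOVA on a small designed experiment. The sketch below is a minimal example assuming pandas and statsmodels; the parameter names, levels and force values are placeholders, not the paper's measurements.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical design-of-experiments table: four process parameters and the
# measured maximum forming force for each run (all values are placeholders).
df = pd.DataFrame({
    "step_size":   [0.2, 0.2, 0.5, 0.5, 0.2, 0.2, 0.5, 0.5],
    "tool_radius": [5,   10,  5,   10,  5,   10,  5,   10 ],
    "feed_rate":   [500, 500, 500, 500, 1000, 1000, 1000, 1000],
    "wall_angle":  [45,  60,  60,  45,  60,  45,  45,  60 ],
    "f_max":       [1.9, 2.4, 2.6, 3.1, 2.0, 2.3, 2.7, 3.2],
})

# Main-effects ANOVA: which parameters significantly change the maximum force?
model = ols("f_max ~ C(step_size) + C(tool_radius) + C(feed_rate) + C(wall_angle)",
            data=df).fit()
print(anova_lm(model, typ=2))
```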

  11. Parallel approach to incorporating face image information into dialogue processing

    NASA Astrophysics Data System (ADS)

    Ren, Fuji

    2000-10-01

    There are many kinds of so-called irregular expressions in natural dialogues. Even if the content of a conversation is the same in words, different meanings can be conveyed by a person's feelings or facial expression. To understand dialogues well, a flexible dialogue processing system must properly infer the speaker's view. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ the speaker's facial information to estimate the speaker's view and thereby resolve ambiguities in dialogues. The approach presented in this paper can work efficiently, because independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing the face information in dialogue machine translation will be discussed, and some results will be included in this paper.

  12. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become a desirable and in-demand application, yet a challenging one due to the nature of the web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space from several snapshots of conventional photos.

  13. A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.

    PubMed

    Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco

    2018-01-01

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities and supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool makes it possible to identify the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated method procedure critical control points, we conclude that the analytical methodology with the lowest risk for performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, the technique is a challenging method to implement in a quality control laboratory: It is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and to assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to identify the procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology in residual cellular DNA analysis. © PDA, Inc. 2018.

  14. Optical signal processing techniques and applications of optical phase modulation in high-speed communication systems

    NASA Astrophysics Data System (ADS)

    Deng, Ning

    In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is the better transmission performance. For optical phase modulation, in spite of the comprehensive study of its transmission performance, only a little research has been carried out in terms of its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and high bandwidth that can be enjoyed by end users. In the thesis, the most important physical-layer technologies, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied. Experimental investigations are also reported and discussed in the thesis. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand on communication bandwidth, optical time division multiplexing (OTDM) is an effective approach to upgrade the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be simply realized; however, demultiplexing requires relatively complicated signal processing and stringent timing control, which hinders its practicability. To tackle this problem, in this thesis a new OTDM scheme with hybrid DPSK and OOK signals is proposed. Experimental investigation shows this scheme can greatly enhance the tolerance to demultiplexing timing misalignment and improve the demultiplexing performance, and thus make OTDM more practical and cost effective. All-optical signal processing. In current and future optical communication systems and networks, the data rate per wavelength has been approaching the speed limitation of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, in the thesis, an all-optical logic for DPSK or PSK input signals is developed, for the first time. Based on four-wave mixing in a semiconductor optical amplifier, the structure of the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to the general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength division multiplexing passive optical network (WDM-PON) is a promising technology to deliver high-speed data traffic. However, most proposed WDM-PONs only support conventional point-to-point service, and cannot meet the increasing demand for broadcast and multicast services. In this thesis, a simple network upgrade is proposed based on the traditional PON architecture to support both point-to-point and multicast service. In addition, the two service signals are modulated on the same lightwave carrier. The upstream signal is also remodulated on the same carrier at the optical network unit, which can significantly relax the requirement on wavelength management at the network unit.

  15. Certification of windshear performance with RTCA class D radomes

    NASA Technical Reports Server (NTRS)

    Mathews, Bruce D.; Miller, Fran; Rittenhouse, Kirk; Barnett, Lee; Rowe, William

    1994-01-01

    Superposition testing of detection range performance forms a digital signal for input into a simulation of the signal- and data-processing equipment and algorithms to be employed in a sensor system for advance warning of hazardous windshear. For a suitable pulse-Doppler radar, recording the digital data at the input to the digital signal processor furnishes a realistic operational scenario and an environmentally responsive clutter signal, including all sidelobe clutter, ground moving target indications (GMTI), and large spurious signals due to mainbeam clutter and/or RFI, representative of the urban airport clutter and aircraft scenarios (approach and landing antenna pointing). For linear radar system processes, a signal at the same point in the process from a hazard phenomenon may be calculated from models of the scattering phenomena, for example as represented on fine three-dimensional reflectivity and velocity grids. Superposition testing thus furnishes a competing signal environment for confirming detection and warning-time performance against phenomena that cannot be controlled in the natural environment.

  16. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning-globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion-based capability to produce images and classifications of the shallow-water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing-to-acquisition ratio.

  17. Identifying the Root Causes of Wait States in Large-Scale Parallel Applications

    DOE PAGES

    Böhme, David; Geimer, Markus; Arnold, Lukas; ...

    2016-07-20

    Driven by growing application requirements and accelerated by current trends in microprocessor design, the number of processor cores on modern supercomputers is increasing from generation to generation. However, load or communication imbalance prevents many codes from taking advantage of the available parallelism, as delays of single processes may spread wait states across the entire machine. Moreover, when employing complex point-to-point communication patterns, wait states may propagate along far-reaching cause-effect chains that are hard to track manually and that complicate an assessment of the actual costs of an imbalance. Building on earlier work by Meira Jr. et al., we present a scalable approach that identifies program wait states and attributes their costs in terms of resource waste to their original cause. Ultimately, by replaying event traces in parallel both forward and backward, we can identify the processes and call paths responsible for the most severe imbalances even for runs with hundreds of thousands of processes.

  18. Identifying the Root Causes of Wait States in Large-Scale Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Böhme, David; Geimer, Markus; Arnold, Lukas

    Driven by growing application requirements and accelerated by current trends in microprocessor design, the number of processor cores on modern supercomputers is increasing from generation to generation. However, load or communication imbalance prevents many codes from taking advantage of the available parallelism, as delays of single processes may spread wait states across the entire machine. Moreover, when employing complex point-to-point communication patterns, wait states may propagate along far-reaching cause-effect chains that are hard to track manually and that complicate an assessment of the actual costs of an imbalance. Building on earlier work by Meira Jr. et al., we present a scalable approach that identifies program wait states and attributes their costs in terms of resource waste to their original cause. Ultimately, by replaying event traces in parallel both forward and backward, we can identify the processes and call paths responsible for the most severe imbalances even for runs with hundreds of thousands of processes.

  19. [Comparison of self-controlled retruded approach and bimanual manipulation method on the relationship of incisal point displacement in the mandibular retruded contact position].

    PubMed

    Zhang, Lei; Li, Yun-xia; Kang, Yan-feng; Yang, Guang-ju; Xie, Qiu-fei

    2014-02-18

    To evaluate the relationship between incisal point displacements in the mandibular retruded contact position (RCP) guided by the self-controlled retruded approach and by the bimanual manipulation method. Twelve healthy young volunteers were selected. The RCP was guided by the self-controlled retruded approach and by the bimanual manipulation method, and the track of the incisal point was recorded with a mandibular movement trace recording system. The movement direction of the incisal point in the horizontal plane was observed, and the distance between the incisal point at the RCP and the intercuspal contact position (ICP) was measured. Except for one volunteer, whose incisal point moved obliquely, the movement direction towards the RCP was straight posterior. The displacements of the incisal point from the ICP to the RCP guided by the self-controlled approach and by the bimanual manipulation method were (1.15 ± 0.64) mm and (0.98 ± 0.29) mm, respectively; the difference between the two methods was not statistically significant (P > 0.05). The displacements of the incisal point from the ICP to the RCP guided by the self-controlled approach and by the bimanual manipulation method are therefore approximately the same.

  20. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas throughout watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically based method that gives realistic results for watersheds with VSA hydrology.
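
    For reference, the standard SCS-CN runoff relationship that the distributed CN-VSA method builds on can be written as follows (this is the textbook form in US customary units; the paper's distributed formulation is not reproduced here):

```latex
% Standard SCS-CN direct-runoff equation (depths in inches).
% Q = direct runoff, P = precipitation, I_a = initial abstraction
% (commonly taken as 0.2 S), S = potential maximum retention, CN = curve number.
\[
Q = \frac{(P - I_a)^2}{(P - I_a) + S} \quad \text{for } P > I_a, \qquad
S = \frac{1000}{\mathrm{CN}} - 10 .
\]
```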

  1. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important because the content of videos varies greatly, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two variants of PBOD, a deterministic and a probabilistic approach, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. In the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. In the probabilistic approach, patch matching is done by modelling the histograms of the patches with Poisson distributions for both the RGB and HSV colour models; maximum likelihood is then applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Owing to its heavy processing requirements, the algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226

  2. Urban tree cover change in Detroit and Atlanta, USA, 1951-2010

    Treesearch

    Krista Merry; Jacek Siry; Pete Bettinger; J.M. Bowker

    2014-01-01

    We assessed tree cover using random points and polygons distributed within the administrative boundaries of Detroit, MI and Atlanta, GA. Two approaches were tested: a point-based approach using 1000 randomly located sample points, and a polygon-based approach using 250 circular areas, 200 m in radius (12.56 ha). In the case of Atlanta, both approaches arrived at similar...

  3. Predicting seizures in untreated temporal lobe epilepsy using point-process nonlinear models of heartbeat dynamics.

    PubMed

    Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R

    2016-08-01

    Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.

  4. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.

  5. A Bayesian cluster analysis method for single-molecule localization microscopy data.

    PubMed

    Griffié, Juliette; Shannon, Michael; Bromley, Claire L; Boelen, Lies; Burn, Garth L; Williamson, David J; Heard, Nicholas A; Cope, Andrew P; Owen, Dylan M; Rubin-Delanchy, Patrick

    2016-12-01

    Cell function is regulated by the spatiotemporal organization of the signaling machinery, and a key facet of this is molecular clustering. Here, we present a protocol for the analysis of clustering in data generated by 2D single-molecule localization microscopy (SMLM)-for example, photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). Three features of such data can cause standard cluster analysis approaches to be ineffective: (i) the data take the form of a list of points rather than a pixel array; (ii) there is a non-negligible unclustered background density of points that must be accounted for; and (iii) each localization has an associated uncertainty in regard to its position. These issues are overcome using a Bayesian, model-based approach. Many possible cluster configurations are proposed and scored against a generative model, which assumes Gaussian clusters overlaid on a completely spatially random (CSR) background, before every point is scrambled by its localization precision. We present the process of generating simulated and experimental data that are suitable to our algorithm, the analysis itself, and the extraction and interpretation of key cluster descriptors such as the number of clusters, cluster radii and the number of localizations per cluster. Variations in these descriptors can be interpreted as arising from changes in the organization of the cellular nanoarchitecture. The protocol requires no specific programming ability, and the processing time for one data set, typically containing 30 regions of interest, is ∼18 h; user input takes ∼1 h.
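
    As a companion to the description above, the following is a minimal sketch (in Python, with invented parameter values) of the generative model the protocol assumes: Gaussian clusters overlaid on a completely spatially random background, with every localization then scrambled by its own precision. It is for illustration only and is not the published analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_smlm_roi(n_clusters=5, pts_per_cluster=50, cluster_sd=30.0,
                      n_background=500, roi_size=3000.0, loc_precision_mean=20.0):
    """Simulate one SMLM region of interest (units: nm).

    Gaussian clusters are overlaid on a completely spatially random (CSR)
    background; every localization is then scrambled by its own precision.
    """
    centres = rng.uniform(0, roi_size, size=(n_clusters, 2))
    clustered = np.concatenate(
        [c + rng.normal(0.0, cluster_sd, size=(pts_per_cluster, 2)) for c in centres])
    background = rng.uniform(0, roi_size, size=(n_background, 2))
    points = np.concatenate([clustered, background])

    # Per-localization precision, then jitter each point by it.
    precision = rng.gamma(shape=4.0, scale=loc_precision_mean / 4.0, size=len(points))
    observed = points + rng.normal(0.0, precision[:, None], size=points.shape)
    return observed, precision

xy, sigma = simulate_smlm_roi()
print(xy.shape, sigma.mean())
```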

  6. Critical Issues and Key Points from the Survey to the Creation of the Historical Building Information Model: the Case of Santo Stefano Basilica

    NASA Astrophysics Data System (ADS)

    Castagnetti, C.; Dubbini, M.; Ricci, P. C.; Rivola, R.; Giannini, M.; Capra, A.

    2017-05-01

    The new era of design in architecture and civil engineering applications lies in the Building Information Modeling (BIM) approach, based on a 3D geometric model that includes a 3D database. This is easier for new constructions, whereas, when dealing with existing buildings, the creation of the BIM is based on accurate knowledge of the as-built construction. Such knowledge is provided by a 3D survey, often carried out with laser scanning technology or modern photogrammetry, which can guarantee an adequate point cloud in terms of resolution and completeness while balancing time consumption and cost against the required final accuracy. The BIM approach for existing buildings, and even more so for historical buildings, is not yet a well known and thoroughly discussed process. There are still several choices to be addressed in the process from the survey to the model, and critical issues to be discussed in the modeling step, particularly when dealing with unconventional elements such as deformed geometries or historical elements. The paper describes a comprehensive workflow that goes through the survey and the modeling, focusing on the critical issues and key points needed to obtain a reliable BIM of an existing monument. The case study employed to illustrate the workflow is the Basilica of St. Stefano in Bologna (Italy), a large monumental complex with great religious, historical and architectural assets.

  7. Open inquiry-based learning experiences: a case study in the context of energy exchange by thermal radiation

    NASA Astrophysics Data System (ADS)

    Pizzolato, Nicola; Fazio, Claudio; Rosario Battaglia, Onofrio

    2014-01-01

    An open inquiry (OI)-based teaching/learning experience, regarding a scientific investigation of the process of energy exchange by thermal radiation, is presented. A sample of upper secondary school physics teachers carried out this experience at the University of Palermo, Italy, in the framework of ESTABLISH, a FP7 European Project aimed at promoting and developing inquiry-based science education. The teachers had the opportunity to personally experience an OI-based learning activity, with the aim of exploring the pedagogical potentialities of this teaching approach to promote both the understanding of difficult concepts and a deeper view of scientific practices. The teachers were firstly engaged in discussions concerning real-life problematic situations, and then stimulated to design and carry out their own laboratory activities, aimed at investigating the process of energy exchange by thermal radiation. A scientific study on the energy exchange between a powered resistor and its surrounding environment, during the heating and cooling processes, was designed and performed. Here we report the phases of this experiment by following the teachers' perspective. A structured interview conducted both before and after the OI experience allowed us to analyze and point out the teachers' feedback from a pedagogical point of view. The advantages and limits of an OI-based approach to promote the development of more student-centred inquiry-oriented teaching strategies are finally discussed.

  8. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classification of model-free motion data. A nearest-neighbour classifier is proposed, based on comparison performed by the Dynamic Time Warping transform with a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is considered. The motion capture database containing data from 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
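
    To make the classifier concrete, here is a compact sketch of dynamic time warping with a per-frame point-set distance. The symmetric mean nearest-neighbour distance used below is only one plausible choice of cloud point measure and is an assumption, not necessarily the measure used by the authors.

```python
import numpy as np

def cloud_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two point sets
    (e.g. the 3D joint positions of one motion-capture frame each)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dtw(seq_a, seq_b, frame_dist=cloud_distance):
    """Classic O(len_a * len_b) dynamic time warping cost between two
    sequences of frames, each frame being an (n_points, 3) array."""
    na, nb = len(seq_a), len(seq_b)
    acc = np.full((na + 1, nb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = frame_dist(seq_a[i - 1], seq_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[na, nb]

# Toy usage: two short "gaits" of 20-joint skeletons.
rng = np.random.default_rng(1)
gait_a = [rng.normal(size=(20, 3)) for _ in range(15)]
gait_b = [f + 0.05 * rng.normal(size=(20, 3)) for f in gait_a]
print(dtw(gait_a, gait_b))
```

    A nearest-neighbour classifier would then assign a query sequence the label of the reference sequence with the smallest DTW cost.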

  9. Post-image acquisition processing approaches for coherent backscatter validation

    NASA Astrophysics Data System (ADS)

    Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.

    2014-10-01

    Utilizing a retro-reflector at a target point, the reflected irradiance of a laser beam traveling back toward the transmitting point contains a peak of intensity known as the enhanced backscatter (EBS) phenomenon. EBS depends on the strength regime of the turbulence occurring within the atmosphere as the beam propagates across and back. In order to capture and analyze this phenomenon so that it may be compared with theory, an imaging system is integrated into the optical setup. With proper imaging established, we are able to implement various post-image acquisition techniques to help determine the detection and positioning of EBS, which can then be validated against theory by inspection of dependent meteorological parameters such as the refractive index structure parameter Cn2 and the wind speed.

  10. Recommended approaches in the application of ...

    EPA Pesticide Factsheets

    ABSTRACT: Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health, due to difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine a benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes used to derive a transcriptional POD should be selected. Because of the very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to select genes for POD derivation and compared them to three previously proposed approaches. The relationships between the transcriptional BMDs derived using these 11 approaches and PODs derived from apical data that might be used in a human health risk assessment were compared. We found that transcriptional benchmark dose values for all 11 approaches were remarkably aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r

  11. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is targeted. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area in Zekeriyakoy, Istanbul, which contains partly open areas, forest areas and many types of buildings, using the TerraScan module of TerraSolid. The 3D building models were generated automatically from the results of the automatic point-based classification. The results of this research verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.
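
    The abstract does not spell out the hierarchical rules, so the following is only an illustrative sketch of how a rule hierarchy over per-point attributes might look; the attribute names and thresholds are hypothetical and do not come from the paper or from TerraScan.

```python
import numpy as np

def classify_points(points, height_above_ground, planarity,
                    low_veg_max=0.5, building_min_height=2.5, planar_min=0.8):
    """Toy hierarchical classification of LiDAR points.

    points              : (N, 3) array of x, y, z coordinates (kept only to
                          mirror a real pipeline's interface)
    height_above_ground : (N,) normalized heights, e.g. z minus a ground model
    planarity           : (N,) local planarity score in [0, 1] from a
                          neighbourhood eigenvalue analysis
    Returns an (N,) array of labels: 'ground', 'low_veg', 'building', 'high_veg'.
    """
    labels = np.full(len(points), 'high_veg', dtype=object)
    labels[height_above_ground <= low_veg_max] = 'low_veg'
    labels[height_above_ground <= 0.1] = 'ground'
    # Rule applied last in the hierarchy: tall AND planar -> building roof.
    is_building = (height_above_ground >= building_min_height) & (planarity >= planar_min)
    labels[is_building] = 'building'
    return labels
```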

  12. Comparative lifecycle assessment of alternatives for waste management in Rio de Janeiro - Investigating the influence of an attributional or consequential approach.

    PubMed

    Bernstad Saraiva, A; Souza, R G; Valle, R A B

    2017-10-01

    The environmental impacts from three management alternatives for organic fraction of municipal solid waste were compared using lifecycle assessment methodology. The alternatives (sanitary landfill, selective collection of organic waste for anaerobic digestion and anaerobic digestion after post-separation of organic waste) were modelled applying an attributional as well as consequential approach, in parallel with the aim of identifying if and how these approaches can affect results and conclusions. The marginal processes identified in the consequential modelling were in general associated with higher environmental impacts than average processes modelled with an attributional approach. As all investigated waste management alternatives result in net-substitution of energy and in some cases also materials, the consequential modelling resulted in lower absolute environmental impacts in five of the seven environmental impact categories assessed in the study. In three of these, the chosen modelling approach can alter the hierarchy between compared waste management alternatives. This indicates a risk of underestimating potential benefits from efficient energy recovery from waste when applying attributional modelling in contexts in which electricity provision historically has been dominated by technologies presenting rather low environmental impacts, but where projections point at increasing impacts from electricity provision in coming years. Thus, in the present case study, the chosen approach affects both absolute and relative results from the comparison. However, results were largely related to the processes identified as affected by investigated changes, and not merely the chosen modelling approach. The processes actually affected by future choices between different waste management alternatives are intrinsically uncertain. The study demonstrates the benefits of applying different assumptions regarding the processes affected by investigated choices - both for provision of energy and materials substituted by waste management processes in consequential LCA modelling, in order to present outcomes that are relevant as decision support within the waste management sector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Self-Editing: On the Relation Between behavioral and Psycholinguistic Approaches

    PubMed Central

    Kimberly Epting, L; Critchfield, Thomas S

    2006-01-01

    In Skinner's (1957) conceptual analysis, the process of self-editing is integral to the dynamic complexities of multiply determined verbal behavior, but the analysis has generated little in the way of an experimental analysis. The majority of scientific work on self-editing has taken place within linguistics and cognitive psycholinguistics. Here we compare and contrast behavioral and cognitive psycholinguistic approaches to self-editing, highlighting points of contact that can be identified despite fundamental differences in theoretical styles. We conclude that the two approaches are not mutually exclusive on all dimensions, and suggest that a consideration of cognitive psycholinguistic research may help to spur an experimental analysis of self-editing from a behavioral perspective. PMID:22478464

  14. Variation of the energy release rate as a crack approaches and passes through an elastic inclusion

    NASA Astrophysics Data System (ADS)

    Li, Rongshun; Chudnovsky, A.

    1993-02-01

    The variation of the energy release rate (ERR) at the tip of a crack penetrating an elastic inclusion is analyzed using an approach in which the random array of microcracks or other defects is modeled by an elastic inclusion with effective elastic properties. Computations are carried out using a finite element procedure. The eight-noded isoparametric serendipity element, with the midpoint shifted to the quarter-point, is used to simulate the singularity at the crack tip, and crack growth is accommodated by a mesh regeneration technique. The ERR values were calculated for various crack tip positions which simulate the process of the crack approaching and penetrating the inclusion.

  15. Variation of the energy release rate as a crack approaches and passes through an elastic inclusion

    NASA Technical Reports Server (NTRS)

    Li, Rongshun; Chudnovsky, A.

    1993-01-01

    The variation of the energy release rate (ERR) at the tip of a crack penetrating an elastic inclusion is analyzed using an approach in which the random array of microcracks or other defects is modeled by an elastic inclusion with effective elastic properties. Computations are carried out using a finite element procedure. The eight-noded isoparametric serendipity element, with the midpoint shifted to the quarter-point, is used to simulate the singularity at the crack tip, and crack growth is accommodated by a mesh regeneration technique. The ERR values were calculated for various crack tip positions which simulate the process of the crack approaching and penetrating the inclusion.

  16. Continuous Firefly Algorithm for Optimal Tuning of Pid Controller in Avr System

    NASA Astrophysics Data System (ADS)

    Bendjeghaba, Omar

    2014-01-01

    This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an automatic voltage regulator (AVR) system. In the tuning process the CFA is iterated to reach optimal or near-optimal PID controller parameters, the main goal being to improve the AVR step response characteristics. The conducted simulations show the effectiveness and efficiency of the proposed approach, which can also improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method gives better control system performance in terms of time-domain specifications and set-point tracking.
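
    For readers unfamiliar with the method, the sketch below shows the standard continuous firefly update applied to a three-parameter (Kp, Ki, Kd) search. The cost function is a placeholder that should be replaced by a simulation of the AVR step response (e.g. an ITAE criterion), and the algorithm constants are illustrative rather than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def avr_cost(params):
    """Placeholder objective: in practice, simulate the AVR closed loop with
    PID gains params = (Kp, Ki, Kd) and return a step-response criterion
    such as ITAE or a weighted sum of overshoot and settling time."""
    target = np.array([1.0, 0.5, 0.2])          # hypothetical optimum
    return float(np.sum((params - target) ** 2))

def firefly_pid(n_fireflies=20, n_iter=100, beta0=1.0, gamma=1.0, alpha=0.2,
                bounds=(0.0, 2.0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, 3))      # (Kp, Ki, Kd) per firefly
    cost = np.array([avr_cost(p) for p in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:                    # j is brighter, so i moves
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                            + alpha * rng.uniform(-0.5, 0.5, size=3)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = avr_cost(x[i])
        alpha *= 0.97                                    # gradually reduce randomness
    best = np.argmin(cost)
    return x[best], cost[best]

print(firefly_pid())
```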

  17. [Anthropology, ethnography, and narrative: intersecting paths in understanding the processes of health and sickness].

    PubMed

    Costa, Gabriela M C; Gualda, Dulce M R

    2010-12-01

    The article discusses anthropology, ethnographic method, and narrative as possible ways of coming to know subjects' experiences and the feelings they attribute to them. From an anthropological perspective, the sociocultural universe is taken as a point of reference in understanding the meaning of the processes of health and sickness, using a dense ethnographic description from an interpretivist analytical approach. In this context, narratives afford possible paths to understanding how subjective human experiences are shared and how behavior is organized, with a special focus on meaning, the process by which stories are produced, relations between narrator and other subjects, processes of knowledge, and the manifold ways in which experience can be captured.

  18. Food safety and nutritional quality for the prevention of non communicable diseases: the Nutrient, hazard Analysis and Critical Control Point process (NACCP).

    PubMed

    Di Renzo, Laura; Colica, Carmen; Carraro, Alberto; Cenci Goga, Beniamino; Marsella, Luigi Tonino; Botta, Roberto; Colombo, Maria Laura; Gratteri, Santo; Chang, Ting Fa Margherita; Droli, Maurizio; Sarlo, Francesca; De Lorenzo, Antonino

    2015-04-23

    The important role of food and nutrition in public health is being increasingly recognized as crucial for its potential impact on health-related quality of life and the economy, both at the societal and individual levels. The prevalence of non-communicable diseases calls for a reformulation of our view of food. The Hazard Analysis and Critical Control Point (HACCP) system, first implemented in the EU with Directive 43/93/CEE, later replaced by Regulation CE 178/2002 and Regulation CE 852/2004, is the internationally agreed approach to food safety control. Our aim is to develop a new procedure for the assessment of the Nutrient, hazard Analysis and Critical Control Point (NACCP) process, for total quality management (TQM), and to optimize nutritional levels. NACCP was based on four general principles: i) guarantee of health maintenance; ii) evaluation and assurance of the nutritional quality of food and TQM; iii) correct information to consumers; iv) assurance of an ethical profit. There are three stages for the application of the NACCP process: 1) application of NACCP for quality principles; 2) application of NACCP for health principles; 3) implementation of the NACCP process. The actions are: 1) identification of nutritional markers, which must remain intact throughout the food supply chain; 2) identification of critical control points which must be monitored in order to minimize the likelihood of a reduction in quality; 3) establishment of critical limits to maintain adequate levels of nutrients; 4) establishment and implementation of effective monitoring procedures for critical control points; 5) establishment of corrective actions; 6) identification of metabolic biomarkers; 7) evaluation of the effects of food intake through the application of specific clinical trials; 8) establishment of procedures for consumer information; 9) implementation of the Health claim Regulation EU 1924/2006; 10) starting a training program. We calculate the risk assessment as follows: Risk (R) = probability (P) × damage (D). The NACCP process considers the entire food supply chain "from farm to consumer"; at each point of the chain it is necessary to implement tight monitoring in order to guarantee optimal nutritional quality.

  19. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range and also enhanced hypoglycemia sensitivity. Exclusion of the CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach were found to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
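
    To make the distinction concrete, here is a generic sketch of the two calibration schemes for a current-output glucose sensor. It is not the published SCGM1 algorithm (which additionally uses a corrective intercept); the zero background current follows the study's conclusion, and all numbers are illustrative.

```python
def two_point_calibration(i1, g1, i2, g2):
    """Fit sensitivity (slope) and background (intercept) from two paired
    sensor currents and reference glucose values: i = slope * g + intercept."""
    slope = (i2 - i1) / (g2 - g1)
    intercept = i1 - slope * g1
    return slope, intercept

def one_point_calibration(i1, g1, background_current=0.0):
    """With the background current assumed known (here zero, following the
    study's estimate for the SCGM1 sensor), one paired sample fixes the slope."""
    slope = (i1 - background_current) / g1
    return slope, background_current

def current_to_glucose(i, slope, intercept):
    return (i - intercept) / slope

# Illustrative numbers only (current in nA, glucose in mg/dL).
slope, icept = one_point_calibration(i1=12.0, g1=120.0)
print(current_to_glucose(6.0, slope, icept))   # -> 60.0 mg/dL
```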

  20. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives together into a complete one. The most popular approach to register point clouds is to minimize the difference between these point clouds iteratively by Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align the point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds is to solve a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to solve high depth uncertainty problem caused by little camera baseline in vision-based 3D reconstruction. PMID:28286703
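
    Given feature-based 3D correspondences, the rigid transformation can be solved in closed form by a standard SVD-based least-squares fit (Kabsch/Umeyama style). The sketch below illustrates that step in isolation; it is a generic solution, not the authors' full pipeline.

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Least-squares rotation R and translation t such that R @ p + t ~ q
    for matched point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Self-check with a synthetic rotation and translation.
rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.5])
R, t = rigid_transform_from_correspondences(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```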

  1. Co-Registration of Terrestrial and Uav-Based Images - Experimental Results

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Jende, P.

    2016-03-01

    For many applications within urban environments the combined use of images taken from the ground and from unmanned aerial platforms seems interesting: while from the airborne perspective the upper parts of objects, including roofs, can be observed, the ground images complement the data with lateral views to retrieve a complete visualisation or 3D reconstruction of areas of interest. The automatic co-registration of air- and ground-based images is still a challenge and cannot be considered solved. The main obstacle is that objects are photographed from quite different angles, so state-of-the-art tie-point measurement approaches cannot cope with the induced perspective transformation. One important first step towards a solution is to use airborne images taken in slant directions. Those oblique views not only help to connect vertical images and horizontal views but also provide image information on 3D structures not visible from the other two directions. In our experience, however, careful planning and many images taken under different viewing angles are still needed to support automatic matching across all images and a complete bundle block adjustment. Nevertheless, the entire process is still quite sensitive - removing a single image might lead to a completely different or wrong solution, or to separation of image blocks. In this paper we analyse the impact of different parameters and strategies on the solution: (a) the tie-point matcher used and (b) the software used for bundle adjustment. Using the data provided in the context of the ISPRS benchmark on multi-platform photogrammetry, we systematically address the mentioned influences. Concerning tie-point matching we test the standard SIFT point extractor and descriptor, but also the SURF and ASIFT approaches, the ORB technique, as well as (A)KAZE, which is based on a nonlinear scale space. In terms of pre-processing we analyse the Wallis filter. Results show that in more challenging situations, in this case for data captured from different platforms on different days, most approaches do not perform well. Wallis filtering emerged as most helpful, especially for the SIFT approach. The commercial software pix4dmapper succeeds in the overall bundle adjustment only for some configurations, and in particular not for the entire image block provided.
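
    As an illustration of the tie-point matching step, the sketch below uses SIFT with Lowe's ratio test, assuming an OpenCV build (4.4 or later) in which cv2.SIFT_create is available. The CLAHE pre-processing is only a rough stand-in for Wallis filtering, not the filter used in the paper, and the file names are hypothetical.

```python
import cv2

def match_tie_points(path_a, path_b, use_clahe=True, ratio=0.8):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    if use_clahe:
        # Rough stand-in for Wallis-style local contrast enhancement.
        clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
        img_a, img_b = clahe.apply(img_a), clahe.apply(img_b)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b

# Usage (hypothetical file names):
# pts_a, pts_b = match_tie_points("uav_oblique.jpg", "terrestrial.jpg")
```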

  2. Influence of scattering processes on electron quantum states in nanowires

    PubMed Central

    Galenchik, Vadim; Borzdov, Andrei; Borzdov, Vladimir; Komarov, Fadei

    2007-01-01

    In the framework of quantum perturbation theory, the self-consistent method of calculation of electron scattering rates in nanowires with a one-dimensional electron gas in the quantum limit is worked out. The developed method allows both the collisional broadening and the quantum correlations between scattering events to be taken into account. It is an alternative per se to the Fock approximation for the self-energy approach based on Green's function formalism. However, this approach is free of the mathematical difficulties typical of the Fock approximation. Moreover, the developed method is simpler than the Fock approximation from the computational point of view. Using the approximation of stable one-particle quantum states it is proved that the electron scattering processes determine the dependence of the electron energy on its wave vector.

  3. Electronic levels and charge distribution near the interface of nickel

    NASA Technical Reports Server (NTRS)

    Waber, J. T.

    1982-01-01

    The energy levels in clusters of nickel atoms were investigated by means of a series of cluster calculations using both the multiple scattering and computational techniques (designated SSO) which avoids the muffin-tin approximation. The point group symmetry of the cluster has significant effect on the energy of levels nominally not occupied. This influences the electron transfer process during chemisorption. The SSO technique permits the approaching atom or molecule plus a small number of nickel atoms to be treated as a cluster. Specifically, molecular levels become more negative in the O atom, as well as in a CO molecule, as the metal atoms are approached. Thus, electron transfer from the nickel and bond formation is facilitated. This result is of importance in understanding chemisorption and catalytic processes.

  4. Influence of manufacturing parameters on the strength of PLA parts using Layered Manufacturing technique: A statistical approach

    NASA Astrophysics Data System (ADS)

    Jaya Christiyan, K. G.; Chandrasekhar, U.; Mathivanan, N. Rajesh; Venkateswarlu, K.

    2018-02-01

    3D printing was successfully used to fabricate samples of polylactic acid (PLA). Processing parameters such as lay-up speed, lay-up thickness, and printing nozzle diameter were varied. All samples were tested for flexural strength using a three-point load test. A statistical mathematical model was developed to correlate the processing parameters with flexural strength. The results clearly demonstrated that lay-up thickness and nozzle diameter influenced flexural strength significantly, whereas lay-up speed hardly influenced it.

  5. Status review of PMR polyimides. [Polymerization of Monomer Reactants

    NASA Technical Reports Server (NTRS)

    Serafini, T. T.

    1979-01-01

    In the NASA developed PMR (polymerization of monomer reactants) the reinforcing fibers are impregnated with a solution containing a mixture of monomers dissolved in a low boiling point alkyl alcohol solvent, with the monomers reacting in situ at elevated temperatures to form a thermo-oxidatively stable polyimide matrix. The current status of first and second generation PMR polyimides is reviewed, considering synthesis and properties, processing, and applications. It is concluded that the PMR approach offers various significant advantages, especially superior high temperature properties and processing versatility, to fabricators and users of polyimide/fiber composites.

  6. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    NASA Astrophysics Data System (ADS)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proven to be the preferred method for acquiring data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines play a crucial role in the quality of the derived poses and consequently of the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), causing the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, like Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value of the average of the residuals of n points with respect to the line is σ / √n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
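
    The following is a simplified illustration of the residual test described above: a scanline segment is grown point by point, and the mean residual over trailing windows of n points is compared against thresholds proportional to σ/√n. The window sizes, the factor k and the greedy growing strategy are assumptions for illustration, not the exact published procedure.

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line through 2D points; returns unit normal n and
    offset d such that n . p = d for points p on the line."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]                      # direction of smallest variance = line normal
    return n, float(n @ c)

def residual_range_ok(pts, sigma, windows=(5, 10, 20), k=3.0):
    """Check a candidate segment: for each trailing window of n points, the mean
    absolute residual must stay below k * sigma / sqrt(n)."""
    n_vec, d = fit_line(pts)
    res = np.abs(pts @ n_vec - d)
    for n in windows:
        if len(res) >= n and res[-n:].mean() > k * sigma / np.sqrt(n):
            return False
    return True

def grow_segments(scanline, sigma=0.01, min_pts=5):
    """Greedy segmentation of an ordered (N, 2) scanline into linear segments."""
    segments, start = [], 0
    for end in range(min_pts, len(scanline) + 1):
        if not residual_range_ok(scanline[start:end], sigma):
            segments.append((start, end - 1))
            start = end - 1
    segments.append((start, len(scanline)))
    return segments
```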

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasouli, C.; Abbasi Davani, F.; Rokrok, B.

    Plasma confinement using an external magnetic field is one of the successful routes to controlled nuclear fusion. Development and validation of the solution process for plasma equilibrium in experimental toroidal fusion devices is the main subject of this work. Solution of the nonlinear 2D stationary problem posed by the Grad-Shafranov equation gives quantitative information about the plasma equilibrium inside the vacuum chamber of hot fusion devices. This study suggests solving the plasma equilibrium equation, which is essential in toroidal nuclear fusion devices, using a mesh-free method under the condition that the plasma boundary is unknown. The Grad-Shafranov equation has been solved numerically by the point interpolation collocation mesh-free method. Important features of this approach include a truly mesh-free formulation, simple mathematical relationships between points, and acceptable precision in comparison with the parametric results. The calculation process has been carried out using regular and irregular nodal distributions and support domains with different numbers of points. The relative error between the numerical and analytical solutions is discussed for several test examples such as the small-size Damavand tokamak, an ITER-like equilibrium, an NSTX-like equilibrium, and a typical spheromak.

  8. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
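
    A sketch of the thresholding part of such a pipeline is shown below, using scikit-image's multi-level Otsu and a morphological opening. The raster interpolation and the density/elevation filtering are omitted, and the assumption that the lowest-intensity class marks candidate rough patches is ours, not the paper's.

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.morphology import binary_opening, disk

def candidate_roughness_mask(intensity_raster, n_classes=3):
    """intensity_raster: 2D float array interpolated from MLS intensity returns.
    Returns a boolean mask of candidate rough-road cells."""
    thresholds = threshold_multiotsu(intensity_raster, classes=n_classes)
    labels = np.digitize(intensity_raster, bins=thresholds)
    # Assumption: the lowest-intensity class marks candidate rough patches.
    candidate = labels == 0
    # Morphological opening removes isolated, speckle-like detections.
    return binary_opening(candidate, disk(2))
```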

  9. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose being the only variable, the face images should lie in a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (the pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weight, by constraining them with the pose angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. 25th anniversary article: key points for high-mobility organic field-effect transistors.

    PubMed

    Dong, Huanli; Fu, Xiaolong; Liu, Jie; Wang, Zongrui; Hu, Wenping

    2013-11-20

    Remarkable progress has been made in developing high performance organic field-effect transistors (OFETs) and the mobility of OFETs has been approaching the values of polycrystalline silicon, meeting the requirements of various electronic applications from electronic papers to integrated circuits. In this review, the key points for development of high mobility OFETs are highlighted from aspects of molecular engineering, process engineering and interface engineering. The importance of other factors, such as impurities and testing conditions is also addressed. Finally, the current challenges in this field for practical applications of OFETs are further discussed. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Tailoring point counts for inference about avian density: dealing with nondetection and availability

    USGS Publications Warehouse

    Johnson, Fred A.; Dorazio, Robert M.; Castellón, Traci D.; Martin, Julien; Garcia, Jay O.; Nichols, James D.

    2014-01-01

    Point counts are commonly used for bird surveys, but interpretation is ambiguous unless there is an accounting for the imperfect detection of individuals. We show how repeated point counts, supplemented by observation distances, can account for two aspects of the counting process: (1) detection of birds conditional on being available for observation and (2) the availability of birds for detection given presence. We propose a hierarchical model that permits the radius in which birds are available for detection to vary with forest stand age (or other relevant habitat features), so that the number of birds available at each location is described by a Poisson-gamma mixture. Conditional on availability, the number of birds detected at each location is modeled by a beta-binomial distribution. We fit this model to repeated point count data of Florida scrub-jays and found evidence that the area in which birds were available for detection decreased with increasing stand age. Estimated density was 0.083 (95%CI: 0.060–0.113) scrub-jays/ha. Point counts of birds have a number of appealing features. Based on our findings, however, an accounting for both components of the counting process may be necessary to ensure that abundance estimates are comparable across time and space. Our approach could easily be adapted to other species and habitats.
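
    The two-stage counting process described above can be made concrete with a small simulation: availability follows a Poisson-gamma (negative-binomial) mixture and detection a beta-binomial. The parameter values below are invented for illustration; fitting the actual hierarchical model to field data would require a dedicated inference step (e.g. MCMC).

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_point_counts(n_sites=50, n_visits=3, mean_avail=4.0, gamma_shape=2.0,
                          det_mean=0.6, det_overdisp=10.0):
    """Simulate repeated point counts.

    Availability: N_i ~ Poisson(lambda_i), lambda_i ~ Gamma  (Poisson-gamma mixture).
    Detection:    y_ij ~ Binomial(N_i, p_ij), p_ij ~ Beta    (beta-binomial).
    """
    lam = rng.gamma(gamma_shape, mean_avail / gamma_shape, size=n_sites)
    n_available = rng.poisson(lam)                      # birds available per site
    a = det_mean * det_overdisp
    b = (1.0 - det_mean) * det_overdisp
    p = rng.beta(a, b, size=(n_sites, n_visits))        # per-visit detection prob.
    counts = rng.binomial(n_available[:, None], p)      # observed counts
    return n_available, counts

n_true, y = simulate_point_counts()
print(n_true[:5])
print(y[:5])
```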

  12. [Proposal of a costing method for the provision of sterilization in a public hospital].

    PubMed

    Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H

    2011-07-01

    To refine billing to institutions whose sterilization operations are outsourced, a sterilization costing approach was developed. The aim of the study was to determine the value of a sterilization unit (one point "S"), which evolves with investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large and small containers and pouches were recorded; the reference time corresponds to one pouch (equal to one point S). In parallel, the annual operating cost of sterilization was defined and divided into several areas of expenditure: staff, equipment and building depreciation, supplies, and maintenance. A total of 136 container preparation times were measured. The time to prepare a pouch was estimated at one minute (one S); a small container represents four S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing methods used in sterilization services by considering each item of expenditure. This point S will be the basis for billing subcontracted work to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
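
    A worked illustration of the costing scheme is given below. Only the point weights (1 S per pouch, 4 S per small container, 10 S per large container) come from the abstract; the annual volumes and operating cost are invented for the example.

```python
# Point weights from the study: preparation of one pouch defines 1 point "S".
POINTS = {"pouch": 1, "small_container": 4, "large_container": 10}

def cost_per_point(annual_operating_cost, annual_volumes):
    """annual_volumes: dict mapping item type -> number processed per year."""
    total_points = sum(POINTS[item] * n for item, n in annual_volumes.items())
    return annual_operating_cost / total_points

# Hypothetical figures, for illustration only.
volumes = {"pouch": 40_000, "small_container": 12_000, "large_container": 6_000}
unit_cost = cost_per_point(annual_operating_cost=600_000.0, annual_volumes=volumes)
print(round(unit_cost, 3))                # currency units per point S
# Billing one subcontracted large container then costs 10 * unit_cost.
print(round(10 * unit_cost, 2))
```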

  13. Biomarkers and biometric measures of adherence to use of ARV-based vaginal rings.

    PubMed

    Stalter, Randy M; Moench, Thomas R; MacQueen, Kathleen M; Tolley, Elizabeth E; Owen, Derek H

    2016-01-01

    Poor adherence to product use has been observed in recent trials of antiretroviral (ARV)-based oral and vaginal gel HIV prevention products, resulting in an inability to determine product efficacy. The delivery of microbicides through vaginal rings is widely perceived as a way to achieve better adherence but vaginal rings do not eliminate the adherence challenges exhibited in clinical trials. Improved objective measures of adherence are needed as new ARV-based vaginal ring products enter the clinical trial stage. To identify technologies that have potential future application for vaginal ring adherence measurement, a comprehensive literature search was conducted that covered a number of biomedical and public health databases, including PubMed, Embase, POPLINE and the Web of Science. Published patents and patent applications were also searched. Technical experts were also consulted to gather more information and help evaluate identified technologies. Approaches were evaluated as to feasibility of development and clinical trial implementation, cost and technical strength. Numerous approaches were identified through our landscape analysis and classified as either point measures or cumulative measures of vaginal ring adherence. Point measurements are those that give a measure of adherence at a particular point in time. Cumulative measures attempt to measure ring adherence over a period of time. Approaches that require modifications to an existing ring product are at a significant disadvantage, as this will likely introduce additional regulatory barriers to the development process and increase manufacturing costs. From the point of view of clinical trial implementation, desirable attributes would be high acceptance by trial participants, and little or no additional time or training requirements on the part of participants or clinic staff. We have identified four promising approaches as being high priority for further development based on the following measurements: intracellular drug levels, drug levels in hair, the accumulation of a vaginal analyte that diffuses into the ring, and the depletion of an intrinsic ring constituent. While some approaches show significant promise over others, it is recommended that a strategy of using complementary biometric and behavioural approaches be adopted to best understand participants' adherence to ARV-based ring products in clinical trials.

  14. Derived heuristics-based consistent optimization of material flow in a gold processing plant

    NASA Astrophysics Data System (ADS)

    Myburgh, Christie; Deb, Kalyanmoy

    2018-01-01

    Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which when modelled in a programming format involves if-then-else statements. Therefore, a formulation of an optimization problem of such processes becomes complicated with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure which is capable of handling all complexities offered by the problem is developed. Although the evolutionary approach produced results with comparatively less variance over multiple runs, the performance has been enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented and their importance in a quick convergence of the overall algorithm is demonstrated.

  15. Controlled ion track etching

    NASA Astrophysics Data System (ADS)

    George, J.; Irkens, M.; Neumann, S.; Scherer, U. W.; Srivastava, A.; Sinha, D.; Fink, D.

    2006-03-01

    It is a common practice since long to follow the ion track-etching process in thin foils via conductometry, i.e . by measurement of the electrical current which passes through the etched track, once the track breakthrough condition has been achieved. The major disadvantage of this approach, namely the absence of any major detectable signal before breakthrough, can be avoided by examining the track-etching process capacitively. This method allows one to define precisely not only the breakthrough point before it is reached, but also the length of any non-transient track. Combining both capacitive and conductive etching allows one to control the etching process perfectly. Examples and possible applications are given.

  16. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform points sets and non-uniform points sets, first preprocessed by the adaptive sampling algorithm on a real world object were then tested using the local adjustment sampling method. The results show that the initial point sets when preprocessed by adaptive sampling using triangle patches, are moved the least amount of distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.

  17. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.

  18. Active Segmentation.

    PubMed

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour- a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary.We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring the visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from the known databases 1 demonstrate the promise of the approach.

  19. Auditory processing theories of language disorders: past, present, and future.

    PubMed

    Miller, Carol A

    2011-07-01

    The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.

  20. Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas

    2013-08-01

    Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.

  1. On the Application of Different Event-Based Sampling Strategies to the Control of a Simple Industrial Process

    PubMed Central

    Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián

    2009-01-01

    This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each one of the hybrid and the pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new proof concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975

  2. Quantitative verification of the keyhole concept: a comparison of area of exposure in the parasellar region via supraorbital keyhole, frontotemporal pterional, and supraorbital approaches.

    PubMed

    Cheng, Cheng-Mao; Noguchi, Akio; Dogan, Aclan; Anderson, Gregory J; Hsu, Frank P K; McMenomey, Sean O; Delashaw, Johnny B

    2013-02-01

    This study was designed to determine if the "keyhole concept," proposed by Perneczky's group, can be verified quantitatively. Fourteen (3 bilateral and 8 unilateral) sides of embalmed latex-injected cadaveric heads were dissected via 3 sequential craniotomy approaches: supraorbital keyhole, frontotemporal pterional, and supraorbital. Three-dimensional cartesian coordinates were recorded using a stereotactic localizer. The orthocenter of the ipsilateral anterior clinoid process, the posterior clinoid process, and the contralateral anterior clinoid process are expressed as a center point (the apex). Seven vectors project from the apex to their corresponding target points in a radiating manner on the parasellar skull base. Each 2 neighboring vectors border what could be considered a triangle, and the total area of the 7 triangles sharing the same apex was geometrically expressed as the area of exposure in the parasellar region. Values are expressed as the mean ± SD (mm(2)). The total area of exposure was as follows: supraorbital keyhole 1733.1 ± 336.0, pterional 1699.3 ± 361.9, and supraorbital 1691.4 ± 342.4. The area of exposure on the contralateral side was as follows: supraorbital keyhole 602.2 ± 194.7, pterional 595.2 ± 228.0, and supraorbital 553.3 ± 227.2. The supraorbital keyhole skull flap was 2.0 cm(2), and the skull flap size ratio was 1:5:6.5 (supraorbital keyhole/pterional/supraorbital). The area of exposure of the parasellar region through the smaller supraorbital keyhole approach is as adequate as the larger pterional and supraorbital approaches. The keyhole concept can be verified quantitatively as follows: 1) a wide area of exposure on the skull base can be obtained through a small keyhole skull opening, and 2) the side opposite the opening can also be visualized.

  3. Understanding Metal Sources and Transport Processes in Watersheds: a Hydropedologic Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Bullen, T. D.; Bailey, S. W.; McGuire, K. J.; Brousseau, P.; Ross, D. S.; Bourgault, R.; Zimmer, M. A.

    2010-12-01

    Understanding the origin of metals in watersheds, as well as the transport and cycling processes that affect them is of critical importance to watershed science. Metals can be derived both from weathering of minerals in the watershed soils and bedrock and from atmospheric deposition, and can have highly variable residence times in the watershed due to cycling through plant communities and retention in secondary mineral phases prior to release to drainage waters. Although much has been learned about metal cycling and transport through watersheds using simple “box model” approaches that define unique input, output and processing terms, the fact remains that watersheds are inherently complex and variable in terms of substrate structure, hydrologic flowpaths and the influence of plants, all of which affect the chemical composition of water that ultimately passes through the watershed outlet. In an effort to unravel some of this complexity at a watershed scale, we have initiated an interdisciplinary, hydropedology-focused study of the hydrologic reference watershed (Watershed 3) at the Hubbard Brook Experimental Forest in New Hampshire, USA. This 41 hectare headwater catchment consists of a beech-birch-maple-spruce forest growing on soils developed on granitoid glacial till that mantles Paleozoic metamorphic bedrock. Soils vary from lateral spodosols downslope from bedrock exposures near the watershed crest to vertical and bi-modal spodosols along hillslopes to umbrepts at toe-slope positions and inferred hydrologic pinch points created by bedrock and till structure. Using a variety of chemical and isotope tracers (e.g., K/Na, Ca/Sr, Sr/Ba, Fe/Mn, 87Sr/86Sr, Ca-Sr-Fe stable isotopes) on water, soil and plant samples in an end-member mixing analysis approach, we are attempting to discretize the watershed according to soil types encountered along determined hydrologic flowpaths in order better constrain the various biogeochemical processes that control the delivery of metals to the watershed outlet. Our initial results reveal that along the numerous first-order streams that drain the watershed, chemical and Sr isotope compositions are highly variable from sample point to sample point on a given day and from season to season, reflecting the complex nature of hydrologic flowpaths that deliver water to the streams and hinting at the importance of groundwater seeps that appear to concentrate along the central axis of the watershed.

  4. Terminal Dynamics Approach to Discrete Event Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Meyers, Ronald

    1995-01-01

    This paper presents and discusses a mathematical formalism for simulation of discrete event dynamic (DED)-a special type of 'man-made' systems to serve specific purposes of information processing. The main objective of this work is to demonstrate that the mathematical formalism for DED can be based upon a terminal model of Newtonian dynamics which allows one to relax Lipschitz conditions at some discrete points.!.

  5. Musical Probabilities, Abductive Reasoning, and Brain Mechanisms: Extended Perspective of "A Priori" Listening to Music within the Creative Cognition Approach

    ERIC Educational Resources Information Center

    Schmidt, Sebastian; Troge, Thomas A.; Lorrain, Denis

    2013-01-01

    A theory of listening to music is proposed. It suggests that, for listeners, the process of prediction is the starting point to experiencing music. This implies that perception of music starts through both a predisposed and an experience-based extrapolation into the future (this is labeled "a priori" listening). Indications for this…

  6. QuickBird and OrbView-3 Geopositional Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Ross, Kenton

    2006-01-01

    Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are a) Well distributed; b) Accurately surveyed; and c) Easily found in imagery. Perform independent assessments with independent teams. Each team has slightly different measurement techniques and data processing methods. NASA Stennis Space Center. South Dakota State University.

  7. How Do Students Prove Their Learning and Teachers Their Teaching? Do Teachers Make a Difference?

    ERIC Educational Resources Information Center

    Tanisli, Dilek

    2016-01-01

    Problem Statement: Gaining reasoning skills in early years affects the formal proving skills in the following years, thus it is quite significant. The acquiring of this skill is only possible with the approaches that the teachers used in the process. At this point, the problem to be researched in terms of making proofs is seen in how middle school…

  8. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.

  9. Anthropomorphic reasoning about neuromorphic AGI safety

    NASA Astrophysics Data System (ADS)

    Jilk, David J.; Herd, Seth J.; Read, Stephen J.; O'Reilly, Randall C.

    2017-11-01

    One candidate approach to creating artificial general intelligence (AGI) is to imitate the essential computations of human cognition. This process is sometimes called 'reverse-engineering the brain' and the end product called 'neuromorphic.' We argue that, unlike with other approaches to AGI, anthropomorphic reasoning about behaviour and safety concerns is appropriate and crucial in a neuromorphic context. Using such reasoning, we offer some initial ideas to make neuromorphic AGI safer. In particular, we explore how basic drives that promote social interaction may be essential to the development of cognitive capabilities as well as serving as a focal point for human-friendly outcomes.

  10. Detection of kinetic change points in piece-wise linear single molecule motion

    NASA Astrophysics Data System (ADS)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.

  11. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system constituted of two unattached cameras to initialize the reconstruction process. Afterwards , the second camera of stereo vision system (characterized by varying parameters) moves to capture more images at different times which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected and Euclidean sparse 3D reconstruction is obtained. So, to increase the number of matches and have a more dense reconstruction, the Match propagation algorithm, more suitable for interesting movement of the camera, was applied on the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.

  12. Progress testing in the medical curriculum: students' approaches to learning and perceived stress.

    PubMed

    Chen, Yan; Henning, Marcus; Yielder, Jill; Jones, Rhys; Wearn, Andy; Weller, Jennifer

    2015-09-11

    Progress Tests (PTs) draw on a common question bank to assess all students in a programme against graduate outcomes. Theoretically PTs drive deep approaches to learning and reduce assessment-related stress. In 2013, PTs were introduced to two year groups of medical students (Years 2 and 4), whereas students in Years 3 and 5 were taking traditional high-stakes assessments. Staged introduction of PTs into our medical curriculum provided a time-limited opportunity for a comparative study. The main purpose of the current study was to compare the impact of PTs on undergraduate medical students' approaches to learning and perceived stress with that of traditional high-stakes assessments. We also aimed to investigate the associations between approaches to learning, stress and PT scores. Undergraduate medical students (N = 333 and N = 298 at Time 1 and Time 2 respectively) answered the Revised Study Process Questionnaire (R-SPQ-2F) and the Perceived Stress Scale (PSS) at two time points to evaluate change over time. The R-SPQ-2F generated a surface approach and a deep approach score; the PSS generated an overall perceived stress score. We found no significant differences between the two groups in approaches to learning at either time point, and no significant changes in approaches to learning over time in either cohort. Levels of stress increased significantly at the end of the year (Time 2) for students in the traditional assessment cohort, but not in the PT cohort. In the PT cohort, surface approach to learning, but not stress, was a significant negative predictor of students' PT scores. While confirming an association between surface approaches to learning and lower PT scores, we failed to demonstrate an effect of PTs on approaches to learning. However, a reduction in assessment-associated stress is an important finding.

  13. Comparision of photogrammetric point clouds with BIM building elements for construction progress monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2014-08-01

    For construction progress monitoring a planned state of the construction at a certain time (as-planed) has to be compared to the actual state (as-built). The as-planed state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. It is regarded that that images on a construction cannot be taken from everywhere it seems to be necessary. Because of this we use a combination of structure from motion process together with control points to create a scaled point cloud in a consistent coordinate system. Subsequently this point cloud is used for an as-built - as-planed comparison. For that voxels of an octree are marked as occupied, free or unknown by raycasting based on the triangulated points and the camera positions. This allows to identify not existing building parts. For the verification of the existence of building parts a second test based on the points in front and behind the as-planed model planes is performed. The proposed procedure is tested based on an inner city construction site under real conditions.

  14. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments and the geometric and semantic analysis of data became an active research topic in the last years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D position of traffic signs are reprojected on the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.

  15. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    NASA Astrophysics Data System (ADS)

    Riihimaki, L. D.; Comstock, J. M.; Luke, E.; Thorsen, T. J.; Fu, Q.

    2017-07-01

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.

  16. Change-point detection of induced and natural seismicity

    NASA Astrophysics Data System (ADS)

    Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.

    2016-12-01

    Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be well modeled due to the fact that the source is known, transient aseismic processes are more difficult to detect. However, the detection of the associated changes of the earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data which are modeled by Poisson processes. By means of a Likelihood-Ratio-Test, we proof the significance of the change of the intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is firstly tested for synthetic data and then applied to observational data from central US and the Bardarbunga volcano in Iceland.

  17. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithm in motor Brain Machine Interfaces translates the neural signals to movement parameters. They usually assume the connection between the neural firings and movements to be stationary, which is not true according to the recent studies that observe the time-varying neuron tuning property. This property results from the neural plasticity and motor learning etc., which leads to the degeneration of the decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual model approach based on Monte Carlo point process filtering method that enables the estimation also on the dynamic tuning parameters. When applied on both simulated neural signal and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which raises a promising way to design a long-term-performing model for Brain Machine Interfaces decoder.

  18. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.

  19. Dynamics of Entropy in Quantum-like Model of Decision Making

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei; Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu

    2011-03-01

    We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. By this model the brain processes information by using representation of mental states in complex Hilbert space. Driven by the master equation the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. By using this equilibrium point Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of initially pure mental state. Decoherence is induced by interaction with memory and external environment. In this paper we study (numerically) dynamics of quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that this dynamics depends nontrivially on the initial state of Alice's mind on her own actions and her prediction state (for possible actions of Bob.)

  20. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

Top