CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
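A minimal sketch of the idea above, under illustrative assumptions (this is not the authors' exact distribution-free statistic): estimate the correlation network before and after each candidate split time and flag the split with the largest Frobenius-norm change between the two correlation matrices.

```python
# Sliding-split change-point detection in a correlation network.
# The Frobenius-norm statistic and minimum segment length are
# illustrative choices, not the paper's exact procedure.
import numpy as np

def correlation_change_point(X, min_seg=20):
    """X: (T, p) array of p time series; returns (t_hat, scores)."""
    T = X.shape[0]
    scores = np.full(T, -np.inf)
    for t in range(min_seg, T - min_seg):
        R1 = np.corrcoef(X[:t].T)      # correlation network before t
        R2 = np.corrcoef(X[t:].T)      # correlation network after t
        scores[t] = np.linalg.norm(R1 - R2, ord='fro')
    return int(np.argmax(scores)), scores

# Toy example: the correlation structure switches halfway through.
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 1))
X1 = 0.9 * z + 0.1 * rng.normal(size=(100, 5))   # strongly correlated block
X2 = rng.normal(size=(100, 5))                   # independent block
t_hat, _ = correlation_change_point(np.vstack([X1, X2]))
print("estimated change point:", t_hat)          # expected near t = 100
```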
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
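As an illustration of the TSA step discussed above, the sketch below computes each sensor's mean relative difference (MRD) from the grid-mean soil moisture and selects the point whose MRD is closest to zero with a small temporal spread. The combined ranking score is an illustrative assumption; the stratification of STSA is not shown.

```python
# Temporal stability analysis (TSA) sketch: pick a representative sensor.
import numpy as np

def tsa_representative_point(theta):
    """theta: (T, n) soil moisture, T observation times x n sensors."""
    grid_mean = theta.mean(axis=1, keepdims=True)   # (T, 1) grid-mean series
    rel_diff = (theta - grid_mean) / grid_mean      # relative differences
    mrd = rel_diff.mean(axis=0)                     # mean relative difference
    sdrd = rel_diff.std(axis=0)                     # temporal spread per sensor
    score = np.sqrt(mrd**2 + sdrd**2)               # closeness + stability (assumed score)
    return int(np.argmin(score)), mrd, sdrd

rng = np.random.default_rng(1)
truth = 0.25 + 0.05 * np.sin(np.linspace(0, 6, 120))[:, None]   # grid-mean signal
theta = truth + rng.normal(0, 0.02, size=(120, 15)) + rng.normal(0, 0.03, 15)
best, mrd, sdrd = tsa_representative_point(theta)
print("representative sensor:", best)
```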
Network of dedicated processors for finding lowest-cost map path
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1991-01-01
A method and associated apparatus are disclosed for finding the lowest-cost path among several variable paths. The paths are comprised of a plurality of linked cost-incurring areas existing between an origin point and a destination point. The method comprises the steps of connecting a plurality of nodes together in the manner of the cost-incurring areas; programming each node to have a cost associated therewith corresponding to one of the cost-incurring areas; injecting a signal into one of the nodes representing the origin point; propagating the signal through the plurality of nodes from inputs to outputs; reducing the signal in magnitude at each node as a function of the respective cost of the node; and, starting at one of the nodes representing the destination point, following the path having the least reduction in signal magnitude from node to node back to one of the nodes representing the origin point, whereby the lowest-cost path from the origin point to the destination point is found.
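A minimal software emulation of the scheme, under assumptions: a unit signal injected at the origin propagates node to node, each node attenuating it by a factor tied to its cost, and backtracking from the destination along strongest-signal neighbors recovers the lowest-cost path. The grid layout and the cost-to-attenuation mapping exp(-cost) are illustrative, not the patented hardware.

```python
# Signal-attenuation shortest path: max-signal relaxation is equivalent
# to Dijkstra on -log(signal), since attenuations multiply along a path.
import math, heapq

def lowest_cost_path(cost, start, goal):
    """cost: 2D list of per-cell costs; returns path as list of (r, c)."""
    R, C = len(cost), len(cost[0])
    signal = [[0.0] * C for _ in range(R)]
    signal[start[0]][start[1]] = 1.0
    heap = [(-1.0, start)]
    while heap:                                   # propagate the signal
        s, (r, c) = heapq.heappop(heap)
        s = -s
        if s < signal[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < R and 0 <= cc < C:
                atten = math.exp(-cost[rr][cc])   # higher cost -> stronger attenuation
                if s * atten > signal[rr][cc]:
                    signal[rr][cc] = s * atten
                    heapq.heappush(heap, (-s * atten, (rr, cc)))
    # backtrack from the destination along the least-attenuated neighbors
    path, cur = [goal], goal
    while cur != start:
        r, c = cur
        cur = max(((r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < R and 0 <= c + dc < C),
                  key=lambda p: signal[p[0]][p[1]])
        path.append(cur)
    return path[::-1]

grid = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]
print(lowest_cost_path(grid, (0, 0), (2, 2)))     # hugs the cheap middle column
```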
Method and apparatus for automatically detecting patterns in digital point-ordered signals
Brudnoy, David M.
1998-01-01
The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output.
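A minimal sketch of an additive decomposition of a point-ordered signal into baseline + background noise + peaks/troughs, in the spirit of the description above (it handles non-periodic signals). The median-filter baseline and MAD-based noise threshold are illustrative choices, not the patented steps.

```python
# Decompose a point-ordered signal and report significant peaks/troughs.
import numpy as np
from scipy.signal import medfilt, find_peaks

def decompose(y, baseline_win=51, k=4.0):
    baseline = medfilt(y, kernel_size=baseline_win)               # slow component
    resid = y - baseline
    noise = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
    peaks, _ = find_peaks(resid, height=k * noise)                # significant peaks
    troughs, _ = find_peaks(-resid, height=k * noise)             # significant troughs
    return baseline, resid, noise, peaks, troughs

x = np.linspace(0, 10, 1001)
y = 0.5 * x + 0.05 * np.random.default_rng(2).normal(size=x.size)
y[400:410] += 1.5 * np.exp(-0.5 * ((np.arange(10) - 5) / 2.0) ** 2)  # a defect-like peak
baseline, resid, noise, peaks, troughs = decompose(y)
print("peak locations:", peaks)   # expected near index 405
```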
Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also represent targets at different levels of detail according to user and system requirements in different applications.
Lu, Zhen; McKellop, Harry A
2014-03-01
This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
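A minimal sketch of the spherical-triangle idea behind Method 1 above: mesh the measured points on the ball, compute each triangle's solid angle from the best-fit center, and accumulate the wear volume as a cone-volume difference between the best-fit sphere and the measured radii. The meshing is assumed given; this is an illustration, not the study's implementation.

```python
# Wear volume from spherical triangles via solid angles.
import numpy as np

def solid_angle(a, b, c):
    """Van Oosterom-Strackee solid angle of triangle (a, b, c) seen from the origin."""
    a, b, c = (v / np.linalg.norm(v) for v in (a, b, c))
    num = np.dot(a, np.cross(b, c))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(a, c)
    return 2.0 * np.arctan2(abs(num), den)

def wear_volume(points, tris, R_fit):
    """points: (n,3) measured surface, centered on the best-fit center;
    tris: triangle vertex index triples; R_fit: best-fit sphere radius."""
    V = 0.0
    for i, j, k in tris:
        omega = solid_angle(points[i], points[j], points[k])
        r_mean = np.mean([np.linalg.norm(points[t]) for t in (i, j, k)])
        V += omega / 3.0 * (R_fit**3 - r_mean**3)   # cone-volume difference
    return V

# Sanity check with an octahedron mesh: a ball worn uniformly from
# R = 14 mm down to 13.99 mm gives exactly 4*pi/3 * (14^3 - 13.99^3).
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], float) * 13.99
tris = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
        (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
print(wear_volume(verts, tris, 14.0))   # ~24.6 mm^3
```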
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm combining importance sampling, a class of MCS, with RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step rule for updating the design point; this part finishes after a small number of samples have been generated. RSM then takes over using Bucher's experimental design, with the last design point and a proposed effective length serving as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples demonstrate the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules.
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that is robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of the object's protrusion parts to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to the set according to the previously selected salient points. Each time a salient point is added, the decision function is updated; this prevents the next point from being extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against isometric transformations, scaling, and noise of different strengths because it uses a feature that is robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is reduced, which leads to lower computational complexity than other salient point detection algorithms.
Robotics virtual rail system and method
Bruemmer, David J. [Idaho Falls, ID]; Few, Douglas A. [Idaho Falls, ID]; Walton, Miles C. [Idaho Falls, ID]
2011-07-05
A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
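A minimal sketch of the waypoint-file generation step: each path segment carries a velocity, and positions are sampled from the start point to the end point at a fixed spacing. The record format and spacing parameter are assumptions; the patent does not specify them.

```python
# Generate a waypoint list (x, y, velocity) along a segmented virtual rail.
import math

def make_waypoints(segments, spacing=0.5):
    """segments: list of ((x0, y0), (x1, y1), velocity) tuples."""
    waypoints = []
    for (x0, y0), (x1, y1), v in segments:
        length = math.hypot(x1 - x0, y1 - y0)
        n = max(1, int(length / spacing))
        for i in range(n):
            t = i / n                                  # fraction along the segment
            waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
    end = segments[-1][1]
    waypoints.append((end[0], end[1], 0.0))            # stop at the end point
    return waypoints

path = [((0, 0), (5, 0), 1.0), ((5, 0), (5, 3), 0.5)]  # two segments, two speeds
for x, y, v in make_waypoints(path, spacing=1.0):
    print(f"{x:.2f} {y:.2f} {v:.2f}")
```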
Can Detectability Analysis Improve the Utility of Point Counts for Temperate Forest Raptors?
Temperate forest breeding raptors are poorly represented in typical point count surveys because these birds are cryptic and typically breed at low densities. In recent years, many new methods for estimating detectability during point counts have been developed, including distanc...
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
Study on high-resolution representation of terraces in Shanxi Loess Plateau area
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ma, Lei
2008-10-01
A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forth for representing the typical terraces of the Shanxi loess plateau area. The DEM Feature Points and Lines Classification (DEPLC), put forth by the authors in 2007, is refined for depicting the main path in the study area, and the EAM is used to visualize the terraces and the path. 406 key elevation points and 15 feature-constrained lines sampled by this method are used to construct CD-TINs that depict the terraces and path correctly and effectively. Our case study shows that the new TSM sampling method is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented with high resolution and high efficiency using the refined DEPLC, TSM, and CD-TINs, and both the terraces and the main path are visualized well by the EAM even when the terrace height is no more than 1 m.
Video shot boundary detection using region-growing-based watershed method
NASA Astrophysics Data System (ADS)
Wang, Jinsong; Patel, Nilesh; Grosky, William
2004-10-01
In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, a gray-scale picture can be considered a topographic relief in which the numerical value of each pixel represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into the topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at the point, so that all the highest density values become local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuitive idea underlying our method is that frames within a shot are highly agglomerative in the feature space and have a high likelihood of being merged together, while frames between shots, which represent the shot changes, have lower density values and are less likely to be clustered, provided the markers and the stopping criterion are chosen carefully.
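A minimal sketch of the density-inversion idea: frame features are accumulated on a 2D grid, a kernel-smoothed density is inverted so clusters become basins, and watershed segmentation groups frames into shots. The feature space, kernel width, and marker selection are illustrative assumptions.

```python
# Density inversion + watershed clustering of frame features.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

rng = np.random.default_rng(3)
# Two "shots": frame features clustered at two locations in a 2D feature space.
frames = np.vstack([rng.normal([20, 20], 3, (60, 2)),
                    rng.normal([70, 60], 3, (40, 2))])

grid = np.zeros((100, 100))
ix = np.clip(frames.astype(int), 0, 99)
np.add.at(grid, (ix[:, 0], ix[:, 1]), 1.0)        # sum of influence functions
density = gaussian_filter(grid, sigma=4)           # smoothed density estimate
height = density.max() - density                   # invert: high density -> local minima

markers = np.zeros_like(density, dtype=int)        # one marker per density peak
for i, (r, c) in enumerate(peak_local_max(density, min_distance=10), start=1):
    markers[r, c] = i
labels = watershed(height, markers)                # fill basins from the minima
shots = labels[ix[:, 0], ix[:, 1]]                 # shot label of each frame
print(np.unique(shots))                            # expect two shot labels
```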
Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob
2007-01-01
For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
[A new kinematic method for determining the elbow rotation axis and an evaluation of its feasibility].
Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y
2016-04-18
To study a new positioning method for the rotation axis of elbow external fixation and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in the experiment. Kinematic data from five elbow flexions were collected with an optical positioning system. The rotation axes of the elbow joints were fitted by the least-squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, yielding the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes obtained with the traditional positioning method. The entrance-point deviation, exit-point deviation, and angle deviation of the two located rotation axes were then compared. For the volunteers, the indicators representing the circularity and coplanarity of each elbow flexion trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance-point, exit-point, and angle deviations between the rotation axes determined by the two methods were 1.6972 mm, 1.8383 mm, and 1.3217°, respectively. All deviations were small and within an acceptable range for clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow trajectories were very small, showing that single-curvature elbow movement can be regarded as approximately fixed-axis movement. The new method matches the traditional method in accuracy and can compensate for the deficiencies of the traditional fixed-axis method.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
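The hierarchical, region-adaptive Haar-like structure can be sketched in one dimension: colors of (Morton-ordered) points are merged pairwise with weights equal to the number of points each node represents, producing one DC coefficient plus detail coefficients. This mirrors the structure of the transform described above, not its exact octree pairing or entropy coding.

```python
# A region-adaptive weighted-Haar forward transform (1D sketch).
import numpy as np

def raht_like_forward(colors):
    """colors: attribute values in octree/Morton order; returns (dc, details)."""
    vals = list(map(float, colors))
    weights = [1.0] * len(vals)
    highpass = []
    while len(vals) > 1:
        nv, nw = [], []
        for i in range(0, len(vals) - 1, 2):
            w1, w2 = weights[i], weights[i + 1]
            a, b = np.sqrt(w1), np.sqrt(w2)
            norm = np.sqrt(w1 + w2)
            low = (a * vals[i] + b * vals[i + 1]) / norm     # weighted average
            high = (-b * vals[i] + a * vals[i + 1]) / norm   # weighted difference
            highpass.append(high)
            nv.append(low)
            nw.append(w1 + w2)
        if len(vals) % 2:                  # an unpaired node is promoted unchanged
            nv.append(vals[-1])
            nw.append(weights[-1])
        vals, weights = nv, nw
    return vals[0], highpass               # DC coefficient + detail coefficients

dc, details = raht_like_forward([100, 102, 98, 101, 180, 182, 179, 181])
print(round(dc, 2), [round(h, 2) for h in details])
# Smooth color regions yield small detail coefficients, which compress well.
```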
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
Method for contour extraction for object representation
Skourikhine, Alexei N.; Prasad, Lakshman
2005-08-30
Contours are extracted for representing a pixelated object in a background pixel field. An object pixel is located that is the start of a new contour for the object and identifying that pixel as the first pixel of the new contour. A first contour point is then located on the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered to complete tracing the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure
NASA Astrophysics Data System (ADS)
Pestrenin, V. M.; Pestrenina, I. V.
2017-03-01
The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect and microscopic ones have to be used to describe the SSS in this vicinity. Also, it is impossible to set constraints or to formulate solutions in stress-strain terms for a mathematical point. These problems do not arise if the singular point is identified with the representative volume of material of the structure studied. In the authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given in the classical approach, there are also constraints depending on the macroscopic parameters of the constituent materials. These constraints turn the problems of deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. Results of this research can be used in the mechanics of composite materials and fracture mechanics and in studying stress concentrations in composite structural elements.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham
This work demonstrates the application of isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction method (AS), utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation method (AM), utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation method (A-Sum), utilizing the isosbestic point in derivative spectra. The three methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at that point, with no need for a complementary method. The obtained results were statistically compared to each other and to those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, and no significant difference was observed.
Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data
NASA Astrophysics Data System (ADS)
Xiao, P.; Kelly, M.; Guo, Q.
2014-12-01
This study compares the use of high-resolution multispectral WorldView images and high-density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and top-down and bottom-up region-growing methods with Lidar data. The hybrid region-merging method is used to segment individual trees from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, and the merging iterations are constrained within the local vicinity, so the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual trees from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method, based on the intensity and 3D structure of Lidar data, is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocates the remaining points to the corresponding tree crowns according to distance. The accuracy of each method is evaluated with field survey data in several test sites covering dense and sparse canopy. Three types of segmentation results are produced: a true positive represents a correctly segmented individual tree; a false negative represents a tree that is not detected and is instead assigned to a nearby tree; and a false positive represents a point or pixel cluster segmented as a tree that does not in fact exist. These correspond to correct, under-, and over-segmentation, respectively. Three indices are compared for segmenting individual trees from multispectral images and Lidar data: recall, precision and F-score. This work explores the tradeoff between expensive Lidar data and inexpensive multispectral imagery. The conclusions will guide optimal data selection for individual tree segmentation in canopy areas of different density, and contribute to the field of forest remote sensing.
Fractal Clustering and Knowledge-driven Validation Assessment for Gene Expression Profiling.
Wang, Lu-Yong; Balasubramanian, Ammaiappan; Chakraborty, Amit; Comaniciu, Dorin
2005-01-01
DNA microarray experiments generate a substantial amount of information about global gene expression. Gene expression profiles can be represented as points in multi-dimensional space, and it is essential to identify relevant groups of genes in biomedical research. Clustering is helpful for pattern recognition in gene expression profiles, and a number of clustering techniques have been introduced. However, these traditional methods mainly rely on shape-based assumptions or a distance metric to cluster the points in a multi-dimensional linear Euclidean space, and their results show poor consistency with the functional annotation of genes in previous validation studies. From a different perspective, we propose a fractal clustering method that clusters genes using the intrinsic (fractal) dimension from modern geometry. This method clusters points in such a way that points in the same cluster are more self-affine among themselves than to points in other clusters. We assess this method using annotation-based validation for gene clusters, and show that it is superior to other traditional methods in identifying functionally related gene groups.
Waypoints Following Guidance for Surface-to-Surface Missiles
NASA Astrophysics Data System (ADS)
Zhou, Hao; Khalil, Elsayed M.; Rahman, Tawfiqur; Chen, Wanchun
2018-04-01
This paper proposes a waypoint-following guidance law. In this method, an optimal trajectory is first generated and then represented by a set of waypoints distributed from the starting point to the final target point using a polynomial. The guidance system then issues the guidance commands needed to move from one waypoint to the next. Here the method is applied to a surface-to-surface missile, and the results show that it is feasible for on-board application.
Structural-Thermal-Optical Program (STOP)
NASA Technical Reports Server (NTRS)
Lee, H. P.
1972-01-01
A structural thermal optical computer program is developed which uses a finite element approach and applies the Ritz method for solving heat transfer problems. Temperatures are represented at the vertices of each element and the displacements which yield deformations at any point of the heated surface are interpolated through grid points.
Dental scanning in CAD/CAM technologies: laser beams
NASA Astrophysics Data System (ADS)
Sinescu, Cosmin; Negrutiu, Meda; Faur, Nicolae; Negru, Radu; Romînu, Mihai; Cozarov, Dalibor
2008-02-01
Scanning, also called digitizing, is the process of gathering the requisite data from an object. Many different technologies are used to collect three-dimensional data; they range from mechanical and very slow to radiation-based and highly automated, and each has advantages and disadvantages, with overlapping applications and specifications. The aims of this study are to establish a viable method of digitally representing dental casts, to propose a suitable scanner and post-processing software, and to obtain 3D models for dental applications. The method consists of scanning procedures performed with different scanners on the materials involved. Scanners are the medium of data capture: 3D scanners measure and record the relative distance between the object's surface and a known point in space, and this geometric data is represented in the form of point cloud data. Both contact and non-contact scanners were examined. Contact scanning uses a touch probe to record the relative position of points on the object's surface and is commonly used in reverse engineering applications; its merit is efficiency for objects with low geometric surface detail, while its disadvantage is that it is time consuming, making it impractical for digitizing artifacts. Non-contact scanning comprises laser scanning (laser triangulation technology) and photogrammetry. It can be concluded that different types of dental structures need different scanning procedures in order to obtain a competitive, complex 3D virtual model that can be used in CAD/CAM technologies.
NASA Astrophysics Data System (ADS)
Brokešová, Johana; Málek, Jiří
2018-07-01
A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
PDEs on moving surfaces via the closest point method and a modified grid based particle method
NASA Astrophysics Data System (ADS)
Petras, A.; Ruuth, S. J.
2016-05-01
Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
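A minimal sketch of the original (static-surface) closest point method referenced above, for the heat equation on a unit circle: after each explicit diffusion step on the 2D Cartesian grid, values in the computational tube are re-extended to be constant along normals by sampling at each grid point's closest point on the circle. Nearest-neighbor interpolation and the tube width are simplifying assumptions; the CPM normally uses higher-order interpolation.

```python
# Closest point method sketch: heat flow on a circle embedded in a 2D grid.
import numpy as np

h = 0.05
x = np.arange(-1.6, 1.6 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing='ij')
R = np.maximum(np.sqrt(X**2 + Y**2), 1e-9)        # avoid division by zero at origin
theta = np.arctan2(Y, X)
band = np.abs(R - 1.0) < 0.25                     # computational tube around the circle

# Grid index of the point nearest to each closest point cp(x) = x / |x|.
ii = np.clip(np.round((X / R + 1.6) / h).astype(int), 0, x.size - 1)
jj = np.clip(np.round((Y / R + 1.6) / h).astype(int), 0, x.size - 1)

u = np.cos(theta)                                  # initial data, constant along normals
dt = 0.2 * h**2
for _ in range(200):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2
    u = u + dt * lap                               # explicit heat step in embedding space
    u = np.where(band, u[ii, jj], u)               # closest point extension (nearest neighbor)

# On the circle, the exact solution is exp(-t) * cos(theta) (Laplace-Beltrami mode).
err = np.abs(u - np.exp(-200 * dt) * np.cos(theta))[band].max()
print(f"max error in the tube: {err:.3f}")         # first-order accurate here
```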
[Proposal of a costing method for the provision of sterilization in a public hospital].
Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H
2011-07-01
To refine the billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S"), which evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large containers, small containers, and pouches were recorded; the reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: staff, equipment and building depreciation, supplies, and maintenance. A total of 136 container preparation times were measured. The time to prepare a pouch was estimated at one minute (one S); a small container represents four S and a large container represents ten S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing method used in sterilization services in that it considers each item of expenditure. The point S will be the basis for billing subcontracted work to other institutions.
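A worked sketch of the billing scheme: one pouch = 1 S, a small container = 4 S, a large container = 10 S, and the unit cost of one S is the operating cost divided by the total points produced. All figures below are invented for illustration.

```python
# Point-S costing: unit cost and a sample subcontract bill.
annual_operating_cost = 500_000.0          # e.g. staff, depreciation, supplies, maintenance
produced = {"pouch": 40_000, "small_container": 6_000, "large_container": 2_500}
points = {"pouch": 1, "small_container": 4, "large_container": 10}

total_S = sum(produced[k] * points[k] for k in produced)   # 40000 + 24000 + 25000 = 89000 S
cost_per_S = annual_operating_cost / total_S
print(f"one point S costs {cost_per_S:.3f}")               # ~5.618 per S

bill = 350 * points["small_container"] * cost_per_S        # e.g. 350 small containers
print(f"billing 350 small containers: {bill:.2f}")
```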
Healy, R.W.; Russell, T.F.
1992-01-01
A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each well. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
A Direction Finding Method with a 3-D Array Based on Aperture Synthesis
NASA Astrophysics Data System (ADS)
Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng
2018-01-01
Direction finding for electronic warfare applications should provide as wide a field of view as possible. However, the maximum unambiguous field of view of conventional direction-finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals coming from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. To display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method; the results show that the method can correctly estimate an arbitrary direction of arrival in 3-D space.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
Practical Method to Identify Orbital Anomaly as Breakup Event in the Geostationary Region
2015-01-14
[Table residue; recoverable content:] Table 4 summarizes the results of the origin identification of the seven detected objects, listing for each the object name, parent object, inclination vector, pinch point, and geocentric distance at the pinch point. One object, labeled x15300, was ... X-Y, X'-Y', and R.A.-Dec. represent the image coordinates before rotating the CCD sensor, the image coordinates after rotation, and the Geocentric Inertial coordinates, respectively.
"The Effect of Alternative Representations of Lake ...
Lakes can play a significant role in regional climate, modulating inland extremes in temperature and enhancing precipitation. Representing these effects becomes more important as regional climate modeling (RCM) efforts focus on simulating smaller scales. When using the Weather Research and Forecasting (WRF) model to downscale future global climate model (GCM) projections into RCM simulations, model users typically must rely on the GCM to represent temperatures at all water points. However, GCMs have insufficient resolution to adequately represent even large inland lakes, such as the Great Lakes. Some interpolation methods, such as setting lake surface temperatures (LSTs) equal to the nearest water point, can result in inland lake temperatures being set from sea surface temperatures (SSTs) that are hundreds of km away. In other cases, a single point is tasked with representing multiple large, heterogeneous lakes. Similar consequences can result from interpolating ice from GCMs to inland lake points, resulting in lakes as large as Lake Superior freezing completely in the space of a single timestep. The use of a computationally-efficient inland lake model can improve RCM simulations where the input data is too coarse to adequately represent inland lake temperatures and ice (Gula and Peltier 2012). This study examines three scenarios under which ice and LSTs can be set within the WRF model when applied as an RCM to produce 2-year simulations at 12 km gri
Incompressible material point method for free surface flow
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Xiong; Sze, Kam Yim; Lian, Yanping; Liu, Yan
2017-02-01
To overcome the shortcomings of the weakly compressible material point method (WCMPM) for modeling free surface flow problems, an incompressible material point method (iMPM) is proposed based on an operator splitting technique which splits the solution of the momentum equation into two steps. An intermediate velocity field is first obtained by solving the momentum equations ignoring the pressure gradient term, and then the intermediate velocity field is corrected by the pressure term to obtain a divergence-free velocity field. A level set function which represents the signed distance to the free surface is used to track the free surface and apply the pressure boundary conditions. Moreover, an hourglass damping is introduced to suppress the spurious velocity modes which are caused by discretizing the cell-center velocity divergence from the grid vertex velocities when solving the pressure Poisson equations. Numerical examples including dam break, oscillation of a cubic liquid drop and a droplet impact into a deep pool show that the proposed incompressible material point method is much more accurate and efficient than the weakly compressible material point method in solving free surface flow problems.
Bayesian methods for uncertainty factor application for derivation of reference values.
Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B
2016-10-01
In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process, which considers the methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and the Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for applying uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw that Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. The NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the NRC methodology that considers each individual UF as a distribution. In an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.
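A minimal Monte Carlo sketch of the refined idea: each UF is a log-normal distribution rather than a fixed factor of 10, the POD is log-normal as well, and the reference value is taken as a lower quantile of POD divided by the product of sampled UFs. All distribution parameters and the 5th-percentile choice below are illustrative assumptions only.

```python
# Bayesian-style uncertainty factor combination by Monte Carlo.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# POD: geometric mean 10 mg/kg-day with modest log-scale uncertainty (assumed).
pod = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)
# Three UFs, each with a median of 10-fold and assumed log-scale spread.
ufs = [rng.lognormal(mean=np.log(10.0), sigma=0.4, size=n) for _ in range(3)]
rfd_dist = pod / np.prod(ufs, axis=0)              # distribution of candidate RfDs

rfd = np.percentile(rfd_dist, 5)                   # a lower percentile as the reference value
det = 10.0 / 10.0**3                                # deterministic POD / (10 x 10 x 10)
print(f"probabilistic RfD: {rfd:.5f}   deterministic RfD: {det:.5f}")
print(f"fold change vs deterministic: {rfd / det:.2f}x")
```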
Solving multi-objective optimization problems in conservation with the reference point method
Dujardin, Yann; Chadès, Iadine
2018-01-01
Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances of the acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
Street curb recognition in 3D point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic-entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud on the XY plane was carried out, passing from the 3D original data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
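A minimal sketch of the rasterize-then-morphology pipeline described above: project the points onto the XY plane into a height grid, threshold the curb-sized height jumps, and clean the mask with morphological operations. The cell size and jump thresholds are illustrative; the curvature/roughness classification of upper and lower curb edges is not shown.

```python
# Curb candidate detection from a rasterized point cloud.
import numpy as np
from scipy import ndimage

def curb_mask(points, cell=0.1, jump=(0.05, 0.30)):
    """points: (n,3) local-coordinate cloud; returns a 2D boolean curb mask."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    shape = xy.max(axis=0) + 1
    zmax = np.full(shape, -np.inf)
    np.maximum.at(zmax, (xy[:, 0], xy[:, 1]), points[:, 2])   # max height per cell
    zmax[np.isinf(zmax)] = np.nan                              # empty cells
    gx, gy = np.gradient(zmax, cell)
    step = np.hypot(gx, gy) * cell                 # height change across one cell
    mask = (step > jump[0]) & (step < jump[1])     # keep curb-sized jumps only
    mask = ndimage.binary_closing(mask, iterations=2)   # bridge small gaps
    mask = ndimage.binary_opening(mask)                  # drop isolated cells
    return mask

# Toy street: flat road at z = 0 with a 0.15 m curb step along y = 2 m.
rng = np.random.default_rng(5)
pts = rng.uniform([0, 0, 0], [10, 4, 0.01], (20000, 3))
pts[pts[:, 1] > 2.0, 2] += 0.15
print(curb_mask(pts).sum(), "curb cells found")
```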
Point-to-point connectivity prediction in porous media using percolation theory
NASA Astrophysics Data System (ADS)
Tavagh-Mohammadi, Behnam; Masihi, Mohsen; Ganjeh-Ghazvini, Mostafa
2016-10-01
The connectivity between two points in a porous medium is important for evaluating hydrocarbon recovery in underground reservoirs or toxic migration in waste disposal. For example, the connectivity between a producer and an injector in a hydrocarbon reservoir impacts fluid dispersion throughout the system. The conventional approach, flow simulation, is computationally very expensive and time consuming; an alternative method employs percolation theory. The classical percolation approach investigates the connectivity between two lines (representing the wells) in 2D cross-sectional models, whereas we look for the connectivity between two points (representing the wells) in 2D areal models. In this study, site percolation is used to determine the fraction of permeable regions connected between two cells at various occupancy probabilities and system sizes. Master curves of the mean connectivity and its uncertainty are then generated by finite-size scaling. The results help to predict well-to-well connectivity without the need for further simulation.
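A minimal sketch of point-to-point connectivity via site percolation: occupy grid cells with probability p, label the connected clusters, and check whether the two "well" cells share a cluster; averaging over realizations gives the connectivity curve whose finite-size scaling yields the master curves discussed above. The grid size, well locations, and trial count are illustrative.

```python
# Monte Carlo site-percolation connectivity between two well cells.
import numpy as np
from scipy.ndimage import label

def connectivity(p, L, wells=((5, 5), (-6, -6)), trials=500, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        grid = rng.random((L, L)) < p        # occupied (permeable) sites
        labels, _ = label(grid)              # 4-connected clusters
        a, b = wells
        if grid[a] and grid[b] and labels[a] == labels[b]:
            hits += 1
    return hits / trials

for p in (0.40, 0.55, 0.59, 0.65, 0.80):
    print(f"p = {p:.2f}  P(connected) = {connectivity(p, 50):.3f}")
# Connectivity rises sharply near the 2D site-percolation threshold p_c ~ 0.593.
```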
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alkhatib, H; Oves, S
Purpose: To demonstrate a quick and comprehensive method for verifying the accuracy of an updated dose model by recalculating the dose distribution in an anthropomorphic phantom with a new version of the TPS and comparing the results to measured values. Methods: CT images and the IMRT plan of an RPC anthropomorphic head phantom, previously calculated with Pinnacle 9.0, were re-computed using Pinnacle 9.2 and 9.6. The dosimeters within the phantom include four TLD capsules representing a primary PTV, two TLD capsules representing a secondary PTV, and two TLD capsules representing an organ at risk. Also included were three sheets of Gafchromic film. Performance of the updated TPS versions was assessed by recalculating point doses and dose profiles corresponding to the TLD and film positions, respectively, and then comparing the results to the values reported by the RPC. Results: Comparing calculated doses to the measured doses reported by the RPC yielded an average disagreement of 1.48%, 2.04% and 2.10% for versions 9.0, 9.2 and 9.6, respectively. All computed dose points meet the RPC's passing criteria with the exception of the point representing the superior organ at risk in version 9.6. However, qualitative analysis of the recalculated dose profiles showed improved agreement with those of the RPC, especially in the penumbra region. Conclusion: This work compares the calculation results of Pinnacle 9.2 and 9.6 with those of version 9.0. Additionally, this study illustrates a method by which the user can gain confidence when upgrading to a newer version of the treatment planning system.
Representing ductile damage with the dual domain material point method
Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...
2015-12-14
In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high-purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and an overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.
NASA Astrophysics Data System (ADS)
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method is a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and it is second-order accurate in both the solution and its gradient. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned to grid points that are close to the interface: a real value, which represents the numerical solution at that grid point, and a ghost value, which represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative, and therefore suitable stencils must be chosen to discretize the interface conditions in order to achieve second-order accuracy in the solution and its gradient. A proper treatment of the interface conditions allows the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: order of accuracy, monotonicity of the errors, and a good convergence factor are maintained by the scheme.
Can Regulatory Bodies Expect Efficient Help from Formal Methods?
NASA Technical Reports Server (NTRS)
Lopez Ruiz, Eduardo R.; Lemoine, Michel
2010-01-01
In the context of EDEMOI, a French national project that proposed the use of semi-formal and formal methods to assess the consistency and robustness of aeronautical regulations through the analysis of faithfully representative models, a methodology has been suggested and applied to different (safety- and security-related) aeronautical regulations. This paper summarizes the preliminary results of this experience by stating what the methodology's expected benefits were, from a scientific point of view, and what its actual benefits are, from a regulatory body's point of view.
A method for improved accuracy in three dimensions for determining wheel/rail contact points
NASA Astrophysics Data System (ADS)
Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang
2015-11-01
Searching for the contact points between wheels and rails is important because these points are where the contact forces are exerted. In order to obtain accurate contact points and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheelset. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve the computational efficiency and reduce the calculation required. The method is applied to the analysis of contact between CHINA (CHN) 75 kg/m rails and the wearing-type tread wheelsets of China's freight cars. The results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in the wheel/rail contact patch area of approximately 5% between the two methods. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, for both non-worn and worn wheel and rail profiles.
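Reduced to its simplest form, the contact search is a minimization of the wheel/rail vertical gap; the sketch below is only a brute-force stand-in (profile arrays, lateral tolerance, and a common track frame are assumptions, and the actual method additionally iterates over yaw and roll angles):

```python
import numpy as np

def contact_point(wheel, rail, lateral_tol=1e-3):
    """Find a wheel/rail contact point as the pair of profile points
    with the minimum vertical gap, comparing only points at (nearly)
    the same lateral position. `wheel` and `rail` are (N, 2) arrays of
    (y, z) coordinates already placed in a common track frame."""
    gap = wheel[:, None, 1] - rail[None, :, 1]              # vertical distances
    lateral = np.abs(wheel[:, None, 0] - rail[None, :, 0])  # lateral offsets
    gap = np.where(lateral < lateral_tol, gap, np.inf)      # require same y
    i, j = np.unravel_index(np.argmin(gap), gap.shape)
    return wheel[i], rail[j]
```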
Occupancy Grid Map Merging Using Feature Maps
2010-11-01
each robot begins exploring at different starting points, once two robots can communicate, they send their odometry data, LIDAR observations, and maps...robots [11]. Moreover, it is relevant to mention that significant success has been achieved in solving SLAM problems when using hybrid maps [12...represents the environment by parametric features. Our method is capable of representing a LIDAR scanned environment map in a parametric fashion. In general
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
Assessing Argumentative Representation with Bayesian Network Models in Debatable Social Issues
ERIC Educational Resources Information Center
Zhang, Zhidong; Lu, Jingyan
2014-01-01
This study seeks to obtain argumentation models, which represent argumentative processes and an assessment structure in secondary school debatable issues in the social sciences. The argumentation model was developed based on mixed methods, a combination of both theory-driven and data-driven methods. The coding system provided a combining point by…
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
Description of waves in inhomogeneous domains using Heun's equation
NASA Astrophysics Data System (ADS)
Bednarik, M.; Cervenka, M.
2018-04-01
There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. Since the interval for which the solution is sought includes two regular singular points, a method is proposed that resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.
Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples
NASA Astrophysics Data System (ADS)
Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.
2014-12-01
Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis reveals no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean at times evenly distributed over the intervening interval.
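One way to operationalize "time to a stable mean" for such a record is sketched below; the tolerance band and the function interface are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def time_to_stable_mean(series, dt, tol=0.05):
    """Return the first time at which the running mean stays within
    +/- tol (fractional) of the final mean for the rest of the record;
    None if the mean never stabilizes within the record."""
    running = np.cumsum(series) / np.arange(1, len(series) + 1)
    final = running[-1]
    inside = np.abs(running - final) <= tol * abs(final)
    outside = np.where(~inside)[0]     # indices still outside the band
    if len(outside) == 0:
        return 0.0
    if outside[-1] == len(series) - 1:
        return None                    # never stabilized
    return (outside[-1] + 1) * dt
```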
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
This work estimates the memory persistence of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and the corresponding results compared, in order to reach a more reliable conclusion. The analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
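Rescaled-range (R/S) analysis is one of the classical scaling methods for the Hurst exponent; a minimal sketch follows (window sizes are illustrative), where H < 0.5 corresponds to the anti-persistence reported above:

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of series x by rescaled-range (R/S)
    analysis: H < 0.5 indicates anti-persistence, H > 0.5 persistence."""
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = w.std(ddof=1)                  # window standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # slope of log(R/S) against log(n) estimates the Hurst exponent
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return H
```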
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we first extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Nguyen, H P; Le, D L; Tran, Q M; Nguyen, V T; Nguyen, N O
1995-01-01
Massage and Acupression have a history of many years of use by the Vietnamese people in the treatment of diseases, and they can give wonderful therapeutic effects in painful syndromes and chronic diseases, etc. On the other hand, some methods of Chrono-Acupuncture based on chronobiological theory and the holistic concept of traditional medicine are studied and applied in clinical applications. This paper presents the therapy advice system based on Chrono-Massage and Acupression using the method of ZiWuLiuZhu called CHROMASSI. The system includes four major parts. Massage and Acupression Teaching: This part can provide the user with some background in Massage and Acupression theory such as the pathology of the meridians, the classification of points and their function, the therapeutic properties of points, the methods of Massage and Acupression (including Pression, Friction, Rubbing, Light Massage, Petrissage, Rolling and Rubbing, Percussion and Vibration), and the direction of the meridians circulation, displaying AcuPoints represented by color pictures of the 12 main meridians and 2 vessels. More than 330 popular AcuPoints are used in the system. Open AcuPoint Calculating: This module can help us to calculate open AcuPoints based on data about days, months, years and hours using the special method of ZiWuLiuZhu. The Points adopted by ZiWuLiuZhu are the Five Shu Points and Source Points including 66 points (all of them are located below the elbows and knees). The effectiveness of these points becomes particularly evident when they are needled or punctured at optimum time intervals. For example, at 9:00 a.m., September 22, 1994, the open Points by the ZiWuLiuZhu method will be the points K2 (Nhien Coc) and K10 (Am Coc). According to the chronotherapeutic method, first we have to pressure (or puncture) the above points in order to attain the sensation "Dac Khi" (arrival of energy), then pressure the other treating points as in ordinary Massage and Acupression. Therapy Consultation: Knowledge of the system was provided by Prof. Nguyen Van Thang and Doctor Nguyen Nhu Oanh at the Vietnam National Institute of Oriental Medicine. CHROMASSI is able to advise on ways to treat about 153 diseases and symptoms in the following fields: Aches and Pains, Insomnia, Common Cold and Influenza, Sexual Disturbances, Medical Aesthetics in Face, Breast and Buttock, Hygiene, Cardio-Vascular Tract, Digestive Tract, Urinary Tract, Respiratory Tract, Genital Tract, Ear-Nose-Throat Tract, Nervous Tract. The system can provide information about Remarks, Acupoints formulas for treating by Massage and Acupression with colour pictures of meridians. Explanation: The CHROMASSI system can explain why the AcuPoints are used for treating diseases based on the theoretical bases of traditional Vietnamese medicine and on the meridians and collaterals system theory. The colour pictures representing the circulation of vital energy in the meridians are used for explanation. The CHROMASSI system was developed in TURBO-PROLOG and TURBO-PASCAL and can run on IBM PC/AT computers and compatibles. The system can be used for teaching and for clinics of Massage and Acupression combined with Chronotherapeutics. At present the system is used by some physicians for clinical applications.
The first results indicate that, in 20 cases of generalized headache compared with the control group, the combining of chronoacupression using the ZiWuLiuZhu method and ordinary Massage and Acupression gave better effects than that obtained by either method alone.
The Voronoi Implicit Interface Method for computing multiphase physics.
Saye, Robert I; Sethian, James A
2011-12-06
We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can provide the data for the production of DTMs in remote areas, mainly due to the safety, precision, speed of acquisition and the detail of the information gathered. However, filtering the point clouds and designing algorithms to separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create DTMs from point clouds collected by MLS. The method is based on two consecutive steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
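The second step maps directly onto standard library calls; a minimal sketch with scipy follows (the input file name and grid resolution are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# terrain_points: (N, 3) array of ground points kept by the first step
terrain_points = np.loadtxt("terrain_points.txt")   # illustrative input

tri = Delaunay(terrain_points[:, :2])               # triangulate in XY
dtm = LinearNDInterpolator(tri, terrain_points[:, 2])

# sample the triangulated DTM on a regular grid
xs = np.linspace(terrain_points[:, 0].min(), terrain_points[:, 0].max(), 500)
ys = np.linspace(terrain_points[:, 1].min(), terrain_points[:, 1].max(), 500)
grid = dtm(*np.meshgrid(xs, ys))                    # NaN outside the hull
```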
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
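The VB idea of letting the data prune unneeded mixture components can be illustrated with an off-the-shelf variational mixture; note this is not the authors' correspondence-free point-set model, and the feature matrix below is a random stand-in:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# X: fixed-length feature vectors describing each shape (e.g., aligned
# and resampled point sets); random placeholder data for illustration
X = np.random.default_rng(0).normal(size=(120, 10))

# variational Bayes shrinks the weights of unneeded components
vb = BayesianGaussianMixture(n_components=10,
                             weight_concentration_prior=1e-2,
                             max_iter=500, random_state=0).fit(X)
labels = vb.predict(X)
effective = np.sum(vb.weights_ > 1e-2)   # clusters that survived pruning
print("effective number of clusters:", effective)
```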
Synthesis of Biofluidic Microsystems (SYNBIOSYS)
2007-10-01
[Figure captions from the report] FIGURE 41: The micro reactor is represented by a PFR network model; the calculation of reaction and convection is conducted in one column of PFRs, and the calculation of diffusional mixing is conducted between two columns of PFRs. FIGURE 42: The numerical method of lines is applied to calculate the diffusion in the channel width direction, taking 10 discretized concentration points in the channel (ci1 - ci10).
Implicit Shape Models for Object Detection in 3d Point Clouds
NASA Astrophysics Data System (ADS)
Velizhev, A.; Shapovalov, R.; Schindler, K.
2012-07-01
We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor that is more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans covering 150,000 m² of urban area in total.
Uncertainty Quantification of Water Quality in Tamsui River in Taiwan
NASA Astrophysics Data System (ADS)
Kao, D.; Tsai, C.
2017-12-01
In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area, and also to provide a new analysis method for government or companies to establish related control and design measures. In this research, three methods are used to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamics and water quality modeling; the study area is the Tamsui River watershed. First, a sensitivity analysis is conducted to rank the influence of parameters and variables such as Dissolved Oxygen, Nitrate, Ammonia and Phosphorous. Then the first-order error analysis (FOEA) method is used to determine the number of parameters that significantly affect the variability of the simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be represented by the representative points and perturbance moments based on the parallel axis theorem. Under the assumption of independent parameters and variables, calculation time is significantly reduced for PMM as opposed to MCS for comparable modeling accuracy.
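The flavor of a three-point representation can be conveyed by the standard three-node Gauss-Hermite rule for a Gaussian input; this is an analogy rather than the paper's PMM formulation, and the toy response function is an assumption:

```python
import numpy as np

def three_point_stats(f, mu, sigma):
    """Mean and std of f(X), X ~ N(mu, sigma^2), from three representative
    points (Gauss-Hermite nodes mu and mu +/- sqrt(3)*sigma)."""
    xs = np.array([mu, mu - np.sqrt(3) * sigma, mu + np.sqrt(3) * sigma])
    ws = np.array([2 / 3, 1 / 6, 1 / 6])      # matching quadrature weights
    fx = np.array([f(x) for x in xs])
    mean = np.sum(ws * fx)
    var = np.sum(ws * (fx - mean) ** 2)
    return mean, np.sqrt(var)

f = lambda k: np.exp(-0.3 * k)                # toy "model response"
print(three_point_stats(f, mu=2.0, sigma=0.5))

# Monte Carlo reference: thousands of model runs instead of three
x = np.random.default_rng(0).normal(2.0, 0.5, 100_000)
print(np.exp(-0.3 * x).mean(), np.exp(-0.3 * x).std())
```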
Recruitment for Occupational Research: Using Injured Workers as the Point of Entry into Workplaces
Koehoorn, Mieke; Trask, Catherine M.; Teschke, Kay
2013-01-01
Objective To investigate the feasibility, costs and sample representativeness of a recruitment method that used workers with back injuries as the point of entry into diverse working environments. Methods Workers' compensation claims were used to randomly sample workers from five heavy industries and to recruit their employers for ergonomic assessments of the injured worker and up to 2 co-workers. Results The final study sample included 54 workers from the workers’ compensation registry and 72 co-workers. This sample of 126 workers was based on an initial random sample of 822 workers with a compensation claim, or a ratio of 1 recruited worker to approximately 7 sampled workers. The average recruitment cost was CND$262/injured worker and CND$240/participating worksite including co-workers. The sample was representative of the heavy industry workforce, and was successful in recruiting the self-employed (8.2%), workers from small employers (<20 workers, 38.7%), and workers from diverse working environments (49 worksites, 29 worksite types, and 51 occupations). Conclusions The recruitment rate was low but the cost per participant reasonable and the sample representative of workers in small worksites. Small worksites represent a significant portion of the workforce but are typically underrepresented in occupational research despite having distinct working conditions, exposures and health risks worthy of investigation. PMID:23826387
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easily identifying a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
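The selection logic reduces to a keyed lookup from parametric coordinates to grid IDs; the sketch below is purely illustrative (the (u, v) keys, ID numbering, and SPC1 card layout are all assumptions):

```python
# Illustrative sketch: grid points keyed by their parametric (u, v)
# coordinates on the parent surface, used to build a constraint set.
grid = {(u / 4, v / 4): 1000 + 5 * u + v   # hypothetical grid IDs
        for u in range(5) for v in range(5)}

# recover all grid IDs along the u = 1.0 edge of the surface
edge_ids = sorted(gid for (u, v), gid in grid.items() if u == 1.0)

# emit a simple SPC1-style card constraining DOFs 1-3 at those grids
print("SPC1,100,123," + ",".join(str(g) for g in edge_ids))
```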
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades, Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image sources such as Airborne Laser Scanner (ALS) point clouds. ALS data are represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. Using class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). To demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmark data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
Graph-based geometric-iconic guide-wire tracking.
Honnorat, Nicolas; Vaillant, Régis; Paragios, Nikos
2011-01-01
In this paper we introduce a novel hybrid graph-based approach for guide-wire tracking. The image support is captured by steerable filters and improved through tensor voting. Then, a graphical model is considered that represents guide-wire extraction/tracking through a B-spline control-point model. Points with strong geometric interest (landmarks) are automatically determined and anchored to this representation. Tracking is then performed through discrete MRFs that optimize the spatio-temporal positions of the control points while establishing landmark temporal correspondences. Promising results demonstrate the potential of our method.
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternatives to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hypersphere, (ii) relocating the N points onto a representative set of N hyperspheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyperellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality-reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References: [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316 (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69 (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
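A minimal comparison of LH and SR sampling for Gaussian inputs can be written with scipy's quasi-Monte Carlo module; dimensions and sample size here are illustrative:

```python
import numpy as np
from scipy.stats import norm, qmc

M, N = 4, 50                        # dimensions (locations), realizations
sampler = qmc.LatinHypercube(d=M, seed=0)
u = sampler.random(n=N)             # stratified uniforms in (0, 1)^M
z_lh = norm.ppf(u)                  # map to standard Gaussian values
z_sr = np.random.default_rng(0).standard_normal((N, M))   # simple random

# LH strata guarantee better coverage of each marginal distribution
print("LH marginal std:", z_lh.std(axis=0))
print("SR marginal std:", z_sr.std(axis=0))
```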
Liu, Shiwei; Wu, Xiaoling; Lopez, Alan D; Wang, Lijun; Cai, Yue; Page, Andrew; Yin, Peng; Liu, Yunning; Li, Yichong; Liu, Jiangmei; You, Jinling; Zhou, Maigeng
2016-01-01
In China, sample-based mortality surveillance systems, such as the Chinese Center for Disease Control and Prevention's disease surveillance points system and the Ministry of Health's vital registration system, have been used for decades to provide nationally representative data on health status for health-care decision-making and performance evaluation. However, neither system provided representative mortality and cause-of-death data at the provincial level to inform regional health service needs and policy priorities. Moreover, the systems overlapped to a considerable extent, thereby entailing a duplication of effort. In 2013, the Chinese Government combined these two systems into an integrated national mortality surveillance system to provide a provincially representative picture of total and cause-specific mortality and to accelerate the development of a comprehensive vital registration and mortality surveillance system for the whole country. This new system increased the surveillance population from 6 to 24% of the Chinese population. The number of surveillance points, each of which covered a district or county, increased from 161 to 605. To ensure representativeness at the provincial level, the 605 surveillance points were selected to cover China's 31 provinces using an iterative method involving multistage stratification that took into account the sociodemographic characteristics of the population. This paper describes the development and operation of the new national mortality surveillance system, which is expected to yield representative provincial estimates of mortality in China for the first time.
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable to solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred in accurately determining the spatial point spread function and subsequently deconvolving it from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image; it does not correct the image itself.
Description of Panel Method Code ANTARES
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; George, Mike (Technical Monitor)
2000-01-01
Panel method code ANTARES was developed to compute wall interference corrections in a rectangular wind tunnel. The code uses point doublets to represent blockage effects and line doublets to represent lifting effects of a wind tunnel model. Subsonic compressibility effects are modeled by applying the Prandtl-Glauert transformation. The closed wall, open jet, or perforated wall boundary condition may be assigned to a wall panel centroid. The tunnel walls can be represented by using up to 8000 panels. The accuracy of panel method code ANTARES was successfully investigated by comparing solutions for the closed wall and open jet boundary condition with corresponding Method of Images solutions. Fourier transform solutions of a two-dimensional wind tunnel flow field were used to check the application of the perforated wall boundary condition. Studies showed that the accuracy of panel method code ANTARES can be improved by increasing the total number of wall panels in the circumferential direction. It was also shown that the accuracy decreases with increasing free-stream Mach number of the wind tunnel flow field.
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces in Shanxi Province in northwest China, a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward by the authors. 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TIN was converted to a grid-based DEM (G-DEM) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method can visualize the terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.
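The BIM step is standard bilinear interpolation; a minimal sketch follows (the grid layout and fractional-coordinate convention are assumptions):

```python
import numpy as np

def bilinear(dem, x, y):
    """Bilinear interpolation of a grid DEM at fractional cell
    coordinates (x, y), as used when resampling a TIN-derived grid.
    Assumes (x, y) lies strictly inside the grid."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * dem[y0, x0] +
            dx * (1 - dy) * dem[y0, x0 + 1] +
            (1 - dx) * dy * dem[y0 + 1, x0] +
            dx * dy * dem[y0 + 1, x0 + 1])
```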
The Topology of Three-Dimensional Symmetric Tensor Fields
NASA Technical Reports Server (NTRS)
Lavin, Yingmei; Levy, Yuval; Hesselink, Lambertus
1994-01-01
We study the topology of 3-D symmetric tensor fields. The goal is to represent their complex structure by a simple set of carefully chosen points and lines analogous to vector field topology. The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. First, we introduce a new method for locating 3-D degenerate points. We then extract the topological skeletons of the eigenvector fields and use them for a compact, comprehensive description of the tensor field. Finally, we demonstrate the use of tensor field topology for the interpretation of the two-force Boussinesq problem.
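Numerically, degenerate points are where the eigenvalue gap vanishes; the sketch below scans a sampled field for small gaps (the random tensor field and threshold are placeholders, not the paper's locating method):

```python
import numpy as np

def degeneracy_measure(T):
    """For a 3x3 symmetric tensor, return the smallest gap between
    eigenvalues; near-zero values flag candidate degenerate points."""
    lam = np.sort(np.linalg.eigvalsh(T))
    return min(lam[1] - lam[0], lam[2] - lam[1])

# scan a sampled tensor field (a random stand-in here) for degeneracies
rng = np.random.default_rng(0)
field = rng.normal(size=(10, 10, 10, 3, 3))
field = 0.5 * (field + np.swapaxes(field, -1, -2))   # symmetrize

candidates = [(i, j, k)
              for i in range(10) for j in range(10) for k in range(10)
              if degeneracy_measure(field[i, j, k]) < 1e-2]
```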
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic, but parametric solution for segmenting 3D point clouds. Specifically, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. By the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
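The voxel stage can be sketched as snapping points to a grid and clustering occupied voxels by adjacency; this is a crude stand-in for the perceptual-grouping graph described above (voxel size and 26-connectivity are assumptions):

```python
import numpy as np
from scipy.ndimage import label

def voxel_segments(points, voxel=0.2):
    """Assign each 3D point to a voxel and cluster occupied voxels by
    26-connectivity; a simple stand-in for graph-based grouping."""
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True                       # mark occupied voxels
    labels, n = label(grid, structure=np.ones((3, 3, 3)))
    return labels[tuple(idx.T)], n   # per-point segment id, segment count
```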
NASA Astrophysics Data System (ADS)
Chen, Jinlei; Wen, Jun; Tian, Hui
2016-02-01
Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using the soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement R ⩾ 0.99, RMSD ⩽ 0.02 m3/m3, the NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination of sites at 5 cm depth is NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established accordingly. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) The linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, when a large number of observed soil moisture data were lost.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points onto the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
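The local surface fit and curvature computation can be sketched with ordinary least squares; this assumes the patch is expressed in near-tangent-plane coordinates and stands in for the authors' moving least-squares fit:

```python
import numpy as np

def principal_curvatures(patch):
    """Least-squares fit z = ax^2 + bxy + cy^2 + dx + ey + f to a local
    point patch (centered, near-tangent-plane coordinates) and return
    the principal curvatures from the Hessian of the fit at the origin."""
    x, y, z = patch[:, 0], patch[:, 1], patch[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    H = np.array([[2 * a, b], [b, 2 * c]])   # Hessian of the fitted surface
    return np.linalg.eigvalsh(H)             # (k_min, k_max), valid when
                                             # the gradient (d, e) is small
```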
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for numerically calculating them is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfactory completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum traveltime tree algorithm for tracing rays by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
2D modeling of direct laser metal deposition process using a finite particle method
NASA Astrophysics Data System (ADS)
Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.
2018-05-01
Direct laser metal deposition is one of the material additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial direct laser deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate single-layer cladding.
Reproducibility of dynamically represented acoustic lung images from healthy individuals
Maher, T M; Gat, M; Allen, D; Devaraj, A; Wells, A U; Geddes, D M
2008-01-01
Background and aim: Acoustic lung imaging offers a unique method for visualising the lung. This study was designed to demonstrate reproducibility of acoustic lung images recorded from healthy individuals at different time points and to assess intra- and inter-rater agreement in the assessment of dynamically represented acoustic lung images. Methods: Recordings from 29 healthy volunteers were made on three separate occasions using vibration response imaging. Reproducibility was measured using quantitative, computerised assessment of vibration energy. Dynamically represented acoustic lung images were scored by six blinded raters. Results: Quantitative measurement of acoustic recordings was highly reproducible with an intraclass correlation score of 0.86 (very good agreement). Intraclass correlations for inter-rater agreement and reproducibility were 0.61 (good agreement) and 0.86 (very good agreement), respectively. There was no significant difference found between the six raters at any time point. Raters ranged from 88% to 95% in their ability to identically evaluate the different features of the same image presented to them blinded on two separate occasions. Conclusion: Acoustic lung imaging is reproducible in healthy individuals. Graphic representation of lung images can be interpreted with a high degree of accuracy by the same and by different reviewers. PMID:18024534
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
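The procedure lends itself to a compact sketch for a univariate normal model (the data, k, and simulation size below are illustrative; the gas mask example in the paper is richer):

```python
import numpy as np
from sklearn.cluster import KMeans

def parametric_k_means(sample, k, n_sim=200_000, seed=0):
    """Estimate k principal points of a normal distribution: fit mu and
    sigma by maximum likelihood, simulate a large sample from the fitted
    model, and run k-means on the simulated data."""
    mu, sigma = sample.mean(), sample.std()        # MLE for a normal model
    sim = np.random.default_rng(seed).normal(mu, sigma, n_sim)
    km = KMeans(n_clusters=k, n_init=10).fit(sim.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())

data = np.random.default_rng(1).normal(175.0, 7.0, 300)  # toy measurements
print(parametric_k_means(data, k=3))
```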
Oldenmenger, Wendy H; de Raaf, Pleun J; de Klerk, Cora; van der Rijt, Carin C D
2013-06-01
To improve the management of cancer-related symptoms, systematic screening is necessary, often performed using 0-10 numeric rating scales. Cut points are used to determine whether scores represent clinically relevant burden. The aim of this systematic review was to explore the evidence on cut points for the symptoms of the Edmonton Symptom Assessment Scale. Relevant literature was searched in PubMed, CINAHL®, Embase, and PsycINFO®. We defined a cut point as the lower bound of the scores representing moderate or severe burden. Eighteen articles were eligible for this review. Cut points were determined using interference with daily life, another symptom-related method, or a verbal scale. For pain, cut point 5 and, to a lesser extent, cut point 7 were found to be the optimal cut points for moderate and severe pain, respectively. For moderate tiredness, the best cut point seemed to be 4; for severe tiredness, both cut points 7 and 8 were suggested frequently. A lack of evidence exists for nausea, depression, anxiety, drowsiness, appetite, well-being, and shortness of breath. Few studies suggested a cut point below 4. For many symptoms, there is no clear evidence as to what the optimal cut points are. In daily clinical practice, a symptom score ≥4 is recommended as a trigger for a more comprehensive symptom assessment. Until there is more evidence on the optimal cut points, we should hold back from using a particular cut point in quality indicators and be cautious about strongly recommending a particular cut point in guidelines.
Measurement of external forces and torques on a large pointing system
NASA Technical Reports Server (NTRS)
Morenus, R. C.
1980-01-01
Methods of measuring external forces and torques are discussed, in general and as applied to the Large Pointing System wind tunnel tests. The LPS tests were in two phases. The first test was a preliminary test of three models representing coelostat, heliostat, and on-gimbal telescope configurations. The second test explored the coelostat configuration in more detail. The second test used a different setup for measuring external loads. Some results are given from both tests.
Fast RBF OGr for solving PDEs on arbitrary surfaces
NASA Astrophysics Data System (ADS)
Piret, Cécile; Dunn, Jarrett
2016-10-01
The Radial Basis Functions Orthogonal Gradients method (RBF-OGr) was introduced in [1] to discretize differential operators defined on arbitrary manifolds known only through a point cloud. We take advantage of the meshfree character of RBFs, which gives us high accuracy and the flexibility to represent complex geometries in any spatial dimension. A major limitation of the RBF-OGr method was its large computational complexity, which greatly restricted the size of the point cloud. In this paper, we apply the RBF-Finite Difference (RBF-FD) technique to the RBF-OGr method to build sparse differentiation matrices discretizing continuous differential operators such as the Laplace-Beltrami operator. This method can be applied to solving PDEs on arbitrary surfaces embedded in R³. We illustrate the accuracy of our new method by solving the heat equation on the unit sphere.
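To make the RBF-FD ingredient concrete, here is a minimal sketch of how one row of a sparse differentiation matrix is formed: local weights for the Laplacian at a point, using a Gaussian RBF on a small 2D stencil. This is a flat-space illustration under assumed parameters, not the paper's Laplace-Beltrami operator on a surface.

```python
# RBF-FD weights for the 2D Laplacian at a stencil center (Gaussian RBF).
import numpy as np

eps = 2.0                                  # RBF shape parameter (illustrative)
phi = lambda r: np.exp(-(eps * r) ** 2)    # Gaussian RBF
# Laplacian of the Gaussian RBF in 2D: (4*eps^4*r^2 - 4*eps^2) * phi(r)
lap_phi = lambda r: (4 * eps**4 * r**2 - 4 * eps**2) * phi(r)

def rbf_fd_weights(center, stencil):
    """Weights w such that sum_i w_i * f(x_i) approximates (Laplacian f)(center)."""
    d = np.linalg.norm(stencil[:, None, :] - stencil[None, :, :], axis=2)
    A = phi(d)                                           # local interpolation matrix
    b = lap_phi(np.linalg.norm(stencil - center, axis=1))
    return np.linalg.solve(A, b)

pts = np.array([[0, 0], [0.1, 0], [-0.1, 0], [0, 0.1], [0, -0.1]], float)
w = rbf_fd_weights(pts[0], pts)
# Sanity check on f(x, y) = x^2 + y^2, whose Laplacian is 4 everywhere:
print(w @ (pts[:, 0]**2 + pts[:, 1]**2))   # should be close to 4
```

Assembling one such weight vector per point cloud node, each stored in its own row, yields the sparse differentiation matrix the abstract refers to.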
Aerodynamic heating to representative SRB and ET protuberances
NASA Technical Reports Server (NTRS)
Engel, C. D.; Lapointe, J. K.
1979-01-01
Heating data and data scaling methods which can be used on representative solid rocket booster and external tank (ET) protuberances are described. Topics covered include (1) ET geometry and heating points; (2) interference heating test data (51A); (3) heat transfer data from tests FH-15 and FH-16; (4) individual protuberance data; and (5) interference heating of paint data from test IH-42. A set of drawings of the ET moldline and protuberances is included.
NASA Astrophysics Data System (ADS)
Zhang, Lucy
In this talk, we show a robust numerical framework to model and simulate gas-liquid-solid three-phase flows. The overall algorithm adopts a non-boundary-fitted approach that avoids frequent mesh-updating procedures by defining independent meshes and explicit interfacial points to represent each phase. In this framework, we couple the immersed finite element method (IFEM) and the connectivity-free front tracking (CFFT) method, which model fluid-solid and gas-liquid interactions, respectively, for the three-phase models. The CFFT method is used here to simulate gas-liquid multi-fluid flows because it represents the gas-liquid interface with explicit interfacial points and easily handles interface topology changes. Instead of defining different level functions simultaneously, as in level set methods, an indicator function naturally couples the two methods together to represent and track each of the three phases. Several 2-D and 3-D test cases are performed to demonstrate the robustness and capability of the coupled numerical framework in dealing with complex three-phase problems, in particular free surfaces interacting with deformable solids. The solution technique offers accuracy and stability, which provides a means to simulate various engineering applications. The author would like to acknowledge support from NIH/DHHS R01-2R01DC005642-10A1 and the National Natural Science Foundation of China (NSFC) 11550110185.
Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A
2010-03-29
We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth variant response as a sum of few depth invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in a much better accuracy than the strata based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.
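The PCA decomposition step described above is straightforward to sketch in a few lines. Below is a minimal illustration, assuming the measured PSFs arrive as a depth-indexed image stack; shapes, the random stand-in data, and the number of retained components are illustrative assumptions.

```python
# Low-rank PSF model: each depth-dependent PSF as a weighted sum of PCA basis PSFs.
import numpy as np

n_depths, ny, nx = 64, 32, 32
psf_stack = np.random.rand(n_depths, ny, nx)      # stand-in for measured PSFs

X = psf_stack.reshape(n_depths, -1)               # one flattened PSF per row
mean_psf = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_psf, full_matrices=False)

k = 4                                             # number of PCA basis PSFs kept
basis = Vt[:k]                                    # k basis functions (flattened)
depth_weights = (X - mean_psf) @ basis.T          # 1D depth functions, shape (n_depths, k)

# Reconstruct the PSF at one depth from the low-rank model:
z = 10
psf_z = (mean_psf + depth_weights[z] @ basis).reshape(ny, nx)
err = np.linalg.norm(psf_z - psf_stack[z]) / np.linalg.norm(psf_stack[z])
print(f"relative reconstruction error at depth {z}: {err:.3f}")
```

In the imaging model, each basis PSF contributes one depth-invariant convolution, premultiplied by its 1D depth weight, which is exactly the trade-off between complexity and accuracy the abstract describes.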
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
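The key idea above, representing each template point as an affine combination of its neighbors with weights found by least squares, can be sketched directly. The coordinates below are illustrative, and the sum-to-one constraint is stacked as an extra equation.

```python
# Affine-combination weights of a point with respect to its neighbors.
import numpy as np

def affine_weights(p, neighbors):
    """Weights w with sum(w) = 1 such that neighbors.T @ w reproduces p."""
    A = np.vstack([neighbors.T, np.ones(len(neighbors))])   # (d+1, k) system
    b = np.append(p, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)                # least-squares solution
    return w

p = np.array([1.0, 2.0])
nbrs = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [2.0, 3.0]])
w = affine_weights(p, nbrs)
print(w.sum(), nbrs.T @ w)   # ~1.0, and a point that reproduces p

# The reconstruction error ||nbrs.T @ w - p_matched||, computed with the
# template's weights but the matched points, is what the method penalizes.
```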
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
The delineation of a polygon layer representing the direct upland runoff contribution to estuarine wetland polygons can be a useful tool in estuarine wetland assessment. However, the traditional methods of watershed delineation using pour points and digital elevation models (DEMs)...
Scale Reliability Evaluation with Heterogeneous Populations
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously and reduces the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.; Piazzolla, Sabino
2012-01-01
The objective of this study is to investigate the potential of using the bright and near-uniform Earth long-wavelength infrared (LWIR) signature as a stable reference for accurate (micro-rad or less) inertial pointing and tracking on board a space vehicle, including the determination of the fundamental limits of applicability of the proposed method for space missions. We demonstrate sub-microradian pointing accuracy under a representative set of disturbances experienced by the spacecraft in orbit.
NASA Technical Reports Server (NTRS)
Page, Lance; Shen, C. N.
1991-01-01
This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R
2016-01-05
Different methods have been introduced to enhance the selectivity of UV-spectrophotometry and thus enable accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV-spectrophotometry. The developed ratio difference at coabsorptive point (RDC) method represents a simple, effective solution to this problem: the additive property of light absorbance enables the two components to be treated as multiples of the lower-absorptivity component at a certain wavelength (the coabsorptive point), at which their total concentration in such multiples can be determined, while the other component is selectively determined by applying the ratio difference method in a single step. A mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies this problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing the Earth scientists with the universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in societal service of geological information.
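As a concrete illustration of the simplest of the LODs described above, a point placemark for a drilling location, here is a minimal sketch of emitting one borehole as KML from Python. This is not the authors' Borehole2KML tool; the field names and coordinates are hypothetical.

```python
# Emit a KML point placemark for one borehole's drilling location.
placemark_template = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description>Depth: {depth_m} m</description>
      <Point>
        <coordinates>{lon},{lat},0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
"""

borehole = {"name": "BH-001", "lon": 121.47, "lat": 31.23, "depth_m": 55.0}
with open("borehole.kml", "w", encoding="utf-8") as f:
    f.write(placemark_template.format(**borehole))
# The higher LODs (scatter dots for contacts, tube models for strata) would
# wrap placemarks like this one in additional KML geometry and Region elements.
```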
Method and apparatus for modeling interactions
Xavier, Patrick G.
2000-08-08
A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.
Min-Cut Based Segmentation of Airborne LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Ural, S.; Shan, J.
2012-07-01
Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into segments in 3-D that comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, used especially in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem and allows an approximate solution by min-cuts, reaching a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost considering only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information within the energy function, by assigning costs based on local properties, may help achieve a better representation for segmentation.
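The energy-minimization step can be sketched compactly for a binary labeling. The example below uses the PyMaxflow library, which is an assumption (the paper does not specify an implementation), and reduces the setup to five points with illustrative data and smoothness costs.

```python
# Min-cut labeling of points: terminal edges carry data costs (feature-domain),
# neighbor edges carry smoothness costs (spatial-domain).
import maxflow
import numpy as np

n_points = 5
# data_cost[i] = (cost of label 0, cost of label 1) for point i (illustrative).
data_cost = np.array([[0.1, 2.0], [0.2, 1.5], [1.8, 0.3], [2.0, 0.2], [1.0, 1.1]])
# Spatial neighbor pairs and their smoothness weights (illustrative).
edges = [(0, 1, 0.8), (1, 2, 0.8), (2, 3, 0.8), (3, 4, 0.8)]

g = maxflow.Graph[float]()
nodes = g.add_nodes(n_points)
for i, (c0, c1) in enumerate(data_cost):
    g.add_tedge(nodes[i], c1, c0)          # terminal edges encode data costs
for i, j, w in edges:
    g.add_edge(nodes[i], nodes[j], w, w)   # n-links encode smoothness costs

g.maxflow()
labels = [g.get_segment(nodes[i]) for i in range(n_points)]
print(labels)   # the minimum-energy binary segmentation
```

Multi-label segmentation over several feature clusters, as in the paper, is typically handled by repeated binary cuts (e.g., alpha-expansion) built on this same primitive.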
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
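The Poisson point process view translates directly into a Poisson GLM on gridded presence counts with a log-area offset. Below is a minimal sketch with simulated data and hypothetical covariate names, assuming statsmodels is available; it illustrates the equivalence, not the MAXENT software itself.

```python
# Poisson point process as Poisson regression on grid-cell counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_cells = 1000
temp = rng.normal(size=n_cells)              # covariate 1 (hypothetical)
elev = rng.normal(size=n_cells)              # covariate 2 (hypothetical)
cell_area = np.full(n_cells, 0.25)           # area per grid cell

# Simulate counts from a log-linear intensity.
lam = cell_area * np.exp(-2.0 + 0.8 * temp - 0.5 * elev)
counts = rng.poisson(lam)

X = sm.add_constant(np.column_stack([temp, elev]))
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(cell_area)).fit()
print(fit.params)   # the intercept is scale-dependent, as noted above; slopes are not
```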
A comparison between GO/aperture-field and physical-optics methods for offset reflectors
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1984-01-01
Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.
The Use of Terrestrial Laser Scanning for Determining the Driver’s Field of Vision
Zemánek, Tomáš; Cibulka, Miloš; Skoupil, Jaromír
2017-01-01
Terrestrial laser scanning (TLS) is currently one of the most progressively developing methods of obtaining information about objects and phenomena. This paper assesses the possibilities of TLS in determining the driver's field of vision when operating agricultural and forest machines with movable and immovable components, in comparison to the method of using two point light sources to create shade images according to ISO (International Organization for Standardization) 5721-1. Using the TLS method represents a minimum time saving of 55% or more, according to the project complexity. The shading values ascertained using the shadow-cast method with the point light sources are generally overestimated, and more distorted for small cabin structural components. The disadvantage of the TLS method is the scanner's sensitivity to a soiled or scratched cabin windscreen and to glass transparency impaired by heavy tinting. PMID:28902177
Method of identifying clusters representing statistical dependencies in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
The approach is first to cluster and then to compute spatial boundaries for the resulting clusters. The next step is to compute, from a set of Monte Carlo samples obtained from scrambled data, estimates of the probabilities of obtaining at least as many points within the boundaries as were actually observed in the original data.
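The Monte Carlo step admits a very small sketch. The example below estimates the probability of finding at least as many random points inside one cluster boundary as were observed, using a unit-square study area; the polygon, counts, and trial count are illustrative assumptions.

```python
# Monte Carlo null distribution for the count of points inside a cluster boundary.
import numpy as np
from matplotlib.path import Path

rng = np.random.default_rng(2)
boundary = Path([(0.2, 0.2), (0.6, 0.25), (0.55, 0.6), (0.25, 0.55)])  # cluster hull
n_points, observed_inside = 200, 40           # observed data (hypothetical)

count_ge = 0
n_trials = 2000
for _ in range(n_trials):
    pts = rng.random((n_points, 2))           # random points in the study area
    inside = boundary.contains_points(pts).sum()
    count_ge += inside >= observed_inside
print(count_ge / n_trials)                    # Monte Carlo p-value estimate
```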
Evaluation of Measurement Instrument Criterion Validity in Finite Mixture Settings
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong
2016-01-01
A method for evaluating the validity of multicomponent measurement instruments in heterogeneous populations is discussed. The procedure can be used for point and interval estimation of criterion validity of linear composites in populations representing mixtures of an unknown number of latent classes. The approach permits also the evaluation of…
Suspect Screening and Non-Targeted Analysis of Drinking Water Using Point-Of-Use Filters
Monitored contaminants in drinking water represent a small portion of the total compounds present, many of which may be relevant to human health. To understand the totality of human exposure to compounds in drinking water, broader monitoring methods are imperative. In an effort t...
NASA Astrophysics Data System (ADS)
Budiarsa, I. N.; Gde Antara, I. N.; Dharma, Agus; Karnata, I. N.
2018-04-01
Under an indenter, the material undergoes complex deformation. One of the most effective ways to analyse indentation has been the representative method. The concept, coupled with finite element (FE) modelling, has been used successfully in analysing sharp indenters. It is of great importance to extend this method to spherical indentation and the associated hardness systems. One particular case is the Rockwell B test, where the hardness is determined by two points on the P-h curve of a spherical indenter. In this case, an established link between material parameters and P-h curves can naturally lead to direct hardness estimation from the material parameters (e.g. yield stress (σy) and work hardening coefficient (n)). This could provide a useful tool for both research and industrial applications. Two methods to predict the P-h curve in spherical indentation have been established: one uses a C1-C2 polynomial equation approach, and the other a depth-based approach. Both approaches have been successful. An effective method for representing the P-h curves using a normalized representative stress concept was established. The concept and methodology developed are used to predict hardness (HRB) values of materials through direct analysis and are validated with experimental data on selected samples of steel.
Dynamic Model of Applied Facial Anatomy with Emphasis on Teaching of Botulinum Toxin A
2017-01-01
Background: The use of botulinum toxin type A is considered one of the most revolutionary and promising face rejuvenation methods. Although rare, most of the complications secondary to the use of botulinum toxin A are technique dependent. Among the major shortcomings identified in toxin administration education is unfamiliarity with applied anatomy. This article proposes the use of body painting as an innovative method of teaching the application of botulinum toxin A. Methods: Using the body painting technique, facial anatomy was represented on the face of a model, showing the major muscle groups targeted by botulinum toxin A. Photographic records and films were made to document the represented muscles at rest and in contraction. Results: Using the body painting technique, each of the muscles involved in facial expression and in the generation of hyperkinetic wrinkles can be faithfully reproduced on the model's face. The exact position of the points of application, the distribution of the feature points in the muscular area, the proper angulation and syringe grip, as well as the correlation of the points of application with the presence of hyperkinetic wrinkles, could be properly registered, providing professional training with information of great practical importance, development of highly effective treatments, and low complication rates. Conclusion: By making it possible to interrelate anatomy and function, body painting is proposed in the present study as an innovative method, which in a demonstrative and highly didactic manner presents great potential as a teaching tool in the application of botulinum toxin A. PMID:29263949
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
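The orientation analysis described above reduces to comparing the distribution of point-pair bearings in the data with that of random points in the same study area. Here is a minimal sketch on a unit-square study area with simulated data; the point counts and bin width are illustrative, not the report's.

```python
# Compare point-pair orientations in data against a random (null) point set.
import numpy as np

def pair_orientations(pts):
    """Orientation (0-180 degrees) of the vector between every pair of points."""
    d = pts[:, None, :] - pts[None, :, :]
    iu = np.triu_indices(len(pts), k=1)
    ang = np.degrees(np.arctan2(d[..., 1], d[..., 0]))[iu]
    return ang % 180.0

rng = np.random.default_rng(3)
t = rng.random(80)
data = np.column_stack([t, t + 0.05 * rng.standard_normal(80)])  # roughly linear trend
rand = rng.random((80, 2))                                        # null: random points

h_data, edges = np.histogram(pair_orientations(data), bins=18, range=(0, 180))
h_rand, _ = np.histogram(pair_orientations(rand), bins=18, range=(0, 180))
peak = edges[h_data.argmax()]
print(f"dominant orientation bin starts at {peak:.0f} deg")  # ~45 deg for this trend
```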
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1993-01-01
Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
Three-dimensional elliptic grid generation technique with application to turbomachinery cascades
NASA Technical Reports Server (NTRS)
Chen, S. C.; Schwab, J. R.
1988-01-01
Described is a numerical method for generating 3-D grids for turbomachinery computational fluid dynamic codes. The basic method is general and involves the solution of a quasi-linear elliptic partial differential equation via pointwise relaxation with a local relaxation factor. It allows specification of the grid point distribution on the boundary surfaces, the grid spacing off the boundary surfaces, and the grid orthogonality at the boundary surfaces. A geometry preprocessor constructs the grid point distributions on the boundary surfaces for general turbomachinery cascades. Representative results are shown for a C-grid and an H-grid for a turbine rotor. Two appendices serve as user's manuals for the basic solver and the geometry preprocessor.
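The pointwise relaxation at the heart of such a solver can be sketched in its simplest (Laplace) limit: interior grid points are repeatedly replaced by a relaxed average of their four neighbors while the boundary distribution stays fixed. The grid size, boundary distortion, and relaxation factor below are illustrative assumptions, and the full quasi-linear elliptic system of the report is reduced here to its Laplacian core.

```python
# Pointwise relaxation for 2D elliptic grid generation (Laplace smoothing).
import numpy as np

ni, nj, omega = 21, 11, 1.5                 # grid size, local relaxation factor
x, y = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
y = y * (1.0 + 0.3 * np.sin(np.pi * x))     # distorted upper boundary (illustrative)

for _ in range(500):                        # pointwise (Gauss-Seidel) sweeps
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            for g in (x, y):                # relax both coordinate fields in place
                avg = 0.25 * (g[i+1, j] + g[i-1, j] + g[i, j+1] + g[i, j-1])
                g[i, j] += omega * (avg - g[i, j])
print(x[ni // 2, nj // 2], y[ni // 2, nj // 2])   # a smoothed interior point
```

Controls such as grid spacing and orthogonality at the boundaries, mentioned in the abstract, enter through source terms added to this relaxation update.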
Analysing seismic-source mechanisms by linear-programming methods.
Julian, B.R.
1986-01-01
Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the 6-D space of moment-tensor components. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes. -from Author
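The formulation above, a moment tensor as a point in 6-D space fitted to amplitudes under polarity constraints, maps directly onto a standard L1 linear program. Below is a minimal sketch with simulated kernels and data (G, d, and P are stand-ins, not real seismic quantities), using scipy's linprog.

```python
# L1 fit of moment-tensor components to amplitudes, with polarity constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_obs, n_pol = 12, 6
G = rng.standard_normal((n_obs, 6))          # amplitude kernels (hypothetical)
m_true = rng.standard_normal(6)
d = G @ m_true + 0.05 * rng.standard_normal(n_obs)
P = rng.standard_normal((n_pol, 6))
P *= np.sign(P @ m_true)[:, None]            # consistent polarities: P @ m_true >= 0

# Variables: [m (6), t (n_obs)]; minimize sum(t) s.t. -t <= G m - d <= t, P m >= 0.
c = np.concatenate([np.zeros(6), np.ones(n_obs)])
A_ub = np.block([[G, -np.eye(n_obs)],               #  G m - t <= d
                 [-G, -np.eye(n_obs)],              # -G m - t <= -d
                 [-P, np.zeros((n_pol, n_obs))]])   # -P m <= 0 (polarities)
b_ub = np.concatenate([d, -d, np.zeros(n_pol)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (6 + n_obs))
print(res.x[:6], m_true)                     # recovered vs. true mechanism
```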
Comparison of Two Entry Methods for Laparoscopic Port Entry: Technical Point of View
Toro, Adriana; Mannino, Maurizio; Cappello, Giovanni; Di Stefano, Andrea; Di Carlo, Isidoro
2012-01-01
Laparoscopic entry is a blind procedure and often represents a problem because of its related complications. In the last three decades, rapid advances in laparoscopic surgery have made it an invaluable part of general surgery, but there remains no clear consensus on an optimal method of entry into the peritoneal cavity. The aim of this paper is to focus on the evolution of two methods used for entry into the peritoneal cavity in laparoscopic surgery. PMID:22761542
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds produced by means of different techniques are widely used to model rocks and obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by applying laser scanning and close-range photogrammetry techniques. Laser scanning is the most common method of producing a point cloud; in this method, the laser scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced with the help of photographs taken under appropriate conditions, depending on developing hardware and software technology. Much photogrammetric software, open source or not, currently supports the generation of point clouds. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with the help of a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. In contrast to the proximity of the data, the two methods are quite different in terms of cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a part of a 30 m x 10 m rock surface. 2D and 3D analyses were performed for several regions selected from the point clouds of the surface models; the 2D analysis is area-based and the 3D analysis is volume-based. The analyses showed that the point clouds from both methods are similar and can be used as alternatives to each other. This proves that a point cloud produced from photographs, which is economical and can be produced in less time, can be used in several studies instead of a point cloud produced by a laser scanner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yan; Treado, Stephen J.; Messner, John I.
Building control systems for Heating, Ventilation, and Air Conditioning (HVAC) play a key role in realizing the functionality and operation of building systems and components. Building Control Knowledge (BCK) is the logic and algorithms embedded throughout a building control system. There are different methods to represent the BCK; these methods differ in the selection of BCK-representing elements and the format of those elements. There is a lack of a standard data schema for storing, retrieving, and reusing structured BCK. In this study, a modular data schema is created for BCK representation. The data schema contains eleven representing elements, i.e., control module name, operation mode, system schematic, control flow diagram, data point, alarm, parameter, control sequence, function, and programming code. Each element is defined with specific attributes. This data schema is evaluated through a case study demonstration. The demonstration shows a new way to represent the BCK with standard formats.
NASA Astrophysics Data System (ADS)
Margaris, I.; Paltoglou, V.; Flytzanis, N.
2018-05-01
In this work we present a method of representing terms in the current-phase-relation of a ballistic Josephson junction by combinations of diagrams, used in previous work to represent an equivalent of the matching condition determinant of the junction. This is accomplished by the expansion of the logarithm of this determinant in Taylor series and keeping track of surviving terms, i.e. terms that do not annihilate each other. The types of the surviving terms are represented by connected graphs, whose points represent diagrammatic terms of the determinant expansion. Then the theory is applied to obtain approximations of the current-phase relation of relatively thick ballistic ferromagnetic Josephson junctions with non-collinear magnetizations. This demonstrates the versatility of the method in developing approximations schemes and providing physical insight into the nature of contributions to the supercurrent from the available particle excitations in the junction. We also discuss the strong second harmonic contribution to the supercurrent in junctions with three mutually orthogonal magnetization vectors and a weak intermediate ferromagnet.
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing the contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed, dedicated to automatically or semiautomatically filling gaps and reconstructing broken contour lines in binary images. The key part, the auto-matching and reconnection of end points, is discussed in depth after introducing the reconstruction procedure, in which some key algorithms and mechanisms are presented and realized, including a multiple incremental backing trace to obtain the weighted average direction angle of end points, a maximum-constraint-angle control mechanism based on multiple gradient ranks, the combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, and bidirectional parabola control. Lastly, experimental comparisons on typical samples are made between the proposed method and another representative method; the results indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
Rezvani, Alireza; Khalili, Abbas; Mazareie, Alireza; Gandomkar, Majid
2016-07-01
Nowadays, photovoltaic (PV) generation is growing increasingly fast as a renewable energy source. Nevertheless, the drawback of the PV system is its dependence on weather conditions. Therefore, battery energy storage (BES) can be considered to assist in providing a stable and reliable output from the PV generation system for loads and to improve the dynamic performance of the whole generation system in grid-connected mode. In this paper, a novel topology of an intelligent hybrid generation system with PV and BES in a DC-coupled structure is presented. Each photovoltaic cell has a specific point, named the maximum power point, on its operational curve (i.e. current-voltage or power-voltage curve) at which it can generate maximum power. Irradiance and temperature changes affect these operational curves; the nonlinear dependence of the maximum power point on the environment has therefore led to the development of different maximum power point tracking techniques. In order to capture the maximum power point (MPP), a hybrid fuzzy-neural maximum power point tracking (MPPT) method is applied in the PV system. The obtained results demonstrate the effectiveness and superiority of the proposed method; the average tracking efficiency of the hybrid fuzzy-neural method is increased by approximately two percentage points in comparison to conventional methods. It has the advantages of robustness, fast response and good performance. A detailed mathematical model and a control approach for a three-phase grid-connected intelligent hybrid system have been proposed using Matlab/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data is represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. It turns out that SRC needs only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes; each index is called a mode: three spatial modes in directions X, Y, Z and one feature mode. The sparsity algorithm finds the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode; those matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary along each mode is built; the overall structured dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by the tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor is well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes are nearly zero. Therefore, SRC uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into 6 classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into one class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
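The SRC principle underlying the tensor method can be sketched in its plain vector form (the Tucker machinery is omitted): sparse-code a test sample over a dictionary whose columns are training samples, then pick the class with the smallest reconstruction residual. The data below are simulated and the sparsity level is an illustrative assumption.

```python
# Sparse representation-based classification (SRC) with OMP, vector form.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
d, per_class, classes = 20, 6, 3
means = rng.standard_normal((classes, d)) * 3
train = np.vstack([means[c] + 0.3 * rng.standard_normal((per_class, d))
                   for c in range(classes)])
labels = np.repeat(np.arange(classes), per_class)
D = train.T / np.linalg.norm(train, axis=1)      # dictionary: normalized atoms as columns

test = means[1] + 0.3 * rng.standard_normal(d)   # a sample from class 1

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False).fit(D, test)
coef = omp.coef_
residuals = []
for c in range(classes):
    mask = labels == c
    recon = D[:, mask] @ coef[mask]              # keep only class-c coefficients
    residuals.append(np.linalg.norm(test - recon))
print(int(np.argmin(residuals)))                 # predicted class (expected: 1)
```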
Weighted regularized statistical shape space projection for breast 3D model reconstruction.
Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González
2018-07-01
The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. For the purpose of obtaining the specific 3D model of the breast of a patient, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point in the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
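A weighted, regularized projection into a PCA shape space can be sketched in a few lines of linear algebra. The sketch below solves for basis coefficients c minimizing ||W^(1/2)(Bc - (x - mean))||² + λ||c||², with W upweighting constraint points; all sizes, the stand-in basis, and the weighting scheme are illustrative assumptions, not the paper's exact setup.

```python
# Weighted regularized projection of a target shape onto a PCA shape basis.
import numpy as np

rng = np.random.default_rng(6)
n_coords, n_modes = 300, 10                  # 100 3D points, 10 shape modes
B = rng.standard_normal((n_coords, n_modes)) # PCA shape basis (stand-in)
mean = rng.standard_normal(n_coords)         # mean shape (stand-in)
x = mean + B @ rng.standard_normal(n_modes)  # target shape to project

w = np.ones(n_coords)
w[:30] = 10.0                                # upweight the first 10 landmark points
lam = 0.1                                    # Tikhonov regularization strength

BtW = B.T * w                                # B^T W (diagonal weights broadcast)
c = np.linalg.solve(BtW @ B + lam * np.eye(n_modes), BtW @ (x - mean))
reconstructed = mean + B @ c
print(np.linalg.norm(reconstructed - x) / np.linalg.norm(x - mean))
```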
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
Splatterplots: overcoming overdraw in scatter plots.
Mayorga, Adrian; Gleicher, Michael
2013-09-01
We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationship among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationship between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data, communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.
Anticipatory control of xenon in a pressurized water reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Impink, A.J. Jr.
1987-02-10
A method is described for automatically dampening xenon-135 spatial transients in the core of a pressurized water reactor having control rods which regulate reactor power level, comprising the steps of: measuring the neutron flux in the reactor core at a plurality of axially spaced locations on a real-time, on-line basis; repetitively generating from the neutron flux measurements, on a point-by-point basis, signals representative of the current axial distribution of xenon-135, and signals representative of the current rate of change of the axial distribution of xenon-135; generating from the xenon-135 distribution signals and the rate of change of xenon distribution signals, control signals for reducing the xenon transients; and positioning the control rods as a function of the control signals to dampen the xenon-135 spatial transients.
Split delivery vehicle routing problem with time windows: a case study
NASA Astrophysics Data System (ADS)
Latiffianti, E.; Siswanto, N.; Firmandani, R. A.
2018-04-01
This paper aims to implement an extension of the VRP, the so-called split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop, and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP problem is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively long, the results indicated that the produced solution was better than the existing routing and scheduling the firm used. The produced solution also reduced fuel cost by 9%, a saving obtained from the shorter total distance travelled by the shuttle buses.
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by a simultaneous localization and mapping (SLAM) approach using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigational and mapping results and very short computational times, indicating the potential of the new algorithm for real-time systems.
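The point-to-point fallback mentioned above is classical ICP. Here is a minimal sketch of a single 2D ICP iteration, nearest-neighbor association followed by the SVD (Kabsch) rigid alignment, with simulated scans; it illustrates the primitive, not the authors' RKF pipeline.

```python
# One point-to-point ICP iteration in 2D (association + rigid alignment).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(scan, ref):
    """One ICP iteration: returns rotation R and translation t aligning scan to ref."""
    _, idx = cKDTree(ref).query(scan)            # nearest-neighbor association
    matched = ref[idx]
    ps, pm = scan.mean(axis=0), matched.mean(axis=0)
    H = (scan - ps).T @ (matched - pm)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, pm - R @ ps

rng = np.random.default_rng(7)
ref = rng.random((200, 2))
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scan = ref @ R_true.T + np.array([0.05, -0.02])  # scan = rotated + shifted reference

R, t = icp_step(scan, ref)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)   # ~-5 deg and the inverse shift
```

Iterating this step until the transform stabilizes is what makes plain ICP slow and drift-prone, the motivation for the RKF hybrid.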
Protection Relaying Scheme Based on Fault Reactance Operation Type
NASA Astrophysics Data System (ADS)
Tsuji, Kouichi
The theories of operation of existing relays are roughly divided into two types: current differential types based on Kirchhoff's first law, and impedance types based on his second law. Kirchhoff's laws can be applied to strictly formulate fault phenomena, so the circuit equations are represented as nonlinear simultaneous equations in the fault-point variable k and the fault resistance Rf. This formulation has two defects: 1) a heavy computational burden for the iterative calculation in the Newton-Raphson (N-R) method, and 2) relay operators cannot easily understand the principle of the numerical matrix operations. The new protection relay principle proposed in this paper focuses on the fact that the reactance component at the fault point is almost zero. Two reactances, Xf(S) and Xf(R), at both ends of a branch are calculated by solving linear equations. If the signs of Xf(S) and Xf(R) are not the same, it can be judged that the fault point lies within the branch. This reactance Xf corresponds to the difference in branch reactance between the actual fault point and an imaginary fault point, so relay engineers can understand the fault location through the concept of "distance". Simulation results using this new method indicate highly precise estimation of fault locations compared with the inspected fault locations on operating transmission lines.
Satellite Articulation Characterization from an Image Trajectory Matrix Using Optimization
NASA Astrophysics Data System (ADS)
Curtis, D. H.; Cobb, R. G.
Autonomous on-orbit satellite servicing and inspection benefit from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes the performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting image trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known to assist in the segmentation of the points into different rigid bodies, the creation of the 3D point cloud, and the identification of the articulation parameters. Once the point cloud and the articulation parameters are calculated, they can be compared to the known truth. The error in the calculated point cloud is determined, as well as the difference between the true workspace of the satellite and the calculated workspace. These metrics can be used to compare the quality of various inspection routes for characterizing the satellite and its articulation.
Efficient Jacobian inversion for the control of simple robot manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1988-01-01
Symbolic inversion of the Jacobian matrix for spherical wrist arms is investigated. It is shown that, taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points which provides a better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computation cost of the solution is on the same order as the cost of forward kinematic solution and it is significantly reduced if forward kinematic solution is already obtained. A comparison with previous methods shows that this method is the most efficient to date.
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders. Also, current algorithms tend to oversegment the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
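As a rough illustration of the oversegmentation stage, a greedy region-growing pass over a point cloud can be sketched as follows; this simplified version grows clusters by normal similarity using a k-d tree neighbourhood search rather than the paper's conditioned octree structure, and the parameters are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, angle_deg=10.0, radius=0.1):
    """Greedy region growing: neighbours join a cluster while their
    (unit) normals stay within angle_deg of the seed's normal.
    A crude stand-in for octree-based, plane/smooth-surface growing."""
    tree = cKDTree(points)
    cos_tol = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, labels[seed] = [seed], current
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and abs(normals[j] @ normals[seed]) >= cos_tol:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```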
Urban sound energy reduction by means of sound barriers
NASA Astrophysics Data System (ADS)
Iordache, Vlad; Ionita, Mihai Vlad
2018-02-01
In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current method for sizing these acoustic barriers is laborious and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
Impurity Correction Techniques Applied to Existing Doping Measurements of Impurities in Zinc
NASA Astrophysics Data System (ADS)
Pearce, J. V.; Sun, J. P.; Zhang, J. T.; Deng, X. L.
2017-01-01
Impurities represent the most significant source of uncertainty in most metal fixed points used for the realization of the International Temperature Scale of 1990 (ITS-90). There are a number of different methods for quantifying the effect of impurities on the freezing temperature of ITS-90 fixed points, many of which rely on an accurate knowledge of the liquidus slope in the limit of low concentration. A key method of determining the liquidus slope is to measure the freezing temperature of a fixed-point material as it is progressively doped with a known amount of impurity. Recently, a series of measurements of the freezing and melting temperature of `slim' Zn fixed-point cells doped with Ag, Fe, Ni, and Pb were presented. Here, additional measurements of the Zn-X system are presented using Ga as a dopant, and the data (Zn-Ag, Zn-Fe, Zn-Ni, Zn-Pb, and Zn-Ga) have been re-analyzed to demonstrate the use of a fitting method based on Scheil solidification which is applied to both melting and freezing curves. In addition, the utility of the Sum of Individual Estimates method is explored with these systems in the context of a recently enhanced database of liquidus slopes of impurities in Zn in the limit of low concentration.
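The Scheil-based fit mentioned here is commonly written as T(F) = T_pure + m*C_0*(1 - F)^(k-1), where F is the fraction solidified, m the liquidus slope, C_0 the overall impurity concentration, and k the distribution coefficient. A sketch of fitting that form to a freezing curve with scipy, under the assumption that this standard form applies and with purely illustrative numbers, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def scheil_freeze(F, T_pure, m_c0, k):
    """Scheil freezing curve: the interface concentration C_0*(1-F)^(k-1)
    times the liquidus slope m gives the depression of the observed
    temperature at solid fraction F (m_c0 lumps m*C_0 together)."""
    return T_pure + m_c0 * (1.0 - F) ** (k - 1.0)

# Illustrative synthetic freezing curve around the Zn point (419.527 C)
F = np.linspace(0.05, 0.9, 18)
T = scheil_freeze(F, 419.527, -2e-4, 0.1) + np.random.normal(0, 2e-5, F.size)
(T_pure, m_c0, k), _ = curve_fit(scheil_freeze, F, T, p0=(419.527, -1e-4, 0.2))
```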
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
Ramírez-Vélez, R; Correa-Bautista, J E; Martínez-Torres, J; Méneses-Echavez, J F; González-Ruiz, K; González-Jiménez, E; Schmidt-RioValle, J; Lobelo, F
2016-01-01
Background/Objectives: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). These data are lacking for Colombian adults. This study aims at establishing smoothed centile charts and LMS tables for WC and WHtR; appropriate cutoffs were selected using receiver-operating characteristic analysis based on data from the representative sample. Subjects/Methods: We used data from the cross-sectional, national representative nutrition survey (ENSIN, 2010). A total of 83 220 participants (aged 20–64) were enrolled. Weight, height, body mass index (BMI), WC and WHtR were measured, and percentiles were calculated using the LMS method (L: Box-Cox power curve, M: median curve, S: coefficient-of-variation curve). Receiver operating characteristic curve analyses were used to evaluate the optimal cutoff points of WC and WHtR for overweight and obesity based on WHO definitions. Results: Reference values for WC and WHtR are presented. Mean WC and WHtR increased with age for both genders. We found a strong positive correlation between WC and BMI (r=0.847, P<0.01) and WHtR and BMI (r=0.878, P<0.01). In obese men the WC cutoff point is 96.6 cm; in women it is 91.0 cm. Receiver operating characteristic curves for WHtR were also obtained, with cutoff points of 0.579 in men and 0.587 in women. High sensitivity and specificity were obtained. Conclusions: This study presents the first reference values of WC and WHtR for Colombians aged 20–64. Through LMS tables for adults, we hope to provide quantitative tools to study obesity and its complications. PMID:27026425
1980-09-01
where φ_BD represents the instantaneous effect of the body, while φ_FS represents the free surface disturbance generated by the body over all previous... acceleration boundary condition. This determines the time-derivative of the body-induced component of the flow, φ_BD (as well as φ_BD itself through integration)... a panel with uniform density σ_i acting over a surface of area A_i is replaced by a single point source with strength s_i(t) = A_i(σ_i(t_n) + (t − t_n)...)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel
GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
NASA Astrophysics Data System (ADS)
Leherte, L.; Allen, F. H.; Vercauteren, D. P.
1995-04-01
A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results which are in close agreement with those from the conventional, spherical, van der Waals approach.
NASA Astrophysics Data System (ADS)
Leherte, Laurence; Allen, Frank H.
1994-06-01
A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
NASA Astrophysics Data System (ADS)
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint
NASA Astrophysics Data System (ADS)
Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.
2017-09-01
For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel by an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches and build a coordinate frame from their normal vectors and intersection point. The transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets and is much more efficient than the classical baseline methods.
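The core of the RANSAC hypothesis step, building a coordinate frame from three planar patches and deriving the transform between two such frames, can be sketched as follows; this is a geometric illustration under simplifying assumptions (ideal plane parameters n·x = d, non-degenerate normals), not the authors' implementation:

```python
import numpy as np

def frame_from_three_planes(normals, ds):
    """Build a coordinate frame from three non-parallel planes n.x = d:
    origin = the common intersection point of the three planes,
    axes = orthonormalised plane normals."""
    N = np.asarray(normals, dtype=float)         # 3x3, rows are normals
    origin = np.linalg.solve(N, np.asarray(ds))  # intersection point
    q, _ = np.linalg.qr(N.T)                     # orthonormal axes
    return origin, q

def transform_between(frame_a, frame_b):
    """Rotation R and translation t mapping frame_a onto frame_b,
    i.e. the rigid transform hypothesis between the two scans."""
    (oa, qa), (ob, qb) = frame_a, frame_b
    R = qb @ qa.T
    t = ob - R @ oa
    return R, t
```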
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noel, Nakita K.; Habisreutinger, Severin N.; Wenger, Bernard
2017-01-01
Perovskite-based photovoltaics have, in recent years, become poised to revolutionise the solar industry. While there have been many approaches taken to the deposition of this material, one-step spin-coating remains the simplest and most widely used method in research laboratories. Although spin-coating is not recognised as the ideal manufacturing methodology, it represents a starting point from which more scalable deposition methods, such as slot-die coating or ink-jet printing, can be developed. Here, we introduce a new, low-boiling point, low viscosity solvent system that enables rapid, room temperature crystallisation of methylammonium lead triiodide perovskite films, without the use of strongly coordinating aprotic solvents. Through the use of this solvent, we produce dense, pinhole free films with uniform coverage, high specularity, and enhanced optoelectronic properties. We fabricate devices and achieve stabilised power conversion efficiencies of over 18% for films which have been annealed at 100 degrees C, and over 17% for films which have been dried under vacuum and have undergone no thermal processing. This deposition technique allows uniform coating on substrate areas of up to 125 cm2, showing tremendous promise for the fabrication of large area, high efficiency, solution processed devices, and represents a critical step towards industrial upscaling and large area printing of perovskite solar cells.
Kociolek, Aaron M; Keir, Peter J
2011-07-07
A detailed musculoskeletal model of the human hand is needed to investigate the pathomechanics of tendon disorders and carpal tunnel syndrome. The purpose of this study was to develop a biomechanical model with realistic flexor tendon excursions and moment arms. An existing upper extremity model served as a starting point, which included programmed movement of the index finger. Movement capabilities were added for the other fingers. Metacarpophalangeal articulations were modelled as universal joints to simulate flexion/extension and abduction/adduction while interphalangeal articulations used hinges to represent flexion. Flexor tendon paths were modelled using two approaches. The first method constrained tendons with control points, representing annular pulleys. The second technique used wrap objects at the joints as tendon constraints. Both control point and joint wrap models were iteratively adjusted to coincide with tendon excursions and moment arms from an anthropometric regression model using inputs for a 50th percentile male. Tendon excursions from the joint wrap method best matched the regression model even though anatomic features of the tendon paths were not preserved (absolute differences: mean<0.33 mm, peak<0.74 mm). The joint wrap model also produced similar moment arms to the regression (absolute differences: mean<0.63 mm, peak<1.58 mm). When a scaling algorithm was used to test anthropometrics, the scaled joint wrap models better matched the regression than the scaled control point models. Detailed patient-specific anatomical data will improve model outcomes for clinical use; however, population studies may benefit from simplified geometry, especially with anthropometric scaling. Copyright © 2011 Elsevier Ltd. All rights reserved.
Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis
L.R. Grosenbaugh
1958-01-01
Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...
ERIC Educational Resources Information Center
Criado, Raquel; Sanchez, Aquilino
2009-01-01
The goal of this paper is to verify up to what point ELT textbooks used in Spanish educational settings comply with the official regulations prescribed, which fully advocate the Communicative Language Teaching Method (CLT). For that purpose, seven representative coursebooks of different educational levels and modalities in Spain--secondary, upper…
NASA Technical Reports Server (NTRS)
Hedgley, D. R., Jr.
1982-01-01
The requirements for computer-generated perspective projections of three-dimensional objects have escalated. A general solution was developed. The theoretical solution to this problem is presented. The method is very efficient as it minimizes the selection of points and the comparison of line segments and hence avoids the devastation of square-law growth.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) The concentration of pollutants discharged in mine drainage from mines, either open-pit or underground, that produce uranium ore, including mines using in-situ leach methods, shall not exceed: Effluent...). Except as provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source...
Code of Federal Regulations, 2011 CFR
2011-07-01
... concentration of pollutants discharged in mine drainage from mines, either open-pit or underground, that produce uranium ore, including mines using in-situ leach methods, shall not exceed: Effluent characteristic... provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source subject to...
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The concentration of pollutants discharged in mine drainage from mines, either open-pit or underground, that produce uranium ore, including mines using in-situ leach methods, shall not exceed: Effluent...). Except as provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source...
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration of pollutants discharged in mine drainage from mines, either open-pit or underground, that produce uranium ore, including mines using in-situ leach methods, shall not exceed: Effluent characteristic... provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source subject to...
USDA-ARS?s Scientific Manuscript database
The molar balance equations of indirect calorimetry are treated from the point of view of cause-effect relationship where the gaseous exchange rates representing the unknown causes heed to be inferred from a known noisy effect – gaseous concentrations. Two methods of such inversion are analyzed. Th...
A 3D front tracking method on a CPU/GPU system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo, Wurigen; Grove, John
2011-01-21
We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.
Analysis of Free Modeling Predictions by RBO Aleph in CASP11
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2015-01-01
The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue–residue contact prediction by EPC-map and contact–guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: Improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. PMID:26492194
Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio
2018-04-01
A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control point field measurements were obtained with the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
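For reference, the IDW predictor underlying the best-performing model is a weighted mean with weights proportional to an inverse power of distance, the power being the characteristic parameter to optimize. A minimal sketch with illustrative values:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: each prediction is a weighted mean of
    the measured field levels, with weights 1/d**power. `power` is the
    characteristic parameter the study optimises (value illustrative)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # guard against collocated points
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

# 2D example: field levels at 5 stations, predicted at 2 control points
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
lvl = np.array([1.0, 2.0, 1.5, 2.5, 2.0])
print(idw(pts, lvl, np.array([[.25, .25], [.75, .75]])))
```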
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and quantify saltmarsh biomass in quadrats. However broad scale application of these methods may not capture structural variability in vegetation resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric models, 3-D surface reconstruction and rasterised volume, and point cloud elevation histogram modelling techniques to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among them environmental monitoring. One difficult task is acquiring accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One notable advantage of this method over the traditional one is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis of APCP, bundle adjustment can be used to optimize the UAS sensor poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test it on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times the efficiency of traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates depends strongly on the initial values, making it unable to converge quickly or to reach a stable state. By contrast, the APCP method can handle quite complex UAS conditions during monitoring because it represents points in space by angles, including the case of sequential images of one object with zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP, derives the full bundle adjustment model and the corresponding nonlinear optimization problems, and analyzes the influence of the initial values on convergence and dependence through mathematical formulas. Finally, experiments on real aerial data show that the new model can, to a certain degree, effectively overcome the bottlenecks of the classical method, providing a new approach for faster and more efficient environmental monitoring.
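One plausible reading of the three-angle representation is sketched below: the azimuth and elevation of the ray from a main anchor camera, plus the parallax angle subtended at the point by two camera centres. This is a geometric illustration only; the paper's exact APCP conventions may differ:

```python
import numpy as np

def parallax_parameters(point, cam_main, cam_assoc):
    """Angles for a parallax-style parametrization of a 3D point:
    azimuth/elevation of the ray from a main anchor camera, plus the
    parallax angle subtended at the point by the two camera centres
    (which tends to zero for very distant points)."""
    ray = point - cam_main
    azimuth = np.arctan2(ray[1], ray[0])
    elevation = np.arcsin(ray[2] / np.linalg.norm(ray))
    u = (cam_main - point) / np.linalg.norm(cam_main - point)
    v = (cam_assoc - point) / np.linalg.norm(cam_assoc - point)
    parallax = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return azimuth, elevation, parallax
```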
Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method
NASA Astrophysics Data System (ADS)
Huth, A.; Smith, B. E.
2017-12-01
Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store any variables permanently, and can be replaced at any time step. MPM naturally tracks the ice front and grounding line at a subgrid scale. MPM also facilitates the implementation of rift propagation in arbitrary directions, and therefore shows promise for predicting calving events. To our knowledge, this is the first application of MPM to ice flow modeling.
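The particle-to-grid projection that MPM performs each time step can be sketched in one dimension as follows; this toy version uses linear hat shape functions and stands in for the authors' shallow-shelf solver, with all values illustrative:

```python
import numpy as np

def particles_to_grid(xp, mp, vp, grid_x):
    """One MPM projection step in 1D: scatter particle mass and momentum
    to grid nodes with linear (hat) shape functions, then recover nodal
    velocity. The momentum balance would be solved on these nodes before
    mapping the solution back to the particles."""
    dx = grid_x[1] - grid_x[0]
    mass = np.zeros_like(grid_x)
    mom = np.zeros_like(grid_x)
    for x, m, v in zip(xp, mp, vp):
        i = int((x - grid_x[0]) // dx)      # left node of the cell
        w_right = (x - grid_x[i]) / dx      # linear shape functions
        for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
            mass[node] += w * m
            mom[node] += w * m * v
    vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
    return mass, vel

grid = np.linspace(0.0, 10.0, 11)
m, v = particles_to_grid(np.array([2.3, 2.7, 6.1]),
                         np.ones(3), np.array([1.0, 1.0, -0.5]), grid)
```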
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements both a uniqueness constraint and permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such an NN would also be able to represent effectively the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.
Stationkeeping of Lissajous Trajectories in the Earth-Moon System with Applications to ARTEMIS
NASA Technical Reports Server (NTRS)
Folta, D. C.; Pavlak, T. A.; Howell, K. C.; Woodard, M. A.; Woodfork, D. W.
2010-01-01
In the last few decades, several missions have successfully exploited trajectories near the Sun-Earth L1 and L2 libration points. Recently, the collinear libration points in the Earth-Moon system have emerged as locations with immediate application. Most libration point orbits, in any system, are inherently unstable and must be controlled. To this end, several stationkeeping strategies are considered for application to ARTEMIS. Two approaches are examined to investigate the stationkeeping problem in this regime and the specific options available for ARTEMIS given the mission and vehicle constraints. (1) A baseline orbit-targeting approach controls the vehicle to remain near a nominal trajectory; a related global optimum search method searches all possible maneuver angles to determine an optimal angle and magnitude. (2) An orbit continuation method, with various formulations, determines maneuver locations and minimizes costs. Initial results indicate that consistent stationkeeping costs can be achieved with both approaches and that the costs are reasonable. These methods are then applied to Lissajous trajectories representing a baseline ARTEMIS libration orbit trajectory.
Feeley, Thomas Hugh; Anker, Ashley E; Evans, Melanie; Reynolds-Tylus, Tobias
2017-09-01
Examination of the efficacy of motor vehicle representative educational training and dissemination of promotional materials as a means to promote organ donation enrollments in New York State. Objective: to increase the number of New York State residents who consent to donation through department of motor vehicle transactions during the project period. Setting: county-run motor vehicle offices across New York State. Participants: customers who present to New York Department of Motor Vehicle offices and the representatives who work at designated bureaus. Intervention: point-of-decision materials including promotional posters, brochures, a website, and motor vehicle representative training sessions. Measures: reasons for enrollment decision, knowledge/experience with donation, monthly consent rates, and enrollment in the state organ and tissue registry. Customers who elected not to register reported no reason or uncertainty surrounding enrollment. The representatives reported experience with donation, discussion with customers, and a need for additional education on organ donation. Enrollment cards were mailed to 799 project staff; counties whose offices participated in the intervention did not indicate significantly higher monthly enrollments when comparing pre- to postenrollment rates. Use of point-of-decision materials and enrollment cards proved an inexpensive method to register customers, with a 3.6% return rate. Customers report a low (27%) enrollment rate and reticence to consent to donation. Educational training sessions with representatives did not yield significant enrollment increases when evaluating data at the county level.
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlations calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
NASA Astrophysics Data System (ADS)
Divine, D. V.; Godtliebsen, F.; Rue, H.
2012-01-01
The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainty in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm during the lack of linear features, typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigational and mapping results and very short computational time, indicating the potential use of the new algorithm with real-time systems. PMID:28481285
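The point-to-point fallback referred to here is the classical ICP iteration: nearest-neighbour association followed by a closed-form rigid transform. A minimal 2D sketch of one such iteration (not the authors' RKF code):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration in 2D: associate each source
    point with its nearest destination point, then solve the best rigid
    transform in closed form via SVD (Kabsch)."""
    nn = cKDTree(dst).query(src)[1]
    matched = dst[nn]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```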
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gwyther, Ceri L.; Jones, David L.; Golyshin, Peter N.
Highlights: Bioreduction is a novel on-farm storage option for livestock carcasses. Legislation demands that pathogens are contained and do not proliferate during carcass storage. We examined the survival of key pathogens in lab-scale bioreduction vessels. Pathogen numbers reduced in the resulting liquor waste and bioaerosols. The results indicate that bioreduction should be validated for industry use. - Abstract: The EU Animal By-Products Regulations generated the need for novel methods of storage and disposal of dead livestock. Bioreduction prior to rendering or incineration has been proposed as a practical and potentially cost-effective method; however, its biosecurity characteristics need to be elucidated. To address this, Salmonella enterica (serovars Senftenberg and Poona), Enterococcus faecalis, Campylobacter jejuni, Campylobacter coli and a lux-marked strain of Escherichia coli O157 were inoculated into laboratory-scale bioreduction vessels containing sheep carcass constituents. Numbers of all pathogens and the metabolic activity of E. coli O157 decreased significantly within the liquor waste over time, and only E. faecalis remained detectable after 3 months. Only very low numbers of Salmonella spp. and E. faecalis were detected in bioaerosols, and only at the initial stages of the trial. These results further indicate that bioreduction represents a suitable method of storing and reducing the volume of livestock carcasses prior to ultimate disposal.
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity of each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume-scale simulations.
An international point source outbreak of typhoid fever: a European collaborative investigation*
Stanwell-Smith, R. E.; Ward, L. R.
1986-01-01
A point source outbreak of Salmonella typhi, degraded Vi-strain 22, affecting 32 British visitors to Kos, Greece, in 1983 was attributed by a case-control study to the consumption of a salad at one hotel. This represents the first major outbreak of typhoid fever in which a salad has been identified as the vehicle. The source of the infection was probably a carrier in the hotel staff. The investigation demonstrates the importance of national surveillance, international cooperation, and epidemiological methods in the investigation and control of major outbreaks of infection. PMID:3488842
Processor farming in two-level analysis of historical bridge
NASA Astrophysics Data System (ADS)
Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.
2017-11-01
This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis developed to determine the influence of climatic loading on the current state of the bridge.
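The farming pattern itself is straightforward to sketch with a process pool: one task per macroscopic integration point, each returning the effective parameters from its RVE solve. The solver body below is a placeholder, not the coupled heat-moisture code:

```python
from multiprocessing import Pool

def solve_rve(args):
    """Stand-in for one meso-scale RVE solve attached to a macro-scale
    integration point; returns the effective parameters needed there.
    The body is illustrative -- a real solver would run an FE analysis."""
    point_id, macro_state = args
    effective_conductivity = 1.0 + 0.1 * macro_state  # placeholder model
    return point_id, effective_conductivity

if __name__ == "__main__":
    # Farm the independent meso-problems out to worker processes,
    # one task per macroscopic integration point.
    tasks = [(i, 0.01 * i) for i in range(1000)]
    with Pool(processes=8) as pool:
        effective = dict(pool.map(solve_rve, tasks))
```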
Method and apparatus of assessing down-hole drilling conditions
Hall, David R [Provo, UT; Pixton, David S [Lehl, UT; Johnson, Monte L [Orem, UT; Bartholomew, David B [Springville, UT; Fox, Joe [Spanish Fork, UT
2007-04-24
A method and apparatus for use in assessing down-hole drilling conditions are disclosed. The apparatus includes a drill string, a plurality of sensors, a computing device, and a down-hole network. The sensors are distributed along the length of the drill string and are capable of sensing localized down-hole conditions while drilling. The computing device is coupled to at least one sensor of the plurality of sensors. The data is transmitted from the sensors to the computing device over the down-hole network. The computing device analyzes data output by the sensors and representative of the sensed localized conditions to assess the down-hole drilling conditions. The method includes sensing localized drilling conditions at a plurality of points distributed along the length of a drill string during drilling operations; transmitting data representative of the sensed localized conditions to a predetermined location; and analyzing the transmitted data to assess the down-hole drilling conditions.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Two Archetypes of Motor Control Research.
Latash, Mark L
2010-07-01
This reply to the Commentaries is focused on two archetypes of motor control research, one based on physics and physiology and the other based on control theory and ideas of neural computations. The former approach, represented by the equilibrium-point hypothesis, strives to discover the physical laws and salient physiological variables that make purposeful coordinated movements possible. The latter approach, represented by the ideas of internal models and optimal control, tries to apply methods of control developed for man-made inanimate systems to the human body. Specific issues related to control with subthreshold membrane depolarization, motor redundancy, and the idea of synergies are briefly discussed.
Some numerical methods for the Hele-Shaw equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, N.
1994-03-01
Tryggvason and Aref used a boundary integral method and the vortex-in-cell method to evolve the interface between two fluids in a Hele-Shaw cell. The method gives excellent results for intermediate values of the nondimensional surface tension parameter. The results differ from the results predicted by McLean and Saffman for small surface tension. For large surface tension, there are some numerical problems. In this paper, we implement the method of Tryggvason and Aref but use the point vortex method instead of the vortex-in-cell method. A parametric spline is used to represent the interface. The finger widths obtained agree well with those predicted by McLean and Saffman. We conclude that the method of Tryggvason and Aref can provide excellent results but that the vortex-in-cell method may not be the method of choice for extreme values of the surface tension parameter. In a second method, we represent the interface with a Fourier representation. In addition, an alternative way of discretizing the boundary integral is used. Our results are compared to the linearized theory and the results of McLean and Saffman and are shown to be highly accurate. 21 refs., 4 figs., 2 tabs.
The small low SNR target tracking using sparse representation information
NASA Astrophysics Data System (ADS)
Yin, Lifan; Zhang, Yiqun; Wang, Shuo; Sun, Chenggang
2017-11-01
Tracking small targets, such as missile warheads, from a remote distance is a difficult task since the targets are "points" similar to the sensor's noise points. As a result, traditional tracking algorithms use only the information contained in a point measurement, such as position and intensity, as characteristics to identify targets among noise points. In fact, as a result of photon diffusion, a small target is not a point in the focal plane array: it occupies an area larger than one sensor cell. So, if we can take this geometric characteristic into account as a new dimension of information, it will be helpful in distinguishing targets from noise points. In this paper, we use a method named sparse representation (SR) to describe the geometric information of the target intensity and define it as the SR information of the target. Modeling the intensity spread and solving for its SR coefficients, the SR information is represented by establishing its likelihood function. Further, the SR information likelihood is incorporated into the conventional Probability Hypothesis Density (PHD) filter algorithm with point measurements. To illustrate the different performances of the algorithm with and without the SR information, the detection capability and estimation error are compared through simulation. Results demonstrate that the proposed method has higher estimation accuracy and a higher probability of detecting the target than the conventional algorithm without the SR information.
Strong imploding shock - The representative curve
NASA Astrophysics Data System (ADS)
Mishkin, E. A.; Alejaldre, C.
1981-02-01
The representative curve of the ideal gas behind the front of a spherically or cylindrically asymmetric strong imploding shock is derived. The partial differential equations of mass, momentum and energy conservation are reduced to a set of ordinary differential equations by the method of quasi-separation of variables, following which the reduced pressure and density as functions of the radius with respect to the shock front are explicit functions of coordinates defining the phase plane of the self-similar solution. The curve in phase space representing the state of the imploded gas behind the shock front is shown to pass through the point where the reduced pressure is maximum, which is located somewhat behind the shock front and ahead of the tail of the shock.
Fast generation of computer-generated holograms using wavelet shrinkage.
Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-01-09
Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
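A rough sketch of the shrinkage idea with PyWavelets, keeping only the largest coefficients of the transformed field; the wavelet, decomposition level and retention fraction are illustrative, not the paper's choices:

```python
import numpy as np
import pywt

def shrink_field(field, wavelet="db4", keep=0.05):
    """Wavelet shrinkage of a complex amplitude field: transform, keep
    only the largest `keep` fraction of coefficients, reconstruct.
    Real and imaginary parts are handled separately since pywt operates
    on real arrays."""
    def shrink(a):
        coeffs = pywt.wavedec2(a, wavelet, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0      # drop small coefficients
        return pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
            wavelet)
    return shrink(field.real) + 1j * shrink(field.imag)
```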
Code of Federal Regulations, 2010 CFR
2010-07-01
..., radium and vanadium including mill-mine facilities and mines using in-situ leach methods shall not exceed...). Except as provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source... available (BPT): (a) The concentration of pollutants discharged in mine drainage from mines, either open-pit...
Code of Federal Regulations, 2011 CFR
2011-07-01
..., radium and vanadium including mill-mine facilities and mines using in-situ leach methods shall not exceed...). Except as provided in subpart L of this part and 40 CFR 125.30 through 125.32, any existing point source... available (BPT): (a) The concentration of pollutants discharged in mine drainage from mines, either open-pit...
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.
Liu, Zhiyuan; Wang, Changhui
2015-10-23
In this paper, a new method is developed for compensating mass air flow (MAF) sensor error and updating the error map (or lookup table) online, accounting for installation and aging effects in a diesel engine. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained with the proposed method.
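A hedged sketch of the regression form described above: a 2D lookup table evaluated by bilinear interpolation can be written as phi(x, y) @ theta, where the membership vector phi has at most four nonzero weights. The grid ranges and names below are illustrative, not the paper's.

```python
import numpy as np

def regressor(x, y, xg, yg):
    """Bilinear membership vector phi so that map(x, y) = phi @ theta."""
    i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
    j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
    u = (x - xg[i]) / (xg[i + 1] - xg[i])
    v = (y - yg[j]) / (yg[j + 1] - yg[j])
    phi = np.zeros(len(xg) * len(yg))
    for di, dj, w in ((0, 0, (1 - u) * (1 - v)), (1, 0, u * (1 - v)),
                      (0, 1, (1 - u) * v), (1, 1, u * v)):
        phi[(i + di) * len(yg) + (j + dj)] = w   # membership weights
    return phi

xg, yg = np.linspace(0, 60, 7), np.linspace(800, 4000, 9)  # fuel mass, speed (toy)
theta = np.zeros(len(xg) * len(yg))                        # map parameters to adapt
phi = regressor(30.0, 2200.0, xg, yg)
error_estimate = phi @ theta                               # linear in theta
```

Because the model is linear in theta, the adaptive observer can update the map parameters with standard recursive least-squares style corrections.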
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
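A minimal synthetic illustration of why cross-correlating two detectors isolates the shared (single-scattering) component: independent noise averages out of the cross term. The signal model below is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
common = rng.normal(size=n)               # single scattering seen by both detectors
a = common + 0.8 * rng.normal(size=n)     # detector A: shared part + own noise
b = common + 0.8 * rng.normal(size=n)     # detector B: shared part + own noise

def xcorr_zero_lag(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(np.dot(x, y) / len(x))

print(xcorr_zero_lag(a, b))                    # ~ var(common): one curve point
print(xcorr_zero_lag(a, rng.normal(size=n)))   # ~ 0 for uncorrelated signals
```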
A new template matching method based on contour information
NASA Astrophysics Data System (ADS)
Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong
2014-11-01
Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time-consuming that they cannot be applied to many real-time applications. Closed-contour matching is a popular class of template matching methods. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the model construction stage, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image; the three points are selected so that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point on the distance image stores the nearest distance between the current point and the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. From these we obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the search, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple additions and multiplications. In the fine searching stage, the initial RST parameters are refined over a discrete set of candidates to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
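A hedged sketch of the verification step described above: project candidate contour points into the template's distance image and score the mean distance using additions only. The toy circular template and pixel sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

template = np.ones((100, 100), dtype=bool)
phi = np.linspace(0, 2 * np.pi, 200)
ty = (50 + 30 * np.sin(phi)).astype(int)
tx = (50 + 30 * np.cos(phi)).astype(int)
template[ty, tx] = False                    # contour pixels set to 0
dist = distance_transform_edt(template)     # distance to nearest contour pixel

def mean_contour_distance(points):
    """Mean distance of candidate contour points to the template contour."""
    y = np.clip(points[:, 0].round().astype(int), 0, 99)
    x = np.clip(points[:, 1].round().astype(int), 0, 99)
    return dist[y, x].mean()                # table lookups and one mean: cheap

candidate = np.stack([ty + 1, tx], axis=1)  # contour shifted by one pixel
print(mean_contour_distance(candidate))     # small value for a 1-px offset
```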
Point-particle method to compute diffusion-limited cellular uptake.
Sozza, A; Piazza, F; Cencini, M; De Lillo, F; Boffetta, G
2018-02-01
We present an efficient point-particle approach to simulate reaction-diffusion processes of spherical absorbing particles in the diffusion-limited regime, as simple models of cellular uptake. The exact solution for a single absorber is used to calibrate the method, linking the numerical parameters to the physical particle radius and uptake rate. We study the configurations of multiple absorbers of increasing complexity to examine the performance of the method by comparing our simulations with available exact analytical or numerical results. We demonstrate the potential of the method to resolve the complex diffusive interactions, here quantified by the Sherwood number, measuring the uptake rate in terms of that of isolated absorbers. We implement the method in a pseudospectral solver that can be generalized to include fluid motion and fluid-particle interactions. As a test case of the presence of a flow, we consider the uptake rate by a particle in a linear shear flow. Overall, our method represents a powerful and flexible computational tool that can be employed to investigate many complex situations in biology, chemistry, and related sciences.
NASA Technical Reports Server (NTRS)
Kassemi, M.; Naraghi, M. H. N.
1993-01-01
A new numerical method is presented for the analysis of combined natural convection and radiation heat transfer with applications in many engineering situations such as materials processing, combustion and fire research. Because of the recent interest in the low gravity environment of space, attention is devoted to both 1-g and low-g applications. The two-dimensional mathematical model is represented by a set of coupled nonlinear integro-partial differential equations. Radiative exchange is formulated using the Discrete Exchange Factor method (DEF). This method considers point to point exchange and provides accurate results over a wide range of radiation parameters. Numerical results show that radiation significantly influences the flow and heat transfer in both low-g and 1-g applications. In the low-g environment, convection is weak, and radiation can easily become the dominant heat transfer mode. It is also shown that volumetric heating by radiation gives rise to an intricate cell pattern in the top heated enclosure.
Regional mapping of soil parent material by machine learning based on point data
NASA Astrophysics Data System (ADS)
Lacoste, Marine; Lemercier, Blandine; Walter, Christian
2011-10-01
A machine learning system (MART) has been used to predict soil parent material (SPM) at the regional scale with a 50-m resolution. The use of point-specific soil observations as training data was tested as a replacement for the soil maps introduced in previous studies, with the aim of generating a more even distribution of training data over the study area and reducing information uncertainty. The 27,020-km² study area (Brittany, northwestern France) contains mainly metamorphic, igneous and sedimentary substrates. However, superficial deposits (aeolian loam, colluvial and alluvial deposits) very often represent the actual SPM and are typically under-represented in existing geological maps. To calibrate the predictive model, a total of 4920 point soil descriptions were used as training data along with 17 environmental predictors (terrain attributes derived from a 50-m DEM, emissions of K, Th and U obtained by means of airborne gamma-ray spectrometry, geological variables at the 1:250,000 scale and land use maps obtained by remote sensing). Model predictions were then compared: i) during SPM model creation, to point data not used in model calibration (internal validation); ii) to the entire point dataset (point validation); and iii) to existing detailed soil maps (external validation). The internal, point and external validation accuracy rates were 56%, 81% and 54%, respectively. Aeolian loam was one of the three most closely predicted substrates. Poor prediction results were associated with uncommon materials and areas of high geological complexity, i.e. areas where the existing maps used for external validation were also imprecise. The resulting predictive map turned out to be more accurate than existing geological maps and moreover indicated surface deposits whose spatial coverage is consistent with actual knowledge of the area. This method proves quite useful for predicting SPM in areas where conventional mapping techniques might be too costly or lengthy, or where soil maps are insufficient for use as training data. In addition, the method produces repeatable and interpretable results whose accuracy can be assessed objectively.
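As a rough illustration of the setup (not the authors' data), a MART-style model can be emulated with gradient-boosted trees trained on point observations with 17 covariates; the labels, shapes and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4920, 17))      # 17 predictors (DEM terrain, gamma-ray, ...)
y = rng.integers(0, 5, size=4920)    # SPM classes (toy labels)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("internal validation accuracy:", model.score(X_val, y_val))
```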
Recognition of isotropic plane target from RCS diagram
NASA Astrophysics Data System (ADS)
Saillard, J.; Chassay, G.
1981-06-01
The use of electromagnetic waves for the recognition of a structure represented by point scatterers is seen as posing a fundamental problem. It is noted that much research has been done on this subject and that the study of aircraft observed in the yaw plane gives interesting results. To apply these methods, however, it is necessary to use many sophisticated acquisition systems. A method is proposed which can be applied to plane structures composed of isotropic scatterers. The method is considered to be of interest because it uses only power measurements and requires only a classical tracking radar.
A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.
2015-01-01
A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.
Makeyev, Oleksandr; Lee, Colin; Besio, Walter G
2017-07-01
Tripolar concentric ring electrodes are showing great promise in a range of applications including brain-computer interfaces and seizure onset detection due to their superiority to conventional disc electrodes, in particular in the accuracy of surface Laplacian estimation. Recently, we proposed a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 that allows cancellation of all the truncation terms up to the order of 2n. This approach has been used to introduce novel multipolar and variable inter-ring distances concentric ring electrode configurations verified using the finite element method. The obtained results suggest their potential to improve Laplacian estimation compared to currently used constant inter-ring distances tripolar concentric ring electrodes. One of the main limitations of the proposed (4n + 1)-point method is that the radius of the central disc and the widths of the concentric rings are not included and therefore cannot be optimized. This study incorporates these two parameters by representing the central disc and both concentric rings as clusters of points with specific radius and widths, respectively, as opposed to the currently used single point and concentric circles. A proof-of-concept Laplacian estimate is derived for a tripolar concentric ring electrode with non-negligible radius of the central disc and non-negligible widths of the concentric rings, clearly demonstrating how both of these parameters can be incorporated into the (4n + 1)-point method.
Simulations of Sea-Ice Dynamics Using the Material-Point Method
NASA Technical Reports Server (NTRS)
Sulsky, D.; Schreyer, H.; Peterson, K.; Nguyen, G.; Coon, G.; Kwok, R.
2006-01-01
In recent years, the availability of large volumes of recorded ice motion derived from high-resolution SAR data has provided an amazingly detailed look at the deformation of the ice cover. The deformation is dominated by the appearance of linear kinematic features that have been associated with the presence of leads. These remarkable data put us in a position to begin detailed evaluation of current coupled mechanical and thermodynamic models of sea ice. This presentation will describe the material point method (MPM) for solving these model equations. MPM is a numerical method for continuum mechanics that combines the best aspects of Lagrangian and Eulerian discretizations. The material points provide a Lagrangian description of the ice that models convection naturally. Thus, properties such as ice thickness and compactness are computed in a Lagrangian frame and do not suffer from errors associated with Eulerian advection schemes, such as artificial diffusion, dispersion, or oscillations near discontinuities. This desirable property is illustrated by solving transport of ice in uniform, rotational and convergent velocity fields. Moreover, the ice geometry is represented by unconnected material points rather than a grid. This representation facilitates modeling the large deformations observed in the Arctic, as well as localized deformation along leads, and admits a sharp representation of the ice edge. MPM also easily allows the use of any ice constitutive model. The versatility of MPM is demonstrated by using two constitutive models for simulations of wind-driven ice. The first model is a standard viscous-plastic model with two thickness categories. The MPM solution to the viscous-plastic model agrees with previously published results using finite elements. The second model is a new elastic-decohesive model that explicitly represents leads. The model includes a mechanism to initiate leads, and to predict their orientation and width. The elastic-decohesion model can provide similar overall deformation as the viscous-plastic model; however, explicit regions of opening and shear are predicted. Furthermore, the efficiency of MPM with the elastic-decohesive model is competitive with the current best methods for sea ice dynamics. Simulations will also be presented for an area of the Beaufort Sea, where predictions can be validated against satellite observations of the Arctic.
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating with a certain probability the size of detectable deformations, and to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
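A minimal sketch of the central device, under the simplifying assumption that a nonstochastic constraint can be emulated by very heavily weighted pseudo-observations: the hypothesis C x = c is appended to the design matrix, and the residuals feed the test statistic used to compare hypotheses. The toy network and numbers are made up.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # design matrix (toy network)
y = np.array([1.02, 2.01, 2.95])                      # observations
C = np.array([[1.0, -1.0]])                           # hypothesis: x1 - x2 = -1
c = np.array([-1.0])

w = 1e8                                               # weight >> observation weights
A_aug = np.vstack([A, np.sqrt(w) * C])
y_aug = np.concatenate([y, np.sqrt(w) * c])
x_hat, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)

resid = y - A @ x_hat                  # residuals under the constrained model
print(x_hat, resid @ resid)            # compare fit quality across hypotheses
```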
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, Seth; Evans, Charles R.
2010-10-15
We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^{-1} typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Day, S. M.
2006-12-01
Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods: the Thick Fault (TF) proposed by Madariaga et al (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by a spatial step dx for the SG model and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Solutions of our 3D test problem show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions and is also very efficient. Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial for reducing uncertainties in numerical simulations of earthquake source dynamics and ground motion, and is therefore important for improving our understanding of earthquake physics in general.
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
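A small sketch of the volume-weighted point-counting step (the Cavalieri principle), with illustrative counts and spacings; the real protocol prescribes grid design and slice selection in much more detail.

```python
def cavalieri_volume(hit_counts, grid_spacing_mm, slice_thickness_mm):
    """Organ volume from points hitting the tissue on equidistant slices.

    Each counted point represents grid_spacing^2 of area on its slice,
    and each slice represents slice_thickness of depth.
    """
    area_per_point = grid_spacing_mm ** 2                       # mm^2 per point
    return sum(hit_counts) * area_per_point * slice_thickness_mm  # mm^3

counts = [12, 31, 48, 52, 40, 18]   # toy counts on 6 systematic slices
print(cavalieri_volume(counts, grid_spacing_mm=5.0, slice_thickness_mm=10.0))
```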
Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2016-06-01
Since its introduction in 1954, the Soil Conservation Service curve number (SCS-CN) method has become the standard tool, in practice, for estimating an event-based rainfall-runoff response. However, because of its empirical origins, the SCS-CN method is restricted to certain geographic regions and land use types. Moreover, it does not describe the spatial variability of runoff. To move beyond these limitations, we present a new theoretical framework for spatially lumped, event-based rainfall-runoff modeling. In this framework, we describe the spatially lumped runoff model as a point description of runoff that is upscaled to a watershed area based on probability distributions that are representative of watershed heterogeneities. The framework accommodates different runoff concepts and distributions of heterogeneities, and in doing so, it provides an implicit spatial description of runoff variability. Heterogeneity in storage capacity and soil moisture are the basis for upscaling a point runoff response and linking ecohydrological processes to runoff modeling. For the framework, we consider two different runoff responses for fractions of the watershed area: "prethreshold" and "threshold-excess" runoff. These occur before and after infiltration exceeds a storage capacity threshold. Our application of the framework results in a new model (called SCS-CNx) that extends the SCS-CN method with the prethreshold and threshold-excess runoff mechanisms and an implicit spatial description of runoff. We show proof of concept in four forested watersheds and further that the resulting model may better represent geographic regions and site types that previously have been beyond the scope of the traditional SCS-CN method.
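For reference, the classical point-scale SCS-CN event runoff in code, using the standard metric form of the retention parameter and the usual initial abstraction Ia = 0.2S; the paper's SCS-CNx framework upscales this kind of point response over distributions of watershed heterogeneity.

```python
def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
    """Event runoff depth (mm) from rainfall P_mm and curve number CN."""
    S = 25400.0 / CN - 254.0      # potential retention (mm), metric form
    Ia = ia_ratio * S             # initial abstraction (mm)
    if P_mm <= Ia:
        return 0.0                # all rainfall abstracted, no runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(P_mm=60.0, CN=75))   # runoff depth in mm for one event
```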
Fischer, Claudia; Voss, Andreas
2014-01-01
Hypertensive pregnancy disorders affect 6 to 8 percent of all pregnancies and can cause severe complications for the mother and the fetus. The aim of this study was to develop a new method suitable for a three-dimensional coupling analysis. To this end, the three-dimensional segmented Poincaré plot analysis (SPPA3) is introduced, which extends the Poincaré analysis to a cubic box model representation. The box representing the three-dimensional phase space is (following the SPPA method) subdivided into 12×12×12 equal cubelets according to the predefined ranges of the signals, and the single probabilities of points occurring in a specific cubelet, relative to the total number of points, are calculated. From 10 healthy non-pregnant women, 66 healthy pregnant women and 56 hypertensive pregnant women suffering from chronic hypertension, gestational hypertension and preeclampsia, 30 minutes of beat-to-beat intervals (BBI), noninvasive blood pressure and respiration (RESP) were continuously recorded, and the couplings between the different signals were analyzed. The suitability of SPPA3 for screening was confirmed by multivariate discriminant analysis differentiating between all pregnant women and those with preeclampsia (the index BBI3_SBP9_RESP6/BBI8_SBP11_RESP4 leads to an area under the ROC curve of AUC = 91.2%). In conclusion, SPPA3 could be a useful method for enhanced risk stratification in pregnant women.
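A hedged sketch of the cubelet binning described above: each (BBI, SBP, RESP) sample is assigned to one of 12×12×12 cubelets, and the single probabilities are cubelet occupancy divided by the total number of points. The signal ranges and synthetic data are assumptions.

```python
import numpy as np

def sppa3_probabilities(bbi, sbp, resp, nbins=12):
    pts = np.stack([bbi, sbp, resp], axis=1)
    lo, hi = pts.min(axis=0), pts.max(axis=0)        # predefined signal ranges
    idx = np.floor((pts - lo) / (hi - lo + 1e-12) * nbins).astype(int)
    idx = np.clip(idx, 0, nbins - 1)
    hist = np.zeros((nbins, nbins, nbins))
    for i, j, k in idx:
        hist[i, j, k] += 1
    return hist / len(pts)                           # single probabilities

rng = np.random.default_rng(0)
P = sppa3_probabilities(rng.normal(800, 50, 1800),   # BBI in ms (synthetic)
                        rng.normal(120, 10, 1800),   # SBP in mmHg (synthetic)
                        rng.normal(0, 1, 1800))      # respiration (a.u.)
print(P[3, 9, 6])   # one cubelet probability, cf. the index BBI3_SBP9_RESP6
```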
Robust estimation of pulse wave transit time using group delay.
Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C
2014-03-01
To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves, flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point, using the half-maximum of the curves, and TT-wave, using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While TT-GD and TT-wave produced comparable results for velocity and flow waveforms, TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was also the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
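A minimal sketch of the group-delay idea on synthetic waveforms: for a pure delay, the cross-spectrum phase falls linearly with angular frequency, and its negative slope recovers Δt. The sampling rate, waveform shape, and frequency band are assumptions.

```python
import numpy as np

fs = 250.0                                     # samples/s (assumed)
t = np.arange(0, 2, 1 / fs)
asc = np.exp(-((t % 1.0) - 0.3) ** 2 / 0.01)   # toy periodic flow waveform
delay = 0.040                                  # 40 ms true transit time
desc = np.interp(t - delay, t, asc, left=0.0)  # delayed copy (descending aorta)

X, Y = np.fft.rfft(asc), np.fft.rfft(desc)
f = np.fft.rfftfreq(len(t), 1 / fs)
band = (f > 0) & (f < 10.0) & (np.abs(X) > 1e-3 * np.abs(X).max())
phase = np.unwrap(np.angle(Y[band] * np.conj(X[band])))   # cross-spectrum phase
slope = np.polyfit(2 * np.pi * f[band], phase, 1)[0]      # d(phase)/d(omega)
print(-slope)                                  # ~ 0.040 s group delay
```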
Moja, Lorenzo; Kwag, Koren Hyogene
2015-01-01
The structure and aim of continuing medical education (CME) is shifting from the passive transmission of knowledge to a competency-based model focused on professional development. Self-directed learning is emerging as the foremost educational method for advancing competency-based CME. In a field marked by the constant expansion of knowledge, self-directed learning allows physicians to tailor their learning strategy to meet the information needs of practice. Point-of-care information services are innovative tools that provide health professionals with digested evidence at the front line to guide decision making. By mobilising self-directed learning to meet the information needs of clinicians at the bedside, point-of-care information services represent a promising platform for competency-based CME. Several points, however, must be considered to enhance the accessibility and development of these tools to improve competency-based CME and the quality of care. PMID:25655251
Joint surface modeling with thin-plate splines.
Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D
1999-10-01
Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness and joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited by the complex organizational aspect of working with surface patches and by the difficulty of modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems in dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. To test the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella and from 100 to 300 microns on the femur. The TPS was found to be an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
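A hedged sketch of TPS fitting on scattered, unordered points using SciPy's RBFInterpolator with the thin-plate-spline kernel (available from SciPy 1.7); the smoothing parameter trades exact interpolation against noise suppression. The test surface is made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))     # scattered, unordered sample points
z = np.sin(2 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + 0.01 * rng.normal(size=500)

tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50),
                            np.linspace(-1, 1, 50)), axis=-1).reshape(-1, 2)
surface = tps(grid).reshape(50, 50)        # one global function, no patches
```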
The Intelligence-Religiosity Nexus: A Representative Study of White Adolescent Americans
ERIC Educational Resources Information Center
Nyborg, Helmuth
2009-01-01
The present study examined whether IQ relates systematically to denomination and income within the framework of the "g" nexus, using representative data from the National Longitudinal Study of Youth (NLSY97). Atheists score 1.95 IQ points higher than Agnostics, 3.82 points higher than Liberal persuasions, and 5.89 IQ points higher than…
Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K
2002-08-01
Calibration, i.e. the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimation of glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. The aim was to compare two calibration procedures. In the one-point calibration, which assumes that the background current I0 is negligible, the sensitivity S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists in the determination of a sensor sensitivity S and of a background current I0 by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The subsequent estimation of G(t) is given by G(t) = (I(t) - I0)/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points falling in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Use of two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
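The two procedures, as stated in the abstract, expressed directly in code; the units and numbers are illustrative only.

```python
def one_point(I_t, I_cal, G_cal):
    """One-point calibration: assumes the background current is negligible."""
    S = I_cal / G_cal                 # sensitivity from one (I, G) pair
    return I_t / S

def two_point(I_t, I1, G1, I2, G2):
    """Two-point calibration: solves for sensitivity and background current."""
    S = (I2 - I1) / (G2 - G1)         # slope of I versus G
    I0 = I1 - S * G1                  # background current
    return (I_t - I0) / S

print(one_point(I_t=12.0, I_cal=10.0, G_cal=5.6))            # glucose estimate
print(two_point(I_t=12.0, I1=10.0, G1=5.6, I2=16.0, G2=9.0))
```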
METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS
Hough, P.V.C.
1962-12-18
This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded, the sum total of said recorded coordinate positions being representative of the complex pattern. (AEC)
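A compact sketch of the line-detection idea this patent introduced (now known as the Hough transform), in the modern rho-theta parameterization rather than the patent's slope/intercept display; the grid resolutions are arbitrary.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=200.0):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)    # one curve per point
        r = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).round().astype(int)
        ok = (r >= 0) & (r < n_rho)
        acc[r[ok], np.arange(n_theta)[ok]] += 1          # vote in the transform
    return acc, thetas

pts = [(x, 20) for x in range(50)]              # 50 collinear points (y = 20)
acc, thetas = hough_lines(pts)
r, c = np.unravel_index(acc.argmax(), acc.shape)
print(acc[r, c], thetas[c])                     # 50 votes where curves intersect
```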
Human action recognition based on spatial-temporal descriptors using key poses
NASA Astrophysics Data System (ADS)
Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing
2014-11-01
Human action recognition is an important area of pattern recognition today due to its direct application in various settings such as surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on the key poses of the human silhouette and a spatio-temporal feature. First, the contour points of the human silhouette are extracted, and the key poses are learned by K-means clustering of the Euclidean distances between each contour point and the centre point of the silhouette; the type of each action is then labeled for later matching. Second, we obtain the trajectory of the centre point of each frame and create a spatio-temporal feature, denoted W, that describes the motion direction and speed of each action. The value W encodes the location and temporal order of each point on the trajectory. Finally, matching is performed by comparing the key poses and W between training sequences and test sequences; the nearest-neighbor sequence is found and its label supplies the final result. Experiments on the publicly available Weizmann dataset show that the proposed method improves accuracy by distinguishing ambiguous poses and increases suitability for real-time applications by reducing the computational cost.
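A hedged sketch of the key-pose step: the per-frame feature is the set of distances from contour points to the silhouette centre, and K-means over frames yields the key poses. The contours below are random stand-ins, and the fixed-length sampling and normalization are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def frame_feature(contour_xy, n_samples=64):
    """Distances from contour points to the silhouette centre, fixed length."""
    centre = contour_xy.mean(axis=0)
    d = np.linalg.norm(contour_xy - centre, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)   # resample
    return d[idx] / d.max()                                   # scale invariance

rng = np.random.default_rng(0)
contours = [rng.uniform(0, 100, size=(200, 2)) for _ in range(300)]  # toy frames
X = np.stack([frame_feature(c) for c in contours])
key_poses = KMeans(n_clusters=8, n_init=10).fit(X)   # 8 learned key poses
print(key_poses.labels_[:10])                        # key-pose label per frame
```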
Legibility Evaluation Using Point-of-regard Measurement
NASA Astrophysics Data System (ADS)
Saito, Daisuke; Saito, Keiichi; Saito, Masao
Web site visibility has become important because of the rapid dissemination of the World Wide Web, and combinations of foreground and background colors are crucial in providing high visibility. In our previous studies, the visibilities of several web-safe color combinations were examined using a psychological method; simple stimuli were used because of experimental restrictions. In this paper, the legibility of sentences on web sites was examined using a psychophysiological method, point-of-regard measurement, to obtain additional practical data. Ten people with normal color vision, aged 21 to 29, were recruited. The number of characters per line was the same on each page, and the four representative achromatic web-safe colors, #000000, #666666, #999999 and #CCCCCC, were examined. The reading time per character and the gaze time per line were obtained from point-of-regard measurement, and the normalized reading times and gaze times of the three colors were calculated and compared. The results showed that reading and gaze times lengthen at the same rate as the contrast decreases. It was therefore indicated that the legibility of color combinations can be estimated by point-of-regard measurement.
An Automated Method for Landmark Identification and Finite-Element Modeling of the Lumbar Spine.
Campbell, Julius Quinn; Petrella, Anthony J
2015-11-01
The purpose of this study was to develop a method for the automated creation of finite-element models of the lumbar spine. Custom scripts were written to extract bone landmarks of lumbar vertebrae and assemble L1-L5 finite-element models. End-plate borders, ligament attachment points, and facet surfaces were identified. Landmarks were identified to maintain mesh correspondence between meshes for later use in statistical shape modeling. 90 lumbar vertebrae were processed, creating 18 subject-specific finite-element models. Finite-element model surfaces and ligament attachment points were reproduced within 10⁻⁵ mm of the bone surface, including the critical contact surfaces of the facets. Element quality exceeded specifications in 97% of elements for the 18 models created. The current method is capable of producing subject-specific finite-element models of the lumbar spine with good accuracy, quality, and robustness. The automated methods developed represent an advancement in the state of the art of subject-specific lumbar spine modeling to a scale not possible with prior manual and semiautomated methods.
Analysis of free modeling predictions by RBO aleph in CASP11.
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2016-09-01
The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. Proteins 2016; 84(Suppl 1):87-104. © 2015 Wiley Periodicals, Inc.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
Determination of piezo-optic coefficients of crystals by means of four-point bending.
Krupych, Oleg; Savaryn, Viktoriya; Krupych, Andriy; Klymiv, Ivan; Vlokh, Rostyslav
2013-06-10
A technique developed recently for determining piezo-optic coefficients (POCs) of isotropic optical media, which represents a combination of digital imaging laser interferometry and a classical four-point bending method, is generalized and applied to a single-crystalline anisotropic material. The peculiarities of measuring procedures and data processing for the case of optically uniaxial crystals are described in detail. The capabilities of the technique are tested on the example of canonical nonlinear optical crystal LiNbO3. The high precision achieved in determination of the POCs for isotropic and anisotropic materials testifies that the technique should be both versatile and reliable.
GEOS 3 data processing for the recovery of geoid undulations and gravity anomalies
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1979-01-01
The paper discusses the analysis of GEOS 3 altimeter data for the determination of geoid heights and point and mean gravity anomalies. Methods are presented for determining the mean anomalies and mean undulations from the GEOS 3 altimeter data available by the end of September 1977 without a complete set of precise orbits. Extensive editing is applied to remove questionable data, although no filtering of the data is carried out. An adjustment process is carried out to eliminate orbit error and altimeter bias. Representative point anomaly values are computed to investigate anomaly behavior across the Bonin Trench and over the Patton seamounts.
Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.
2008-01-01
The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, and despite its frequent use, the method involves so-far untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well the method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the Jaccard similarity coefficient had smaller errors than those based on a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate bioclimatic conditions represented by the plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results with regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as a starting point for the interpretation of past climatic conditions.
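A minimal sketch of the analog step with the Jaccard coefficient: score every modern assemblage against the target and average the climates of the closest analogs. The data shapes and the choice of k are assumptions.

```python
import numpy as np

def jaccard(a, b):
    """Similarity of two presence/absence vectors: |A and B| / |A or B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def analog_estimate(target, assemblages, climates, k=5):
    sims = np.array([jaccard(target, a) for a in assemblages])
    best = np.argsort(sims)[-k:]              # k most similar modern analogs
    return climates[best].mean(axis=0)        # average their climates

rng = np.random.default_rng(0)
assemblages = rng.random((1000, 300)) < 0.1   # 1000 grid points, 300 taxa
climates = rng.normal(size=(1000, 2))         # e.g. GDD and a moisture index
print(analog_estimate(assemblages[0], assemblages[1:], climates[1:]))
```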
Multistate metadynamics for automatic exploration of conical intersections
NASA Astrophysics Data System (ADS)
Lindner, Joachim O.; Röhr, Merle I. S.; Mitrić, Roland
2018-05-01
We introduce multistate metadynamics for automatic exploration of conical intersection seams between adiabatic Born-Oppenheimer potential energy surfaces in molecular systems. By choosing the energy gap between the electronic states as a collective variable the metadynamics drives the system from an arbitrary ground-state configuration toward the intersection seam. Upon reaching the seam, the multistate electronic Hamiltonian is extended by introducing biasing potentials into the off-diagonal elements, and the molecular dynamics is continued on a modified potential energy surface obtained by diagonalization of the latter. The off-diagonal bias serves to locally open the energy gap and push the system to the next intersection point. In this way, the conical intersection energy landscape can be explored, identifying minimum energy crossing points and the barriers separating them. We illustrate the method on the example of furan, a prototype organic molecule exhibiting rich photophysics. The multistate metadynamics reveals plateaus on the conical intersection energy landscape from which the minimum energy crossing points with characteristic geometries can be extracted. The method can be combined with the broad spectrum of electronic structure methods and represents a generally applicable tool for the exploration of photophysics and photochemistry in complex molecules and materials.
Circular Data Images for Directional Data
NASA Technical Reports Server (NTRS)
Morpet, William J.
2004-01-01
Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
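A minimal sketch of a circular data image using matplotlib's cyclic hsv colormap, so that -180 degrees and +180 degrees render in the same color and no cross-over discontinuity appears.

```python
import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[-1:1:200j, -1:1:200j]
angle = np.arctan2(y, x)               # direction field in (-pi, pi]

# hsv is periodic: the color at -pi equals the color at +pi, so there is
# no visible seam where the angle wraps around.
plt.imshow(angle, cmap="hsv", vmin=-np.pi, vmax=np.pi)
plt.colorbar(label="direction (rad)")
plt.savefig("circular_data_image.png")
```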
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
U.S. Navy Regional Climatic Study of the Mozambique Channel and Adjacent Waters
1989-07-01
[Garbled figure residue: a rose diagram of tropical cyclone movements, based on 277 observations.] ...method best represents the climate. At this point, however, it is possible only to bring the issue to the attention of the data users. Even without the... temperatures are recorded with a fairly high frequency in marine observations. The principal methods for sampling are with ship water-intake thermometers and
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method, which solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
NASA Astrophysics Data System (ADS)
Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.
2013-10-01
This article presents a generic and efficient method to register terrestrial mobile data with imperfect localization to a geographic database with better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause an important drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).
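A hedged sketch of a single point-to-plane step: the translation minimizing the sum of squared point-to-plane distances solves a small linear least-squares system. The paper additionally makes the correction a piecewise linear function of time with a rigidity penalty, which would add rows coupling successive translation nodes; the plane data below are toys.

```python
import numpy as np

def point_to_plane_translation(points, closest, normals):
    """Translation t minimizing sum(((p + t - q) . n)^2) over all matches."""
    A = normals                                    # one row per correspondence
    b = np.einsum("ij,ij->i", normals, closest - points)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)      # minimum-norm solution
    return t                                       # 3-vector drift update

rng = np.random.default_rng(0)
q = rng.uniform(0, 10, (500, 3))
q[:, 2] = 0.0                                      # model plane z = 0
n = np.tile([0.0, 0.0, 1.0], (500, 1))             # plane normals
p = q + [0.0, 0.0, 0.3]                            # scan drifted 0.3 m upward
print(point_to_plane_translation(p, q, n))         # ~ [0, 0, -0.3]
```

Note that point-to-plane residuals only constrain motion along the normals, which is why the output here is determined only in z; the rigidity term in the paper regularizes the remaining freedom over time.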
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking because it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The method was evaluated on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied the method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT scan of the same patient. On phantom point clouds, the method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = -2.7×10⁻³ mm⁻¹, σ_recon = 7.0×10⁻³ mm⁻¹) and (μ_CT = -2.5×10⁻³ mm⁻¹, σ_CT = 5.3×10⁻³ mm⁻¹), respectively. The agreement of local geometric properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generates a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements, and serves as an important first step for further development of motion tracking methods during radiotherapy.
NASA Technical Reports Server (NTRS)
Jones, Robert T
1937-01-01
A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.
Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1989-01-01
Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.
NASA Astrophysics Data System (ADS)
Coleman, Victoria A.; Jämting, Åsa K.; Catchpoole, Heather J.; Roy, Maitreyee; Herrmann, Jan
2011-10-01
Nanoparticles and products incorporating nanoparticles are a growing branch of the nanotechnology industry. They have found a broad market, including the cosmetic, health care and energy sectors. Accurate and representative determination of particle size distributions in such products is critical at all stages of the product lifecycle, extending from quality control at the point of manufacture to environmental fate at the point of disposal. Determination of particle size distributions is non-trivial, and is complicated by the fact that different techniques measure different quantities, leading to differences in the measured size distributions. In this study we use both mono- and multi-modal dispersions of nanoparticle reference materials to compare and contrast traditional and novel methods for particle size distribution determination. The methods investigated include ensemble techniques such as dynamic light scattering (DLS) and differential centrifugal sedimentation (DCS), as well as single-particle techniques such as transmission electron microscopy (TEM) and the microchannel resonator (an ultra-high-resolution mass sensor).
GPU computing of compressible flow problems by a meshless method with space-filling curves
NASA Astrophysics Data System (ADS)
Ma, Z. H.; Wang, H.; Pu, S. H.
2014-04-01
A graphics processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to port the meshless solver from CPU to GPU efficiently and flexibly. Considering the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve memory performance, as sketched below. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. The GPU-accelerated flow solver is then used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite volume or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
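A minimal illustration of the renumbering step follows, assuming a Morton (Z-order) curve; the paper does not state which space-filling curve is used, so the choice here is illustrative.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of 3 integer grid coordinates into one key."""
    key = 0
    for b in range(bits):
        key |= ((int(ix) >> b) & 1) << (3 * b)
        key |= ((int(iy) >> b) & 1) << (3 * b + 1)
        key |= ((int(iz) >> b) & 1) << (3 * b + 2)
    return key

def renumber_points(pts, bits=10):
    """Sort scattered points along a Z-order curve so points that are close
    in space end up close in memory, improving GPU cache behaviour."""
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    grid = ((pts - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(np.int64)
    keys = [morton_key(x, y, z, bits) for x, y, z in grid]
    order = np.argsort(keys)
    return pts[order], order  # reordered points and the permutation

pts = np.random.rand(1000, 3)          # synthetic scattered point set
reordered, perm = renumber_points(pts)
```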
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field from which to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
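As a rough sketch of how such a deformation field yields the SRT, the symmetric part of the velocity-field gradient can be computed per pixel; the arrays and grid spacing below are hypothetical stand-ins for actual registration output.

```python
import numpy as np

def strain_rate_tensor(vx, vy, spacing=1.0):
    """SRT = symmetric part of the velocity gradient: 0.5 * (J + J^T)."""
    dvx_dy, dvx_dx = np.gradient(vx, spacing)  # axis 0 = y (rows), axis 1 = x
    dvy_dy, dvy_dx = np.gradient(vy, spacing)
    e_xx = dvx_dx
    e_yy = dvy_dy
    e_xy = 0.5 * (dvx_dy + dvy_dx)
    return e_xx, e_yy, e_xy  # per-pixel tensor components

vx = np.random.randn(64, 64) * 0.1  # synthetic frame-to-frame velocity field
vy = np.random.randn(64, 64) * 0.1
e_xx, e_yy, e_xy = strain_rate_tensor(vx, vy)
```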
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we propose using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, i.e., the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
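The idea can be illustrated with a small sketch that uses ordinary k-means centers as the Nyström landmarks (a simplification of the paper's kernel k-means) under a Gaussian kernel; names and sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom(X, m=50, gamma=0.5):
    """Approximate the full kernel matrix K by C W^+ C^T, with the m
    landmark points chosen as k-means cluster centers of X."""
    centers = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
    C = rbf_kernel(X, centers, gamma=gamma)        # n x m cross-kernel
    W = rbf_kernel(centers, centers, gamma=gamma)  # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.randn(500, 5)
K_approx = nystrom(X)
K_exact = rbf_kernel(X, X, gamma=0.5)
frob_err = np.linalg.norm(K_exact - K_approx)  # bounded via the k-means cost
```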
Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations
NASA Astrophysics Data System (ADS)
Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.
2018-02-01
Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or up-looking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We describe an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave-optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
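A compact sketch of the synthesis step follows, using a random dictionary and orthogonal matching pursuit as one plausible sparse coder; the paper's dictionary is learned from measured PSFs, and all sizes below are made up.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((1024, 64))     # 64 atoms; 32x32 PSFs, flattened
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
# A synthetic "measured" PSF that truly is a sparse atom combination:
target = D @ (rng.random(64) * (rng.random(64) < 0.1))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
omp.fit(D, target)                      # find a sparse weight vector
psf = (D @ omp.coef_).reshape(32, 32)   # synthesized PSF image
```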
NASA Astrophysics Data System (ADS)
Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.
2012-03-01
This paper presents an optimal control strategy for trajectory planning of mobile robots that considers the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints are introduced by a nonintegrable set of differential equations that represent the kinematic restriction on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. The optimal path planning of the mobile robot is then formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are taken as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution method of optimal control is employed, and the conditions of optimality are derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.
2011-12-01
Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility of TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometric expression of expected random errors in the data (sketched below). This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing identification and removal of potentially erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce the dataset volume by over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in the homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will be a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
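The aggregation idea can be sketched with a fixed-size voxel grid, a simplification of the paper's error-based 3-D search region; the cell size below is illustrative.

```python
import numpy as np

def aggregate(points, cell=0.05):
    """Collapse all points falling in the same cell-sized voxel to their
    centroid, removing redundancy and homogenizing point density."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

raw = np.random.rand(100000, 3) * 10.0  # synthetic stand-in for a TLS cloud
filtered = aggregate(raw)               # far fewer, more uniform points
```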
Mantell, Joanne E.; Correale, Jacqueline; Adams-Skinner, Jessica; Stein, Zena A.
2011-01-01
Religious and secular institutions advocate strategies that represent all points on the continuum to reduce the spread of HIV/AIDS. Drawing on an extensive literature review of studies conducted in sub-Saharan Africa, we focus on those secular institutions that support all effective methods of reducing HIV/AIDS transmission and those conservative religious institutions that support a limited set of prevention methods. We conclude by identifying topics for dialogue between these viewpoints that should facilitate cooperation by expanding the generally acceptable HIV/AIDS prevention methods, and especially the use of condoms. PMID:21834733
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine
Liu, Zhiyuan; Wang, Changhui
2015-01-01
In this paper, a new method for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), accounting for installation and aging effects in a diesel engine, is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between a regression vector and a parameter vector using a membership function (see the sketch below). Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained from the proposed method. PMID:26512675
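A sketch of the bilinear map parameterization follows: the map value at an operating point is the dot product of a membership (regression) vector and the vector of map node values, which is what an adaptive observer can update online. The breakpoints and values below are invented for illustration.

```python
import numpy as np

def membership(x, y, xg, yg):
    """Regression vector phi such that map(x, y) = phi . theta, where theta
    stacks the map node values row-major over the (xg, yg) grid."""
    i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
    j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    phi = np.zeros(len(xg) * len(yg))
    n = len(yg)
    phi[i * n + j] = (1 - tx) * (1 - ty)        # four corner nodes of the
    phi[i * n + j + 1] = (1 - tx) * ty          # enclosing grid cell carry
    phi[(i + 1) * n + j] = tx * (1 - ty)        # the bilinear weights
    phi[(i + 1) * n + j + 1] = tx * ty
    return phi

xg = np.linspace(0, 60, 7)       # fuel-injection breakpoints (made up)
yg = np.linspace(800, 4000, 9)   # engine-speed breakpoints (made up)
theta = np.random.randn(len(xg) * len(yg))  # current map parameter estimate
err_hat = membership(30.0, 2000.0, xg, yg) @ theta  # interpolated map value
```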
Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions
NASA Astrophysics Data System (ADS)
Leiderman, Karin; Olson, Sarah D.
2016-02-01
The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.
Investigations of turbulent scalar fields using probability density function approach
NASA Technical Reports Server (NTRS)
Gao, Feng
1991-01-01
Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small-scale structures of turbulent scalar fields to the modeling and simulation of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially those that involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows, based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient, which represents the roles of both the scalar and scalar diffusion, is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has recently been proposed to deal with the PDFs of turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if turbulent stretching is included, the amplitude mapping has to be supplemented either by adjusting the parameters representing turbulent stretching at each time step or by introducing a coordinate mapping. This technique is still under development and seems quite promising. The final objective of this project is to understand some fundamental properties of turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.
Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel (see the sketch below). Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained from various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly, by considering the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
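A minimal sketch of the voting step at a single, fixed resolution (the paper uses an octree of multiple resolutions); all names and the voxel size are illustrative.

```python
import numpy as np
from collections import Counter, defaultdict

def label_by_voting(ref_pts, ref_labels, new_pts, voxel=0.5):
    """Points of a new scan inherit the majority label of the reference
    points that fall inside the same voxel."""
    votes = defaultdict(Counter)
    for p, lab in zip(ref_pts, ref_labels):
        votes[tuple(np.floor(p / voxel).astype(int))][lab] += 1
    out = np.full(len(new_pts), -1)  # -1 = no reference points in the voxel
    for i, p in enumerate(new_pts):
        c = votes.get(tuple(np.floor(p / voxel).astype(int)))
        if c:
            out[i] = c.most_common(1)[0][0]
    return out

ref = np.random.rand(1000, 3) * 10          # annotated reference cloud
labels = np.random.randint(0, 4, 1000)      # synthetic semantic labels
new = np.random.rand(200, 3) * 10           # new scan, already registered
pred = label_by_voting(ref, labels, new)
```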
Dynamic control modification techniques in teleoperation of a flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Magee, David Patrick
1991-01-01
The objective of this research is to reduce the end-point vibration of a large, teleoperated manipulator while preserving the usefulness of the system motion. A master arm is designed to measure desired joint angles as the user specifies a desired tip motion. The desired joint angles from the master arm are the inputs to an adaptive PD control algorithm that positions the end-point of the manipulator. As the user moves the tip of the master, the robot vibrates at its natural frequencies, which makes it difficult to position the end-point. To eliminate the tip vibration during teleoperated motions, an input shaping method is presented. The input shaping method transforms each sample of the desired input into a new set of impulses that do not excite the system resonances. The method is explained using the equation of motion for a simple second-order system: the impulse response of such a system is derived and the constraint equations for vibrationless motion are presented (a standard two-impulse example is sketched below). To evaluate the robustness of the method, a residual vibration equation different from Singer's is derived that more accurately represents the input shaping technique. The input shaping method is shown to actually increase the residual vibration in certain situations when the system parameters are not accurately specified. Finally, implementing the input shaping method on a system with varying parameters is shown to induce a vibration into the system. To eliminate this vibration, a modified command shaping technique is developed. The ability of the modified command shaping method to reduce vibration at the system resonances is tested by applying varying input perturbations to trajectories in a range of possible user inputs. The modified method is compared to the original PD routine by comparing the frequency responses of the transverse acceleration at the end-point of the manipulator. The control scheme that produces the smaller magnitude of resonant vibration at the first natural frequency is considered the more effective control method.
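For reference, the classic two-impulse (ZV) shaper that this line of work builds on can be computed as follows; the frequency and damping values are placeholders, and this sketch does not include the thesis's modified command shaping.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Return impulse amplitudes and times for natural frequency wn [rad/s]
    and damping ratio zeta; convolving a command with these impulses yields
    (nominally) vibration-free motion for an accurately modeled system."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    wd = wn * np.sqrt(1.0 - zeta**2)       # damped natural frequency
    amps = np.array([1.0, K]) / (1.0 + K)  # amplitudes sum to one
    times = np.array([0.0, np.pi / wd])    # half a damped period apart
    return amps, times

amps, times = zv_shaper(wn=2 * np.pi * 0.5, zeta=0.05)
# shaped command = amps[0]*u(t - times[0]) + amps[1]*u(t - times[1]);
# if wn or zeta is mis-specified, residual vibration reappears, which is
# the robustness issue the thesis analyzes.
```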
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field-length concept used for STOL operations.
Integration of Geodata in Documenting Castle Ruins
NASA Astrophysics Data System (ADS)
Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.
2016-06-01
Textured three dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are digital cameras and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional aspect of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points as one goes along while preserving the topology of the shape. On the other hand, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, with this method the skeleton obtained is disconnected, so it is necessary to connect all the points of the medial axis to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization in a single scan, thus allowing the processing of unbounded images. This method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone micro-architecture in order to quantify bone integrity.
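A rough sketch of the distance-driven idea, using off-the-shelf multi-scan tools rather than the paper's single-scan translated distance transform: the local maxima of the distance transform mark the (disconnected) medial-axis candidates that then still need connecting, as the text notes.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def medial_axis_points(binary_image):
    """Candidate medial-axis points = local maxima of the distance map."""
    dist = distance_transform_edt(binary_image)
    local_max = (dist == maximum_filter(dist, size=3)) & (dist > 0)
    return np.argwhere(local_max)  # disconnected candidates (row, col)

shape = np.zeros((64, 64), dtype=bool)
shape[16:48, 24:40] = True         # a toy rectangular "bone" region
pts = medial_axis_points(shape)    # points clustered along the center line
```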
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
NASA Astrophysics Data System (ADS)
Bhatara, Sevty Satria; Iskandar, Reza Fauzi; Kirom, M. Ramdlan
2016-02-01
Solar energy is a renewable energy resource that requires a photovoltaic (PV) module to be converted into electrical energy. One of the problems in solar energy conversion is the process of battery charging. To improve the efficiency of energy conversion, a PV system needs an additional control method for battery charging called maximum power point tracking (MPPT). This paper reports a study on charging optimization using the constant voltage (CV) method. This method determines the output voltage of the PV system at the maximal condition, so the PV system will always produce maximal energy. A model representing a PV system with and without MPPT was developed in Simulink. The PV system simulation showed different energy outcomes when different solar radiation levels and numbers of solar modules were applied in the model. In the simulation at a solar radiation of 1000 W/m2, the PV system with MPPT produces 252.66 W and the PV system without MPPT produces 252.66 W. The larger the solar radiation, the greater the energy produced by the PV modules.
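A minimal sketch of a constant-voltage control update follows, with a placeholder reference voltage and gain; the paper's Simulink model is not reproduced here.

```python
def cv_mppt_step(v_panel, duty, v_ref=17.0, k=0.005):
    """One CV-method control update: nudge the converter duty cycle to pull
    the panel voltage toward v_ref (in the CV method, a fixed reference
    near the maximum-power-point voltage)."""
    error = v_panel - v_ref
    duty = min(max(duty + k * error, 0.0), 1.0)  # clamp to a valid range
    return duty

duty = 0.5
for v_measured in [18.2, 17.8, 17.3, 17.1]:  # hypothetical voltage samples
    duty = cv_mppt_step(v_measured, duty)
```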
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the classification accuracy lost due to the residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating the derived proportion images to twice their original pixel size. © 2002 Elsevier Science Inc. All rights reserved.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible it subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment (see the sketch below). The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitted curves that approximate the data with high accuracy, even in cases with large tolerances.
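The inner fitting step can be sketched as a least-squares fit of a single cubic Bezier segment with chord-length parameterization, followed by a tolerance check; the convex-hull test, rational weights, and subdivision logic of the actual algorithm are omitted.

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis evaluated at parameters t, shape (len(t), 4)."""
    return np.stack([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3], axis=1)

def fit_bezier(pts):
    """Least-squares cubic Bezier fit to a run of points."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]                      # chord-length parameters in [0, 1]
    B = bernstein3(t)
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    resid = np.linalg.norm(B @ ctrl - pts, axis=1)
    return ctrl, resid.max()           # control points and worst deviation

pts = np.column_stack([np.linspace(0, 1, 30),
                       np.sin(np.linspace(0, 1, 30) * 3.0)])
ctrl, err = fit_bezier(pts)
accept = err < 1e-2                    # tolerance test before moving on
```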
Finding Out Critical Points For Real-Time Path Planning
NASA Astrophysics Data System (ADS)
Chen, Wei
1989-03-01
Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method that abstracts the world representation from sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity of building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods for finding a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world for path planning, and their computational cost is not low. They are not applicable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision that is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from sensory data without any previous knowledge about the robot's environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot's environment. - Make decisions about where the robot must go and which direction the range sensor should point to in real time with limited resources. The method presented here assumes that the data from range sensors has been processed by a signal processing unit.
The path planner will guide the scan of the range sensor, find critical points, decide where the robot should go and which point is a potential critical point, generate the path map, and monitor the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
La Barbera, Luigi; Galbusera, Fabio; Wilke, Hans-Joachim; Villa, Tomaso
2016-09-01
To discuss whether the available standard methods for preclinical evaluation of posterior spine stabilization devices can represent basic everyday life activities, and how to compare the results obtained with different procedures. A comparative finite element study compared the ASTM F1717 and ISO 12189 standards to validated instrumented L2-L4 segments undergoing standing, upper body flexion and extension. The internal loads on the spinal rod and the maximum stress on the implant are analysed. The ISO-recommended anterior support stiffness and force allow reproducing the bending moments measured in vivo on an instrumented physiological segment during upper body flexion. Despite the significance of the ASTM model from an engineering point of view, the overly conservative vertebrectomy model represents an unrealistic worst-case scenario. A method is proposed to determine the load to apply to assemblies with different anterior support stiffnesses to guarantee a comparable bending moment and reproduce specific everyday life activities. The study increases our awareness of the use of the current standards to achieve meaningful results that are easy to compare and interpret.
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
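At the core of such clipping is the classification of points against a plane, here with an epsilon band standing in for the paper's logical treatment of floating-point degeneracies; this fragment is illustrative, not the authors' code.

```python
import numpy as np

EPS = 1e-9  # tolerance band around the plane

def classify(point, plane):
    """plane = (n, d) describing n.x + d = 0; returns +1 (front side),
    -1 (back side), or 0 (on the plane, within the tolerance band)."""
    n, d = plane
    s = np.dot(n, point) + d
    if s > EPS:
        return 1
    if s < -EPS:
        return -1
    return 0

side = classify(np.array([0.5, 0.0, 0.0]),
                (np.array([1.0, 0.0, 0.0]), 0.0))  # +1: front of the plane
```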
Global point signature for shape analysis of carpal bones
NASA Astrophysics Data System (ADS)
Chaudhari, Abhijit J.; Leahy, Richard M.; Wise, Barton L.; Lane, Nancy E.; Badawi, Ramsey D.; Joshi, Anand A.
2014-02-01
We present a method based on spectral theory for the shape analysis of carpal bones of the human wrist. We represent the cortical surface of the carpal bone in a coordinate system based on the eigensystem of the two-dimensional Helmholtz equation. We employ a metric, the global point signature (GPS), that exploits the scale and isometric invariance of eigenfunctions to quantify overall bone shape (a simplified sketch is given below). We use a fast finite-element method to compute the GPS metric. We capitalize upon the properties of the GPS representation, such as stability, a standard Euclidean (ℓ2) metric definition, and invariance to scaling, translation and rotation, to perform shape analysis of the carpal bones of ten women and ten men from a publicly available database. We demonstrate the utility of the proposed GPS representation as a means for comparing shapes of the carpal bones across populations.
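A rough sketch of the GPS embedding follows, substituting a simple k-NN graph Laplacian for the paper's finite-element Laplace-Beltrami operator; it assumes the graph is connected, so only the single trivial eigenpair is dropped.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import csgraph
from sklearn.neighbors import kneighbors_graph

def gps_embedding(vertices, n_eigs=15, k=8):
    """Map each point p to (phi_1(p)/sqrt(l_1), ..., phi_m(p)/sqrt(l_m))
    built from the eigenpairs of a graph Laplacian."""
    W = kneighbors_graph(vertices, k, mode='connectivity')
    W = 0.5 * (W + W.T)                 # symmetrize the k-NN adjacency
    L = csgraph.laplacian(W).toarray()  # dense is fine at this size
    vals, vecs = eigh(L)                # ascending eigenvalues
    vals = vals[1:n_eigs + 1]           # drop the trivial constant eigenpair
    vecs = vecs[:, 1:n_eigs + 1]
    return vecs / np.sqrt(vals)         # rows = GPS coordinates of points

verts = np.random.rand(500, 3)          # stand-in for a carpal bone surface
gps = gps_embedding(verts)              # scale/isometry-robust shape signature
```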
NASA Astrophysics Data System (ADS)
Jha, S. K.; Brockman, R. A.; Hoffman, R. M.; Sinha, V.; Pilchak, A. L.; Porter, W. J.; Buchanan, D. J.; Larsen, J. M.; John, R.
2018-05-01
Principal component analysis and fuzzy c-means clustering algorithms were applied to slip-induced strain and geometric metric data in an attempt to discover unique microstructural configurations and their frequencies of occurrence in statistically representative instantiations of a titanium alloy microstructure. Grain-averaged fatigue indicator parameters were calculated for the same instantiation. The fatigue indicator parameters strongly correlated with the spatial location of the microstructural configurations in the principal components space. The fuzzy c-means clustering method identified clusters of data that varied in terms of their average fatigue indicator parameters. Furthermore, the number of points in each cluster was inversely correlated to the average fatigue indicator parameter. This analysis demonstrates that data-driven methods have significant potential for providing unbiased determination of unique microstructural configurations and their frequencies of occurrence in a given volume from the point of view of strain localization and fatigue crack initiation.
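The pipeline can be sketched as a PCA projection followed by a small hand-rolled fuzzy c-means loop; the feature matrix below is synthetic, standing in for the study's slip and geometry metrics.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_cmeans(X, c=4, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.randn(300, 10)                   # synthetic per-grain metrics
scores = PCA(n_components=2).fit_transform(X)  # project to PC space
centers, U = fuzzy_cmeans(scores)
hard = U.argmax(axis=1)  # cluster sizes could then be compared to mean FIPs
```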
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Bernard, D.; Dos Santos, N.
This paper describes a method to define relevant targeted integral measurements that allow the improvement of nuclear data evaluations and the determination of corresponding reliable covariances. ²³⁵U and ⁵⁶Fe examples are pointed out for the improvement of JEFF3 data. Uses of these covariances are shown for sensitivity and representativity studies, uncertainty calculations, and the transposition of experimental results to industrial applications. S/U studies are more and more used in reactor physics and safety-criticality. However, the reliability of study results relies strongly on the relevancy of the ND covariances. Our method derives the real uncertainty associated with each evaluation from calibration on targeted integral measurements. These realistic covariance matrices allow reliable JEFF3.1.1 calculation of the prior uncertainty due to nuclear data, as well as uncertainty reduction based on representative integral experiments, in challenging design calculations such as the GEN3 and RJH reactors.
The bridge between two worlds: psychoanalysis and fMRI.
Marini, Stefano; Di Tizio, Laura; Dezi, Sira; Armuzzi, Silvia; Pelaccia, Simona; Valchera, Alessandro; Sepede, Gianna; Girinelli, Gabriella; De Berardis, Domenico; Martinotti, Giovanni; Gambi, Francesco; Di Giannantonio, Massimo
2016-02-01
In recent years, a connection between psychoanalysis and neuroscience has been sought. The meeting point between these two branches is represented by neuropsychoanalysis. The goal of the relationship between psychoanalysis and neuroscience is to test psychoanalytic hypotheses in the human brain using a scientific method. A literature search was conducted in May 2015. The PubMed and Scopus databases were used to find studies for inclusion in the systematic review. Common results of the studies investigated are a reduction, a modulation, or a normalization of the activation patterns found after psychoanalytic therapy. New findings on the possible and useful relationship between psychoanalysis and neuroscience could change how psychoanalysts relate to patients and how neuroscientists plan their research. Researchers should keep in mind that in any scientific research that has to do with people, neuroscience and a scientific method cannot avoid subjective interpretation.
Do trigeminal autonomic cephalalgias represent primary diagnoses or points on a continuum?
Charleston, Larry
2015-06-01
The question of whether the trigeminal autonomic cephalalgias (TACs) represent primary diagnoses or points on a continuum has been debatable for a number of years. Patients with TACs may present with similar clinical characteristics, and occasionally, TACS respond to similar treatments. Prima facie, these disorders may seem to be intimately related. However, due to the current evidence, it would be challenging to accurately conclude whether they represent different primary headache diagnoses or the same primary headache disorder represented by different points on the same continuum. Ultimately, the TACs may utilize similar pathways and activate nociceptive responses that result in similar clinical phenotypes but "original and initiating" etiology may differ, and these disorders may not be points on the same continuum. This paper seeks to provide a brief comparison of TACs via diagnostic criteria, secondary causes, brief overview of pathophysiology, and the use of some key treatments and their mechanism of actions to illustrate the TAC similarities and differences.
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-06-15
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time correlation analysis (making use of a coincidence gate) methods of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations, the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability generating function approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
Acceleration environment of payloads while being handled by the Shuttle Remote Manipulator System
NASA Technical Reports Server (NTRS)
Turnbull, J. F.
1983-01-01
Described in this paper is the method used in the Draper Remote Manipulator System (RMS) Simulation to compute linear accelerations at the point on the SPAS01 payload where its accelerometers are mounted. Simulated accelerometer output for representative on-orbit activities is presented. The objectives of post-flight analysis of SPAS01 data are discussed. Finally, the point is made that designers of acceleration-dependent payloads may have an interest in the capability of simulating the acceleration environment of payloads while under the control of the overall Payload Deployment and retrieval System (PDRS) that includes the Orbiter and its attitude control system as well as the Remote Manipulator Arm.
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection which, if the optical axis of the photo intersects the globe's centre, simplifies to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimal parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to empirical values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change in the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage; other possible uses of the method are also discussed.
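The optimization loop can be sketched with scipy's Nelder-Mead implementation; the `project` function below is a crude placeholder (the height parameter is unused by the stub), not the real Vertical Near-Side Perspective projection, and the control points are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def project(params, lonlat):
    """Placeholder projection: params = (lon0, lat0, x0, y0, rot, h, scale).
    Returns image coordinates for control-point lon/lat pairs (stub)."""
    lon0, lat0, x0, y0, rot, h, scale = params
    xy = (lonlat - [lon0, lat0]) * scale       # crude stand-in mapping
    c, s = np.cos(rot), np.sin(rot)
    return xy @ np.array([[c, -s], [s, c]]) + [x0, y0]

def mean_error(params, lonlat, pixels):
    """Average control-point error: the function Nelder-Mead minimizes."""
    return np.mean(np.linalg.norm(project(params, lonlat) - pixels, axis=1))

lonlat = np.random.rand(20, 2) * 40            # fake GCP geographic coords
pixels = lonlat * 12.0 + [300, 200]            # fake GCP image coords
x0 = [lonlat[:, 0].mean(), lonlat[:, 1].mean(),  # centre init = GCP average
      0.0, 0.0, 0.0, 6.0, 10.0]
res = minimize(mean_error, x0, args=(lonlat, pixels), method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-6})
```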
Efficient Open Source Lidar for Desktop Users
NASA Astrophysics Data System (ADS)
Flanagan, Jacob P.
Lidar --- Light Detection and Ranging --- is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured; the distance to the object is then easily calculated using the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and records the distances to objects several thousand times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y and a z attribute. These 3 attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target or even the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become pressing: processing, visualizing or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires high-end computers and is very costly. Existing open source lidar tools approach the loading and processing of lidar in an iterative fashion that requires batch coding and processing times that can stretch to months for a standard lidar dataset. This project attempts to build software with the best approach for creating, importing and exporting, manipulating and processing lidar, especially in the environmental field. Development of this software is described in 3 sections - (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing) and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (i.e.: hydrology feature extraction, geomorphological studies, ecology modeling, etc.).
NASA Astrophysics Data System (ADS)
Molero, B.; Leroux, D. J.; Richaume, P.; Kerr, Y. H.; Merlin, O.; Cosh, M. H.; Bindlish, R.
2018-01-01
We conduct a novel comprehensive investigation that seeks to prove the connection between spatial scales and timescales in surface soil moisture (SM) within the satellite footprint (∼50 km). Modeled and measured point series at the Yanco and Little Washita in situ networks are first decomposed into anomalies at timescales ranging from 0.5 to 128 days, using wavelet transforms. Then, their degree of spatial representativeness is evaluated on a per-timescale basis by comparison to large spatial scale data sets (the in situ spatial average, SMOS, AMSR2, and ECMWF). Four methods are used for this: temporal stability analysis (TStab), triple collocation (TC), percentage of correlated areas (CArea), and a newly proposed approach that uses wavelet-based correlations (WCor). We found that the mean of the spatial representativeness values tends to increase with the timescale, but so does their dispersion. Locations exhibit poor spatial representativeness at scales below 4 days, and either very good or poor representativeness at seasonal scales. Regarding the methods, TStab cannot be applied to the anomaly series due to their multiple zero-crossings, and TC is suitable for week and month scales but not for other scales, where data set cross-correlations are found to be low. In contrast, WCor and CArea give consistent results at all timescales. WCor is less sensitive to the spatial sampling density, so it is a robust method that can be applied to sparse networks (one station per footprint). These results are promising for improving the validation and downscaling of satellite SM series and the optimization of SM networks.
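A rough sketch of the per-timescale comparison, using a simple moving-average cascade in place of the authors' wavelet transform; all function names are illustrative assumptions:

    import numpy as np

    def scale_anomalies(x, scales=(2, 8, 32, 128)):
        """Split a series into band-passed anomalies at increasing timescales
        using a moving-average cascade (a crude stand-in for the wavelet
        decomposition used in the study)."""
        out, prev = {}, np.asarray(x, float)
        for s in scales:
            smooth = np.convolve(prev, np.ones(s) / s, mode='same')
            out[s] = prev - smooth    # variability faster than ~s samples
            prev = smooth
        return out

    def per_scale_correlation(point_series, footprint_series, scales=(2, 8, 32, 128)):
        """Spatial representativeness proxy: correlation of a point series with a
        large-scale reference, evaluated scale by scale (in the spirit of WCor)."""
        a = scale_anomalies(point_series, scales)
        b = scale_anomalies(footprint_series, scales)
        return {s: np.corrcoef(a[s], b[s])[0, 1] for s in scales}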
Xu, Zirui; Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-Il
2017-01-01
This paper presents a vehicle autonomous localization method for local areas of coal mine tunnels based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on the UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments in a straight corridor and an underground tunnel have shown that the proposed method not only quickly recognizes the barcode tags affixed to the tunnel walls, but also achieves average localization errors in the vehicle center point's plane and vertical coordinates that are small enough to meet the positioning requirements of autonomous unmanned vehicles in local areas of coal mine tunnels.
Researching Together: A CTSA Partnership of Academicians and Communities for Translation
Nearing, Kathryn; Felzien, Maret; Green, Larry; Calonge, Ned; Pineda‐Reyes, Fernando; Jones, Grant; Tamez, Montelle; Miller, Sara; Kramer, Andrew
2013-01-01
Abstract Background The Colorado Clinical and Translational Sciences Institute (CCTSI) aims to translate discovery into clinical practice. The Partnership of Academicians and Communities for Translation (PACT) represents a robust campus–community partnership. Methods The CCTSI collected data on all PACT activities including meeting notes, staff activity logs, stakeholder surveys and interviews, and in‐depth evaluations of several key components. Data analysis by Evaluation and Community Engagement Core and PACT Council members identified critical shifts that changed the trajectory of community engagement efforts. Results Ten “critical shifts” in six broad rubrics created change in the PACT. Critical shifts were decision points in the development of the PACT that represented quantitative and qualitative changes in its work and trajectory. Critical shifts occurred in PACT management and leadership, financial control and resource allocation, and membership and voice. Discussion The development of a campus–community partnership is not a smooth linear path. Incremental changes lead to major decision points that represent opportunities for critical shifts in developmental trajectory. We provide an enlightening, yet cautionary, tale to others considering a campus–community partnership so they may prepare for crucial decisions and critical shifts. The PACT serves as a genuine foundational platform for dynamic research efforts aimed at eliminating health disparities. PMID:24127922
NASA Astrophysics Data System (ADS)
Hayati, Arash Nemati; Stoll, Rob; Kim, J. J.; Harman, Todd; Nelson, Matthew A.; Brown, Michael J.; Pardyjak, Eric R.
2017-08-01
Three computational fluid dynamics (CFD) methods with different levels of flow-physics modelling are comprehensively evaluated against high-spatial-resolution wind-tunnel velocity data from step-down street canyons (i.e., a short building downwind of a tall building). The first method is a semi-empirical fast-response approach using the Quick Urban Industrial Complex (QUIC-URB) model. The second method solves the Reynolds-averaged Navier-Stokes (RANS) equations, and the third one utilizes a fully-coupled fluid-structure interaction large-eddy simulation (LES) model with a grid-turbulence inflow generator. Unlike typical point-by-point evaluation comparisons, here the entire two-dimensional wind-tunnel dataset is used to evaluate the dynamics of dominant flow topological features in the street canyon. Each CFD method is scrutinized for several geometric configurations by varying the downwind-to-upwind building-height ratio (H_d/H_u) and street canyon-width to building-width aspect ratio (S/W) for inflow winds perpendicular to the upwind building front face. Disparities between the numerical results and experimental data are quantified in terms of their ability to capture flow topological features for different geometric configurations. Overall, all three methods qualitatively predict the primary flow topological features, including a saddle point and a primary vortex. However, the secondary flow topological features, namely an in-canyon separation point and secondary vortices, are only well represented by the LES method despite its failure for taller downwind building cases. Misrepresentation of flow-regime transitions, exaggeration of the coherence of recirculation zones and wake fields, and overestimation of downwards vertical velocity into the canyon are the main defects in QUIC-URB, RANS and LES results, respectively. All three methods underestimate the updrafts and, surprisingly, QUIC-URB outperforms RANS for the streamwise velocity component, while RANS is superior to QUIC-URB for the vertical velocity component in the street canyon.
A Weighted Closed-Form Solution for RGB-D Data Registration
NASA Astrophysics Data System (ADS)
Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.
2016-06-01
Existing 3D indoor mapping methods for RGB-D data are predominantly point-based and feature-based. In most cases, iterative closest point (ICP) and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step; it does not need good initial estimates and greatly decreases the demand for computer resources in contrast to iterative methods. First, our method exploits RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm jointly with a statistical dispersion measure, the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. Loop closure consists of recognizing when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the length of the travelled trajectory.
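For intuition, here is a compact weighted closed-form registration sketch using the SVD-based (Kabsch-style) solution, which minimizes the same least-squares cost that the paper solves with dual-number quaternions; names are illustrative:

    import numpy as np

    def weighted_rigid_transform(P, Q, w):
        """Closed-form weighted least-squares R, t aligning points P to Q
        (N x 3 arrays, weights w). One-step solution: no initial estimate
        and no iteration are needed."""
        w = w / w.sum()
        mp, mq = (w[:, None] * P).sum(0), (w[:, None] * Q).sum(0)
        X, Y = P - mp, Q - mq                       # centred coordinates
        H = (w[:, None] * X).T @ Y                  # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mq - R @ mp
        return R, t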
Factors That Influence the Rating of Perceived Exertion After Endurance Training.
Roos, Lilian; Taube, Wolfgang; Tuch, Carolin; Frei, Klaus Michael; Wyss, Thomas
2018-03-15
Session rating of perceived exertion (sRPE) is an often-used measure to assess athletes' training load. However, little is known about which factors could optimize the quality of its data collection. The aim of the present study was to investigate the effects of (i) the survey method and (ii) the time point at which sRPE was assessed on the correlation between the subjective (sRPE) and objective (heart rate training impulse; TRIMP) assessment of training load. In the first part, 45 well-trained subjects (30 men, 15 women) performed 20 running sessions with a heart rate monitor and reported sRPE 30 minutes after training cessation. For the reporting, the subjects were grouped into three survey method groups (paper-pencil, online questionnaire, and mobile device). In the second part of the study, another 40 athletes (28 men, 12 women) performed 4 × 5 running sessions, with the four time points for reporting sRPE randomly assigned (directly after training cessation, 30 minutes post-exercise, in the evening of the same day, and the next morning directly after waking up). The assessment of sRPE is influenced by time point, survey method, TRIMP, sex, and training type. It is recommended to assess sRPE values via a mobile device or online tool, as the paper-based survey method displayed lower correlations between sRPE and TRIMP. Subjective training load measures are highly individual. When compared at the same relative intensity, lower sRPE values were reported by women, for training types representing slow runs, and for time points with longer durations between training cessation and sRPE assessment. The assessment method for sRPE should be kept constant for each athlete, and comparisons between athletes or sexes are not recommended.
Nissanholtz-Gannot, Rachel; Shani, Segev; Shvarts, Shifra
2010-11-01
The relationship between doctors and pharmaceutical companies is an integral part of the health system in Israel and worldwide. The mutual need for such a relationship requires us, as a society, to examine its influence on the individual and on the system as a whole. This research examines the relationship from the points of view of the relevant parties within and outside the health system (decision-makers). The authors used in-depth interviews and qualitative research methods in order to examine and understand the various positions of decision-makers. The position of the decision-makers regarding all aspects of this relationship expresses their wishes and depends on their point of view. The impact of the relationship between doctors and pharmaceutical companies was examined with regard to doctors' prescription behavior. All the government representatives, all the physicians' representatives and those of the health funds believe that physicians' prescription behavior is affected by the relationship. Some perceive this to be a negative trend, and some doctors believe it to be a positive one. With regard to possible harm to the patient, the parties generally believe that the relationship does not harm the patient, whereas most of the government representatives identify harm to patients on both the economic and the health levels. The authors believe that the "influence" which exists or could exist on the part of the pharmaceutical companies is the main stumbling block in this relationship, as expressed in the decision-makers' perspectives.
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from a lack of accuracy in calculating the groundwater flux across grid blocks. The conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the component parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundaries are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
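To make the TPFA limitation concrete, here is a minimal sketch of the conventional two-point flux across a single grid interface (names and units illustrative); MPFA replaces this two-cell stencil with a multi-cell one so that the gradient component parallel to the face is no longer lost:

    def tpfa_flux(h1, h2, k1, k2, d1, d2, area):
        """Two-point flux across a grid interface. Only the two cell heads
        enter, so any hydraulic-gradient component parallel to the face is
        ignored -- the limitation that MPFA methods address."""
        T = area / (d1 / k1 + d2 / k2)   # harmonic-mean transmissibility
        return T * (h1 - h2)             # flux from cell 1 to cell 2

    # usage: heads 10.0 and 9.5 m, conductivities 1e-4 and 2e-4 m/s,
    # half-cell distances 0.5 m each, interface area 1 m^2
    print(tpfa_flux(10.0, 9.5, 1e-4, 2e-4, 0.5, 0.5, 1.0))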
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole arising from a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi-square per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes, and that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, the error in the equivalent dipole location is found to vary from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
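A simplified sketch of the inverse step, assuming an infinite homogeneous conductor for brevity (the study itself used a bounded spherical medium); all names are illustrative:

    import numpy as np
    from scipy.optimize import least_squares

    def dipole_potential(r0, p, electrodes, sigma=0.2):
        """Potential of a point dipole p at r0 in an unbounded medium:
        V = p.(r - r0) / (4*pi*sigma*|r - r0|^3)."""
        d = electrodes - r0
        return (d @ p) / (4.0 * np.pi * sigma * np.sum(d**2, axis=1) ** 1.5)

    def fit_equivalent_dipole(electrodes, v_measured, x0):
        """x = [r0 (3 components), p (3 components)]; least_squares minimizes
        the summed squared residuals, i.e. an (unweighted) chi-square."""
        resid = lambda x: dipole_potential(x[:3], x[3:], electrodes) - v_measured
        return least_squares(resid, x0).x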
Reconstruction of three-dimensional porous media using a single thin section
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2012-06-01
The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or whose measurement is difficult. An important example of such stochastic systems is porous media, for which the reconstruction technique must accurately represent the morphology—the connectivity and geometry—as well as the flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-point connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
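As one concrete example of the statistical checks mentioned, the two-point probability function of a binary thin-section image can be computed by FFT under a periodicity assumption (a sketch, not the paper's code):

    import numpy as np

    def two_point_probability(img):
        """S2(r): probability that two points separated by lag r both fall in
        the pore phase, for a binary (0/1) image. Computed via FFT, which
        implicitly assumes periodic boundaries."""
        f = np.fft.fftn(np.asarray(img, float))
        s2 = np.fft.ifftn(f * np.conj(f)).real / img.size
        return np.fft.fftshift(s2)  # zero-lag value (array centre) equals the porosity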
Space shuttle main engine plume radiation model
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1978-01-01
The methods used to predict the thermal radiation received by the space shuttle from the plumes of its main engines are described. Radiation to representative surface locations was predicted using the NASA gaseous plume radiation (GASRAD) program. The plume model is used with the radiative view factor (RAVFAC) program to predict sea level radiation at specified body points. The GASRAD program is described along with the predictions. The RAVFAC model is also discussed.
Fundamental Theory of Crystal Decomposition
1991-05-01
rather than combine them as is often the case in a computation based on the density functional method. In the case of a cluster embedded in a...classical lattice, special care needs to be taken to ensure that mathematical consistency is achieved between the cluster and the embedding lattice. This has...localizing potential or KKLP. Simulation of a large crystallite or an infinite lattice containing a point defect represented by a cluster and a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
Finding Limit Cycles in self-excited oscillators with infinite-series damping functions
NASA Astrophysics Data System (ADS)
Das, Debapriya; Banerjee, Dhruba; Bhattacharjee, Jayanta K.
2015-03-01
In this paper we present a simple method for finding the location of limit cycles of self-excited oscillators whose damping functions can be represented by an infinite convergent series. We use standard results of first-order perturbation theory to arrive at amplitude equations. The approach has been kept pedagogic by first working out the cases of finite polynomials using elementary algebra. The method is then extended to various infinite polynomials, where the fixed points of the corresponding amplitude equations cannot be found in closed form. Hopf bifurcations for systems with nonlinear powers of the velocities are also discussed.
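The backbone of the approach can be illustrated numerically: writing the oscillator as x'' + eps*d(x, x') + x = 0, first-order averaging gives an amplitude flow whose zeros locate the limit cycles. The hedged sketch below evaluates the averaging integral by quadrature (rather than the paper's algebra) and reproduces the classical van der Pol amplitude a = 2:

    import numpy as np
    from scipy.optimize import brentq

    def amplitude_flow(a, damping, n=2048):
        """First-order averaged da/dt (up to the small factor eps) for
        x'' + eps*damping(x, x') + x = 0, with x = a*cos(theta)."""
        th = np.linspace(0.0, 2.0 * np.pi, n)
        return np.trapz(damping(a * np.cos(th), -a * np.sin(th)) * np.sin(th),
                        th) / (2.0 * np.pi)

    # van der Pol damping -(1 - x^2)*x': the averaged flow vanishes at a = 2
    vdp = lambda x, v: -(1.0 - x**2) * v
    a_lc = brentq(lambda a: amplitude_flow(a, vdp), 0.5, 4.0)
    print(a_lc)   # ~2.0, the classical van der Pol limit-cycle amplitude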
Saddle point localization of molecular wavefunctions.
Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W
2016-09-15
The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.
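A quick numerical illustration of this spacing signature (an assumption-laden sketch, not the paper's calculation): diagonalize a 1D double-well Hamiltonian and scan the spacings of the same-parity ladder, which dip near the saddle energy where the classical oscillation period diverges:

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    hbar, m = 0.05, 1.0                        # small hbar -> dense spectrum
    x = np.linspace(-2.5, 2.5, 4000)
    dx = x[1] - x[0]
    V = x**4 - 2.0 * x**2                      # minima at V = -1, saddle at V = 0
    Ev = eigh_tridiagonal(hbar**2 / (m * dx**2) + V,
                          -hbar**2 / (2.0 * m * dx**2) * np.ones(x.size - 1),
                          select='i', select_range=(0, 80))[0]
    even = Ev[::2]                             # same-parity ladder (skips the
                                               # near-degenerate tunneling doublets)
    print(even[np.argmin(np.diff(even))])      # spacing minimum near E = 0, the saddle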
NASA Astrophysics Data System (ADS)
Riveiro, B.; DeJong, M.; Conde, B.
2016-06-01
Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information for structural assessment and health monitoring, many practitioners are resistant to it due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.
SU-E-J-237: Image Feature Based DRR and Portal Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X; Chang, J
Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrates the feasibility of using a feature-based image transform to register kV and DRR images. Methods: The scale-invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale-invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 values of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding results were 1.2 and 1.3 with R2 values of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV-to-DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
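A hedged OpenCV sketch of a similar pipeline (SIFT matching plus a RANSAC-fitted transform); note that cv2.estimateAffinePartial2D also allows rotation, beyond the scaling and shifts described in the abstract, and all names are illustrative:

    import cv2
    import numpy as np

    def register_kv_to_drr(kv, drr):
        """kv, drr: 8-bit grayscale images. Returns a 2x3 similarity matrix
        (rotation/scale plus horizontal and vertical shift)."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(kv, None)
        k2, d2 = sift.detectAndCompute(drr, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good])
        dst = np.float32([k2[m.trainIdx].pt for m in good])
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M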
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.
2015-01-01
Purpose This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
Multiple-point statistics for stochastic modeling of aquifers: where do we stand?
NASA Astrophysics Data System (ADS)
Renard, P.; Julien, S.
2017-12-01
In the last 20 years, multiple-point statistics have been a focus of much research, successes and disappointments. The aim of this geostatistical approach was to integrate geological information into stochastic models of aquifer heterogeneity to better represent the connectivity of high or low permeability structures in the underground. Many different algorithms (ENESIM, SNESIM, SIMPAT, CCSIM, QUILTING, IMPALA, DEESSE, FILTERSIM, HYPPS, etc.) have been and are still proposed. They are all based on the concept of a training data set from which spatial statistics are derived and used in a further step to generate conditional realizations. Some of these algorithms evaluate the statistics of the spatial patterns for every pixel, other techniques consider the statistics at the scale of a patch or a tile. While the method clearly succeeded in enabling modelers to generate realistic models, several issues are still the topic of debate both from a practical and theoretical point of view, and some issues such as training data set availability are often hindering the application of the method in practical situations. In this talk, the aim is to present a review of the status of these approaches both from a theoretical and practical point of view using several examples at different scales (from pore network to regional aquifer).
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Brennan, John
2015-06-01
Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to ``over-driving,'' and dual-phase simulations require many long runs and frequently yield an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Fine and Lightweight Papers from Purchased Pulp Subcategory § 430.113 Effluent limitations... existing point source subject to this subpart shall achieve the following effluent limitations representing...
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and Paperboard From Purchased Pulp Subcategory § 430.123 Effluent... existing point source subject to this subpart shall achieve the following effluent limitations representing...
Ziegler, Ildikó; Borbély-Jakab, Judit; Sugó, Lilla; Kovács, Réka J
2017-01-01
In this case study, the principles of quality risk management were applied to review sampling points and monitoring frequencies in the hormonal tableting unit of a formulation development pilot plant. In the cleanroom area, premises of different functions are located. Therefore, a general method for risk evaluation was established based on the Hazard Analysis and Critical Control Points (HACCP) method to evaluate these premises (i.e., the production area itself and ancillary clean areas) from the point of view of microbial load and state, in order to assess whether the existing monitoring program met current advanced monitoring practice. LAY ABSTRACT: In pharmaceutical production, cleanrooms are needed for the manufacturing of final dosage forms of drugs-intended for human or veterinary use-in order to protect the patient's weakened body from further infections. Cleanrooms are premises with a controlled level of contamination that is specified by the number of particles per cubic meter at a specified particle size or number of microorganisms (i.e. microbial count) per surface area. To ensure a low microbial count over time, microorganisms are detected and counted by environmental monitoring methods regularly. It is reasonable to identify the most easily contaminated places by risk analysis to make sure the obtained results truly represent the state of the whole room. This paper presents a risk analysis method for the optimization of environmental monitoring and verification of the suitability of the method. © PDA, Inc. 2017.
Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson Carlos Soares E; Nogueira, Romildo de Albuquerque
2017-08-01
Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15 µg L⁻¹ of the pesticide deltamethrin or 10 µg L⁻¹ of mercuric chloride. Results showed no change after 1 min or after 4, 24, and 48 h of treatment. However, after 72 and 96 h of treatment, both the linear methods describing the track length, mean speed, and mean distance from the current to the previous track point, as well as the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis, were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of the angular parameters of the track point vectors and lacunarity were not sensitive to those changes. None of the methods detected adverse effects of mercury exposure. These mathematical and fractal methods, implementable in software, represent useful low-cost tools for the toxicological analysis of shrimps for food and water quality and the biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. All rights reserved.
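For illustration, a minimal box-counting estimate of the fractal dimension of a tracked 2D trajectory, one of the non-linear descriptors mentioned above (a sketch, not the study's software):

    import numpy as np

    def box_counting_dimension(track, n_scales=8):
        """Box-counting fractal dimension of a 2D track (N x 2 array of
        video-tracking coordinates)."""
        pts = (track - track.min(0)) / (np.ptp(track, 0).max() + 1e-12)  # unit square
        sizes = 2.0 ** -np.arange(1, n_scales + 1)
        counts = [len(np.unique(np.floor(pts / s), axis=0)) for s in sizes]
        # slope of log(count) vs log(1/size) estimates the dimension
        return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

    # a straight-line track gives ~1; a plane-filling track approaches 2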
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
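A simple numerical sketch of the maximum likelihood fit in this setting, using direct minimization of the marginal -2 log-likelihood rather than the paper's Groebner-basis analysis; the data and names are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def neg2_log_lik(theta, groups):
        """-2 log-likelihood (up to constants) of the one-way random effects
        model y_ij = mu + b_i + e_ij, b_i ~ N(0, tau2), e_ij ~ N(0, sig2_i),
        via the standard within/between decomposition."""
        mu, tau2 = theta[0], np.exp(theta[1])
        sig2 = np.exp(theta[2:])
        total = 0.0
        for s2, y in zip(sig2, groups):
            n, ybar = len(y), np.mean(y)
            ss = np.sum((y - ybar) ** 2)
            v = tau2 + s2 / n
            total += (n - 1) * np.log(s2) + ss / s2 + np.log(v) + (ybar - mu) ** 2 / v
        return total

    # toy data: three methods measuring the same quantity
    groups = [np.array([10.1, 10.3, 9.9]),
              np.array([10.8, 10.6]),
              np.array([9.7, 9.8, 9.6, 9.9])]
    x0 = np.array([10.0, np.log(0.1)] + [np.log(0.05)] * len(groups))
    fit = minimize(neg2_log_lik, x0, args=(groups,), method='Nelder-Mead')
    mu_hat, tau2_hat = fit.x[0], np.exp(fit.x[1])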
Strategies to Evaluate the Visibility Along an Indoor Path in a Point Cloud Representation
NASA Astrophysics Data System (ADS)
Grasso, N.; Verbree, E.; Zlatanova, S.; Piras, M.
2017-09-01
Many research works have been oriented to the formulation of different algorithms for estimating paths in indoor environments from three-dimensional representations of space. The architectural configuration, the actions that take place within it, and the location of objects in the space influence the paths along which it is possible to move, as they may cause visibility problems. To overcome the visibility issue, different methods have been proposed which allow identifying the areas visible from a certain point of view, but they often do not take into account the user's visual perception of the environment and do not allow estimating how complicated it may be to follow a certain path. In the fields of space syntax and cognitive science, attempts have been made to describe the characteristics of a building or an urban environment by the isovist and visibility graph methods; some numerical properties of these representations allow describing the space as it is perceived by a user. However, most of these studies analyze the environment in a two-dimensional space. In this paper we propose a method to evaluate quantitatively the complexity of a certain path within an environment represented by a three-dimensional point cloud, by combining some of the previously mentioned techniques and considering the space visible from a certain point of view, depending on the moving agent (pedestrians, people in wheelchairs, UAVs, UGVs, robots).
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J.; Moser, A.
1975-01-01
A very general method for calculating compressible three-dimensional laminar and turbulent boundary layers on arbitrary wings is described. The method utilizes a nonorthogonal coordinate system for the boundary-layer calculations and includes a geometry package that represents the wing analytically. In the calculations all the geometric parameters of the coordinate system are accounted for. The Reynolds shear-stress terms are modeled by an eddy-viscosity formulation developed by Cebeci. The governing equations are solved by a very efficient two-point finite-difference method used earlier by Keller and Cebeci for two-dimensional flows and later by Cebeci for three-dimensional flows.
MatchingLand, geospatial data testbed for the assessment of matching methods.
Xavier, Emerson M A; Ariza-López, Francisco J; Ureña-Cámara, Manuel A
2017-12-05
This article presents datasets prepared with the aim of helping the evaluation of geospatial matching methods for vector data. These datasets were built up from mapping data produced by official Spanish mapping agencies. The testbed supplied encompasses the three geometry types: point, line and area. Initial datasets were submitted to geometric transformations in order to generate synthetic datasets. These transformations represent factors that might influence the performance of geospatial matching methods, like the morphology of linear or areal features, systematic transformations, and random disturbance over initial data. We call our 11 GiB benchmark data 'MatchingLand' and we hope it can be useful for the geographic information science research community.
Exploration Opportunity Search of Near-earth Objects Based on Analytical Gradients
NASA Astrophysics Data System (ADS)
Ren, Yuan; Cui, Ping-Yuan; Luan, En-Jie
2008-07-01
The problem of searching for exploration opportunities to near-earth minor objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived using variational calculus and the theory of the state-transition matrix. After randomly generating some initial guesses in the search space, the performance index is optimized, guided by the analytical gradients, leading to the local minimum points representing potential launch opportunities. This method not only keeps the global-search property of the traditional method, but also avoids its blind search, thereby greatly increasing the computing speed. Furthermore, with this method the search precision can be controlled effectively.
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
Multi-point contact of the high-speed vehicle-turnout system dynamics
NASA Astrophysics Data System (ADS)
Ren, Zunsong
2013-05-01
The wheel-rail contact problems, such as the number, location and track of contact patches, are very important for optimizing the spatial structure of the rails and lowering the dynamic response of the vehicle-turnout system. However, these problems are currently not well solved because of the difficulties in determining multi-point contact, in precisely representing the changeable profiles of the rails, and in establishing an accurate spatial turnout system dynamics model. Based on a high-speed vehicle-turnout coupled model in which the track is modeled as flexible, with rails and sleepers represented by beams, the line tracing extreme point method is introduced to investigate the wheel-rail multiple contact conditions, and the key sections of the blade rail, longer nose rail and shorter rail in the switch and nose rail area are discretized to represent the varying profiles of rails in the turnout. The dynamic interaction between the vehicle and turnout is simulated for cases of the vehicle divergently passing the turnout, and the multi-point contact is obtained. The tracks of the contact patches on the top of the rails are presented, and the wheel-rail impact forces are given in comparison with the contact patch transference on the rails. The numerical simulation results indicate that the length of two-point contact occurrence is longer for a worn wheel profile on the rails than for the new wheel profile, and that two-point contact definitely occurs in the switch and crossing area. Generally, three-point contact does not occur for the new rail profile, which is confirmed by the wheel-rail interpolation distance and the first-order derivative function of the tracing line extreme points. The presented research is helpful not only for optimizing the structure of the turnout, but also for lowering the dynamic response of the high-speed vehicle-turnout system.
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrain. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
Gradient-free determination of isoelectric points of proteins on chip.
Łapińska, Urszula; Saar, Kadi L; Yates, Emma V; Herling, Therese W; Müller, Thomas; Challa, Pavan K; Dobson, Christopher M; Knowles, Tuomas P J
2017-08-30
The isoelectric point (pI) of a protein is a key characteristic that influences its overall electrostatic behaviour. The majority of conventional methods for the determination of the isoelectric point of a molecule rely on the use of spatial gradients in pH, although significant practical challenges are associated with such techniques, notably the difficulty in generating a stable and well-controlled pH gradient. Here, we introduce a gradient-free approach, exploiting a microfluidic platform which allows us to perform rapid pH changes on chip and probe the electrophoretic mobility of species in a controlled field. In particular, in this approach, the pH of the electrolyte solution is modulated in time rather than in space, as is the case for conventional determinations of the isoelectric point. To demonstrate the general applicability of this platform, we have measured the isoelectric points of a representative set of seven proteins, bovine serum albumin, β-lactoglobulin, ribonuclease A, ovalbumin, human transferrin, ubiquitin and myoglobin, in microlitre sample volumes. The ability to conduct measurements in free solution thus provides the basis for the rapid determination of isoelectric points of proteins under a wide variety of solution conditions and in small volumes.
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.
Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Methods and limitations in radar target imagery
NASA Astrophysics Data System (ADS)
Bertrand, P.
An analytical examination of the reflectivity of radar targets is presented for the two-dimensional case of flat targets. A complex backscattering coefficient is defined for the amplitude and phase of the received field in comparison with the emitted field. The coefficient depends on the frequency of the emitted signal and the orientation of the target with respect to the transmitter. The target reflection is modeled in terms of a density of illuminated, colored points that are independent of one another. The target is therefore represented as an infinite family of densities indexed by the observation angle. Attention is given to the reflectivity parameters and their distribution function, and to the joint distribution function for the color, position, and directivity of bright points. It is shown that a fundamental ambiguity exists between the localization of the illuminated points and the determination of their directivity and color.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for detecting the critical change point in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for assessing the significance of the estimated critical point, and we provide an extensive simulation study evaluating the performance of the proposed method.
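To illustrate the likelihood-ratio idea in its simplest form, the sketch below scans candidate change points and picks the one maximizing the log-likelihood ratio of a two-variance model over a one-variance model. It deliberately treats increments as independent Gaussians, ignoring the fBm correlation structure that the paper's full method accounts for; the margin width and simulated data are illustrative assumptions:

```python
import numpy as np

def variance_change_point(x):
    """Locate the most likely variance change point in a zero-mean series
    by maximizing the log-likelihood ratio of a two-variance model over a
    single-variance model (independent-Gaussian simplification)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s_all = np.mean(x**2)
    best_k, best_llr = None, -np.inf
    for k in range(10, n - 10):          # keep margins so variances are stable
        s1 = np.mean(x[:k]**2)
        s2 = np.mean(x[k:]**2)
        llr = 0.5 * (n*np.log(s_all) - k*np.log(s1) - (n - k)*np.log(s2))
        if llr > best_llr:
            best_k, best_llr = k, llr
    return best_k, best_llr

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])
print(variance_change_point(x))   # change point should be found near 300
```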
NASA Astrophysics Data System (ADS)
Kapustin, P.; Svetukhin, V.; Tikhonchev, M.
2017-06-01
Atomic displacement cascade simulations near symmetric tilt grain boundaries (GBs) in hexagonal close-packed zirconium were considered in this paper, followed by an analysis of the resulting defect structures. Four symmetric tilt GBs were considered: two Σ14 boundaries with the [0 0 0 1] rotation axis and two Σ32 boundaries with a different rotation axis. The molecular dynamics method was used to simulate the atomic displacement cascades. A tendency of the point defects produced in the cascade to accumulate near the GB plane, which acts as an obstacle to the spread of the cascade, was observed. Results on the clustering of the point defects produced in the cascade were also obtained. Defects of both types occurred mainly as single point defects; at the same time, vacancies formed large clusters (more than 20 vacancies per cluster), while self-interstitial atom clusters were small.
Reconstruction of three-dimensional porous media using generative adversarial neural networks
NASA Astrophysics Data System (ADS)
Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.
2017-10-01
To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often not experimentally feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 μm in size. Distributions of particles up to 500 μm in size were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Temporal trends in and influence of wind on PAH concentrations measured near the Great Lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cortes, D.R.; Basu, I.; Sweet, C.W.
2000-02-01
This paper reports on temporal trends in gas- and particle-phase PAH concentrations measured at three sites in the Great Lakes' Integrated Atmospheric Deposition Network: Eagle Harbor, near Lake Superior; Sleeping Bear Dunes, near Lake Michigan; and Sturgeon Point, near Lake Erie. While gas-phase concentrations have been decreasing since 1991 at all sites, particle-phase concentrations have been decreasing only at Sleeping Bear Dunes. To determine whether these results represent trends in background levels or regional emissions, the average concentrations are compared to those found in urban and rural studies. In addition, the influence of local wind direction on PAH concentrations is investigated, with the assumption that dependence on wind direction implies regional sources. Using these two methods, it is found that PAH concentrations at Eagle Harbor and Sleeping Bear Dunes represent regional background levels but that PAH from the Buffalo region intrude on the background levels measured at the Sturgeon Point site. At this site, wind from over Lake Erie reduces local PAH concentrations.
Estimating parameter values of a socio-hydrological flood model
NASA Astrophysics Data System (ADS)
Holkje Barendrecht, Marlies; Viglione, Alberto; Kreibich, Heidi; Vorogushyn, Sergiy; Merz, Bruno; Blöschl, Günter
2018-06-01
Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool for representing the phenomena and feedbacks in human-flood systems. To date, these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can help us further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and estimate the parameters from these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values of our socio-hydrological flood model.
Efficient iris recognition by characterizing key local variations.
Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin
2004-06-01
Unlike other biometrics such as fingerprints and faces, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithms in the current literature.
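The exclusive-OR matching step lends itself to a compact sketch. The fragment below is illustrative only (the code length and noise model are hypothetical, not the paper's encoding): it scores two binary feature sequences by their normalized Hamming distance:

```python
import numpy as np

def xor_similarity(code_a, code_b):
    """Dissimilarity between two binary feature sequences as the fraction
    of disagreeing bits (normalized Hamming distance via exclusive OR)."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(a ^ b) / a.size

# Example with hypothetical 256-bit feature codes.
rng = np.random.default_rng(1)
probe = rng.integers(0, 2, 256)
same = probe.copy()
same[:10] ^= 1                        # small intra-class noise
other = rng.integers(0, 2, 256)
print(xor_similarity(probe, same))    # low      -> likely same iris
print(xor_similarity(probe, other))   # near 0.5 -> different iris
```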
Location of planar targets in three space from monocular images
NASA Technical Reports Server (NTRS)
Cornils, Karin; Goode, Plesent W.
1987-01-01
Many pieces of existing and proposed space hardware that would be targets of interest for a telerobot can be represented as planar or near-planar surfaces. Examples include the biostack modules on the Long Duration Exposure Facility, the panels on Solar Max, large diameter struts, and refueling receptacles. Robust and temporally efficient methods for locating such objects with sufficient accuracy are therefore worth developing. Two techniques that derive the orientation and location of an object from its monocular image are discussed and the results of experiments performed to determine translational and rotational accuracy are presented. Both the quadrangle projection and elastic matching techniques extract three-space information using a minimum of four identifiable target points and the principles of the perspective transformation. The selected points must describe a convex polygon whose geometric characteristics are prespecified in a data base. The rotational and translational accuracy of both techniques was tested at various ranges. This experiment is representative of the sensing requirements involved in a typical telerobot target acquisition task. Both techniques determined target location to an accuracy sufficient for consistent and efficient acquisition by the telerobot.
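The paper's quadrangle projection and elastic matching techniques are not reproduced here, but the underlying task - recovering pose from at least four identifiable coplanar points under the perspective transformation - is what modern Perspective-n-Point solvers do. A minimal sketch with OpenCV, in which the target geometry, pixel coordinates, and camera intrinsics are all hypothetical:

```python
import numpy as np
import cv2

# Known geometry of the planar target: four corner points (metres) in the
# target's own coordinate frame (z = 0 because the target is planar).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.3, 0.0, 0.0],
                       [0.3, 0.2, 0.0],
                       [0.0, 0.2, 0.0]], dtype=np.float64)

# Hypothetical pixel coordinates of those corners in a monocular image.
image_pts = np.array([[320.0, 240.0],
                      [420.0, 245.0],
                      [415.0, 310.0],
                      [318.0, 305.0]], dtype=np.float64)

# Hypothetical pinhole camera intrinsics, assumed distortion-free.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Perspective-n-Point: recovers rotation and translation of the target.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print(ok, rvec.ravel(), tvec.ravel())
```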
NASA Astrophysics Data System (ADS)
Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.
2015-03-01
A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparing detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal most similar to the measured signal. The time of the interaction is determined as the relative time between the measured signal and the most similar one in the library. The degree of similarity between measured and model signals is defined as the distance between the points representing the measured and model signals in the multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing model signals, enabling direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double-strip prototype of the J-PET detector and a 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both ends to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
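The nearest-neighbour core of this reconstruction is easy to sketch. The fragment below (a simplification, not the J-PET code; signal shapes, positions, and the sampling interval are hypothetical) picks the most similar library signal by Euclidean distance and estimates the relative time from the cross-correlation peak:

```python
import numpy as np

def match_hit(signal, library, lib_positions, dt):
    """Nearest-neighbour reconstruction of hit position and time.
    `library` is an (N, L) array of synchronized model signals sampled
    at interval dt; `lib_positions` holds the scintillation point of
    each model signal."""
    d = np.linalg.norm(library - signal, axis=1)   # distance in measurement space
    best = np.argmin(d)
    xcorr = np.correlate(signal, library[best], mode="full")
    lag = np.argmax(xcorr) - (len(signal) - 1)     # samples of relative shift
    return lib_positions[best], lag * dt

# Hypothetical example: 3 model signals of 100 samples at 0.1 ns spacing.
rng = np.random.default_rng(2)
lib = rng.normal(size=(3, 100))
meas = lib[1] + 0.05 * rng.normal(size=100)
print(match_hit(meas, lib, lib_positions=np.array([-10.0, 0.0, 10.0]), dt=0.1))
```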
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected according to the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior accurately and in its entirety. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection, and it accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points - the initial and final points - and describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens under launch conditions. Results show that the MFIM is an effective method for describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation; a unique solution is a multi-factor equation of product form. Each factor is of the form (1 - xi/xf)^ei, where xi is the initial value, usually at ambient conditions, xf the final value, and ei the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are evaluated either from test data or by technical judgment; a minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected under simulated space environmental conditions. Seven factors were required to represent the ejected foam, with the exponents evaluated from experimental data by the least-squares method. The equation can represent multiple factors in other problems as well; for example, evaluation of fatigue life, creep life, fracture toughness, and structural fracture, as well as optimization functions. The software is rather simple. Required inputs are the initial value, final value, and an exponent for each factor; the number of factors is open-ended. The value is updated as each factor is evaluated, and if a factor goes to zero, the previous value is used in the evaluation.
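The product form is simple to evaluate. A minimal sketch of the MFIM product (the factor values, final values, exponents, and the base scaling below are illustrative assumptions, not data from the study):

```python
import numpy as np

def mfim(x, x_final, exponents, base_value):
    """Multi-Factor Interaction Model sketch: the response is a base value
    scaled by a product of factors (1 - x_i/x_f)**e_i, one per factor.
    `x` holds current factor values, `x_final` the final values, and
    `exponents` the fitted exponents e_i."""
    x = np.asarray(x, dtype=float)
    xf = np.asarray(x_final, dtype=float)
    e = np.asarray(exponents, dtype=float)
    return base_value * np.prod((1.0 - x / xf) ** e)

# Hypothetical three-factor example (all numbers are illustrative).
print(mfim(x=[0.2, 0.5, 0.1], x_final=[1.0, 2.0, 0.5],
           exponents=[0.5, 1.2, 2.0], base_value=10.0))
```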
Underworld: What we set out to do, How far did we get, What did we Learn ? (Invited)
NASA Astrophysics Data System (ADS)
Moresi, L. N.
2013-12-01
Underworld was conceived as a tool for modelling 3D lithospheric deformation coupled with the underlying / surrounding mantle flow. The challenges involved were to find a method capable of representing the complicated, non-linear, history dependent rheology of the near surface as well as being able to model mantle convection, and, simultaneously, to be able to solve the numerical system efficiently. Underworld is a hybrid particle / mesh code reminiscent of the particle-in-cell techniques from the early 1960s. The Underworld team (*) was not the first to use this approach, nor the last, but the team does have considerable experience and much has been learned along the way. The use of a finite element method as the underlying "cell" in which the Lagrangian particles are embedded considerably reduces errors associated with mapping material properties to the cells. The particles are treated as moving quadrature points in computing the stiffness matrix integrals. The decoupling of deformation markers from computation points allows the use of structured meshes, efficient parallel decompositions, and simple-to-code geometric multigrid solution methods. For a 3D code such efficiencies are very important. The elegance of the method is that it can be completely described in a couple of sentences. However, there are some limitations: it is not obvious how to retain this elegance for unstructured or adaptive meshes, arbitrary element types are not sufficiently well integrated by the simple quadrature approach, and swarms of particles representing volumes are usually an inefficient representation of surfaces. This will be discussed! (*) Although not formally constituted, my co-conspirators in this exercise are listed as the Underworld team and I will reveal their true identities on the day.
Mapping extent and change in surface mines within the United States for 2001 to 2006
Soulard, Christopher E.; Acevedo, William; Stehman, Stephen V.; Parker, Owen P.
2016-01-01
A complete, spatially explicit dataset illustrating the 21st century mining footprint for the conterminous United States does not exist. To address this need, we developed a semi-automated procedure to map the country's mining footprint (30-m pixel) and establish a baseline to monitor changes in mine extent over time. The process uses mine seed points derived from the U.S. Energy Information Administration (EIA), U.S. Geological Survey (USGS) Mineral Resources Data System (MRDS), and USGS National Land Cover Dataset (NLCD) and recodes patches of barren land that meet a “distance to seed” requirement and a patch area requirement before mapping a pixel as mining. Seed points derived from EIA coal points, an edited MRDS point file, and 1992 NLCD mine points were used in three separate efforts using different distance and patch area parameters for each. The three products were then merged to create a 2001 map of moderate-to-large mines in the United States, which was subsequently manually edited to reduce omission and commission errors. This process was replicated using NLCD 2006 barren pixels as a base layer to create a 2006 mine map and a 2001–2006 mine change map focusing on areas with surface mine expansion. In 2001, 8,324 km2 of surface mines were mapped. The footprint increased to 9,181 km2 in 2006, representing a 10.3% increase over 5 years. These methods exhibit merit as a timely approach to generate wall-to-wall, spatially explicit maps representing the recent extent of a wide range of surface mining activities across the country.
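A sketch of the recoding step under stated assumptions: barren pixels are grouped into connected patches, and a patch is recoded as mining only if it meets a minimum area and lies within a maximum distance of a seed point. The thresholds and helper names below are hypothetical, not the study's calibrated parameters:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def recode_mines(barren, seed_rowcol, max_seed_dist=10, min_patch_px=20):
    """Recode barren-land patches as mining when a patch is large enough
    and lies close to a mine seed point. `barren` is a boolean raster,
    `seed_rowcol` an (M, 2) array of seed pixel coordinates; distances
    are in pixels."""
    labels, n = ndimage.label(barren)            # connected barren patches
    tree = cKDTree(seed_rowcol)
    mine = np.zeros_like(barren, dtype=bool)
    for patch_id in range(1, n + 1):
        px = np.argwhere(labels == patch_id)
        if len(px) < min_patch_px:               # patch-area requirement
            continue
        d, _ = tree.query(px)                    # distance-to-seed requirement
        if d.min() <= max_seed_dist:
            mine[labels == patch_id] = True
    return mine
```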
Fischer, Claudia; Voss, Andreas
2014-01-01
Hypertensive pregnancy disorders affect 6-8% of gestations, representing the most common complication of pregnancy for both mother and fetus. The aim of this study was to introduce a new three-dimensional coupling analysis method - the three-dimensional segmented Poincaré plot analysis (SPPA3) - to establish an effective approach for the detection of hypertensive pregnancy disorders and especially pre-eclampsia (PE). A cubic box model representing the three-dimensional phase space is subdivided into 12 × 12 × 12 equal predefined cubelets according to the range of the SD of each investigated signal. Additionally, we investigated the influence of rotating the cloud of points and of the size of the cubelets (adapted or predefined). For each cubelet, the probability of points occurring in it, relative to the total number of points, is calculated. In this study, 10 healthy non-pregnant women, 66 healthy pregnant women, and 56 hypertensive pregnant women (chronic hypertension, pregnancy-induced hypertension, and PE) were investigated. From all subjects, 30 min of beat-to-beat intervals (BBI), respiration (RESP), and non-invasive systolic (SBP) and diastolic blood pressure (DBP) were continuously recorded and analyzed. Non-rotated adapted SPPA3 discriminated best between hypertensive pregnancy disorders and PE in the coupling analysis of two or three different systems (BBI, DBP, RESP and BBI, SBP, DBP), reaching an accuracy of up to 82.9%. This could be increased to an accuracy of up to 91.2% by applying multivariate analysis differentiating between all pregnant women and PE. In conclusion, SPPA3 could be a useful method for enhanced risk stratification in pregnant women.
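The cubelet occupancy statistic reduces to a normalized 3D histogram. A minimal sketch (the signal values are simulated; the paper sizes cubelets from each signal's standard deviation, which this sketch simplifies to the observed range):

```python
import numpy as np

def cubelet_probabilities(x, y, z, n_bins=12):
    """Segmented Poincaré-style occupancy sketch: partition the 3D phase
    space spanned by three signals into n_bins**3 equal cubelets and
    return the fraction of points falling in each cubelet."""
    pts = np.column_stack([x, y, z])
    hist, _ = np.histogramdd(pts, bins=n_bins)
    return hist / len(pts)                       # single-cubelet probabilities

rng = np.random.default_rng(3)
bbi, sbp, dbp = rng.normal(size=(3, 1800))       # hypothetical 30-min beat series
p = cubelet_probabilities(bbi, sbp, dbp)
print(p.shape, p.sum())                          # (12, 12, 12) 1.0
```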
Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael
2016-01-01
It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, changes to the design become far more difficult to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs arising from such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal or meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task, as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, since the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry-standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combatting these challenges [5]. In order to bring more detailed information into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. To do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, a parametric polynomial equation representing a tool. A surrogate model must be informed by data from the tool, with enough points to represent the solution space for the chosen number of variables at an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space, maximizing the information gained on the design space while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time via the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors.
The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues with the method and their solutions.
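The surrogate-modeling step described above amounts to fitting a parametric polynomial to DOE samples. A minimal sketch (not the multiPOST/POST pipeline; the design variables, sample count, and response function are hypothetical) fitting a quadratic response surface by least squares:

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(X):
    """Build a full quadratic basis (1, x_i, x_i*x_j) for surrogate fitting."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

# Hypothetical DOE: 50 sampled vehicle-parameter sets and a trajectory
# figure of merit computed for each by the analysis tool.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(50, 3))
y = 5 + 2*X[:, 0] - X[:, 1]**2 + 0.5*X[:, 0]*X[:, 2] + 0.01*rng.normal(size=50)

coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
y_hat = quadratic_features(X) @ coef
print("max abs fit error:", np.abs(y_hat - y).max())
```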
Reliability of the Wii Balance Board in kayak
Vando, Stefano; Laffaye, Guillaume; Masala, Daniele; Falese, Lavinia; Padulo, Johnny
2015-01-01
Background: the seat of the kayaker represents the principal contact point for expressing mechanical energy. Methods: we therefore investigated the reliability of Wii Balance Board measures in the kayak vs. on the ground. Results: the Bland-Altman test showed a low systematic bias on the ground (2.85%) and in the kayak (−2.13%), respectively, while the intra-class correlation coefficient was 0.996. Conclusion: the Wii Balance Board is useful for assessing postural sway in the kayak. PMID:25878987
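The Bland-Altman statistics reported here are straightforward to reproduce for any pair of measurement series. A minimal sketch with hypothetical paired readings (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement series:
    mean bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired centre-of-pressure readings (board vs reference).
board = np.array([10.2, 11.1, 9.8, 10.6, 10.9])
ref   = np.array([10.0, 11.0, 10.1, 10.4, 11.0])
print(bland_altman(board, ref))
```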
Methodological challenges to human medical study.
Zhong, Yixin; Liu, Baoyan; Qu, Hua; Xie, Qi
2014-09-01
With the transformation of the modern medical paradigm, medical studies are confronted with methodological challenges. By analyzing the two methodologies used in the study of physical matter systems and information systems, the article points out that traditional Chinese medicine (TCM), especially treatment based on syndrome differentiation, embodies an information-oriented methodological position, while western medicine represents a matter-oriented methodological position. It proposes a new way of thinking about combining TCM and western medicine by integrating the two methodologies.
Area collapse algorithm computing new curve of 2D geometric objects
NASA Astrophysics Data System (ADS)
Buczek, Michał Mateusz
2017-06-01
The processing of cartographic data demands human involvement. Up-to-date algorithms attempt to automate part of this process. The goal is to obtain a digital model, or additional information about the shape and topology of the input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of the input data, and it can be computed using algorithms such as medial axis extraction, skeletonization, erosion, thinning, area collapse, and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects - for example, a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon and has a linear plot without branches. The computational process is automatic and repeatable. The requirements on the input data are discussed. The author analyzes results based on the method of computing the ends of the output polygonal chains, and explores additional methods to improve the results. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.
NASA Astrophysics Data System (ADS)
Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Microarray technology has become one of the essential tools in the life sciences for observing gene expression levels, including the gene expression of patients with carcinoma. Carcinoma is a cancer that forms in epithelial tissue. These data can be analyzed, for example, to identify hereditary gene expression and to build classifiers that can improve the diagnosis of carcinoma. Microarray data usually come in high dimension, so most methods require long computing times for grouping. Therefore, this study uses the spectral clustering method, which reduces the dimension of the data and allows working with arbitrary objects. Spectral clustering is based on the spectral decomposition of a matrix representing the data as a graph. After the data dimensions are reduced, the data are partitioned. One well-known partitioning method is Partitioning Around Medoids (PAM), which minimizes the objective function by iteratively swapping non-medoid points with medoids until convergence. The objective of this research is to implement spectral clustering and the PAM partitioning algorithm to group 7457 carcinoma genes based on similarity values. The result of this study is two groups of carcinoma genes.
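A compact sketch of the two stages under stated assumptions (an RBF similarity matrix, a normalized-Laplacian embedding, and a naive swap-based PAM; the data are simulated, and nothing here is tuned for the 7457-gene set):

```python
import numpy as np
from scipy.spatial.distance import cdist

def spectral_embed(X, k, sigma=1.0):
    """Embed data into the k leading eigenvectors of a normalized graph
    Laplacian built from an RBF similarity matrix."""
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, :k]                                 # smallest eigenvalues first

def pam(X, k, n_iter=20, seed=0):
    """Minimal Partitioning Around Medoids: accept single medoid/non-medoid
    swaps that lower the total distance to the nearest medoid."""
    rng = np.random.default_rng(seed)
    D = cdist(X, X)
    medoids = rng.choice(len(X), k, replace=False)
    cost = D[:, medoids].min(axis=1).sum()
    for _ in range(n_iter):
        improved = False
        for i in range(k):
            for j in range(len(X)):
                if j in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = j
                c = D[:, trial].min(axis=1).sum()
                if c < cost:
                    medoids, cost, improved = trial, c, True
        if not improved:
            break
    return D[:, medoids].argmin(axis=1), medoids

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, .3, (30, 5)), rng.normal(2, .3, (30, 5))])
labels, _ = pam(spectral_embed(X, k=2), k=2)
print(np.bincount(labels))          # two groups of ~30
```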
On the improvement of blood sample collection at clinical laboratories
2014-01-01
Background: Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods: A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collection time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results: The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions: The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
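For illustration only, here is a toy genetic algorithm for the route-ordering core of such a problem. It uses tournament selection and swap mutation on a single vehicle's visiting order, and deliberately omits the paper's time-window and capacity constraints; all sizes and coordinates are hypothetical:

```python
import numpy as np

def route_cost(route, dist):
    """Total travel cost of visiting collection points in the given order,
    starting and ending at the laboratory (index 0)."""
    path = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def ga_route(dist, pop=60, gens=300, seed=0):
    """Tiny genetic algorithm for collection-route ordering: tournament
    selection plus swap mutation, keeping the best `pop` individuals."""
    rng = np.random.default_rng(seed)
    n = len(dist) - 1                      # collection points 1..n
    P = [rng.permutation(np.arange(1, n + 1)) for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a, b = rng.choice(pop, 2, replace=False)
            parent = min(P[a], P[b], key=lambda r: route_cost(r, dist))
            child = parent.copy()
            i, j = rng.choice(n, 2, replace=False)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        P = sorted(P + children, key=lambda r: route_cost(r, dist))[:pop]
    return P[0], route_cost(P[0], dist)

rng = np.random.default_rng(6)
pts = rng.uniform(0, 10, (8, 2))           # lab + 7 hypothetical points
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
print(ga_route(dist))
```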
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.
Point geospatial dataset representing locations of NPDES outfalls/dischargers for facilities which generally represent the site of the discharge. NPDES (National Pollution Discharge Elimination System) is an EPA permit program that regulates direct discharges from treated waste water that is discharged into waters of the US. Facilities are issued NPDES permits regulating their discharge as required by the Clean Water Act. A facility may have one or more dischargers. The location represents the discharge point of a discrete conveyance such as a pipe or man made ditch.
IA and PA network-based computation of coordinating combat behaviors in the military MAS
NASA Astrophysics Data System (ADS)
Xia, Zuxun; Fang, Huijia
2004-09-01
In a military multi-agent system, every agent needs to analyze the dependency and temporal relations among tasks or combat behaviors in order to work out its plans and obtain correct behavior sequences; this guarantees good coordination, avoids unexpected damage, and guards against squandering the chance of winning a battle through incorrect scheduling and conflicts. In this paper, an IA and PA network based computation of coordinating combat behaviors is put forward, with particular emphasis on using a 5x5 matrix to represent and compute the temporal binary relation (between two interval-events, between two point-events, or between one interval-event and one point-event); this matrix method makes the coordination computation more convenient than before.
Objective assessment of image quality. IV. Application to adaptive optics
Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher
2008-01-01
The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
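The Hotelling observer named above has a closed linear form: its template is the inverse covariance applied to the mean difference between signal-present and signal-absent images. A minimal sketch on simulated image ensembles (the regularization term and all data are illustrative assumptions):

```python
import numpy as np

def hotelling_observer(signal_imgs, noise_imgs):
    """Optimal linear discriminant for signal detection: template
    w = K^{-1} (mean_signal - mean_noise), where K is the average
    covariance of the two image classes (images flattened to vectors)."""
    S = signal_imgs.reshape(len(signal_imgs), -1)
    N = noise_imgs.reshape(len(noise_imgs), -1)
    dmean = S.mean(0) - N.mean(0)
    K = 0.5 * (np.cov(S, rowvar=False) + np.cov(N, rowvar=False))
    w = np.linalg.solve(K + 1e-6 * np.eye(K.shape[0]), dmean)  # regularized
    snr2 = dmean @ w                     # Hotelling detectability (SNR^2)
    return w, np.sqrt(snr2)

rng = np.random.default_rng(7)
noise = rng.normal(size=(500, 8, 8))
signal = rng.normal(size=(500, 8, 8)) + 0.3    # hypothetical uniform signal
w, snr = hotelling_observer(signal, noise)
print("Hotelling SNR:", snr)
```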
Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-il
2017-01-01
This paper presents a vehicle autonomous localization method for local areas of coal mine tunnels based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on the UPC-A code. The global coordinates of the upper-left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper-left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments in a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point's plane and vertical coordinates, meeting autonomous unmanned vehicle positioning requirements in local areas of coal mine tunnels. PMID:28141829
Decision-making styles of business and industry: Five insights to improving your sales success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramson, R.M.
1996-04-01
Corporations, like people, have styles - even personalities - that in varied but vital ways affect every decision made at every level in the organization. This report describes five key organizational styles, methods for assessing which style a utility sales representative might encounter, and practical strategies that increase the odds of proposal acceptance. Each style is defined by its (1) goals and priorities, (2) administrative/communicative network, (3) key players, (4) events or circumstances prompting decisions, and (5) typical decision barriers, biases, and selling points. Written in a highly readable style, this report provides tools that will help utility representatives proactively overcome organizational sales barriers, shorten selling cycles, reduce sales expense, increase revenue, and enhance customer loyalty.
Kim, Sangdan; Han, Suhee
2010-01-01
Most of the literature on designing urban non-point-source management systems assumes that precipitation event depths follow the one-parameter exponential probability density function, in order to reduce the mathematical complexity of the derivation process. However, how the rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression representing the probability distribution of rainfall depths is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to account for the nonlinearity of the rainfall-runoff relation and, at the same time, to obtain a more verifiable and representative design curve when applied to urban drainage areas with complicated land-use characteristics, such as those in Korea. The result of developing the stormwater-capture curve from rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
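The NRCS curve number step referenced above is a standard computation. A minimal sketch in metric units (the storm depth and curve number in the example are illustrative):

```python
def scs_runoff_mm(p_mm, cn):
    """NRCS (SCS) curve number runoff: potential retention S = 25400/CN - 254
    (mm), initial abstraction Ia = 0.2*S, and runoff
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 50 mm storm on a largely impervious urban catchment (CN = 90).
print(scs_runoff_mm(50.0, 90))   # about 27 mm of runoff
```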
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images - those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment of an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M - one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
NASA Astrophysics Data System (ADS)
Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun
2017-08-01
Airborne LiDAR point clouds representing a forest contain 3D data from which the vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure, which separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing the vertical distributions of LiDAR points within overlapping locales. The procedure makes no a priori assumptions about the shape and size of the tree crowns and can, independently of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved the detection of understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of the vertical stratification of the canopy showed that the point density of understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow further improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
Evaluation of Rock Surface Characterization by Means of Temperature Distribution
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.
2017-12-01
Rocks occur in many different types formed over many years. Close-range photogrammetry is a widely used technique, often preferred over conventional methods. In this method, overlapping photographs are the basic source of the point cloud, which in turn is the main data source for a 3D model and offers analysts the possibility of automation. Due to the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, caused by weathering or occurring naturally, make it possible to produce a sufficiently dense point cloud from the photographs; objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using the temperature distribution to interpret rock surface roughness, one of the parameters representing the surface, was investigated. A small rock, 3 m x 1 m in size, located at the ITU Ayazaga Campus was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation, using the temperature values of control points marked on the object, which were also used in the 3D model. The 3D object model was created from terrestrial photographs and 12 coordinated control points marked on the object. The temperature values of the control points were measured with an infrared thermometer and used as the basic data source to create the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, the 3D object model was produced by means of terrestrial thermal photographs: several terrestrial photographs were taken with a thermal camera, and a 3D object model showing the temperature distribution was created. The temperature distributions in both applications are almost identical in position. Areas on the rock surface where roughness values are higher than in the surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.
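The first method's interpolation step can be illustrated with a simple inverse-distance-weighting sketch (the control point coordinates and temperatures below are hypothetical, not the study's measurements):

```python
import numpy as np

def idw(xy_known, temps, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of control point
    temperatures onto query locations (a simple stand-in for the
    interpolation used to build the choropleth map)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # avoid division by zero
    w = 1.0 / d**power
    return (w @ temps) / w.sum(axis=1)

# Hypothetical control points on the rock face (x, y in metres; deg C).
pts = np.array([[0.2, 0.1], [1.5, 0.4], [2.8, 0.2], [1.0, 0.9]])
t = np.array([32.0, 35.1, 33.4, 37.2])
grid = np.array([[1.0, 0.5], [2.0, 0.3]])
print(idw(pts, t, grid))
```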
Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory
NASA Astrophysics Data System (ADS)
Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.
2017-09-01
Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the users' preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, thereby removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
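Allan variance itself is a standard computation. A minimal non-overlapping sketch (the sampling rate, averaging times, and simulated gyro noise are illustrative, not the MotionPakII or AHRS300CC data):

```python
import numpy as np

def allan_variance(x, fs, taus):
    """Non-overlapping Allan variance of a rate signal x sampled at fs Hz:
    for each averaging time tau, average the data in bins of m = tau*fs
    samples and compute 0.5 * <(y_{k+1} - y_k)^2> over adjacent bins."""
    avar = []
    for tau in taus:
        m = int(round(tau * fs))
        n_bins = len(x) // m
        y = x[:n_bins * m].reshape(n_bins, m).mean(axis=1)   # bin averages
        avar.append(0.5 * np.mean(np.diff(y) ** 2))
    return np.array(avar)

rng = np.random.default_rng(8)
gyro = rng.normal(0, 0.05, 100_000)          # hypothetical white-noise rate
taus = [0.01, 0.1, 1.0, 10.0]
print(np.sqrt(allan_variance(gyro, fs=100.0, taus=taus)))  # Allan deviation
```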
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Precision and Accuracy of a Digital Impression Scanner in Full-Arch Implant Rehabilitation.
Pesce, Paolo; Pera, Francesco; Setti, Paolo; Menini, Maria
To evaluate the accuracy and precision of a digital scanner used to scan four implants positioned according to an immediate loading implant protocol and to assess the accuracy of an aluminum framework fabricated from a digital impression. Five master casts reproducing different edentulous maxillae with four tilted implants were used. Four scan bodies were screwed onto the low-profile abutments, and a digital intraoral scanner was used to perform five digital impressions of each master cast. To assess trueness, a metal framework of the best digital impression was produced with computer-aided design/computer-assisted manufacture (CAD/CAM) technology and passive fit was assessed with the Sheffield test. Gaps between the frameworks and the implant analogs were measured with a stereomicroscope. To assess precision, three-dimensional (3D) point cloud processing software was used to measure the deviations between the five digital impressions of each cast by producing a color map. The deviation values were grouped in three classes, and differences were assessed between class 2 (representing lower discrepancies) and the assembled classes 1 and 3 (representing the higher negative and positive discrepancies, respectively). The frameworks showed a mean gap of < 30 μm (range: 2 to 47 μm). A statistically significant difference was found between the two groups by the 3D point cloud software, with higher frequencies of points in class 2 than in grouped classes 1 and 3 (P < .001). Within the limits of this in vitro study, it appears that a digital impression may represent a reliable method for fabricating full-arch implant frameworks with good passive fit when tilted implants are present.
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
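One way to make the representation-quality idea concrete: the error of a sparse representation at a query point can be taken as the distance from that point to the convex hull of the retained points. Below is a minimal sketch of that distance computation (not the paper's algorithm; the function name and toy data are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_convex_hull(reps, y):
    """Euclidean distance from point y to the convex hull of the rows of
    reps: min ||reps.T @ lam - y|| s.t. lam >= 0, sum(lam) == 1."""
    k = reps.shape[0]
    res = minimize(
        lambda lam: np.sum((reps.T @ lam - y) ** 2),
        np.full(k, 1.0 / k),
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    return np.sqrt(res.fun)

# Toy check with 3 objective vectors in 2-D:
reps = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.8]])
print(dist_to_convex_hull(reps, np.array([0.5, 0.5])))  # ~0 (on the hull)
print(dist_to_convex_hull(reps, np.array([0.0, 0.0])))  # > 0 (off the hull)
```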
NASA Technical Reports Server (NTRS)
Kao, David
1999-01-01
The line integral convolution (LIC) technique has been known to be an effective tool for depicting flow patterns in a given vector field. There have been many extensions to make it run faster and reveal useful flow information such as velocity magnitude, motion, and direction. There are also extensions to unsteady flows and 3D vector fields. Surprisingly, none of these extensions automatically highlight flow features, which often represent the most important and interesting physical flow phenomena. In this sketch, a method for highlighting flow direction in LIC images is presented. The method gives an intuitive impression of flow direction in the given vector field and automatically reveals saddle points in the flow.
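For context, the basic LIC construction that such extensions build on convolves a noise texture along streamlines of the vector field. A minimal, unoptimized sketch, assuming a regular grid and fixed-step Euler advection:

```python
import numpy as np

def lic(u, v, texture, L=10, h=0.5):
    """Basic line integral convolution: each output pixel averages a noise
    texture along the streamline through it, traced L fixed Euler steps
    forward and backward. Reference sketch only."""
    ny, nx = texture.shape
    out = np.zeros_like(texture)
    for j in range(ny):
        for i in range(nx):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):                 # forward, then backward
                x, y = float(i), float(j)
                for _ in range(L):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < nx and 0 <= iy < ny):
                        break
                    acc += texture[iy, ix]
                    n += 1
                    speed = np.hypot(u[iy, ix], v[iy, ix])
                    if speed < 1e-12:
                        break
                    x += sign * h * u[iy, ix] / speed
                    y += sign * h * v[iy, ix] / speed
            out[j, i] = acc / max(n, 1)
    return out

# Circular flow about the image centre, visualised over white noise:
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
img = lic(-(yy - ny / 2.0), xx - nx / 2.0, np.random.rand(ny, nx))
```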
Cavity master equation for the continuous time dynamics of discrete-spin models.
Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
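The mechanism behind this is a simple identity: a one-hidden-layer network with exponential activations is automatically a sum over hidden units of products of one-dimensional factors, since exp(Σ_d w_kd x_d + b_k) = exp(b_k) Π_d exp(w_kd x_d). A small numerical check with hypothetical weights:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 5, 3                       # 5 hidden units, 3 coordinates
w, b, c = rng.normal(size=(K, D)), rng.normal(size=K), rng.normal(size=K)
x = rng.normal(size=D)

net = np.sum(c * np.exp(w @ x + b))                           # network output
sop = np.sum(c * np.exp(b) * np.prod(np.exp(w * x), axis=1))  # sum of products
assert np.isclose(net, sop)       # identical by the exponential identity
```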
Real-time, interactive animation of deformable two- and three-dimensional objects
Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.
2003-06-03
A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
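For illustration, a plain implicit Euler velocity update for a linearised mass-spring system looks as follows; this is a generic sketch, not the patent's modified scheme or its angular-momentum correction step:

```python
import numpy as np

def implicit_euler_step(x, v, M, K, f_ext, h):
    """One implicit Euler step for the linearised system M dv/dt = -K x + f:
    solve (M + h^2 K) v_new = M v + h (f - K x), then x_new = x + h v_new.
    Stable for large steps, which is the point of implicit integration."""
    v_new = np.linalg.solve(M + h * h * K, M @ v + h * (f_ext - K @ x))
    return x + h * v_new, v_new

# Two unit masses coupled by unit springs to each other and to walls:
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
x, v = np.array([0.1, 0.0]), np.zeros(2)
for _ in range(100):
    x, v = implicit_euler_step(x, v, M, K, np.zeros(2), h=0.1)
```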
The Sedov Blast Wave as a Radial Piston Verification Test
Pederson, Clark; Brown, Bart; Morgan, Nathaniel
2016-06-22
The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies, semi-quantitative assessment of lung perfusion, etc. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as candidate reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of the distances from the optimal point x to the lines connecting corresponding points in In. and Ex. Using the non-linear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis based on vector analysis and non-linear optimization shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model for explaining lung expansion during breathing.
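The optimization described here can be sketched directly: given corresponding landmark pairs, minimise the summed distances from a candidate point to the lines through each pair. A hedged reconstruction (landmark data and the solver choice are illustrative, not the study's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_expansion_point(p_in, p_ex):
    """Minimise the summed distances from a point x to the lines through
    corresponding landmark pairs (p_in[i], p_ex[i]); arrays are (N, 3)."""
    d = p_ex - p_in
    d = d / np.linalg.norm(d, axis=1, keepdims=True)    # unit line directions

    def cost(x):
        r = x - p_in
        perp = r - np.sum(r * d, axis=1, keepdims=True) * d
        return np.linalg.norm(perp, axis=1).sum()       # sum of point-line distances

    return minimize(cost, p_in.mean(axis=0), method="Nelder-Mead").x

# Hypothetical landmarks on lines radiating from (1, 2, 3): recovered below.
rng = np.random.default_rng(1)
centre, dirs = np.array([1.0, 2.0, 3.0]), rng.normal(size=(20, 3))
print(optimal_expansion_point(centre + 0.5 * dirs, centre + 1.5 * dirs))
```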
NASA Astrophysics Data System (ADS)
Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping
2017-10-01
A novel method for daily temperature and precipitation downscaling is proposed in this study which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of high resolution temperature of the NCEP climate forecast system reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large- and local-scale processes represented by the gain matrix will change accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high resolution details without changing large scale characteristics. It results in much lower absolute errors in local scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. The downscaled ensemble projects significant warming with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s in Ontario, respectively. Cooling degree days and hot days will significantly increase over southern Ontario, and heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario and heavy precipitation events will increase as well. These results are consistent with conclusions of many other studies in the literature.
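The core analysis step on which an EnOI scheme rests can be sketched generically as follows (a textbook update, not the paper's per-calendar-day gain construction; all variable names and dimensions are placeholders):

```python
import numpy as np

def enoi_analysis(xb, A, H, y, R, alpha=1.0):
    """Generic Ensemble Optimal Interpolation update: xa = xb + K (y - H xb),
    with the gain K built from a static ensemble of anomalies.
    xb (n,) background; A (n, m) anomaly matrix (ensemble minus its mean);
    H (p, n) observation operator; y (p,) observations; R (p, p) obs errors;
    alpha scales the static covariance B = alpha * A A^T / (m - 1)."""
    m = A.shape[1]
    B = alpha * (A @ A.T) / (m - 1)
    S = H @ B @ H.T + R                      # innovation covariance
    return xb + B @ H.T @ np.linalg.solve(S, y - H @ xb)

# Hypothetical toy setup: 50 grid points, 30 anomaly members, 5 observations.
rng = np.random.default_rng(5)
n, m, p = 50, 30, 5
A = rng.normal(size=(n, m)); A -= A.mean(axis=1, keepdims=True)
H = np.zeros((p, n)); H[np.arange(p), rng.choice(n, p, replace=False)] = 1.0
xa = enoi_analysis(np.zeros(n), A, H, rng.normal(size=p), 0.1 * np.eye(p))
```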
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
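The heart of NSMC is the projection of random parameter differences onto the null space of the model's sensitivity (Jacobian) matrix, so the perturbed field leaves the linearised fit to observations unchanged. A minimal sketch with hypothetical dimensions:

```python
import numpy as np

def nsmc_parameter_field(p_cal, p_rand, J, tol=1e-8):
    """Project the difference between a random parameter set and the
    calibrated set onto the null space of the Jacobian J (n_obs x n_par),
    so the perturbed parameters leave the linearised fit unchanged."""
    s, Vt = np.linalg.svd(J, full_matrices=True)[1:]
    rank = int(np.sum(s > tol * s[0]))     # dimensionality of the solution space
    Vnull = Vt[rank:].T                    # null-space basis vectors
    return p_cal + Vnull @ (Vnull.T @ (p_rand - p_cal))

# Hypothetical: 3 observations constraining 6 parameters.
rng = np.random.default_rng(2)
J = rng.normal(size=(3, 6))
p_cal, p_rand = rng.normal(size=6), rng.normal(size=6)
p_new = nsmc_parameter_field(p_cal, p_rand, J)
print(np.allclose(J @ p_new, J @ p_cal))   # True: same simulated observations
```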
A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor.
Madrigal, Carlos A; Branch, John W; Restrepo, Alejandro; Mery, Domingo
2017-10-02
Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and, by a support vector machine, the defects are recognized. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud in primitives, reporting an accuracy of 95%, which is higher than for other state-of-the-art descriptors. The rate of recognition of defects was close to 94%.
Ultrasonic thickness measuring and imaging system and method
Bylenok, Paul J.; Patmos, William M.; Wagner, Thomas A.; Martin, Francis H.
1992-08-04
An ultrasonic thickness measuring and imaging system uses an ultrasonic focused beam probe for measuring thickness of an object, such as a wall of a tube, a computer for controlling movement of the probe in a scanning pattern within the tube and processing an analog signal produced by the probe which is proportional to the tube wall thickness in the scanning pattern, and a line scan recorder for producing a record of the tube wall thicknesses measured by the probe in the scanning pattern. The probe is moved in the scanning pattern to sequentially scan circumferentially the interior tube wall at spaced apart adjacent axial locations. The computer processes the analog signal by converting it to a digital signal and then quantifies the digital signal into a multiplicity of thickness points with each falling in one of a plurality of thickness ranges corresponding to one of a plurality of shades of grey. From the multiplicity of quantified thickness points, a line scan recorder connected to the computer generates a pictorial map of tube wall thicknesses with each quantified thickness point thus being obtained from a minute area, e.g. 0.010 inch by 0.010 inch, of tube wall and representing one pixel of the pictorial map. In the pictorial map of tube wall thicknesses, the pixels represent different wall thicknesses having different shades of grey.
2011-01-01
Background: The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of the Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results: A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions: The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
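The Voronoi distance can be approximated without constructing the diagram's edges explicitly, by counting how often the nearest population point changes along the segment. A sketch (the sampling density and data are illustrative, and this is an approximation of the exact boundary count):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_distance(a, b, population_xy, n_samples=1000):
    """Approximate the number of Voronoi cell boundaries crossed by segment
    ab: count changes of the nearest population point along densely sampled
    points of the segment."""
    tree = cKDTree(population_xy)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    seg = (1 - t) * np.asarray(a) + t * np.asarray(b)
    _, idx = tree.query(seg)
    return int(np.sum(idx[1:] != idx[:-1]))    # cell changes ~ boundaries crossed

# Denser population => larger Voronoi distance for the same Euclidean length.
rng = np.random.default_rng(3)
pop = rng.random((500, 2))
print(voronoi_distance([0.1, 0.1], [0.9, 0.9], pop))
```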
NASA Astrophysics Data System (ADS)
Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.
2014-06-01
This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents displacements of structure under an excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering this novel geometrical viewpoint, an equation called kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations and a sensitivity-based algorithm for solving KPE is proposed accordingly. The method is evaluated via three case studies under periodic excitations, which confirm the efficiency of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Gregory H.
2003-08-06
In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory conditions and military ballistics, dramatic differences are seen between the exact and approximate Riemann solution. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.
Slicing Method for curved façade and window extraction from point clouds
NASA Astrophysics Data System (ADS)
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along the façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice counts were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets of up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
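Within one slice, the opening-detection idea reduces to finding empty intervals of a one-dimensional projection. A simplified sketch (the threshold values are assumptions, not the paper's calibrated slice counts):

```python
import numpy as np

def opening_intervals(slice_pts, axis=0, bin_size=0.05, min_gap=0.3):
    """Project the points of one façade slice onto a principal axis and
    report empty histogram intervals as candidate openings (windows/doors).
    bin_size and min_gap (metres) are illustrative thresholds."""
    u = slice_pts[:, axis]
    edges = np.arange(u.min(), u.max() + bin_size, bin_size)
    counts, _ = np.histogram(u, bins=edges)
    openings, start = [], None
    for i, c in enumerate(counts):
        if c == 0 and start is None:
            start = edges[i]                        # gap begins
        elif c > 0 and start is not None:
            if edges[i] - start >= min_gap:
                openings.append((start, edges[i]))  # gap wide enough: opening
            start = None
    return openings
```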
A Voronoi interior adjacency-based approach for generating a contour tree
NASA Astrophysics Data System (ADS)
Chen, Jun; Qiao, Chaofei; Zhao, Renliang
2004-05-01
A contour tree is a good graphical tool for representing the spatial relations of contour lines and has found many applications in map generalization, map annotation, terrain analysis, etc. A new approach for generating contour trees by introducing a Voronoi-based interior adjacency set concept is proposed in this paper. The immediate interior adjacency set is employed to identify all of the children contours of each contour without contour elevations. It has advantages over existing methods such as the point-in-polygon method and the region growing-based method. This new approach can be used for spatial data mining and knowledge discovering, such as the automatic extraction of terrain features and construction of multi-resolution digital elevation model.
A New Application of the Channel Packet Method for Low Energy 1-D Elastic Scattering
2006-09-01
matter. On a cosmic scale, we wonder if a collision between an asteroid and Earth led to the extinction of the dinosaurs. Collisions are important... in Figure 12. In an effort to keep the computation time reasonable, ... was chosen to be ... for this simulation. In order to represent the intermediate... linear regions joined by the two labeled points. However, based on Figure 13 the two potential functions are reasonably close and so one would not
Shuttle Tethered Aerothermodynamics Research Facility (STARFAC) Instrumentation Requirements
NASA Technical Reports Server (NTRS)
Wood, George M.; Siemers, Paul M.; Carlomagno, Giovanni M.; Hoffman, John
1986-01-01
The instrumentation requirements for the Shuttle Tethered Aerothermodynamic Research Facility (STARFAC) are presented. The typical physical properties of the terrestrial atmosphere are given, along with representative atmospheric daytime ion concentrations and an equilibrium and nonequilibrium gas property comparison at a point away from a wall. STARFAC science and engineering measurements are given, as is the TSS free stream gas analysis. The potential nonintrusive measurement techniques for hypersonic boundary layer research are outlined, along with the quantitative physical measurement methods for aerothermodynamic studies.
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters) and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
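The modified Hausdorff distance underlying such a fitness function is compact to state; below is a sketch of the standard Dubuisson-Jain form (the paper uses its own modified variant, whose details are not reproduced here):

```python
import numpy as np
from scipy.spatial import cKDTree

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson & Jain): the maximum of the two
    directed mean nearest-neighbour distances between point sets A and B."""
    dAB = cKDTree(B).query(A)[0].mean()   # mean distance from A to B
    dBA = cKDTree(A).query(B)[0].mean()   # mean distance from B to A
    return max(dAB, dBA)
```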
NASA Technical Reports Server (NTRS)
Lawson, John W.; Bauschlicher, Charles W.; Daw, Murray
2011-01-01
Refractory materials such as metallic borides, often considered as ultra high temperature ceramics (UHTC), are characterized by high melting point, high hardness, and good chemical inertness. These materials have many applications which require high temperature materials that can operate with no or limited oxidation. Ab initio, first-principles methods are the most accurate modeling approaches available and represent a parameter-free description of the material based on the quantum mechanical equations. Using these methods, many of the intrinsic properties of these materials can be obtained. We performed ab initio calculations based on density functional theory for the UHTC materials ZrB2 and HfB2. Computational results are presented for structural information (lattice constants, bond lengths, etc.), electronic structure (bonding motifs, densities of states, band structure, etc.), thermal quantities (phonon spectra, phonon densities of states, specific heat), as well as information about point defects such as vacancy and antisite formation energies.
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow velocity with a point source above the plate and embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Kun; Gao, Ziwei, E-mail: zwgao@snnu.edu.cn; Da, Min
Highlights: • Highly oriented and well-defined ZnO urchin-like crystals were successfully fabricated by a facile and effective hydrothermal method. • Polyvinylpyrrolidone- and hydrogen peroxide-assisted synthesis of ZnO could optimize its crystalline quality; the obtained ZnO had smooth surfaces, radially grown morphology, obvious crystal edges and decreased defects. • The physicochemical properties of the samples were studied by analysis of their structure, morphology, surface and optical properties. • This study presents a multistep mechanism based on [Zn(OH)4]^2− growth units for the formation of such urchin-like structures. -- Abstract: Urchin-like ZnO microcrystals with high crystallinity, decomposed directly from [Zn(OH)4]^2−, were obtained via a hydrothermal method. The morphology, particle size, crystalline structure and fluorescence of the as-prepared ZnO were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD) and photoluminescence (PL) analyses. The results demonstrated that the urchin-like ZnO crystals with wurtzite structure had a narrow size distribution, which could be adjusted in the range of 30-80 μm by varying the reaction time. A broad visible light emission peak was also observed in the PL spectra of the synthesized ZnO products. A multistep growth process explaining how such a structure forms was proposed.
Chest wall segmentation in automated 3D breast ultrasound scans.
Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico
2013-12-01
In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices in the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
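Fitting the cylinder model to detected rib-surface points is a small nonlinear least-squares problem; one possible parametrisation is sketched below (not necessarily the authors' exact formulation; the initial values are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder(pts):
    """Fit an (infinite) cylinder to 3-D points. Parameters: a point on the
    axis p0, axis direction from spherical angles (theta, phi), radius r.
    Residuals are point-to-axis distances minus r. The position of p0 along
    the axis is a gauge freedom; scipy's trust-region solver tolerates it."""
    def residuals(q):
        p0, (theta, phi, r) = q[:3], q[3:]
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])                      # unit axis direction
        dist = np.linalg.norm(np.cross(pts - p0, d), axis=1)
        return dist - r

    q0 = np.concatenate([pts.mean(axis=0), [np.pi / 2, 0.0, 50.0]])
    return least_squares(residuals, q0).x
```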
Methods for Geometric Data Validation of 3d City Models
NASA Astrophysics Data System (ADS)
Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2015-12-01
Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, these might represent gaps between bounding polygons of the solids, overlaps, or violations of 2-manifoldness. Not least due to the floating-point representation of digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. Effects of different tolerance values and their handling are discussed; recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences for the deployment fields of the validated data set.
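As an example of such a rule with a tolerance, a planarity check can fit the best plane through a polygon's bounding ring via SVD and compare the largest orthogonal deviation against a threshold. A sketch (the 1 cm tolerance is an assumed value, not a CityDoctor default):

```python
import numpy as np

def planarity_check(ring_pts, tol=0.01):
    """Fit the best plane through a polygon's ring vertices (N x 3 array)
    via SVD and compare the largest orthogonal deviation with a tolerance."""
    c = ring_pts.mean(axis=0)
    normal = np.linalg.svd(ring_pts - c)[2][-1]   # smallest-variance direction
    dev = np.abs((ring_pts - c) @ normal)         # orthogonal deviations
    return dev.max() <= tol, float(dev.max())
```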
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
NASA Astrophysics Data System (ADS)
Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing
2009-07-01
Infrared images of sea backgrounds are notorious for their low signal-to-noise ratio; therefore, target recognition in such infrared images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm searches the salient regions automatically, represents them by a set of winner points, and displays the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation; a labeling operation is then applied selectively as required. Making use of the labeled information, we obtain the positional information of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the algorithm presented in this paper.
A New Paradigm for Matching UAV- and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching under large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, performs poorly or even fails in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches when applying the ratio test afterwards. Therefore, a new feature matching method is provided that overcomes these problems and yields thousands of matches through a novel feature point detection strategy, applying a one-to-many matching scheme and substituting the ratio test with geometric constraints to achieve geometrically correct matches in repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
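A rough approximation of this pipeline can be assembled from standard OpenCV components: keep several nearest-neighbour candidates per descriptor instead of applying the ratio test, and let RANSAC epipolar-geometry estimation do the rejection. This is a sketch of the idea, not the authors' detector or matcher; the file names are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("uav.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
candidates = matcher.knnMatch(des1, des2, k=3)        # one-to-many candidates

pts1 = np.float32([kp1[m.queryIdx].pt for c in candidates for m in c])
pts2 = np.float32([kp2[m.trainIdx].pt for c in candidates for m in c])

# Epipolar geometry as the arbiter: F-matrix RANSAC inliers replace the ratio test.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
good1, good2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]
print(len(good1), "geometrically verified matches")
```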
Collapsing lattice animals and lattice trees in two dimensions
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Grassberger, Peter
2005-06-01
We present high statistics simulations of weighted lattice bond animals and lattice trees on the square lattice, with fugacities for each non-bonded contact and for each bond between two neighbouring monomers. The simulations are performed using a newly developed sequential sampling method with resampling, very similar to the pruned-enriched Rosenbluth method (PERM) used for linear chain polymers. We determine with high precision the line of second-order transitions from an extended to a collapsed phase in the resulting two-dimensional phase diagram. This line includes critical bond percolation as a multicritical point, and we verify that this point divides the line into different universality classes. One of them corresponds to the collapse driven by contacts and includes the collapse of (weakly embeddable) trees. There is some evidence that the other is subdivided again into two parts with different universality classes. One of these (at the far side from collapsing trees) is bond driven and is represented by the Derrida-Herrmann model of animals having bonds only (no contacts). Between the critical percolation point and this bond-driven collapse seems to be an intermediate regime, whose other end point is a multicritical point P* where a transition line between two collapsed phases (one bond driven and the other contact driven) sparks off. This point P* seems to be attractive (in the renormalization group sense) from the side of the intermediate regime, so there are four universality classes on the transition line (collapsing trees, critical percolation, intermediate regime, and Derrida-Herrmann). We obtain very precise estimates for all critical exponents for collapsing trees. It is already harder to estimate the critical exponents for the intermediate regime. Finally, it is very difficult to obtain with our method good estimates of the critical parameters of the Derrida-Herrmann universality class. As regards the bond-driven to contact-driven transition in the collapsed phase, we have some evidence for its existence and rough location, but no precise estimates of critical exponents.
A new approach in the derivation of relativistic variation of mass with speed
NASA Astrophysics Data System (ADS)
Dikshit, Biswaranjan
2015-05-01
The expression for relativistic variation of mass with speed has been derived in the literature in the following ways: by considering the principles of electrodynamics; by considering elastic collision between two identical particles in which momentum and energy are conserved; or by more advanced methods such as the Lagrangian approach. However, in this paper, the same expression is derived simply by applying the law of conservation of momentum to the motion of a single particle that is subjected to a force (which may be non-electromagnetic) at some point in its trajectory. The advantage of this method is that, in addition to being simple, we can observe how the mass is increased from rest mass to relativistic mass when the speed is changed from 0 to a value of v, as only a single particle is involved in the analysis. This is in contrast to the two particles considered in most text books, in which one represents rest mass and the other represents relativistic mass.
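For reference, the standard result such a derivation arrives at (stated here without the paper's intermediate steps):

```latex
% Momentum conservation with a force F applied to a single particle,
%   F = dp/dt,  p = m(v)\,v,
% leads to the familiar mass-speed relation
\[
  m(v) \;=\; \frac{m_0}{\sqrt{1 - v^2/c^2}} \;=\; \gamma\, m_0 ,
\]
% so m grows from the rest mass m_0 at v = 0 and diverges as v -> c.
```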
Noise suppression due to annulus shaping of conventional coaxial nozzle
NASA Technical Reports Server (NTRS)
Vonglahn, U.; Goodykoontz, J.
1980-01-01
A method which shows that increasing the annulus width of a conventional coaxial nozzle with constant bypass velocity will lower the noise level is described. The method entails modifying a concentric coaxial nozzle to provide an eccentric outer stream annulus while maintaining approximately the same through flow as that for the original concentric bypass nozzle. Acoustical tests to determine the noise generating characteristics of the nozzle over a range of flow conditions are described. The tests involved sequentially analyzing the noise signals and digitally recording the 1/3 octave band sound pressure levels. The measurements were made in a plane passing through the minimum and maximum annulus width points, as well as at 90 degrees in this plane, by rotating the outer nozzle about its axis. Representative measured spectral data in the flyover plane for the concentric nozzle obtained at model scale are discussed. Representative spectra for several engine cycles are presented for both the eccentric and concentric nozzles at engine size.
NASA Astrophysics Data System (ADS)
Ovidiu, Avram; Rusu, Emil; Maftei, Raluca-Mihaela; Ulmeanu, Antonio; Scutelnicu, Ioan; Filipciuc, Constantina; Tudor, Elena
2017-12-01
Electrometry is the most frequently applied geophysical method for examining dynamical phenomena related to the presence of massive salt, owing to the resistivity contrasts between salt, salt breccia, and the covering geological formations. On vertical resistivity sections obtained with VES devices, these three compartments are clearly differentiated: high resistivity for the massive salt, very low for the salt breccia, and variable for the covering geological formations. Where the land surface is inclined, shallow formations move gravitationally over the back of the salt, producing a landslide. Landslide monitoring involves periodically repeated measurements of geoelectrical profiles on a grid covering the slipping surface, under the same conditions (climate, electrode positions, instrument and measurement parameters). The purpose of monitoring landslides in the Slanic Prahova area was to detect changes in the resistivity distribution in the upper part of the subsoil between the profiles measured in 2014 and 2015. The measurement grid includes several cross sections that are representative from the landslide-susceptibility point of view. The results are represented graphically by the distribution of topographic and resistivity differences between the two sets of geophysical measurements.
Real-time Probabilistic Covariance Tracking with Efficient Model Update
2012-05-01
...feature points inside a given rectangular region R of F. The region R is represented by the d×d covariance matrix of the feature points, C = (1/(N−1)) Σ_{i=1}^{N} (f_i − μ)(f_i − μ)^T, where N is the number of pixels in the region R and μ is the mean of the feature points. The element (i, j) of C represents the covariance between the i-th and j-th features.
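A minimal sketch of the region covariance descriptor with the commonly used per-pixel feature vector f = (x, y, I, |Ix|, |Iy|) (this particular feature choice is an assumption; the report may use a different feature set):

```python
import numpy as np

def covariance_descriptor(patch):
    """Region covariance descriptor: per-pixel features f = (x, y, I, |Ix|, |Iy|)
    inside a rectangular region, summarised by their d x d covariance matrix
    C = 1/(N-1) * sum_i (f_i - mu)(f_i - mu)^T."""
    ny, nx = patch.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    Iy, Ix = np.gradient(patch.astype(float))
    F = np.stack([xx, yy, patch, np.abs(Ix), np.abs(Iy)]).reshape(5, -1)
    return np.cov(F)      # np.cov uses the 1/(N-1) normalisation by default

# 5x5 descriptor of a random (placeholder) 32x32 patch:
C = covariance_descriptor(np.random.rand(32, 32))
print(C.shape)   # (5, 5)
```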
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
Human Population Decline in North America during the Younger Dryas
NASA Astrophysics Data System (ADS)
Anderson, D. G.; Goodyear, A. C.; Stafford, T. W., Jr.; Kennett, J.; West, A.
2009-12-01
There is ongoing debate about a possible human population decline or contraction at the onset of the Younger Dryas (YD) at 12.9 ka. We used two methods to test whether the YD affected human population levels: (1) frequency analyses of Paleoindian projectile points, and (2) summed probability analyses of radiocarbon (14C) dates. The results suggest that a significant decline or reorganization of human populations occurred at 12.9 ka, continued through the initial centuries of the YD chronozone, then rebounded by the end of the YD. FREQUENCY ANALYSES: This method employed projectile point data from the Paleoindian Database of the Americas (PIDBA, http://pidba.utk.edu). We tallied diagnostic projectile points and obtained larger totals for Clovis points than for immediately post-Clovis points, which share an instrument-assisted fluting technique, typically using pressure or indirect percussion. Gainey, Vail, Debert, Redstone, and Cumberland point-styles utilized this method and are comparable to the Folsom style. For the SE U.S., the ratio of Clovis points (n=1993) to post-Clovis points (n=947) reveals a point decline of 52%. For the Great Plains, a comparison of Clovis and fluted points (n=4020) to Folsom points (n=2527) shows a point decline of 37%, which may translate into a population contraction of similar magnitude. In addition, eight major Clovis lithic quarry sites in the SE U.S. exhibit little to no evidence for immediate post-Clovis occupations, implying a major population decline. SUMMED PROBABILITIES: This method involved calibrating relevant 14C dates and combining the probabilities, after which major peaks and troughs in the trends are assumed to reflect changes in human demographics. Using 14C dates from Buchanan et al. (2008), we analyzed multiple regions, including the Southeast and Great Plains. Contrary to Buchanan et al., we found an abrupt, statistically significant decline at 12.9 ka, followed 200 to 900 years later by a rebound in the number of dates. The decline at the YD onset was more than 50%, similar in magnitude to the decline in Clovis-Folsom point ratios. While calibration and sampling factors may affect the trends, this abrupt decline is large and requires explanation. SUMMARY: Even though correlation does not equate with causation, the coeval YD decline in both points and 14C dates appears linked to significant changes in climate and biota, as represented by the megafaunal extinction. While the causes of the YD remain controversial, a human population decline appears to have occurred, at least across parts of North America. Furthermore, the YD onset is associated with the abrupt replacement of Clovis by regional or subregional scale cultural traditions, potentially reflecting decreased range mobility and increased population isolation. Projectile point distributions and summed probability analyses, we argue, are potentially useful approaches for exploring demographic changes at regional scales.
A fast simulation method for radiation maps using interpolation in a virtual environment.
Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun
2018-05-10
In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of exposure to a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map including the calculation and the visualisation were realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
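The interpolation step itself can be sketched with standard tools: scatter the calculated dose-rate points over the scene and interpolate onto a regular grid. A generic sketch (the source model and grid are hypothetical; the paper's UNITY/MATLAB coupling is not reproduced):

```python
import numpy as np
from scipy.interpolate import griddata

def radiation_map(sample_xy, dose, nx=200, ny=200):
    """Build a 2-D dose-rate map by interpolating sparse calculated or
    measured points onto a regular grid covering the sampled area."""
    xi = np.linspace(sample_xy[:, 0].min(), sample_xy[:, 0].max(), nx)
    yi = np.linspace(sample_xy[:, 1].min(), sample_xy[:, 1].max(), ny)
    XX, YY = np.meshgrid(xi, yi)
    return griddata(sample_xy, dose, (XX, YY), method="linear")

# Hypothetical point source at (5, 5): dose ~ 1/r^2 at sparse sample points.
rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, size=(300, 2))
r2 = np.sum((pts - [5.0, 5.0]) ** 2, axis=1) + 0.1
grid = radiation_map(pts, 1.0 / r2)
```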
NASA Astrophysics Data System (ADS)
Dorninger, P.; Koma, Z.; Székely, B.
2012-04-01
In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems, capturing up to several hundred thousand points per second, the immediate application of such point clouds for large scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods for the determination of derived products exist. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography using a regular grid as the data structure. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology enabling the application of efficient post-processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product. The respective control parameters may be determined either automatically (i.e. estimated from the given data) or manually (i.e. supervised parameter estimation). Additionally, the result might be influenced by whether data processing is performed locally (i.e. using tiles) or globally. Local processing of the data has the advantages of generally performing faster, having lower hardware requirements, and enabling the determination of more detailed information. By contrast, especially in geomorphological interpretation, global data processing enables determining large scale relations within the dataset analyzed. We investigated the influence of control parameter settings on the geomorphological interpretation on airborne and terrestrial laser scanning data sets of the landslide at Doren (Vorarlberg, Austria), on airborne laser scanning data of the western cordilleras of the central Andes, and on HRSC terrain data of the Mars surface. Topics discussed are the suitability of automated versus manual determination of control parameters, the influence of the definition of the area of interest (local versus global application), as well as computational performance.
Segmentation of suspicious objects in an x-ray image using automated region filling approach
NASA Astrophysics Data System (ADS)
Fu, Kenneth; Guest, Clark; Das, Pankaj
2009-08-01
To accommodate the flow of commerce, cargo inspection systems require a high probability of detection and a low false alarm rate while still maintaining a minimum scan speed. Since objects of interest (high atomic-number metals) will often be heavily shielded to avoid detection, any detection algorithm must be able to identify such objects despite the shielding. Since pixels of a shielded object have a greater opacity than the shielding, we use a clustering method to classify objects in the image by pixel intensity level. We then look within each intensity-level region for sub-clusters of pixels with greater opacity than the surrounding region. A region containing an object has an enclosed-contour region (a hole) inside of it. We apply a region filling technique to fill in the hole, which represents a shielded object of potential interest. One method for region filling is seed-growing, which places a "seed" starting point in the hole area and uses a selected structuring element to fill out that region. However, automatic seed point selection is a hard problem; it requires additional information to decide whether a pixel is within an enclosed region. Here, we propose a simple, robust method for region filling that avoids the problem of seed point selection. In our approach, we calculate the gradients Gx and Gy at each pixel of a binary image; along each row y, we fill in 1s between each pair of points x1 and x2 for which Gx(x1,y) = -1 and Gx(x2,y) = +1, and we do the same in the y-direction. The intersection of the two results is the filled region. We give a detailed discussion of our algorithm, discuss the strengths this method has over other methods, and show results of using our method.
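A minimal sketch of the described gradient-based filling, assuming a binary image with object pixels equal to 1; the pairing of each -1 transition with the first subsequent +1 transition is our reading of the description above, and the function names are ours.

```python
import numpy as np

def fill_rowwise(binary):
    """Fill 0-runs that lie between a -1 and a +1 transition on each row."""
    binary = binary.astype(bool)
    filled = binary.copy()
    g = np.diff(binary.astype(np.int8), axis=1)   # Gx: +1 at 0->1, -1 at 1->0
    for y in range(binary.shape[0]):
        falls = np.where(g[y] == -1)[0]           # last object pixel before a gap
        rises = np.where(g[y] == 1)[0]            # last gap pixel before next run
        for x1 in falls:
            nxt = rises[rises > x1]
            if nxt.size:                          # a closing edge exists
                filled[y, x1 + 1 : nxt[0] + 1] = True
    return filled

def fill_holes(binary):
    """Intersect row-wise and column-wise fills, per the described method."""
    rows = fill_rowwise(binary)
    cols = fill_rowwise(binary.T).T
    return rows & cols
```

Concavities open on one side are filled in only one direction, so the intersection keeps only fully enclosed holes, which is the point of combining the x- and y-direction passes.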
NASA Technical Reports Server (NTRS)
Kwon, Ryun Young; Chae, Jongchul; Davila, Joseph M.; Zhang, Jie; Moon, Yong-Jae; Poomvises, Watanachak; Jones, Shaela I.
2012-01-01
We unveil the three-dimensional structure of quiet-Sun EUV bright points and their temporal evolution by applying a triangulation method to time series of images taken by SECCHI/EUVI on board the STEREO twin spacecraft. For this study we examine the heights and lengths as the components of the three-dimensional structure of EUV bright points and their temporal evolutions. Among them we present three bright points which show three distinct changes in the height and length: decreasing, increasing, and steady. We show that the three distinct changes are consistent with the motions (converging, diverging, and shearing, respectively) of their photospheric magnetic flux concentrations. Both growth and shrinkage of the magnetic fluxes occur during their lifetimes and they are dominant in the initial and later phases, respectively. They are all multi-temperature loop systems which have hot loops (approximately 10^6.2 K) overlying cooler ones (approximately 10^6.0 K) with cool legs (approximately 10^4.9 K) during their whole evolutionary histories. Our results imply that the multi-thermal loop system is a general character of EUV bright points. We conclude that EUV bright points are flaring loops formed by magnetic reconnection and their geometry may represent the reconnected magnetic field lines rather than the separator field lines.
NASA Astrophysics Data System (ADS)
Hittmeir, Sabine; Philipp, Anne; Seibert, Petra
2017-04-01
In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre; if the integral were reconstructed from the interpolated point values, it would in general not be reproduced correctly. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate, as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied between them in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. To improve the monotonicity behaviour, we additionally derived a filter to restrict over- and undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
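The actual algorithm prescribes its degrees of freedom in several ways the abstract does not enumerate; the sketch below shows the core idea under one simple assumed choice: one midpoint node per interval, with shared boundary values taken as the smaller of the adjacent interval means. With this choice the piecewise-linear reconstruction is exactly conservative, continuous, and non-negative, though it omits the paper's monotonicity filter.

```python
import numpy as np

def remap_accumulated(integrals, dt):
    """Build piecewise-linear node values from per-interval integrals.

    integrals : (n,) accumulated amounts (e.g. precipitation) per interval.
    Returns node times and values with one extra node per interval; linear
    interpolation between nodes reproduces each interval integral exactly.
    """
    n = len(integrals)
    means = np.asarray(integrals, dtype=float) / dt
    # Shared boundary values: zero at the record ends, otherwise the
    # smaller of the two adjacent interval means (keeps values >= 0).
    f = np.zeros(n + 1)
    f[1:-1] = np.minimum(means[:-1], means[1:])
    times, values = [0.0], [f[0]]
    for i in range(n):
        # Midpoint value from the two trapezoid areas:
        # dt/4 * (f_i + 2*f_mid + f_{i+1}) = integrals[i]
        fm = (4.0 * integrals[i] / dt - f[i] - f[i + 1]) / 2.0
        times += [(i + 0.5) * dt, (i + 1.0) * dt]
        values += [fm, f[i + 1]]
    return np.array(times), np.array(values)
```

Because each boundary value is no larger than the neighbouring interval means, the solved midpoint value is never negative, so non-negativity comes for free under this particular choice.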
Surface representations of two- and three-dimensional fluid flow topology
NASA Technical Reports Server (NTRS)
Helman, James L.; Hesselink, Lambertus
1990-01-01
We discuss our work using critical point analysis to generate representations of the vector field topology of numerical flow data sets. Critical points are located and characterized in a two-dimensional domain, which may be either a two-dimensional flow field or the tangential velocity field near a three-dimensional body. Tangent curves are then integrated out along the principal directions of certain classes of critical points. The points and curves are linked to form a skeleton representing the two-dimensional vector field topology. When generated from the tangential velocity field near a body in a three-dimensional flow, the skeleton includes the critical points and curves which provide a basis for analyzing the three-dimensional structure of the flow separation. The points along the separation curves in the skeleton are used to start tangent curve integrations to generate surfaces representing the topology of the associated flow separations.
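A sketch of the characterization step: a 2D critical point is classified by the eigenvalues of the local velocity Jacobian (saddle, node, focus, or center), and tangent-curve integrations are then started along the real eigenvector directions of saddles. The function below is a generic illustration, not the authors' code.

```python
import numpy as np

def classify_critical_point(J, tol=1e-10):
    """Classify a 2D critical point from the velocity Jacobian J (2x2).

    Real eigenvalues of opposite sign -> saddle; same sign -> node;
    complex pair -> focus (spiral) or center. The sign of the real
    part distinguishes attracting from repelling behaviour.
    """
    ev = np.linalg.eigvals(J)
    re, im = ev.real, ev.imag
    if np.all(np.abs(im) < tol):          # real eigenvalues
        if re[0] * re[1] < 0:
            return "saddle"
        return "attracting node" if re[0] < 0 else "repelling node"
    if np.all(np.abs(re) < tol):
        return "center"
    return "attracting focus" if re[0] < 0 else "repelling focus"

print(classify_critical_point(np.array([[0.0, 1.0], [-1.0, -0.1]])))
# -> attracting focus
```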
Workshop on Engineering Turbulence Modeling
NASA Technical Reports Server (NTRS)
Povinelli, Louis A. (Editor); Liou, W. W. (Editor); Shabbir, A. (Editor); Shih, T.-H. (Editor)
1992-01-01
Discussed here is the future direction of various levels of engineering turbulence modeling related to computational fluid dynamics (CFD) computations for propulsion. For each level of computation, there are a few turbulence models which represent the state of the art for that level. However, it is important to know their capabilities as well as their deficiencies in order to help engineers select and implement the appropriate models in their real-world engineering calculations. This will also help turbulence modelers perceive the future directions for improving turbulence models. The focus is on one-point closure models (i.e., from algebraic models to higher-order moment closure schemes and partial differential equation methods) which can be applied to CFD computations. However, other schemes helpful in developing one-point closure models are also discussed.
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose a design of an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual responses to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method using binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is also discussed.
The Structure of Reclaiming Warehouse of Minerals at Open-Cut Mines with the Use Combined Transport
NASA Astrophysics Data System (ADS)
Ikonnikov, D. A.; Kovshov, S. V.
2017-07-01
This article presents an analysis of the characteristics of ore reclaiming and overloading points at modern opencast mines. Ore reclaiming is the most effective way to stabilize the power-intensive and expensive technological dressing process and, consequently, to maintain optimal production and set-up parameters for extraction and for the quality of the finished product. The paper proposes a warehouse design and describes the technology of its construction. The equipment used in the warehouse is described in detail, and all stages of development and operation are shown. The advantages and disadvantages of using a mechanical shovel excavator and a hydraulic "backdigger" excavator as reloading and reclaiming equipment are compared. A design for an ore reclaiming and overloading point served by a hydraulic "backdigger" excavator under both cyclical and continuous mining methods is proposed.
Unidirectional invisibility induced by parity-time symmetric circuit
NASA Astrophysics Data System (ADS)
Lv, Bo; Fu, Jiahui; Wu, Bian; Li, Rujiang; Zeng, Qingsheng; Yin, Xinhua; Wu, Qun; Gao, Lei; Chen, Wan; Wang, Zhefei; Liang, Zhiming; Li, Ao; Ma, Ruyu
2017-01-01
Parity-time (PT) symmetric structures exhibit unidirectional invisibility at the spontaneous PT-symmetry breaking point. In this paper, we propose a PT-symmetric circuit consisting of a resistor and a microwave tunnel diode (TD), which represent attenuation and amplification, respectively. Based on the scattering matrix method, the circuit can exhibit ideal unidirectional performance at the spontaneous PT-symmetry breaking point by tuning the transmission lines between the lumped elements. Additionally, the resistance of the reactance component can flexibly alter the bandwidth of the unidirectional invisibility. Furthermore, electromagnetic simulation of the proposed circuit validates the unidirectional invisibility and shows good synchronization with the input energy. Our work not only provides a unidirectionally invisible circuit based on PT symmetry, but also proposes a potential solution for highly selective filter or cloaking applications.
Exploring the use of memory colors for image enhancement
NASA Astrophysics Data System (ADS)
Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly
2014-02-01
Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods developed are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.
PUBLIC EXPOSURE TO MULTIPLE RF SOURCES IN GHANA.
Deatanyah, P; Abavare, E K K; Menyeh, A; Amoako, J K
2018-03-16
This paper describes an effort to respond to the suggestion in the World Health Organization (WHO) research agenda to better quantify potential exposure levels from a range of radiofrequency (RF) sources at 200 public access locations in Ghana. Wide-band measurements were performed with a spectrum analyser and a log-periodic antenna using a three-point spatial averaging method. The overall results represented a maximum of 0.19% of the ICNIRP reference levels for public exposure. These results were generally lower than those found in some previous studies but were 58% (2.0 dB) greater than those found in similar work conducted in the USA. Major contributing sources of RF fields were identified to be FM broadcast and mobile base station sites. The three locations with the greatest measured RF fields could represent potential areas for epidemiological studies.
Code of Federal Regulations, 2010 CFR
2010-01-01
... representative to the service location (at other than a specified duty point) is more than 25 miles from an FGIS... representative will be assessed from the FGIS office to the service point and return. When commercial modes of transportation (e.g., airplanes) are required, the actual expense incurred for the round-trip travel will be...
Code of Federal Regulations, 2013 CFR
2013-01-01
... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...
Code of Federal Regulations, 2012 CFR
2012-01-01
... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...
Code of Federal Regulations, 2014 CFR
2014-01-01
... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...
Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)
NASA Astrophysics Data System (ADS)
Aksoy, A.; Yenilmez, F.; Duzgun, S.
2013-12-01
Several factors such as wind action, the bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization on designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing; in stratified water bodies, several samples can be required. According to the sampling and analysis guide under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires the selection of a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for the determination of sampling locations in surface waters, most of them do not specify a procedure for establishing representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen (DO) and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the measured parameter distributions, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost below 30 sampling points, a result of the varying water quality in the reservoir due to inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were determined based on kriging and standard error maps, and the locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the PDR were identified.
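For readers wanting the mechanics, below is a minimal ordinary-kriging sketch with an assumed exponential variogram; evaluating the returned standard error over a grid produces the kind of kriging standard error map used here to locate hot spots. Variogram parameters and function names are illustrative, not the study's fitted values.

```python
import numpy as np

def variogram_exp(h, sill=1.0, rang=500.0, nugget=0.0):
    """Exponential variogram model (assumed parameters)."""
    return nugget + sill * (1.0 - np.exp(-h / rang))

def ordinary_kriging(xy, z, x0, **vario):
    """Ordinary kriging prediction and standard error at location x0.

    xy : (N, 2) sample coordinates, z : (N,) measured values.
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange row
    A[:n, :n] = variogram_exp(d, **vario)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram_exp(np.linalg.norm(xy - x0, axis=1), **vario)
    w = np.linalg.solve(A, b)
    pred = w[:n] @ z
    var = w @ b                          # kriging variance (gamma form)
    return pred, np.sqrt(max(var, 0.0))
```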
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
21 CFR 111.80 - What representative samples must you collect?
Code of Federal Regulations, 2010 CFR
2010-04-01
... Process Control System § 111.80 What representative samples must you collect? The representative samples... unique lot within each unique shipment); (b) Representative samples of in-process materials for each manufactured batch at points, steps, or stages, in the manufacturing process as specified in the master...
Schmitt, J Eric; Scanlon, Mary H; Servaes, Sabah; Levin, Dayna; Cook, Tessa S
2015-10-01
The advent of the ACGME's Next Accreditation System represents a significant new challenge for residencies and fellowships, owing to its requirements for more complex and detailed information. We developed a system of online assessment tools to provide comprehensive coverage of the twelve ACGME Milestones and digitized them using freely available cloud-based productivity tools. These tools include a combination of point-of-care procedural assessments, electronic quizzes, online modules, and other data entry forms. Using free statistical analytic tools, we also developed an automated system for management, processing, and data reporting. After one year of use, our Milestones project has resulted in the submission of over 20,000 individual data points. The use of automated statistical methods to generate resident-specific profiles has allowed for dynamic reports of individual residents' progress. These profiles both summarize data and also allow program directors access to more granular information as needed. Informatics-driven strategies for data assessment and processing represent feasible solutions to Milestones assessment and analysis, reducing the potential administrative burden for program directors, residents, and staff. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Kirchofer, Abby; Becker, Austin; Brandt, Adam; Wilcox, Jennifer
2013-07-02
The availability of industrial alkalinity sources is investigated to determine their potential for the simultaneous capture and sequestration of CO2 from point-source emissions in the United States. Industrial alkalinity sources investigated include fly ash, cement kiln dust, and iron and steel slag. Their feasibility for mineral carbonation is determined by their relative abundance for CO2 reactivity and their proximity to point-source CO2 emissions. In addition, the available aggregate markets are investigated as possible sinks for mineral carbonation products. We show that in the U.S., industrial alkaline byproducts have the potential to mitigate approximately 7.6 Mt CO2/yr, of which 7.0 Mt CO2/yr are CO2 captured through mineral carbonation and 0.6 Mt CO2/yr are CO2 emissions avoided through reuse as synthetic aggregate (replacing sand and gravel). The emission reductions represent a small share (i.e., 0.1%) of total U.S. CO2 emissions; however, industrial byproducts may represent comparatively low-cost methods for the advancement of mineral carbonation technologies, which may be extended to more abundant yet expensive natural alkalinity sources.
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
On singular and highly oscillatory properties of the Green function for ship motions
NASA Astrophysics Data System (ADS)
Chen, Xiao-Bo; Xiong Wu, Guo
2001-10-01
The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.
NASA Technical Reports Server (NTRS)
Bachmann, Klaus J.
1995-01-01
A workshop on the control of stoichiometry in epitaxial semiconductor structures was held on August 21-26, 1995 at the hotel Stutenhaus in Vesser, Germany. The secluded location of the workshop in the forest of Thuringia and its informal style stimulated extensive private discussions among the participants and promoted new contacts between young scientists from Eastern and Western Europe and the USA. Topics addressed by the presentations were the interactions of precursors to heteroepitaxy and doping with the substrate surface, the control of interfacial properties under the conditions of heteroepitaxy for selected materials systems, methods of characterization of interfaces and native point defects in semiconductor heterostructures, and an in-depth evaluation of the present status of the control and characterization of the point defect chemistry for one specific semiconductor (ZnGeP2), including studies of both heterostructures and bulk single crystals. The selected examples of presentations and comments given here represent individual choices made by the author to highlight major points of the discussions.
Classification and identification of reading and math disabilities: the special case of comorbidity.
Branum-Martin, Lee; Fletcher, Jack M; Stuebing, Karla K
2013-01-01
Much of learning disabilities research relies on categorical classification frameworks that use psychometric tests and cut points to identify children with reading or math difficulties. However, there is increasing evidence that the attributes of reading and math learning disabilities are dimensional, representing correlated continua of severity. We discuss issues related to categorical and dimensional approaches to reading and math disabilities, and their comorbid associations, highlighting problems with the use of cut points and correlated assessments. Two simulations are provided in which the correlational structure of a set of cognitive and achievement data is simulated from a single population with no categorical structure. The simulations produce profiles remarkably similar to reported profile differences, suggesting that the patterns are a product of the cut point and the correlational structure of the data. If dimensional approaches better fit the attributes of learning disability, new conceptualizations and better methods of identification and intervention may emerge, especially for comorbid associations of reading and math difficulties.
Computed potential energy surfaces for chemical reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1988-01-01
The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single-point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive set of ab initio calculations completed to date, yielding a zero-point-corrected barrier for HN2 dissociation of approximately 8.5 kcal mol^-1. The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional Transition State Theory and a method which utilizes an Eckart barrier to compute one-dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.
Quantifying intervertebral disc mechanics: a new definition of the neutral zone
2011-01-01
Background The neutral zone (NZ) is the range over which a spinal motion segment (SMS) moves with minimal resistance. Clear as this may seem, the various methods to quantify NZ described in the literature depend on rather arbitrary criteria. Here we present a stricter, more objective definition. Methods To mathematically represent load-deflection of a SMS, the asymmetric curve was fitted by a summed sigmoid function. The first derivative of this curve represents the SMS compliance and the region with the highest compliance (minimal stiffness) is the NZ. To determine the boundaries of this region, the inflection points of compliance can be used as unique points. These are defined by the maximum and the minimum in the second derivative of the fitted curve, respectively. The merits of the model were investigated experimentally: eight porcine lumbar SMS's were bent in flexion-extension, before and after seven hours of axial compression. Results The summed sigmoid function provided an excellent fit to the measured data (r2 > 0.976). The NZ by the new definition was on average 2.4 (range 0.82-7.4) times the NZ as determined by the more commonly used angulation difference at zero loading. Interestingly, NZ consistently and significantly decreased after seven hours of axial compression when determined by the new definition. On the other hand, NZ increased when defined as angulation difference, probably reflecting the increase of hysteresis. The methods thus address different aspects of the load-deflection curve. Conclusions A strict mathematical definition of the NZ is proposed, based on the compliance of the SMS. This operational definition is objective, conceptually correct, and does not depend on arbitrarily chosen criteria. PMID:21299900
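A minimal sketch of the proposed procedure, assuming the "summed sigmoid" is the sum of two logistic functions plus an offset (the paper's exact parameterization may differ): fit the load-deflection data, then take the NZ boundaries at the maximum and minimum of the second derivative of the fitted curve, i.e. at the inflection points of the compliance.

```python
import numpy as np
from scipy.optimize import curve_fit

def summed_sigmoid(M, a1, b1, c1, a2, b2, c2, d):
    """Sum of two logistic functions; assumed form of the fitted curve."""
    s1 = a1 / (1.0 + np.exp(-(M - c1) / b1))
    s2 = a2 / (1.0 + np.exp(-(M - c2) / b2))
    return s1 + s2 + d

def neutral_zone(moment, angle):
    """Fit load-deflection data and locate the NZ boundaries.

    The NZ is bounded by the extrema of the second derivative of the
    fitted curve, which are the inflection points of the compliance
    (the first derivative).
    """
    p0 = (np.ptp(angle) / 2, 1.0, moment.min() / 2,
          np.ptp(angle) / 2, 1.0, moment.max() / 2, angle.min())
    popt, _ = curve_fit(summed_sigmoid, moment, angle, p0=p0, maxfev=20000)
    m = np.linspace(moment.min(), moment.max(), 2000)
    theta = summed_sigmoid(m, *popt)
    d2 = np.gradient(np.gradient(theta, m), m)    # second derivative
    lo, hi = m[np.argmax(d2)], m[np.argmin(d2)]   # compliance inflections
    return min(lo, hi), max(lo, hi)
```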
Segmentation of touching handwritten Japanese characters using the graph theory method
NASA Astrophysics Data System (ADS)
Suwa, Misako
2000-12-01
Projection analysis methods have been widely used to segment Japanese character strings. However, if adjacent characters have overhanging strokes or a touching point does not correspond to the histogram minimum, these methods are prone to errors. In contrast, non-projection analysis methods proposed for numerals or alphabetic characters cannot simply be applied to Japanese characters because of differences in character structure. Based on the oversegmenting strategy, a new pre-segmentation method is presented in this paper: touching patterns are represented as graphs, and touching strokes are regarded as the elements of proper edge cutsets. Using graph-theoretical techniques, the cutset matrix is calculated. Then, by applying pruning rules, potential touching strokes are determined and the patterns are oversegmented. Moreover, this algorithm was confirmed in simulations to be valid for touching patterns with overhanging strokes and for doubly connected patterns.
NASA Astrophysics Data System (ADS)
Gaona Garcia, J.; Lewandowski, J.; Bellin, A.
2017-12-01
Groundwater-stream water interactions in rivers determine water balances, but also chemical and biological processes in the streambed at different spatial and temporal scales. Because gaining, neutral, and losing conditions are difficult to identify and quantify, it is necessary to combine techniques with complementary capabilities and scale ranges. We applied this concept to a study site at the River Schlaube, East Brandenburg, Germany, a sand-bed stream with intense sediment heterogeneity and complex environmental conditions. In our approach, point techniques such as temperature profiles of the streambed, together with vertical hydraulic gradients, provide data for the estimation of fluxes between groundwater and surface water with the numerical model 1DTempPro. Among distributed techniques, fiber-optic distributed temperature sensing identifies the spatial patterns of neutral, down-welling, and up-welling areas by analyzing changes in the thermal patterns at the streambed interface under given flow conditions. The study finally links point and surface temperatures to provide a method for upscaling fluxes. Point techniques provide point flux estimates with the depth detail essential to infer streambed structures, although the results hardly represent the spatial distribution of fluxes caused by the heterogeneity of streambed properties. Fiber optics proved capable of providing spatial thermal patterns with enough resolution to observe distinct hyporheic thermal footprints at multiple scales. Relating the thermal footprint patterns and their temporal behavior to the flux results from point techniques enabled the use of methods for spatial flux estimation. The lack of detailed information about the spatial distribution of the physical drivers restricts the spatial flux estimation to the T-proxy method, whose highly uncertain results mainly provide coarse spatial flux estimates. The study concludes that upscaling groundwater-stream water interactions using thermal measurements with combined point and distributed techniques requires the integration of physical drivers because of the heterogeneity of the flux patterns. Combined experimental and modeling approaches may help to obtain a more reliable understanding of groundwater-surface water interactions at multiple scales.
Pedestrian Pathfinding in Urban Environments: Preliminary Results
NASA Astrophysics Data System (ADS)
López-Pazos, G.; Balado, J.; Díaz-Vilariño, L.; Arias, P.; Scaioni, M.
2017-12-01
With the rise of urban population, many initiatives focus on the smart city concept, in which the mobility of citizens arises as one of the main components. Updated and detailed spatial information about outdoor environments is needed for accurate path planning for pedestrians, especially for people with reduced mobility, for whom physical barriers must be considered. This work presents a methodology for using point clouds directly for path planning. The starting point is a classified point cloud in which ground elements have been previously classified as roads, sidewalks, crosswalks, curbs, and stairs; the remaining points compose the obstacle class. The methodology starts by individualizing ground elements and simplifying them into representative points, which are used as nodes in the graph creation. The region of influence of obstacles is used to refine the graph. Edges of the graph are weighted according to the distance between nodes and according to their accessibility for wheelchairs. As a result, we obtain a very accurate graph representing the as-built environment. The methodology has been tested in a couple of real case studies, and Dijkstra's algorithm was used for pathfinding, as in the sketch below. The resulting paths are optimal with respect to motor skills and safety.
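Once ground elements are reduced to graph nodes, the pathfinding step is standard; a compact Dijkstra sketch over an adjacency list follows, where edge weights would combine distance with an accessibility penalty (the graph and weights shown are hypothetical).

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as
    {node: [(neighbor, weight), ...]}. Weights would combine distance
    and an accessibility penalty (e.g. inflated for wheelchair barriers).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal                 # walk predecessors back to start
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1], dist.get(goal, float("inf"))

# Hypothetical graph: the direct A->B edge is weighted heavily,
# standing in for a curb penalized for wheelchair users
g = {"A": [("B", 5.0), ("C", 2.0)], "C": [("B", 1.0)], "B": []}
print(dijkstra(g, "A", "B"))   # (['A', 'C', 'B'], 3.0)
```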
2014-10-01
[Figure captions recovered from this record: scatter plots of data points versus the parameters (a) Hseg, (b) QL, (c) γ0, and (d) Γb0, with scatter due to variation in the other parameters at 1 h and a best-fit linear regression line; a plot of the solubility concentration x0 for the nanocrystalline Fe–Zr system, with a white square marking the experimental data point used for fitting.]
Scharfenberger, Christian; Wong, Alexander; Clausi, David A
2015-01-01
We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.
Frantz, Terrill L
2012-01-01
This paper introduces the contemporary perspectives and techniques of social network analysis (SNA) and agent-based modeling (ABM) and advocates applying them to advance various aspects of complementary and alternative medicine (CAM). SNA and ABM are invaluable methods for representing, analyzing and projecting complex, relational, social phenomena; they provide both an insightful vantage point and a set of analytic tools that can be useful in a wide range of contexts. Applying these methods in the CAM context can aid the ongoing advances in the CAM field, in both its scientific aspects and in developing broader acceptance in associated stakeholder communities. Copyright © 2012 S. Karger AG, Basel.
Profit intensity and cases of non-compliance with the law of demand/supply
NASA Astrophysics Data System (ADS)
Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek
2017-05-01
We consider properties of the measurement intensity ρ of a random variable for which the probability density function represented by the corresponding Wigner function attains negative values on a part of the domain. We consider a simple economic interpretation of this problem. This model is used to present the applicability of the method to the analysis of the negative probability on markets where there are anomalies in the law of supply and demand (e.g. Giffen's goods). It turns out that the new conditions to optimize the intensity ρ require a new strategy. We propose a strategy (so-called à rebours strategy) based on the fixed point method and explore its effectiveness.
Numerical study of drop spreading on a flat surface
NASA Astrophysics Data System (ADS)
Wang, Sheng; Desjardins, Olivier
2017-11-01
In this talk, we perform a numerical study of a droplet on a flat surface with special emphasis on capturing the spreading dynamics. The computational methodology employed is tailored for simulating large-scale two-phase flows within complex geometries. It combines a conservative level-set method to capture the liquid-gas interface, a conservative immersed boundary method to represent the solid-fluid interface, and a sub-grid curvature model at the triple-point to implicitly impose the contact angle of the liquid-gas interface. The performance of the approach is assessed in the inertial droplet spreading regime, the viscous spreading regime of high viscosity drops, and with the capillary oscillation of low viscosity droplets.
Halovic, Shaun; Kroos, Christian
2017-12-01
This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear, and neutral), as displayed by full-light, point-light, and synthetic point-light walkers. The perceiver identification scores have been transformed into Ht rates, which represent the proportions/percentages of correct identifications above what would be expected by chance. The data set also provides Ht rates separately for male, female, and ambiguously gendered walkers.
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
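One plausible reading of the initialization rule (the paper's exact weighting is not given in the abstract): take density as the reciprocal of a sample's average distance to all others, pick the densest sample as the first center, and pick each subsequent center to maximize density times the distance to the nearest already-chosen center.

```python
import numpy as np

def init_centers(X, k):
    """Initial k-means centers with high density and large mutual distance.

    Density is the reciprocal of each sample's average distance to all
    other samples (our reading of the "weighted average of distance");
    subsequent centers maximize density times the distance to the
    nearest already-chosen center, pushing centers apart while
    avoiding outliers.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = 1.0 / (d.mean(axis=1) + 1e-12)
    centers = [int(np.argmax(density))]          # densest sample first
    while len(centers) < k:
        nearest = d[:, centers].min(axis=1)      # distance to chosen set
        score = density * nearest                # dense AND far away
        score[centers] = -np.inf                 # never re-pick a center
        centers.append(int(np.argmax(score)))
    return X[centers]
```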
Infrared images target detection based on background modeling in the discrete cosine domain
NASA Astrophysics Data System (ADS)
Ye, Han; Pei, Jihong
2018-02-01
Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the stability of the background and its separability from the target are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.
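A minimal sketch of such a model, assuming a running Gaussian (mean and variance) per DCT coefficient across frames: coefficients within k standard deviations of their mean are treated as background and suppressed before the inverse transform. The update constants and class name are illustrative, not the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn

class DCTBackground:
    """Single Gaussian per DCT frequency point (a minimal sketch)."""

    def __init__(self, shape, alpha=0.05, k=3.0):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.alpha, self.k = alpha, k

    def update_and_detect(self, frame):
        c = dctn(frame.astype(float), norm="ortho")
        z = np.abs(c - self.mean) / np.sqrt(self.var)
        fg = z > self.k                       # coefficients off-model
        # Update the per-frequency Gaussian (exponential moving average)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * c
        self.var = (1 - self.alpha) * self.var + \
                   self.alpha * (c - self.mean) ** 2
        residual = np.where(fg, c, 0.0)       # suppress background coeffs
        return idctn(residual, norm="ortho")  # target-only reconstruction

# Usage: bg = DCTBackground(frame.shape); target = bg.update_and_detect(frame)
```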
STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS
2018-02-15
[Figure captions recovered from this record: ground-truth creation based on marked building feature points in two different views 50 frames apart; epipolar lines for point correspondences between two views (Fig. 44); and a matrix in which each row assesses one camera against all other cameras in the dataset for BA4S.]
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended to demonstrate function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, the failure probability, and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided; definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which may be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility.

There are two approximations of interest in calculating the failure probability of a limit state function. The first, and most commonly discussed, is how the limit state is approximated at the design point. Most of the time this is a first-order Taylor series expansion, known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or most probable failure point (MPP), is identified; for iteratively finding this point, the limit state is again approximated. The accuracy and efficiency of the approximations make the search process practical for analysis-intensive approaches such as finite element methods. The crux of this research is therefore to develop excellent approximations for MPP identification, as well as different approximations, including higher-order reliability methods (HORM), for representing the failure surface.

The report is divided into several parts to emphasize different segments of structural reliability analysis and design; broadly, it consists of mathematical foundations, methods, and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks, and addresses the probability density function descriptions relevant to this work. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis, and various forms of function representation and the latest developments in nonlinear adaptive approximation are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3: the definitions of the safety index and the most probable point of failure are introduced, and efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order, and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
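As a concrete instance of the MPP search discussed in Chapter 3, the sketch below implements the classic Hasofer-Lind (HL-RF) iteration with numerical gradients; it is a textbook method consistent with, but not copied from, the report.

```python
import numpy as np

def form_safety_index(g, u0, tol=1e-8, max_iter=100):
    """Hasofer-Lind safety index via the HL-RF iteration.

    g  : limit-state function of standard-normal variables u.
    u0 : starting point in u-space.
    Returns (beta, design point). Forward-difference gradients are used
    so the scheme can wrap analysis codes (e.g. finite element models).
    """
    u = np.asarray(u0, dtype=float)
    h = 1e-6
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
        u_new = (grad @ u - gu) * grad / (grad @ grad)   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u   # beta is the distance to the origin

# Linear check: g(u) = 3 - u1 - u2 has beta = 3 / sqrt(2)
beta, _ = form_safety_index(lambda u: 3.0 - u[0] - u[1], [0.0, 0.0])
print(beta)   # approx 2.1213
```

The FORM failure probability then follows as Pf ≈ Φ(−β) with Φ the standard normal CDF; SORM corrects this with the curvatures of the limit state at the design point.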
Dynamic laser speckle analyzed considering inhomogeneities in the biological sample
NASA Astrophysics Data System (ADS)
Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico
2017-04-01
The dynamic laser speckle phenomenon provides a contactless and nondestructive way to monitor biological changes, quantified by second-order statistics applied to the images over time using a secondary matrix known as the time history of the speckle pattern (THSP). To limit computation time, the traditional way to build the THSP restricts the data to a single line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and drying paint, representing samples with different degrees of homogeneity, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
NASA Astrophysics Data System (ADS)
Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.
Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
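A compressed sketch of the SBFM idea, with two simplifying assumptions: the Gaussians are isotropic (the paper uses general bivariate Gaussians) and beam paths are treated as infinite lines, for which the ray integral of a Gaussian has a closed form; SciPy's dual_annealing stands in for the paper's simulated annealing routine.

```python
import numpy as np
from scipy.optimize import dual_annealing

def ray_integral(params, p0, p1):
    """Line integral of a sum of isotropic bivariate Gaussians along the
    infinite line through p0 and p1 (beam paths approximated as lines)."""
    total = 0.0
    v = (p1 - p0) / np.linalg.norm(p1 - p0)
    for A, x0, y0, s in params.reshape(-1, 4):
        w = np.array([x0, y0]) - p0
        d = abs(w[0] * v[1] - w[1] * v[0])       # perpendicular distance
        total += A * s * np.sqrt(2 * np.pi) * np.exp(-d**2 / (2 * s**2))
    return total

def sbfm_fit(rays, measured, n_gauss, bounds):
    """Fit Gaussian basis parameters to the measured ray integrals.

    rays    : list of (p0, p1) endpoint pairs for each optical path.
    bounds  : list of 4 (lo, hi) pairs for (A, x0, y0, sigma), reused
              for every Gaussian in the basis.
    """
    def cost(theta):
        model = [ray_integral(theta, p0, p1) for p0, p1 in rays]
        return float(np.sum((np.array(model) - measured) ** 2))
    res = dual_annealing(cost, bounds=bounds * n_gauss, maxiter=300)
    return res.x.reshape(-1, 4)   # rows: (amplitude, x0, y0, sigma)
```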
Number of perceptually distinct surface colors in natural scenes.
Marín-Franch, Iván; Foster, David H
2010-09-30
The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10^3, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
Holes in the ocean: Filling voids in bathymetric lidar data
NASA Astrophysics Data System (ADS)
Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguertie
2011-04-01
The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with airborne laser bathymetry data in order to populate data points in all areas but particularly those of data holes. What follows is to generate an elevation surface by spatial interpolation based on the merged data points obtained in the first step. We conduct a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model in the ocean with this method. Four interpolation techniques, including Kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, this enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
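The merge-then-interpolate step can be sketched with SciPy as below; note that SciPy does not provide the natural neighbor method the study found best, so cubic interpolation with a nearest-neighbor fallback stands in for it here, and the function name is ours.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_bathymetry_voids(lidar_xyz, sounding_xyz, grid_x, grid_y):
    """Merge lidar and sounding points, then interpolate onto a grid.

    lidar_xyz, sounding_xyz : (N, 3) arrays of (x, y, depth) points;
    the soundings populate the data holes in the lidar coverage.
    """
    pts = np.vstack([lidar_xyz[:, :2], sounding_xyz[:, :2]])
    z = np.concatenate([lidar_xyz[:, 2], sounding_xyz[:, 2]])
    gx, gy = np.meshgrid(grid_x, grid_y)
    dem = griddata(pts, z, (gx, gy), method="cubic")
    # Fall back to nearest-neighbor values where cubic leaves NaNs
    nn = griddata(pts, z, (gx, gy), method="nearest")
    return np.where(np.isnan(dem), nn, dem)
```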
On Multi-Dimensional Unstructured Mesh Adaption
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1999-01-01
Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes to model problems, with features representative of compressible gas dynamics, show the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.
Thermal Texture Generation and 3d Model Reconstruction Using SFM and Gan
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as the transformation of day to night and the generation of imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways: firstly, they are used for point matching and dense point cloud generation; secondly, they are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures shows that they are similar to the ground-truth model in both thermal emissivity and geometrical shape.
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Cardiff, M.; Lecoq, N.; Jourde, H.
2018-04-01
In a karstic field, the flow paths are very complex as they globally follow the conduit network. The responses generated from an investigation in this type of aquifer can be spatially highly variable. Therefore, the aim of investigation in this case is to define a degree of connectivity between points of the field, in order to understand these flow paths. Harmonic pumping tests represent a possible investigation method for characterizing the subsurface flow of groundwater. They have several advantages compared to constant-rate pumping: more signal possibilities, easier extraction of the signal from the responses, and the possibility of closed-loop investigation. We show in this work that interpreting the responses from a harmonic pumping test is very useful for delineating a degree of connectivity between measurement points. We first studied the amplitude and phase offset of responses from a harmonic pumping test in a theoretical synthetic modeling case in order to define a qualitative interpretation method in the time and frequency domains. Three different types of responses were distinguished: a conduit-connectivity response, a matrix-connectivity response, and a dual-connectivity response (the response of a point in the matrix, but close to a conduit). We then applied this method to measured responses at a field research site. Our interpretation method permits a quick and easy reconstruction of the main flow paths, and the whole set of field responses gives a similar range of responses to those seen in the theoretical synthetic case.
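The amplitude and phase offset that drive the classification can be extracted by projecting the detrended head record onto sine and cosine at the pumping frequency, as in this generic sketch (not the authors' code); comparing attenuation and phase lag across observation points then supports the conduit/matrix/dual classification.

```python
import numpy as np

def harmonic_response(head, dt, f0):
    """Amplitude and phase of a head record at the pumping frequency f0.

    Projects the detrended signal onto sin/cos at f0, which stays
    robust when the record length is not an exact number of periods.
    Assumes head ~ A*cos(2*pi*f0*t + phase) plus noise.
    """
    t = np.arange(len(head)) * dt
    h = head - head.mean()
    c = 2.0 / len(h) * (h @ np.cos(2 * np.pi * f0 * t))
    s = 2.0 / len(h) * (h @ np.sin(2 * np.pi * f0 * t))
    amplitude = np.hypot(c, s)
    phase = np.arctan2(-s, c)   # offset relative to cos(2*pi*f0*t)
    return amplitude, phase
```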
Protein space: a natural method for realizing the nature of protein universe.
Yu, Chenglong; Deng, Mo; Cheng, Shiu-Yuen; Yau, Shek-Chung; He, Rong L; Yau, Stephen S-T
2013-02-07
Current methods cannot tell us concretely what the nature of the protein universe is. They are based on different models of amino acid substitution and on multiple sequence alignment, which is an NP-hard problem and requires manual intervention. Protein structural analysis also offers a way of mapping the protein universe; unfortunately, at present only a minuscule fraction of proteins' 3-dimensional structures are known. Furthermore, phylogenetic tree representations are not unique for any existing tree construction method. Here we develop a novel method to realize the nature of the protein universe. We show the protein universe can be realized as a protein space in 60-dimensional Euclidean space using a distance based on a normalized distribution of amino acids. Every protein is in one-to-one correspondence with a point in protein space, where proteins with similar properties stay close together; thus the distance between two points in protein space represents the biological distance of the corresponding two proteins. We also propose a natural graphical representation for inferring phylogenies. The representation is natural and unique, based on the biological distances of proteins in protein space. This addresses the fundamental question of how proteins are distributed in the protein universe. Copyright © 2012 Elsevier Ltd. All rights reserved.
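The abstract does not spell out the 60 coordinates; one construction consistent with a 60-dimensional space (an assumption on our part) uses three descriptors per amino acid: a normalized count, a normalized mean position, and a normalized second central moment of the positions.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def natural_vector(seq):
    """Map a protein sequence to a point in 60-dimensional space.

    For each amino acid: normalized count, normalized mean position,
    and a normalized second central moment of its positions
    (3 x 20 = 60 coordinates; the exact normalization is assumed).
    """
    N = float(len(seq))
    vec = []
    for aa in AMINO:
        pos = np.array([i for i, c in enumerate(seq) if c == aa], float)
        if pos.size == 0:
            vec += [0.0, 0.0, 0.0]
            continue
        mu = pos.mean()
        d2 = ((pos - mu) ** 2).sum() / (pos.size * N)  # scaled dispersion
        vec += [pos.size / N, mu / N, d2 / N]
    return np.array(vec)

def protein_distance(a, b):
    """Biological distance = Euclidean distance in protein space."""
    return float(np.linalg.norm(natural_vector(a) - natural_vector(b)))
```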
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
Lidar-based individual tree species classification using convolutional neural network
NASA Astrophysics Data System (ADS)
Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi
2017-06-01
Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, to date tree species are still specified manually by skilled workers. Previous works on automatic tree species classification mainly focused on aerial or satellite images, and few works have been reported on classification techniques using ground-based sensor data. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Among these candidates, we use terrestrial lidar because it can obtain a high-resolution point cloud even in a dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change in appearance under seasonal variation and age-related deterioration. In this paper, we propose a new method for automatic individual tree species classification from terrestrial lidar data using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that describes well the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.
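A minimal sketch of the classification stage, assuming PyTorch: a small CNN over single-channel bark depth images with two output classes (cedar and cypress). The input size and layers are illustrative assumptions, not the paper's network.

```python
# Sketch: small CNN classifying bark depth images into cedar vs. cypress.
# Architecture and input resolution are assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),              # two classes: cedar, cypress
)

depth_images = torch.randn(8, 1, 64, 64)     # batch of depth images from point clouds
logits = model(depth_images)                 # -> (8, 2)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()                              # one training step's gradients
```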
FIR signature verification system characterizing dynamics of handwriting features
NASA Astrophysics Data System (ADS)
Thumwarin, Pitak; Pernwong, Jitawat; Matsuura, Takenobu
2013-12-01
This paper proposes an online signature verification method based on a finite impulse response (FIR) system characterizing the time-frequency characteristics of dynamic handwriting features. First, the barycenter determined from both the center point of the signature and two adjacent pen-point positions in the signing process, instead of one pen-point position, is used to reduce the fluctuation of handwriting motion. In this paper, among the available dynamic handwriting features, motion pressure and area pressure are employed to investigate handwriting behavior. Thus, the stable dynamic handwriting features can be described by the relation of the time-frequency characteristics of the dynamic handwriting features. In this study, the aforesaid relation can be represented by an FIR system with the wavelet coefficients of the dynamic handwriting features as both input and output of the system. The impulse response of the FIR system is used as the individual feature for a particular signature. In short, a signature can be verified by evaluating the difference between the impulse responses of the FIR systems for a reference signature and the signature to be verified. The signature verification experiments in this paper were conducted using the SUBCORPUS MCYT-100 signature database consisting of 5,000 signatures from 100 signers. The proposed method yielded an equal error rate (EER) of 3.21% on skilled forgeries.
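The core identification step, estimating an FIR impulse response that maps one feature series to another by least squares, can be sketched as follows. The upstream wavelet feature extraction is assumed to have happened already; the signals here are synthetic.

```python
# Sketch: least-squares FIR identification between an "input" feature series x
# and an "output" series y; the estimated impulse response is the signature
# feature, and verification compares impulse responses. Data are synthetic.
import numpy as np

def fit_fir(x, y, order=16):
    # Build the causal convolution matrix of x and solve min ||y - X h||.
    N = len(x)
    X = np.zeros((N, order))
    for k in range(order):
        X[k:, k] = x[:N - k]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
h_true = np.array([0.5, 0.3, -0.2, 0.1])
y = np.convolve(x, h_true)[:500] + 0.01 * rng.standard_normal(500)

h_ref = fit_fir(x, y)                                # reference signature model
dissimilarity = np.linalg.norm(h_ref[:4] - h_true)   # small -> likely genuine
print(dissimilarity)
```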
NASA Astrophysics Data System (ADS)
Frazer, Gordon J.; Anderson, Stuart J.
1997-10-01
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model $x_t = s_t + [v_t + \eta_t] = \sum_{i=0}^{P-1} A_i e^{j 2\pi (f_i/f_s) t} + v_t + \eta_t$, $t \in \{0, \ldots, N-1\}$, $f_i = k f_I + f_0$, where the received signal $x_t$ corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components, $P$, and unknown complex amplitudes $A_i$ and frequencies $f_i$. The frequency parameters $f_0$ and $f_I$ are unknown, although constrained such that $f_0 < f_I/2$, and the parameter $k \in \{-u, \ldots, -2, -1, 0, 1, 2, \ldots, v\}$ is constrained such that the component frequencies $f_i$ are bounded by $(-f_s/2, f_s/2)$. The noise term $v_t$ is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that $\sum_t v_t^2 < \infty$; in general, $v_t$ is not well modelled as an auto-regressive process of known order. The additional noise term $\eta_t$ represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter $f_I$, representing the frequency interval between harmonic lines. It is desired to determine an estimate of $f_I$ from $N$ samples of $x_t$. We propose an algorithm to estimate $f_I$ based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
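As a simplified stand-in for the Thomson F-test estimator (which the paper actually proposes), one can brute-force a comb search over the periodogram for the $(f_I, f_0)$ pair whose harmonic lines capture the most power. The signal, grids, and line count below are synthetic assumptions.

```python
# Sketch: brute-force comb search for the harmonic line spacing f_I and
# offset f_0 over the periodogram. Not the paper's F-test; illustrative only.
import numpy as np

fs, N = 100.0, 2000                      # chosen so true lines fall on FFT bins
t = np.arange(N) / fs
f_I, f_0 = 7.0, 1.2                      # true values (unknown in practice)
x = sum(np.cos(2 * np.pi * (k * f_I + f_0) * t) for k in range(-2, 3))
x += 0.5 * np.random.randn(N)

P = np.abs(np.fft.rfft(x)) ** 2
df = fs / N                              # periodogram bin width

def comb_power(fI, f0, n_lines=5):
    # Sum periodogram power at the candidate line positions |k*fI + f0|.
    k = np.arange(-(n_lines // 2), n_lines // 2 + 1)
    idx = np.round(np.abs(k * fI + f0) / df).astype(int)
    return P[idx].sum()

cands = [(fI, f0) for fI in np.arange(3.0, 12.0, 0.05)
                  for f0 in np.arange(0.0, 1.5, 0.05)]   # respects f0 < fI/2
print("estimated (f_I, f_0):", max(cands, key=lambda c: comb_power(*c)))
```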
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
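In the same spirit (though not the paper's closed-form PPCA), a 2-charge approximation can be obtained numerically by minimizing the RMS mid-field potential error on a spherical shell. The toy distribution, shell radius, and net-neutral constraint below are assumptions for illustration.

```python
# Sketch: fit 2 point charges (positions + magnitude, net charge zero) to
# reproduce a toy distribution's potential on a mid-field shell.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
src_pos = rng.uniform(-0.5, 0.5, (5, 3))     # toy source charges
src_q = rng.uniform(-1, 1, 5)
src_q -= src_q.mean()                        # net-neutral for simplicity

theta = rng.uniform(0, np.pi, 200)
phi = rng.uniform(0, 2 * np.pi, 200)
R = 2.0                                      # ~2x the source extent ("mid-field")
shell = R * np.c_[np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi), np.cos(theta)]

def potential(points, q, where):
    r = np.linalg.norm(where[:, None, :] - points[None, :, :], axis=2)
    return (q / r).sum(axis=1)

v_ref = potential(src_pos, src_q, shell)

def loss(p):
    pos = p[:6].reshape(2, 3)
    q = np.array([p[6], -p[6]])              # two opposite charges
    return np.sqrt(np.mean((potential(pos, q, shell) - v_ref) ** 2))

res = minimize(loss, x0=np.r_[rng.uniform(-0.3, 0.3, 6), 0.5], method="Nelder-Mead")
print("RMS mid-field error:", res.fun)
```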
A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds
NASA Astrophysics Data System (ADS)
Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang
2017-04-01
3D voxelization is the foundation of geological property modeling, and is also an effective approach to realizing 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative data model among voxel models, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels keep the original morphology and, on that basis, can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the rule of isoclines in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Ultimately, we use examples, and a contrastive analysis between the examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when dealing with the voxelization of 3D geological structure models with folds based on corner-point grids.
A calibration protocol for population-specific accelerometer cut-points in children.
Mackintosh, Kelly A; Fairclough, Stuart J; Stratton, Gareth; Ridgers, Nicola D
2012-01-01
To test a field-based protocol using intermittent activities representative of children's physical activity behaviours, in order to generate behaviourally valid, population-specific accelerometer cut-points for sedentary behaviour, moderate physical activity, and vigorous physical activity. Twenty-eight children (46% boys) aged 10-11 years wore a hip-mounted uniaxial GT1M ActiGraph and engaged in 6 activities representative of children's play. A validated direct observation protocol was used as the criterion measure of physical activity. Receiver Operating Characteristic (ROC) curve analyses were conducted with four semi-structured activities to determine the accelerometer cut-points. To examine classification differences, cut-points were cross-validated with free-play and DVD viewing activities. Cut-points of ≤372, >2160 and >4806 counts·min⁻¹, representing sedentary, moderate and vigorous intensity thresholds, respectively, provided the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of the activity). Cross-validation data demonstrated that these values yielded the best overall kappa scores (0.97; 0.71; 0.62) and high classification agreement (98.6%; 89.0%; 87.2%), respectively. Specificity values of 96-97% showed that the developed cut-points accurately detected physical activity, and sensitivity values (89-99%) indicated that minutes of activity were seldom incorrectly classified as inactivity. The development of an inexpensive and replicable field-based protocol to generate behaviourally valid and population-specific accelerometer cut-points may improve the classification of physical activity levels in children, which could enhance subsequent intervention and observational studies.
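A minimal sketch of the cut-point derivation: with criterion labels from direct observation, an ROC curve over accelerometer counts yields the threshold maximizing sensitivity plus specificity (Youden's J). The counts and labels below are synthetic stand-ins.

```python
# Sketch: ROC-based accelerometer cut-point selection with synthetic epochs.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
counts = np.r_[rng.normal(200, 150, 300), rng.normal(3000, 900, 300)].clip(0)
is_mvpa = np.r_[np.zeros(300), np.ones(300)]   # criterion from direct observation

fpr, tpr, thr = roc_curve(is_mvpa, counts)
cut = thr[np.argmax(tpr - fpr)]                # Youden index maximizer
print(f"moderate-intensity cut-point ~ {cut:.0f} counts/min")
```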
Reconstruction and analysis of hybrid composite shells using meshless methods
NASA Astrophysics Data System (ADS)
Bernardo, G. M. S.; Loja, M. A. R.
2017-06-01
The importance of research into viable models to predict the behaviour of structures, which may in some cases possess complex geometries, is growing in different scientific areas, ranging from civil and mechanical engineering to architecture or biomedical devices. In these cases, the research effort to find an efficient approach to fit laser scanning point clouds to a desired surface has been increasing, leading to the possibility of modelling as-built/as-is structures and components' features. However, combining the task of surface reconstruction with the implementation of a structural analysis model is not trivial. Although there are works focusing on those different phases separately, there is still an effective need for approaches able to interconnect them in an efficient way. Therefore, achieving a representative geometric model that can subsequently be submitted to a structural analysis on a similar platform is a fundamental step in establishing an effective, expeditious processing workflow. The present work presents an integrated methodology based on the use of meshless approaches to reconstruct shells described by point clouds and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they do not have any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details of the formulation, and a set of illustrative examples focusing on the reconstruction of cylindrical and double-curvature shells and their further analysis, are presented.
NASA Astrophysics Data System (ADS)
Divine, D.; Godtliebsen, F.; Rue, H.
2012-04-01
Detailed knowledge of past climate variations is of high importance for gaining better insight into possible future climate scenarios. The relative shortness of available high-quality instrumental climate data necessitates the use of various climate proxy archives in making inference about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
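A minimal sketch of the Monte Carlo side of the approach: draw Gamma-distributed age increments over depth, pin each realization to absolutely dated tie points, and read empirical confidence bands off the ensemble. The shape parameter and grids are illustrative assumptions.

```python
# Sketch: Gamma-process age-depth ensembles pinned at dated tie points,
# giving empirical confidence bands on the timescale. Parameters are toy values.
import numpy as np

depths = np.linspace(0, 10, 101)          # depth grid (m)
tie_age = np.array([0.0, 5000.0])         # absolutely dated top and bottom (years)

rng = np.random.default_rng(3)
n_sim, shape = 2000, 5.0                  # smaller shape -> more irregular accumulation
ages = np.empty((n_sim, depths.size))
for i in range(n_sim):
    # Gamma-distributed incremental ages between depth steps...
    inc = rng.gamma(shape, 1.0, depths.size - 1)
    raw = np.r_[0.0, np.cumsum(inc)]
    # ...rescaled so each realization passes exactly through the tie points.
    ages[i] = tie_age[0] + raw / raw[-1] * (tie_age[1] - tie_age[0])

lo, med, hi = np.percentile(ages, [2.5, 50, 97.5], axis=0)
print(med[50], (lo[50], hi[50]))          # median age and 95% band at 5 m depth
```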
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
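A minimal sketch of the turning-point measurement step: numerically differentiate the recorded membrane potential three times and take prominent peaks of the third derivative as candidate AP turning points. The synthetic trace, sampling rate, and peak-detection settings below are illustrative assumptions; real recordings would need smoothing before triple differentiation.

```python
# Sketch: locate AP turning points as peaks of the third-order derivative of
# the membrane potential. Trace is synthetic (three stereotyped APs).
import numpy as np
from scipy.signal import find_peaks

fs = 20000.0                                   # assumed 20 kHz sampling
t = np.arange(0, 0.1, 1 / fs)
v = -65.0 + 0.2 * np.random.randn(t.size)      # baseline with light noise
spike = 100.0 * np.exp(-((np.arange(40) - 20.0) ** 2) / 20.0)
for onset in (500, 900, 1400):                 # add three APs
    v[onset:onset + 40] += spike

d3 = np.gradient(np.gradient(np.gradient(v, t), t), t)  # third derivative
peaks, _ = find_peaks(d3, height=0.5 * d3.max(), distance=100)
print("candidate turning-point samples:", peaks)
```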
Test problems for inviscid transonic flow
NASA Technical Reports Server (NTRS)
Carlson, L. A.
1979-01-01
The solution of test problems with the TRANDES program is discussed. This method utilizes the full, inviscid, perturbation potential flow equation in a Cartesian grid system that is stretched to infinity. This equation is represented by a nonconservative system of finite difference equations that, at supersonic points, includes a rotated difference scheme, and is solved by column relaxation. The solution usually starts from a zero perturbation potential on a very coarse grid (typically 13 by 7), followed by several grid halvings until a final solution is obtained on a fine grid (97 by 49).
On laminar and turbulent friction
NASA Technical Reports Server (NTRS)
Von Karman, TH
1946-01-01
The report deals first with the theory of laminar friction flow, where the basic concepts of Prandtl's boundary layer theory are presented from mathematical and physical points of view, and a method is indicated by means of which even more complicated cases can be treated with simple mathematical means, at least approximately. An attempt is also made to secure a basis for the computation of turbulent friction by means of formulas through which the empirical laws of turbulent pipe resistance can be applied to other problems of friction drag. (author)
Band structure and unconventional electronic topology of CoSi
NASA Astrophysics Data System (ADS)
Pshenay-Severin, D. A.; Ivanov, Y. V.; Burkov, A. A.; Burkov, A. T.
2018-04-01
Semimetals with certain crystal symmetries may possess an unusual electronic structure topology, distinct from that of the conventional Weyl and Dirac semimetals. A characteristic property of these materials is the existence of band-touching points with multiple (higher than two-fold) degeneracy and nonzero Chern number. CoSi is a representative of this group of materials exhibiting the so-called 'new fermions'. We report on an ab initio calculation of the electronic structure of CoSi using density functional methods, taking into account the spin-orbit interactions. The linearized …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Philip LaRoche
This report is a summary of and commentary on (a) the seven lectures that C. S. Peirce presented in 1903 on pragmatism and (b) a commentary by P. A. Turrisi, both of which are included in Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard Lectures on Pragmatism, edited by Turrisi [13]. Peirce is known as the founder of the philosophy of pragmatism and these lectures, given near the end of his life, represent his mature thoughts on the philosophy. Peirce's decomposition of thinking into abduction, deduction, and induction is among the important points in the lectures.
Education, leadership and partnerships: nursing potential for Universal Health Coverage
Mendes, Isabel Amélia Costa; Ventura, Carla Aparecida Arena; Trevizan, Maria Auxiliadora; Marchi-Alves, Leila Maria; de Souza-Junior, Valtuir Duarte
2016-01-01
Objective: to discuss possibilities of nursing contribution for universal health coverage. Method: a qualitative study, performed by means of document analysis of the World Health Organization publications highlighting Nursing and Midwifery within universal health coverage. Results: documents published by nursing and midwifery leaders point to the need for coordinated and integrated actions in education, leadership and partnership development. Final Considerations: this article represents a call for nurses, in order to foster reflection and understanding of the relevance of their work on the consolidation of the principles of universal health coverage. PMID:26959333
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing, as its current state, where in the interaction context it is, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, the language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
Study on the initial value for the exterior orientation of the mobile version
NASA Astrophysics Data System (ADS)
Yu, Zhi-jing; Li, Shi-liang
2011-10-01
A single mobile vision coordinate measurement system obtains three-dimensional coordinates on the measurement site using only a single camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. The problem is a typical space resection problem, and studies on this topic have been widely conducted. Single-photo space resection methods mainly fall into two groups: one is based on co-angular constraints, represented by the camera co-angular constraint pose estimation algorithm and the cone angle law; the other is the direct linear transformation (DLT). One common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively high demands are placed on the distribution and abundance of control points: the control points cannot all be distributed in the same plane, and at least six non-coplanar control points are needed. Consequently, its usefulness is limited. The initial value directly influences the convergence and convergence speed of the calculation. This paper linearizes the nonlinear collinearity equations, including distortion terms, by Taylor series expansion in order to calculate the initial value of the camera exterior orientation. Finally, experiments show that the resulting initial value is better.
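For the DLT route, the initial projection matrix follows from a homogeneous linear system solved by SVD, given at least six non-coplanar control points. Below is a minimal sketch on synthetic data; the assumed camera matrix is illustrative, and the distortion terms the paper adds are omitted.

```python
# Sketch: DLT initial exterior orientation from 3D-2D correspondences via SVD.
# Synthetic pinhole camera; lens distortion is not modeled here.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (8, 3))
X[:, 2] += 4.0                                   # non-coplanar points in front
P_true = np.array([[800.0, 0.0, 320.0, 0.0],     # assumed pinhole projection
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 0.5]])
uvw = np.c_[X, np.ones(8)] @ P_true.T
uv = uvw[:, :2] / uvw[:, 2:]                     # observed image coordinates

A = []
for (x, y, z), (u, v) in zip(X, uv):
    A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
    A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
_, _, Vt = np.linalg.svd(np.array(A))
P_est = Vt[-1].reshape(3, 4)                     # null-space solution, up to scale
print(P_est / P_est[2, 3] - P_true / P_true[2, 3])   # ~ zero matrix
```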
Separated flows near the nose of a body of revolution
NASA Technical Reports Server (NTRS)
Lin, S. P.
1986-01-01
The solution of the Navier-Stokes equations for the problem of cross-flow separation about a deforming cylinder was achieved by iteration. It was shown that the separation starts at the rear stagnation point and the point of primary separation moves upstream along the cylinder surface. A general method of linear stability analysis for nonparallel external flows was constructed, which consists of representing the eigenfunctions with complete orthogonal sets and forming characteristic equations with the Galerkin method. The method was applied to the Kovasznay flow, which is an exact solution of the Navier-Stokes equation. The results show that when the critical parameter is exceeded, there are only a few isolated unstable eigen-frequencies. Another exact solution is shown to be absolutely and monotonically stable with respect to infinitesimal disturbances of all frequencies. The flow is also globally, asymptotically, and monotonically stable in the mean with respect to three-dimensional disturbances. This result forms a sound foundation for rigorous stability analysis of nonparallel flows, and provides an invaluable test ground for future studies of nonparallel flows in which the basic states do not possess exact solutions. The application of this method to the study of the formation of spiral vortices near the nose of a rotating body of revolution is underway. The same method will be applied to the stability analysis of reversed flow over a plate with suction.
Lagrangian particle method for compressible fluid dynamics
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) a significant improvement in the approximation of differential operators, based on a polynomial fit via weighted least squares, with convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long-term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.
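Contribution (a) rests on local weighted-least-squares polynomial fits. A minimal sketch for a gradient estimate at one particle from scattered neighbors follows; the kernel width and neighbor count are assumptions.

```python
# Sketch: weighted-least-squares gradient estimate at a particle from scattered
# neighbors via a linear polynomial fit (the operator-approximation building
# block the method relies on). Exact on linear fields.
import numpy as np

rng = np.random.default_rng(5)
center = np.array([0.0, 0.0])
nbrs = rng.uniform(-0.1, 0.1, (12, 2))           # neighbor particle positions
f = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]  # test field, gradient (3, -2)

dx = nbrs - center
w = np.exp(-(np.linalg.norm(dx, axis=1) / 0.1) ** 2)   # Gaussian kernel weights
A = np.c_[np.ones(len(dx)), dx]                  # fit f ~ a + b . dx
W = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * f(nbrs), rcond=None)
print("estimated gradient:", coef[1:])           # ~ [3, -2]
```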
Determination system for solar cell layout in traffic light network using dominating set
NASA Astrophysics Data System (ADS)
Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin
2018-04-01
Graph Theory is one of the fields in Mathematics that solves discrete problems. In daily life, applications of Graph Theory are used to solve various problems. One of the topics in Graph Theory that is used to solve problems is the dominating set. The concept of a dominating set is used, for example, to locate some objects systematically. In this study, the dominating set is used to determine the dominating points for solar panels, where a vertex represents a traffic light point and an edge represents the connection between traffic light points. The greedy algorithm is used to search for the dominating points and thereby determine the locations of the solar panels. This research produced an application that can determine the locations of solar panels with optimal results, that is, a minimum set of dominating points.
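A minimal sketch of the greedy selection on a toy traffic-light graph; the vertices and adjacency below are illustrative.

```python
# Sketch: greedy dominating-set selection; chosen vertices are candidate
# solar-panel locations on a toy traffic-light graph.
def greedy_dominating_set(adj):
    """adj: dict mapping vertex -> set of adjacent vertices."""
    uncovered = set(adj)
    chosen = []
    while uncovered:
        # Pick the vertex dominating the most still-uncovered vertices.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        chosen.append(best)
        uncovered -= {best} | adj[best]
    return chosen

graph = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3, 5}, 5: {4, 6}, 6: {5}}
print(greedy_dominating_set(graph))   # e.g. [4, 1, 5]; every vertex dominated
```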
A continuous scale-space method for the automated placement of spot heights on maps
NASA Astrophysics Data System (ADS)
Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico
2017-12-01
Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive. Finding an algorithmic criterion that matches the cartographers' judgment in ranking the significance of features on a terrain is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of the importance of features, and an algorithm and a data structure for its computation are described. We also introduce a method for the comparison of algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature. Results show spot height locations that are closer to reference spot heights produced manually by swisstopo cartographers, compared to previous techniques. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows the size of symbols and labels to vary according to the significance of the represented features. The importance ranking could also be useful for adjusting spot height density of zoomable maps in real time.
Iammarino, Marco; Dell'Oro, Daniela; Bortone, Nicola; Mangiacotti, Michele; Chiaravalle, Antonio Eugenio
2018-03-31
Strontium-90 (90Sr) is a fission product resulting from the use of uranium and plutonium in nuclear reactors and weapons. Consequently, it may be found in the environment as a consequence of nuclear fallout, nuclear weapon testing, and improper waste management. When present in the environment, strontium-90 may be taken into the animal body by drinking water, eating food, or breathing air. The primary health effects are bone tumors and tumors of the blood-cell-forming organs, due to beta particles emitted by both 90Sr and yttrium-90 (90Y). Moreover, another health concern is represented by inhibition of calcification and bone deformities in animals. Currently, radiometric methods for the determination of 90Sr in animal bones are lacking. This article describes a radiochemical method for the determination of 90Sr in animal bones by ultra-low-level liquid scintillation counting. The method's precision and trueness have been demonstrated through validation tests (CV% = 12.4%; mean recovery = 98.4%). A detection limit and decision threshold of 8 and 3 mBecquerel (Bq) kg-1, respectively, represent another strong point of this analytical procedure. This new radiochemical method permits the selective extraction of 90Sr without interferences; it is suitable for radiocontamination surveillance programs and also represents an improvement with respect to food safety controls.
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large amount of planetary images is being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.
An Ensemble of Neural Networks for Stock Trading Decision Making
NASA Astrophysics Data System (ADS)
Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming
Stock turning signal detection is a very interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with Intelligent Piecewise Linear Representation for stock turning point detection is presented. The Intelligent Piecewise Linear Representation method is able to generate numerous stock turning signals from the historical database; the Ensemble Neural Network system is then applied to train on these patterns and retrieve similar stock price patterns from historical data. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast future turning points from the set of test data. Experimental results demonstrate that the hybrid system can make a significant and consistent amount of profit when compared with other approaches using stock data available in the market.
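A minimal sketch of the piecewise-linear-representation step: recursive top-down segmentation of a price series, whose segment endpoints serve as candidate turning signals. The deviation threshold is an illustrative assumption, and this is a generic PLR, not necessarily the paper's 'intelligent' variant.

```python
# Sketch: top-down piecewise linear representation; breakpoints act as
# candidate turning signals for labeling neural-network training data.
import numpy as np

def plr_breakpoints(y, eps=2.0, lo=0, hi=None):
    if hi is None:
        hi = len(y) - 1
    # Deviation of interior points from the chord joining the two endpoints.
    x = np.arange(lo, hi + 1)
    chord = y[lo] + (y[hi] - y[lo]) * (x - lo) / (hi - lo)
    dev = np.abs(y[lo:hi + 1] - chord)
    if dev.max() <= eps or hi - lo < 2:
        return [lo, hi]
    k = lo + int(np.argmax(dev))          # split at the worst-fit point
    left = plr_breakpoints(y, eps, lo, k)
    right = plr_breakpoints(y, eps, k, hi)
    return left[:-1] + right              # merge, dropping duplicated split point

prices = np.cumsum(np.random.default_rng(6).standard_normal(200)) + 100
print("turning-point indices:", plr_breakpoints(prices))
```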
All about Eve: Secret Sharing using Quantum Effects
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
2005-01-01
This document discusses the nature of light (including classical light and photons), encryption, quantum key distribution (QKD), light polarization and beamsplitters and their application to information communication. A quantum of light represents the smallest possible subdivision of radiant energy (light) and is called a photon. The QKD key generation sequence is outlined including the receiver broadcasting the initial signal indicating reception availability, timing pulses from the sender to provide reference for gated detection of photons, the sender generating photons through random polarization while the receiver detects photons with random polarization and communicating via data link to mutually establish random keys. The QKD network vision includes inter-SATCOM, point-to-point Gnd Fiber and SATCOM-fiber nodes. QKD offers an unconditionally secure method of exchanging encryption keys. Ongoing research will focus on how to increase the key generation rate.
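To make the key-generation sequence concrete, here is a minimal sketch of BB84-style sifting at the bit level, with no channel noise, losses, or eavesdropper; purely illustrative.

```python
# Sketch: BB84-style key sifting. Sender picks random bits and polarization
# bases; receiver measures in random bases; positions with matching bases
# (announced publicly) form the shared sifted key.
import numpy as np

rng = np.random.default_rng(7)
n = 32
bits = rng.integers(0, 2, n)          # sender's random bits
send_basis = rng.integers(0, 2, n)    # 0 = rectilinear, 1 = diagonal polarization
recv_basis = rng.integers(0, 2, n)    # receiver picks bases independently

match = send_basis == recv_basis      # announced over the public channel
sifted_key = bits[match]              # ~half the positions survive on average
print(sifted_key)
```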
Improved method for selection of the NOAEL.
Calabrese, E J; Baldwin, L A
1994-02-01
The paper proposes that the NOAEL be defined as the highest dosage tested that is not statistically significantly different from the control group while also being statistically significantly different from the LOAEL. This new definition requires that the NOAEL be defined from two points of reference rather than the current approach (i.e., a single point of reference), in which the NOAEL represents only the highest dosage not statistically significantly different from the control group. This proposal is necessary in order to differentiate NOAELs which are statistically distinguishable from the LOAEL. Under the new regime, only those satisfying both criteria would be designated a true NOAEL, while those satisfying only one criterion (i.e., not statistically significantly different from the control group) would be designated a "quasi" NOAEL and handled differently (i.e., via an uncertainty factor) for risk assessment purposes.
Implicit assimilation for marine ecological models
NASA Astrophysics Data System (ADS)
Weir, B.; Miller, R.; Spitz, Y. H.
2012-12-01
We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.
Digitally enhanced homodyne interferometry.
Sutton, Andrew J; Gerberding, Oliver; Heinzel, Gerhard; Shaddock, Daniel A
2012-09-24
We present two variations of a novel interferometry technique capable of simultaneously measuring multiple targets with high sensitivity. The technique performs a homodyne phase measurement by application of a four point phase shifting algorithm, with pseudo-random switching between points to allow multiplexed measurement based upon propagation delay alone. By multiplexing measurements and shifting complexity into signal processing, both variants realise significant complexity reductions over comparable methods. The first variant performs a typical coherent detection with a dedicated reference field and achieves a displacement noise floor 0.8 pm/√Hz above 50 Hz. The second allows for removal of the dedicated reference, resulting in further simplifications and improved low frequency performance with a 1 pm/√Hz noise floor measured down to 20 Hz. These results represent the most sensitive measurement performed using this style of interferometry whilst simultaneously reducing the electro-optic footprint.
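The four-point phase-shifting step itself reduces to one arctangent over the four sampled intensities. A minimal numerical sketch follows; the offset and amplitude values are illustrative.

```python
# Sketch: four-step phase-shifting calculation. With intensity samples at
# 0, 90, 180, and 270 degree shifts, the optical phase follows from atan2.
import numpy as np

phi_true = 0.7                         # optical phase to recover (radians)
A, B = 1.0, 0.5                        # DC offset and fringe amplitude
shifts = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
I = A + B * np.cos(phi_true + shifts)  # the four sampled intensities

# I1 - I3 = 2B cos(phi), I4 - I2 = 2B sin(phi)
phi = np.arctan2(I[3] - I[1], I[0] - I[2])
print(phi)                             # ~0.7
```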
Erratum: A Comparison of Closures for Stochastic Advection-Diffusion Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kenneth D.; Tartakovsky, Alexandre M.
2015-01-01
This note corrects an error in the authors' article [SIAM/ASA J. Uncertain. Quantif., 1 (2013), pp. 319-347] in which the cited work [Neuman, Water Resour. Res., 29(3) (1993), pp. 633-645] was incorrectly represented and attributed. Concentration covariance equations presented in our article as new were in fact previously derived in the latter work. In the original abstract, the phrase ". . . we propose a closed-form approximation to two-point covariance as a measure of uncertainty . . ." should be replaced by the phrase ". . . we study a closed-form approximation to two-point covariance, previously derived in [Neuman 1993], as a measure of uncertainty." The primary results in our article, the analytical and numerical comparison of existing closure methods for specific example problems, are not changed by this correction.
NASA Astrophysics Data System (ADS)
Steenhuisen, Frits; Wilson, Simon J.
2015-07-01
Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national (geo-referenced) emission inventories and also to other resources that can be employed when such national inventories are lacking.
a Method for the Registration of Hemispherical Photographs and Tls Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contain texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken with a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in such a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications have mainly been based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
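A minimal sketch of the colorization goal: project TLS points into a hemispherical image with an equidistant fisheye model and sample RGB per point. It assumes the camera sits at the scan origin with aligned axes, which is exactly the registration the paper addresses; here those parameters are taken as known.

```python
# Sketch: assign RGB to TLS points via an equidistant fisheye projection
# (r = f * theta). Camera pose and intrinsics are assumed known.
import numpy as np

rng = np.random.default_rng(8)
pts = rng.uniform(-5, 5, (1000, 3))
pts[:, 2] = np.abs(pts[:, 2]) + 0.5             # keep points in the upper hemisphere
img = rng.integers(0, 255, (1024, 1024, 3))     # stand-in hemispherical photo
f, cx, cy = 300.0, 512.0, 512.0                 # assumed focal length and center

theta = np.arccos(pts[:, 2] / np.linalg.norm(pts, axis=1))  # zenith angle
phi = np.arctan2(pts[:, 1], pts[:, 0])                      # azimuth
r = f * theta                                               # equidistant projection
u = np.clip((cx + r * np.cos(phi)).astype(int), 0, 1023)
v = np.clip((cy + r * np.sin(phi)).astype(int), 0, 1023)
colored = np.c_[pts, img[v, u]]                 # x, y, z, R, G, B per point
print(colored.shape)                            # (1000, 6)
```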
Accuracy limit of rigid 3-point water models
NASA Astrophysics Data System (ADS)
Izadi, Saeed; Onufriev, Alexey V.
2016-08-01
Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonda, Kohsuke, E-mail: gonda@med.tohoku.ac.jp; Miyashita, Minoru; Watanabe, Mika
2012-09-28
Highlights: • Organic fluorescent material-assembled nanoparticles for IHC were prepared. • The new nanoparticle fluorescent intensity was 10.2-fold greater than Qdot655. • Nanoparticle staining analyzed a wide range of ER expression levels in tissue. • Nanoparticle staining enhanced the quantitative sensitivity for ER diagnosis. Abstract: The detection of estrogen receptors (ERs) by immunohistochemistry (IHC) using 3,3′-diaminobenzidine (DAB) is slightly weak as a prognostic marker, but it is essential to the application of endocrine therapy, such as antiestrogen tamoxifen-based therapy. IHC using DAB is a poor quantitative method because horseradish peroxidase (HRP) activity depends on reaction time, temperature and substrate concentration. However, IHC using fluorescent material provides an effective method for quantitative IHC because the signal intensity is proportional to the intensity of the photon excitation energy. However, the high level of autofluorescence has impeded the development of quantitative IHC using fluorescence. We developed organic fluorescent material (tetramethylrhodamine)-assembled nanoparticles for IHC. Tissue autofluorescence is comparable to the fluorescence intensity of quantum dots, which are the most representative fluorescent nanoparticles. The fluorescent intensity of our novel nanoparticles was 10.2-fold greater than that of quantum dots, and they did not bind non-specifically to breast cancer tissues due to the polyethylene glycol chain that coated their surfaces. Therefore, the fluorescent intensity of our nanoparticles significantly exceeded autofluorescence, which produced a significantly higher signal-to-noise ratio in IHC-imaged cancer tissues than previous methods. Moreover, immunostaining data from our nanoparticle fluorescent IHC and IHC with DAB were compared in the same region of adjacent tissue sections to quantitatively examine the two methods. The results demonstrated that our nanoparticle staining analyzed a wide range of ER expression levels with higher accuracy and quantitative sensitivity than DAB staining. This enhancement in the diagnostic accuracy and sensitivity for ERs using our immunostaining method will improve the prediction of responses to therapies that target ERs and progesterone receptors that are induced by a downstream ER signal.
Probing Chemical Space with Alkaloid-Inspired Libraries
McLeod, Michael C.; Singh, Gurpreet; Plampin, James N.; Rane, Digamber; Wang, Jenna L.; Day, Victor W.; Aubé, Jeffrey
2014-01-01
Screening of small molecule libraries is an important aspect of probe and drug discovery science. Numerous authors have suggested that bioactive natural products are attractive starting points for such libraries, due to their structural complexity and sp3-rich character. Here, we describe the construction of a screening library based on representative members of four families of biologically active alkaloids (Stemonaceae, the structurally related cylindricine and lepadiformine families, lupin, and Amaryllidaceae). In each case, scaffolds were based on structures of the naturally occurring compounds or a close derivative. Scaffold preparation was pursued following the development of appropriate enabling chemical methods. Diversification provided 686 new compounds suitable for screening. The libraries thus prepared had structural characteristics, including sp3 content, comparable to a basis set of representative natural products and were highly rule-of-five compliant. PMID:24451589
Hydrodynamic characterization of soils within a representative watershed in northeast Brazil
NASA Astrophysics Data System (ADS)
Sales, E. G.; Almeida, C. D. N.; Farias, A. S.; Coelho, V. H. R.
2014-09-01
Studies of water infiltration into the soil, based on hydraulic conductivity and the retention curve, are important for simulating hydrological processes and pollution fluxes. This paper presents the hydrodynamic soil behaviour of the Gramame watershed, located in northeast Brazil. This basin is representative of several other watersheds located in the coastal region of northeast Brazil, where sugarcane crops constitute the main land use. For this study, three different land uses and land covers were considered: sugarcane crops, pineapple crops and Atlantic Forest, which is the native forest of this region. The Beerkan method and the BEST program were used to obtain retention and hydraulic conductivity curves. The results show that the highest values of hydraulic conductivity were obtained at points located in native vegetation, and that deforestation impacts the soil's hydrodynamic characteristics.
Arndt, Annette; Steinestel, Konrad; Rump, Alexis; Sroya, Manveer; Bogdanova, Tetiana; Kovgan, Leonila; Port, Matthias; Abend, Michael; Eder, Stefan
2018-04-06
Childhood radiation exposure has been associated with increased papillary thyroid carcinoma (PTC) risk. The role of anaplastic lymphoma kinase (ALK) gene rearrangements in radiation-related PTC remains unclear, but STRN-ALK fusions have recently been detected in PTCs from radiation exposed persons after Chernobyl using targeted next-generation sequencing and RNA-seq. We investigated ALK and RET gene rearrangements as well as known driver point mutations in PTC tumours from 77 radiation-exposed patients (mean age at surgery 22.4 years) and PTC tumours from 19 non-exposed individuals after the Chernobyl accident. ALK rearrangements were detected by fluorescence in situ hybridisation (FISH) and confirmed with immunohistochemistry (IHC); point mutations in the BRAF and RAS genes were detected by DNA pyrosequencing. Among the 77 tumours from exposed persons, we identified 7 ALK rearrangements and none in the unexposed group. When combining ALK and RET rearrangements, we found 24 in the exposed (31.2%) compared to two (10.5%) in the unexposed group. Odds ratios increased significantly in a dose-dependent manner up to 6.2 (95%CI: 1.1, 34.7; p = 0.039) at Iodine-131 thyroid doses >500 mGy. In total, 27 cases carried point mutations of BRAF or RAS genes, yet logistic regression analysis failed to identify significant dose association. To our knowledge we are the first to describe ALK rearrangements in post-Chernobyl PTC samples using routine methods such as FISH and IHC. Our findings further support the hypothesis that gene rearrangements, but not oncogenic driver mutations, are associated with ionizing radiation-related tumour risk. IHC may represent an effective method for ALK-screening in PTCs with known radiation aetiology, which is of clinical value since oncogenic ALK activation might represent a valuable target for small molecule inhibitors. © 2018 The Authors The Journal of Pathology: Clinical Research published by The Pathological Society of Great Britain and Ireland and John Wiley & Sons Ltd.
Kanamori, Shogo; Castro, Marcia C; Sow, Seydou; Matsuno, Rui; Cissokho, Alioune; Jimba, Masamine
2016-01-01
The 5S method is a lean management tool for workplace organization, with 5S being an abbreviation for five Japanese words that translate to English as Sort, Set in Order, Shine, Standardize, and Sustain. In Senegal, the 5S intervention program was implemented in 10 health centers in two regions between 2011 and 2014. To identify the impact of the 5S intervention program on the satisfaction of clients (patients and caretakers) who visited the health centers. A standardized 5S intervention protocol was implemented in the health centers using a quasi-experimental separate pre-post samples design (four intervention and three control health facilities). A questionnaire with 10 five-point Likert items was used to measure client satisfaction. Linear regression analysis was conducted to identify the intervention's effect on the client satisfaction scores, represented by an equally weighted average of the 10 Likert items (Cronbach's alpha = 0.83). Additional regression analyses were conducted to identify the intervention's effect on the scores of each Likert item. Backward stepwise linear regression (n = 1,928) indicated a statistically significant effect of the 5S intervention, represented by an increase of 0.19 points in the client satisfaction scores in the intervention group, 6 to 8 months after the intervention (p = 0.014). Additional regression analyses showed significant score increases of 0.44 (p = 0.002), 0.14 (p = 0.002), 0.06 (p = 0.019), and 0.17 (p = 0.044) points on four items, which, respectively, were healthcare staff members' communication, explanations about illnesses or cases, consultation duration, and clients' overall satisfaction. The 5S has the potential to improve client satisfaction at resource-poor health facilities and could therefore be recommended as a strategic option for improving the quality of healthcare service in low- and middle-income countries. To explore more effective intervention modalities, further studies need to address the mechanisms by which 5S leads to attitude changes in healthcare staff.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Devlin, P; Bhagwat, M
Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning representative of the prescribed depth and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as a criterion for a true positive that >10% of plan dwells had a distance to prescription dose >1 mm different than prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
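As a sketch, the automatic check amounts to comparing the plan's total reference air kerma (TRAK) with an expected value from pre-calculated tables and flagging deviations above 10%. The TRAK formula used here (air-kerma strength times total dwell time) and all names are illustrative assumptions, not the authors' code.

```python
def trak(air_kerma_strength_cgy_cm2_h: float, dwell_times_s) -> float:
    """Total reference air kerma of a plan: source air-kerma strength
    multiplied by the summed dwell time (converted to hours)."""
    return air_kerma_strength_cgy_cm2_h * sum(dwell_times_s) / 3600.0

def needs_review(plan_trak: float, expected_trak: float, tolerance: float = 0.10) -> bool:
    """Flag the plan for additional review if its TRAK deviates from the
    expected value (e.g. from ideal-applicator lookup tables) by >10%."""
    return abs(plan_trak - expected_trak) / expected_trak > tolerance

# Example: a plan whose TRAK is 18% above the table value gets flagged.
print(needs_review(plan_trak=1.18, expected_trak=1.00))  # True
```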
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
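A numerical sketch of the weighting at the heart of the MTP method: each recorded path's length is weighted by the proportion of group members using it, and the weighted lengths are summed. Coordinates and names are invented for illustration.

```python
import numpy as np

def path_length(points: np.ndarray) -> float:
    """Length of one travel path given ordered (n, 2) coordinates in metres."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

def mtp_distance(paths, members_per_path) -> float:
    """Multiple-travel-paths estimate: each path's length is weighted by the
    proportion of group members that used it, then the weighted lengths
    are summed (assumes each member follows exactly one path)."""
    counts = np.asarray(members_per_path, dtype=float)
    weights = counts / counts.sum()
    return sum(w * path_length(p) for w, p in zip(weights, paths))

# Two subgroup paths: a group of 30 monkeys split 20/10 between them.
a = np.array([[0.0, 0.0], [50.0, 10.0], [120.0, 40.0]])
b = np.array([[0.0, 0.0], [60.0, -20.0], [130.0, 10.0]])
print(mtp_distance([a, b], [20, 10]))
```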
NASA Astrophysics Data System (ADS)
Liu, Quan; Grant, Gerald; Li, Jianjun; Zhang, Yan; Hu, Fangyao; Li, Shuqin; Wilson, Christy; Chen, Kui; Bigner, Darell; Vo-Dinh, Tuan
2011-03-01
We report the development of a compact point-detection fluorescence spectroscopy system and two data analysis methods to quantify the intrinsic fluorescence redox ratio and diagnose brain cancer in an orthotopic brain tumor rat model. Our system employs one compact cw diode laser (407 nm) to excite two primary endogenous fluorophores, reduced nicotinamide adenine dinucleotide and flavin adenine dinucleotide. The spectra were first analyzed using a spectral filtering modulation method developed previously to derive the intrinsic fluorescence redox ratio, which has the advantages of insensitivity to optical coupling and rapid data acquisition and analysis. This method represents a convenient and rapid alternative for achieving intrinsic fluorescence-based redox measurements as compared to more complicated model-based methods. It is worth noting that the method can also extract the total hemoglobin concentration at the same time, but only if the emission path length of the fluorescence light, which depends on the illumination and collection geometry of the optical probe, is long enough that the effect of hemoglobin absorption on fluorescence intensity is significant. Then a multivariate method was used to statistically classify normal tissues and tumors. While the first method offers quantitative tissue metabolism information, the second method provides high overall classification accuracy. The two methods provide complementary capabilities for understanding cancer development and noninvasively diagnosing brain cancer. The results of our study suggest that this portable system can potentially be used to demarcate the elusive boundary between a brain tumor and the surrounding normal tissue during surgical resection.
A geostatistical approach to predicting sulfur content in the Pittsburgh coal bed
Watson, W.D.; Ruppert, L.F.; Bragg, L.J.; Tewalt, S.J.
2001-01-01
The US Geological Survey (USGS) is completing a national assessment of coal resources in the five top coal-producing regions in the US. Point-located data provide measurements on coal thickness and sulfur content. The sample data and their geologic interpretation represent the most regionally complete and up-to-date assessment of what is known about top-producing US coal beds. The sample data are analyzed using a combination of geologic and Geographic Information System (GIS) models to estimate tonnages and qualities of the coal beds. Traditionally, GIS practitioners use contouring to represent geographical patterns of "similar" data values. The tonnage and grade of coal resources are then assessed by using the contour lines as references for interpolation. An assessment taken to this point is only indicative of resource quantity and quality. Data users may benefit from a statistical approach that would allow them to better understand the uncertainty and limitations of the sample data. To develop a quantitative approach, geostatistics were applied to the data on coal sulfur content from samples taken in the Pittsburgh coal bed (located in the eastern US, in the southwestern part of the state of Pennsylvania, and in adjoining areas in the states of Ohio and West Virginia). Geostatistical methods that account for regional and local trends were applied to blocks 2.7 mi (4.3 km) on a side. The data and geostatistics support conclusions concerning the average sulfur content and its degree of reliability at regional- and economic-block scale over the large, contiguous part of the Pittsburgh outcrop, but not to a mine scale. To validate the method, a comparison was made with the sulfur contents in sample data taken from 53 coal mines located in the study area. The comparison showed a high degree of similarity between the sulfur content in the mine samples and the sulfur content represented by the geostatistically derived contours. Published by Elsevier Science B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.
2016-08-29
The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.
An analysis of estimation of pulmonary blood flow by the single-breath method
NASA Technical Reports Server (NTRS)
Srinivasan, R.
1986-01-01
The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J. A.
1977-01-01
The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used for an external velocity distribution computed theoretically. Several test cases were computed by this method and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace. The principal component subspace is the one analyzed. The idea implemented in this paper represents a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature which is usually considered desirable from a biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krumeich, F., E-mail: krumeich@inorg.chem.ethz.ch; Mueller, E.; Wepf, R.A.
While HRTEM is the well-established method to characterize the structure of dodecagonal tantalum (vanadium) telluride quasicrystals and their periodic approximants, phase-contrast imaging performed on an aberration-corrected scanning transmission electron microscope (STEM) represents a favorable alternative. The (Ta,V)₁₅₁Te₇₄ clusters, the basic structural unit in all these phases, can be visualized with high resolution. A dependence of the image contrast on defocus and specimen thickness has been observed. In thin areas, the projected crystal potential is basically imaged with either dark or bright contrast at two defocus values close to Scherzer defocus, as confirmed by image simulations utilizing the principle of reciprocity. Models for square-triangle tilings describing the arrangement of the basic clusters can be derived from such images. Graphical abstract: PC-STEM image of a (Ta,V)₁₅₁Te₇₄ cluster. Highlights: • Cₛ-corrected STEM is applied for the characterization of dodecagonal quasicrystals. • The projected potential of the structure is mirrored in the images. • Phase-contrast STEM imaging depends on defocus and thickness. • For simulations of phase-contrast STEM images, the reciprocity theorem is applicable.
Calculation of a fluctuating entropic force by phase space sampling.
Waters, James T; Kim, Harold D
2015-07-01
A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.
NASA Astrophysics Data System (ADS)
Meng, Yiwen; Hadimani, Ravi; Anantharam, Vellareddy; Kanthasamy, Anumantha; Jiles, David
2015-03-01
Transcranial magnetic stimulation (TMS) has been used to investigate possible treatments for a variety of neurological disorders. However, the effect that magnetic fields have on neurons has not been well documented in the literature. We have investigated the effect of different orientations of the magnetic field generated by TMS coils with a monophasic stimulator on the proliferation rate of N27 neuronal cells cultured in flasks and multi-well plates. The proliferation rate of neurons increased when horizontally adherent N27 cells were exposed to a magnetic field pointing upward through the neuronal proliferation layer, compared with the control group. On the other hand, the proliferation rate decreased in cells exposed to a magnetic field pointing downward through the neuronal growth layer, compared with the control group. We confirmed results obtained from the Trypan-blue and automatic cell counting methods with those from the CyQuant and MTS cell viability assays. Our findings could have important implications for the preclinical development of TMS treatments of neurological disorders and represent a new method to control the proliferation rate of neuronal cells.
Reproducibility of electronic tooth colour measurements.
Ratzmann, Anja; Klinke, Thomas; Schwahn, Christian; Treichel, Anja; Gedrange, Tomasz
2008-10-01
Clinical methods of investigation, such as tooth colour determination, should be simple, quick and reproducible. The determination of tooth colours usually relies upon manual comparison of a patient's tooth colour with a colour ring. After some days, however, measurement results frequently lack unequivocal reproducibility. This study aimed to examine an electronic method for reliable colour measurement. The colours of teeth 14 to 24 were determined by three different examiners in 10 subjects using the colour measuring device Shade Inspector. In total, 12 measurements per tooth were taken. Measurements were scheduled at two time points, at study onset (T1) and after 6 months (T2). At each time point, two measurement series per subject were taken by the different examiners at 2-week intervals. The inter-examiner and intra-examiner agreement of the measurement results was assessed. The concordance for lightness and colour intensity (saturation) was represented by the intraclass correlation coefficient. The categorical variable colour shade (hue) was assessed using the kappa statistic. The study results show that tooth colour can be measured independently of the examiner. Good agreement was found between the examiners.
Estimation of the lower flammability limit of organic compounds as a function of temperature.
Rowley, J R; Rowley, R L; Wilding, W V
2011-02-15
A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus. Copyright © 2010 Elsevier B.V. All rights reserved.
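The temperature dependence discussed is, in Burgess-Wheeler form, linear in temperature; a sketch under that assumption follows. The slope coefficient is a placeholder for illustration, not the value fitted to the large-diameter-vessel data in the paper.

```python
def lfl_at_temperature(lfl_ref: float, t_ref: float, t: float,
                       slope: float = 7.8e-4) -> float:
    """Linear Burgess-Wheeler-type temperature correction of the LFL.

    lfl_ref : LFL (vol%) at the reference temperature t_ref (K)
    slope   : fractional decrease of the LFL per kelvin; 7.8e-4 is an
              illustrative placeholder, not the paper's fitted coefficient.
    """
    return lfl_ref * (1.0 - slope * (t - t_ref))

# Example: a propane-like fuel (LFL about 2.1 vol% at 298 K) evaluated at 400 K.
print(lfl_at_temperature(2.1, 298.0, 400.0))
```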
Exploring the Faint End of the Luminosity-Metallicity Relation with Hα Dots
NASA Astrophysics Data System (ADS)
Hirschauer, Alec S.; Salzer, John J.
2015-01-01
The well-known correlation between a galaxy's luminosity and its gas-phase oxygen abundance (the luminosity-metallicity (L-Z) relation) offers clues toward our understanding of chemical enrichment histories and evolution. Bright galaxies are comparatively better studied than faint ones, leaving a relative dearth of observational data points to constrain the L-Z relation in the low-luminosity regime. We present high S/N nebular spectroscopy of low-luminosity star-forming galaxies observed with the KPNO 4m using the new KOSMOS spectrograph to derive direct-method metallicities. Our targets are strong point-like emission-line sources discovered serendipitously in continuum-subtracted narrowband images from the ALFALFA Hα survey. Follow-up spectroscopy of these "Hα dots" shows that these objects represent some of the lowest luminosity star-forming systems in the local Universe. Our KOSMOS spectra cover the full optical region and include detection of [O III] λ4363 in roughly a dozen objects. This paper presents some of the first scientific results obtained using this new spectrograph, and demonstrates its capabilities and effectiveness in deriving direct-method metallicities of faint objects.
An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan
NASA Astrophysics Data System (ADS)
Mulia, I. E.; Gusman, A. R.; Satake, K.
2017-12-01
Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentations on how and where to optimally place such measurement devices are limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterizations, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated based on extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points initially generated by the first stage. A combinatorial search by the MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first 2 leading modes with 4 extrema on each mode, results in 30 observation points spread along the trench. This is obtained after replacing some clustered points within a radius of 30 km with only one representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design with a smaller number of observations can produce better source characterizations, with approximately 20-60% improvement in accuracy across all 11 hypothetical cases. It should be noted, however, that our design is a tsunami-based approach; some of the existing observing systems are equipped with additional devices to measure other parameters of interest, e.g., for monitoring seismic activity.
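The initialization stage can be sketched as follows: stack the simulated waveforms of the hypothetical events into a matrix, take the leading EOF (right singular) modes, and keep the strongest extrema of each mode as candidate gauge locations. Function and variable names are illustrative, and the 30 km declustering and MADS stages are omitted.

```python
import numpy as np

def eof_candidate_points(waveforms: np.ndarray, n_modes: int = 2, n_extrema: int = 4):
    """Pick candidate observation points from EOF spatial modes.

    waveforms : (n_scenarios * n_times, n_grid_points) matrix of simulated
                tsunami waveforms stacked over the hypothetical events.
    The largest-|amplitude| points of each leading mode are used here as a
    simple proxy for the mode's extrema."""
    # EOF spatial modes are the right singular vectors of the centered data.
    _, _, vt = np.linalg.svd(waveforms - waveforms.mean(axis=0), full_matrices=False)
    candidates = []
    for mode in vt[:n_modes]:
        order = np.argsort(np.abs(mode))[::-1]   # strongest grid points first
        candidates.extend(order[:n_extrema].tolist())
    return sorted(set(candidates))
```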
Extracting cross sections and water levels of vegetated ditches from LiDAR point clouds
NASA Astrophysics Data System (ADS)
Roelens, Jennifer; Dondeyne, Stefaan; Van Orshoven, Jos; Diels, Jan
2016-12-01
The hydrologic response of a catchment is sensitive to the morphology of the drainage network. The dimensions of bigger channels are usually well known; however, geometrical data for man-made ditches are often missing, as these ditches are numerous and small. Aerial LiDAR data offer the possibility to extract these small geometrical features. Analysing the three-dimensional point clouds directly maintains the highest degree of information. A longitudinal and a cross-sectional buffer were used to extract the cross-sectional profile points from the LiDAR point cloud. The profile was represented by spline functions fitted through the minimum envelope of the extracted points. The cross-sectional ditch profiles were classified for the presence of water and vegetation based on the normalized difference water index and the spatial characteristics of the points along the profile. The normalized difference water index was created using the RGB and intensity data coupled to the LiDAR points. The mean vertical deviation of 0.14 m found between the extracted and reference cross sections could mainly be attributed to the occurrence of water and partly to vegetation on the banks. In contrast to the cross-sectional area, the extracted width was not influenced by the environment (coefficient of determination R2 = 0.87). Water and vegetation influenced the extracted ditch characteristics, but the proposed method is still robust and therefore facilitates input data acquisition and improves the accuracy of spatially explicit hydrological models.
McKenzie, Judith; Braswell, Bob; Jelsma, Jennifer; Naidoo, Nirmala
2011-01-01
Q-methodology was developed to analyse subjective responses to a range of items dealing with specific topics. This article describes the use of Q-methodology and presents the results of a Q-study on perspectives on disability, carried out in a training workshop, as evidence for its usefulness in disability research. A Q-sort was administered in the context of a training workshop on the Q-method. The Q-sort consisted of statements related to the topic of disability. The responses were analysed using specifically developed software to identify factors that represent patterns of responses. Twenty-two of the 23 respondents loaded on four factors. These factors appeared to represent different paradigms relating to the social, medical and disability rights models of disability. The fourth factor appeared to be that of a family perspective. These are all models evident in the disability research literature and provide evidence for the validity of the Q-method in disability research. Based on this opportunistic study, it would appear that Q-methodology is a useful tool for identifying different viewpoints related to disability.
Combining clinical and genomics queries using i2b2 – Three methods
Murphy, Shawn N.; Avillach, Paul; Bellazzi, Riccardo; Phillips, Lori; Gabetta, Matteo; Eran, Alal; McDuffie, Michael T.; Kohane, Isaac S.
2017-01-01
We are fortunate to be living in an era of twin biomedical data surges: a burgeoning representation of human phenotypes in the medical records of our healthcare systems, and high-throughput sequencing making rapid technological advances. The difficulty representing genomic data and its annotations has almost by itself led to the recognition of a biomedical “Big Data” challenge, and the complexity of healthcare data only compounds the problem to the point that coherent representation of both systems on the same platform seems insuperably difficult. We investigated the capability for complex, integrative genomic and clinical queries to be supported in the Informatics for Integrating Biology and the Bedside (i2b2) translational software package. Three different data integration approaches were developed: The first is based on Sequence Ontology, the second is based on the tranSMART engine, and the third on CouchDB. These novel methods for representing and querying complex genomic and clinical data on the i2b2 platform are available today for advancing precision medicine. PMID:28388645
NASA Astrophysics Data System (ADS)
Akbar, Somaieh; Fathianpour, Nader
2016-12-01
The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital for improving the resolution and accuracy of Curie point depth estimates. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The results showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km2 and at least 50% overlap between adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
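A sketch of the spectral depth estimates involved, assuming the standard centroid method: the top depth follows from the slope of ln √P versus wavenumber, the centroid depth from ln(√P/k), and the basal (Curie) depth is z_b = 2 z_0 − z_t. The abstract does not spell out its exact variant, so this formulation and all names are assumptions.

```python
import numpy as np

def depth_from_spectrum(k: np.ndarray, p: np.ndarray, centroid: bool) -> float:
    """Depth from the radially averaged power spectrum of one block.

    k : wavenumbers (rad/km), p : power spectrum values at those wavenumbers.
    centroid=False fits ln(sqrt(P)) vs k (top depth, higher wavenumbers);
    centroid=True fits ln(sqrt(P)/k) vs k (centroid depth, low wavenumbers)."""
    y = 0.5 * np.log(p) - (np.log(k) if centroid else 0.0)
    slope, _ = np.polyfit(k, y, 1)
    return -slope  # depth in km

# Illustrative use on one block's spectrum, split into low/high wavenumber bands:
# z_top  = depth_from_spectrum(k_high, p_high, centroid=False)
# z_cent = depth_from_spectrum(k_low,  p_low,  centroid=True)
# z_curie = 2.0 * z_cent - z_top   # depth to the bottom of magnetic sources
```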
PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera
NASA Astrophysics Data System (ADS)
Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.
2018-01-01
PET attenuation correction for flexible MRI radio-frequency surface coils in hybrid PET/MRI is still a challenging task, as the position and shape of these coils vary considerably between patients. The purpose of this feasibility study is to develop a novel method for incorporating attenuation information about flexible surface coils in PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil's surface representing the shape of the coil. From a CT template, acquired once in advance, surface information of the coil is likewise extracted and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived from the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is then converted from Hounsfield units into linear attenuation coefficients to form an attenuation map. The resulting fitted attenuation map is integrated into the MRI-based patient-specific DIXON-based attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities, with a remaining relative difference of about 1% from a PET scan without the coil in the FoV.
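The registration step can be sketched as a plain point-to-point ICP (nearest neighbours plus a Kabsch/SVD rigid update); the paper's additional partially rigid refinement is not reproduced, and the function below is an illustrative reimplementation, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source: np.ndarray, target: np.ndarray, iterations: int = 50):
    """Basic point-to-point ICP: returns rotation R and translation t that
    map `source` (n, 3) onto `target` (m, 3)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iterations):
        nearest = target[tree.query(src)[1]]            # closest target points
        mu_s, mu_n = src.mean(axis=0), nearest.mean(axis=0)
        H = (src - mu_s).T @ (nearest - mu_n)           # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # Kabsch rotation
        t_step = mu_n - R_step @ mu_s
        src = src @ R_step.T + t_step                   # apply the update
        R, t = R_step @ R, R_step @ t + t_step          # compose transforms
    return R, t
```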
Non-overlapped P- and S-wave Poynting vectors and their solution by the grid method
NASA Astrophysics Data System (ADS)
Lu, Yongming; Liu, Qiancheng
2018-06-01
The Poynting vector represents the local directional energy flux density of seismic waves in geophysics. It is widely used in elastic reverse time migration to analyze source illumination, suppress low-wavenumber noise, correct for image polarity and extract angle-domain common-image gathers. However, the P- and S-waves are mixed together during wavefield propagation so that the P and S energy fluxes are not clean everywhere, especially at the overlapped points. In this paper, we use a modified elastic-wave equation in which the P and S vector wavefields are naturally separated. Then, we develop an efficient method to evaluate the separable P and S Poynting vectors, respectively, based on the view that the group velocity and phase velocity have the same direction in isotropic elastic media. We furthermore formulate our method using an unstructured mesh-based modeling method named the grid method. Finally, we verify our method using two numerical examples.
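In elastic media the Poynting vector is commonly evaluated as S_j = -σ_ij v_i from the local stress tensor and particle velocity; a pointwise sketch under that convention is below, applied to the separated P- or S-wavefields. Array shapes and names are assumptions for illustration.

```python
import numpy as np

def poynting(stress: np.ndarray, velocity: np.ndarray) -> np.ndarray:
    """Elastic Poynting vector S_j = -sigma_ij * v_i at every grid node.

    stress   : (..., 3, 3) stress-tensor field
    velocity : (..., 3) particle-velocity field
    Passing the separated P- or S-wavefields (as in the modified
    elastic-wave equation) yields clean P and S energy fluxes."""
    return -np.einsum('...ij,...i->...j', stress, velocity)
```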
Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H.; LUTHARDT, Ralph Gunnar; Kuhn, Katharina
2018-01-01
The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Material and Methods: Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. Varying fit was created by modifying the CAD surface model using different parameters (data density; enlargement in different directions). Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman analysis: the mean of the differences (bias) between two measurements, where a mean closer to zero indicates higher agreement] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). Results: For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. The Bland-Altman bias was −15.7 to 3.5 μm for LMMs and −3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (−15.7 μm/−7.6 μm). For the intra-rater reliability and agreement (repeatability) of LMMs, the ICC was 0.970-0.998 and the bias was −1.3 to 2.3 μm. Conclusion: LMMs showed lower inter-rater reliability and agreement at the marginal measurement points than CASMs, which indicates a greater subjective influence with LMMs at these measurement points. The values, however, were still clinically acceptable. LMMs showed very high intra-rater reliability and agreement for all measurement points, indicating high repeatability. PMID:29412364
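As a sketch of the agreement statistic used here: the Bland-Altman bias is the mean of the paired differences between two examiners' measurements, usually reported with 95% limits of agreement. The function below is illustrative (the ICC would normally come from a standard statistics package), and the data names are invented.

```python
import numpy as np

def bland_altman(rater1: np.ndarray, rater2: np.ndarray):
    """Bland-Altman agreement between two raters' gap measurements (um).
    Returns the bias (mean difference) and the 95% limits of agreement;
    a bias closer to zero indicates higher agreement."""
    diff = rater1 - rater2
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Example with invented marginal-gap measurements from two examiners:
r1 = np.array([52.0, 61.5, 47.2, 58.9, 55.1])
r2 = np.array([54.3, 60.0, 49.8, 60.2, 56.7])
print(bland_altman(r1, r2))
```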
On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions
NASA Astrophysics Data System (ADS)
Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard
2006-02-01
Blur and noise originating from the physical imaging processes degrade microscope data. Accurate deblurring techniques require, however, an accurate estimation of the underlying point-spread function (PSF). A good representation of PSFs can be achieved with Zernike polynomials, since they offer a compact representation in which low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher-order coefficients. A quantitative description of the (Gaussian) noise distribution over the Zernike moments of various orders is given, which is the basis for the new soft-clipping approach for denoising of PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF raises the correlation between the reconstructed and original volume by 15% in average cases and by up to 85% in the case of thin fibre structures, compared to reconstructions using a non-improved PSF. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new, almost artifact-free volume rendering approach based on a Shear-Warp technique, wavelet data encoding techniques and a recent approach to approximate the gray value distribution by a super spline model.
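A minimal sketch of the soft-clipping idea: rather than truncating the Zernike expansion at a fixed order, each moment is attenuated according to an assumed per-order Gaussian noise level. The Wiener-like damping profile used here is an assumption; the abstract does not give the exact attenuation function.

```python
import numpy as np

def soft_clip_moments(moments: np.ndarray, orders: np.ndarray,
                      sigma_per_order: np.ndarray) -> np.ndarray:
    """Dampen Zernike moments according to an assumed Gaussian noise model.

    moments         : measured Zernike coefficients of the PSF
    orders          : integer radial order of each coefficient
    sigma_per_order : estimated noise standard deviation for each order
    Each moment is shrunk by a factor that decreases as its estimated
    signal-to-noise ratio drops (a Wiener-like profile, assumed here)."""
    snr2 = (moments / sigma_per_order[orders]) ** 2
    return moments * snr2 / (1.0 + snr2)
```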
Mapping Norway - a Method to Register and Survey the Status of Accessibility
NASA Astrophysics Data System (ADS)
Michaelis, Sven; Bögelsack, Kathrin
2018-05-01
The Norwegian mapping authority has developed a standard method for mapping accessibility, mostly for people with limited or no walking ability, in urban and recreational areas. We chose an object-oriented approach in which points, lines and polygons represent objects in the environment. All data are stored in a geospatial database, so they can be presented as a web map and analyzed using GIS software. By the end of 2016, more than 160 municipalities had been mapped using this method. The aim of this project is to establish a national standard for mapping and to provide a geodatabase that shows the status of accessibility throughout Norway. The data provide a useful tool for national statistics, local planning authorities and private users. First results show that accessibility is low and that Norway still faces many challenges in meeting the government's goals for Universal Design.
Detecting a trend change in cross-border epidemic transmission
NASA Astrophysics Data System (ADS)
Maeno, Yoshiharu
2016-09-01
A method for a system of Langevin equations is developed for detecting a trend change in cross-border epidemic transmission. The equations represent a standard epidemiological SIR compartment model and a meta-population network model. The method analyzes a time series of the number of new cases reported in multiple geographical regions. The method is applicable to investigating the efficacy of the implemented public health intervention in managing infectious travelers across borders. It is found that the change point of the probability of travel movements was one week after the WHO worldwide alert on the SARS outbreak in 2003. The alert was effective in managing infectious travelers. On the other hand, it is found that the probability of travel movements did not change at all for the flu pandemic in 2009. The pandemic did not affect potential travelers despite the WHO alert.
Systems and Methods of Coordination Control for Robot Manipulation
NASA Technical Reports Server (NTRS)
Chang, Chu-Yin (Inventor); English, James (Inventor); Tardella, Neil (Inventor); Bacon, James (Inventor)
2013-01-01
Disclosed herein are systems and methods for controlling robotic apparatus having several movable elements or segments coupled by joints. At least one of the movable elements can include one or more mobile bases, while the others can form one or more manipulators. One of the movable elements can be treated as an end effector for which a certain motion is desired. The end effector may include a tool, for example, or represent a robotic hand (or a point thereon), or one or more of the one or more mobile bases. In accordance with the systems and methods disclosed herein, movement of the manipulator and the mobile base can be controlled and coordinated to effect a desired motion for the end effector. In many cases, the motion can include simultaneously moving the manipulator and the mobile base.
Payne, Andrew C; Andregg, Michael; Kemmish, Kent; Hamalainen, Mark; Bowell, Charlotte; Bleloch, Andrew; Klejwa, Nathan; Lehrach, Wolfgang; Schatz, Ken; Stark, Heather; Marblestone, Adam; Church, George; Own, Christopher S; Andregg, William
2013-01-01
We present "molecular threading", a surface independent tip-based method for stretching and depositing single and double-stranded DNA molecules. DNA is stretched into air at a liquid-air interface, and can be subsequently deposited onto a dry substrate isolated from solution. The design of an apparatus used for molecular threading is presented, and fluorescence and electron microscopies are used to characterize the angular distribution, straightness, and reproducibility of stretched DNA deposited in arrays onto elastomeric surfaces and thin membranes. Molecular threading demonstrates high straightness and uniformity over length scales from nanometers to micrometers, and represents an alternative to existing DNA deposition and linearization methods. These results point towards scalable and high-throughput precision manipulation of single-molecule polymers.
A method of transition conflict resolving in hierarchical control
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz
2016-09-01
The paper concerns the problem of automatically solving transition conflicts in hierarchical concurrent state machines (also known as UML state machines). Preparing a formal specification of behaviour that is free from conflicts can be very complex for the designer. In this paper, a method is proposed for solving conflicts through the modification of transition predicates. Partially specified predicates in the nondeterministic diagram are transformed into a symbolic Boolean space, whose points code all possible valuations of the transition predicates. Next, all valuations under the partial specification are logically multiplied by a function which represents all possible orthogonal predicate valuations. The result of this operation contains all possible collections of predicates which, under the given partial specification, make the original diagram conflict-free and deterministic.
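A small sketch of the underlying idea, using explicit enumeration instead of the paper's symbolic Boolean-space encoding: every valuation of the transition predicates is tested against the orthogonality constraint and kept only if at most one transition is enabled. All names are illustrative.

```python
from itertools import product

def conflict_free_valuations(predicates, enabled_transitions, orthogonal):
    """Enumerate Boolean valuations of transition predicates and keep those
    under which the diagram is deterministic (conflict-free).

    predicates          : list of predicate names
    enabled_transitions : maps a valuation dict to the set of enabled transitions
    orthogonal          : constraint encoding all admissible predicate
                          combinations (the 'orthogonality' function)."""
    good = []
    for bits in product([False, True], repeat=len(predicates)):
        valuation = dict(zip(predicates, bits))
        if orthogonal(valuation) and len(enabled_transitions(valuation)) <= 1:
            good.append(valuation)
    return good
```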
Logic-Based Models for the Analysis of Cell Signaling Networks†
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
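A toy sketch of what a logic-based model looks like in practice: nodes carry Boolean states and each node has an update rule over its inputs, abstracting away the detailed biochemistry. The three-node cascade below is invented for illustration.

```python
def step(state, rules):
    """One synchronous update of a logic-based signaling model.
    `state` maps node name -> bool; `rules` maps node name -> update function."""
    return {node: rule(state) for node, rule in rules.items()}

# Invented three-node cascade: receptor -> kinase -> transcription factor.
rules = {
    "ligand":      lambda s: s["ligand"],       # environmental input, held fixed
    "phosphatase": lambda s: s["phosphatase"],  # environmental input, held fixed
    "R":  lambda s: s["ligand"],
    "K":  lambda s: s["R"] and not s["phosphatase"],
    "TF": lambda s: s["K"],
}
state = {"ligand": True, "phosphatase": False, "R": False, "K": False, "TF": False}
for _ in range(3):
    state = step(state, rules)
print(state["TF"])  # True: the signal has propagated through the cascade
```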
On controlling networks of limit-cycle oscillators
NASA Astrophysics Data System (ADS)
Skardal, Per Sebastian; Arenas, Alex
2016-09-01
The control of network-coupled nonlinear dynamical systems is an active area of research in the nonlinear science community. Coupled oscillator networks represent a particularly important family of nonlinear systems, with applications ranging from the power grid to cardiac excitation. Here, we study the control of network-coupled limit cycle oscillators, extending the previous work that focused on phase oscillators. Based on stabilizing a target fixed point, our method aims to attain complete frequency synchronization, i.e., consensus, by applying control to as few oscillators as possible. We develop two types of controls. The first type directs oscillators towards larger amplitudes, while the second does not. We present numerical examples of both control types and comment on the potential failures of the method.
Target matching based on multi-view tracking
NASA Astrophysics Data System (ADS)
Liu, Yahui; Zhou, Changsheng
2011-01-01
A feature matching method is proposed based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained by MSER. After being fitted with ellipses, these regions are normalized into unit circles and represented with SIFT descriptors. Initial matches are obtained where the ratio of the closest to the second-closest descriptor distance is below a threshold, and outlier points are eliminated by RANSAC. Experimental results indicate that the method can reduce computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination changes.
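A compact sketch of a comparable pipeline with OpenCV, assuming MSER regions are used directly as SIFT keypoints (the paper's ellipse fitting and unit-circle normalization are omitted). The cv2 API calls are real, but the parameters are illustrative and at least four good matches are assumed for the RANSAC homography.

```python
import cv2
import numpy as np

def match_target(img1, img2, ratio=0.75):
    """Match the same target across two camera views: MSER keypoints are
    described with SIFT, matched with Lowe's ratio test, and outliers are
    rejected with a RANSAC homography."""
    mser, sift = cv2.MSER_create(), cv2.SIFT_create()
    kp1, des1 = sift.compute(img1, mser.detect(img1))
    kp2, des2 = sift.compute(img2, mser.detect(img2))
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # needs >= 4 matches
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```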
Beyond mind-reading: multi-voxel pattern analysis of fMRI data.
Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V
2006-09-01
A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
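A minimal MVPA-style decoding sketch with scikit-learn on synthetic data: a linear classifier is trained on multi-voxel patterns and evaluated with cross-validation. All data and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative data: 200 fMRI time points x 500 voxels, two mental states.
rng = np.random.default_rng(1)
patterns = rng.normal(size=(200, 500))
labels = rng.integers(0, 2, size=200)
patterns[labels == 1, :20] += 0.5          # weak information in 20 voxels

# Decode the represented state from the distributed voxel pattern.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # above chance (0.5) if informative
```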
Code of Federal Regulations, 2010 CFR
2010-07-01
... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...
Code of Federal Regulations, 2010 CFR
2010-07-01
... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Finishing Water Subcategory § 463.33 Effluent limitations guidelines representing the degree of effluent reduction...
Code of Federal Regulations, 2010 CFR
2010-07-01
... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Cleaning Water Subcategory § 463.23 Effluent limitations guidelines representing the degree of effluent reduction...
Localization of tumors in various organs, using edge detection algorithms
NASA Astrophysics Data System (ADS)
López Vélez, Felipe
2015-09-01
The edge of an image is a set of points organized along a curve at which the image brightness changes abruptly or has discontinuities. To find these edges, five different mathematical methods are used and then compared with one another, with the aim of finding which method best detects the edges of a given image. In this paper, these five methods are applied for medical purposes in order to find which one is capable of finding the edges of a scanned image more accurately than the others. The problem consists in analyzing two biomedical images: one represents a brain tumor and the other a liver tumor. These images are analyzed with the five methods described, and the results are compared in order to determine the best method to use. The following edge detection algorithms were applied: the Bessel, Morse, Hermite, Weibull and Sobel algorithms. After applying each method to both images, it is impossible to single out one most accurate method for tumor detection, because the best method changed in each case: for the brain tumor image the Morse method was best at finding the edges, whereas for the liver tumor image it was the Hermite method. Further observation shows that the Hermite and Morse methods have the lowest standard deviations for these two cases, suggesting that they are the most accurate methods for finding edges in biomedical image analysis.
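Of the five operators compared, Sobel is the most standard and easy to sketch: the edge map thresholds the gradient magnitude obtained from two orthogonal Sobel filters. The threshold value is an illustrative parameter.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean edge map from the Sobel gradient magnitude."""
    gx = ndimage.sobel(image, axis=1, output=float)  # horizontal gradient
    gy = ndimage.sobel(image, axis=0, output=float)  # vertical gradient
    magnitude = np.hypot(gx, gy)                     # gradient magnitude
    return magnitude > threshold

# Example on a synthetic image with a bright square (edges at its border):
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
print(sobel_edges(img, threshold=1.0).sum())  # number of edge pixels found
```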
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiefer, H., E-mail: johann.schiefer@kssg.ch; Peters, S.; Plasswilm, L.
Purpose: For stereotactic radiosurgery, the AAPM Report No. 54 [AAPM Task Group 42 (AAPM, 1995)] requires the overall stability of the isocenter (couch, gantry, and collimator) to be within a 1 mm radius. In reality, a rotating system has no rigid axis and thus no isocenter point which is fixed in space. As a consequence, the isocenter concept is reviewed here. The aim is to develop a measurement method following the revised definitions. Methods: The mechanical isocenter is defined here by the point which rotates on the shortest path in the room coordinate system. The path is labeled as the “isocenter path.” Its center of gravity is assumed to be the mechanical isocenter. Following this definition, an image-based and radiation-free measurement method was developed. Multiple marker pairs in a plane perpendicular to the assumed gantry rotation axis of a linear accelerator are imaged with a smartphone application from several rotation angles. Each marker pair represents an independent measuring system. The room coordinates of the isocenter path and the mechanical isocenter are calculated based on the marker coordinates. The presented measurement method is by this means strictly focused on the mechanical isocenter. Results: The measurement result is available virtually immediately following completion of measurement. When 12 independent measurement systems are evaluated, the standard deviations of the isocenter path points and mechanical isocenter coordinates are 0.02 and 0.002 mm, respectively. Conclusions: The measurement is highly accurate, time efficient, and simple to adapt. It is therefore suitable for regular checks of the mechanical isocenter characteristics of the gantry and collimator rotation axis. When the isocenter path is reproducible and its extent is in the range of the needed geometrical accuracy, it should be taken into account in the planning process. This is especially true for stereotactic treatments and radiosurgery.
A Lyapunov based approach to energy maximization in renewable energy technologies
NASA Astrophysics Data System (ADS)
Iyasere, Erhun
This dissertation describes the design and implementation of Lyapunov-based control strategies for the maximization of the power captured by renewable energy harnessing technologies such as (i) a variable speed, variable pitch wind turbine, (ii) a variable speed wind turbine coupled to a doubly fed induction generator, and (iii) a solar power generating system charging a constant voltage battery. First, a torque control strategy is presented to maximize wind energy captured in variable speed, variable pitch wind turbines at low to medium wind speeds. The proposed strategy applies control torque to the wind turbine pitch and rotor subsystems to simultaneously control the blade pitch and tip speed ratio, via the rotor angular speed, to an optimum point at which the capture efficiency is maximum. The control method allows for aerodynamic rotor power maximization without exact knowledge of the wind turbine model. A series of numerical results show that the wind turbine can be controlled to achieve maximum energy capture. Next, a control strategy is proposed to maximize the wind energy captured in a variable speed wind turbine, with an internal induction generator, at low to medium wind speeds. The proposed strategy controls the tip speed ratio, via the rotor angular speed, to an optimum point at which the efficiency constant (or power coefficient) is maximal for a particular blade pitch angle and wind speed by using the generator rotor voltage as a control input. This control method allows for aerodynamic rotor power maximization without exact wind turbine model knowledge. Representative numerical results demonstrate that the wind turbine can be controlled to achieve near maximum energy capture. Finally, a power system consisting of a photovoltaic (PV) array panel, dc-to-dc switching converter, charging a battery is considered wherein the environmental conditions are time-varying. A backstepping PWM controller is developed to maximize the power of the solar generating system. The controller tracks a desired array voltage, designed online using an incremental conductance extremum-seeking algorithm, by varying the duty cycle of the switching converter. The stability of the control algorithm is demonstrated by means of Lyapunov analysis. Representative numerical results demonstrate that the grid power system can be controlled to track the maximum power point of the photovoltaic array panel in varying atmospheric conditions. Additionally, the performance of the proposed strategy is compared to the typical maximum power point tracking (MPPT) method of perturb and observe (P&O), where the converter dynamics are ignored, and is shown to yield better results.
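The P&O baseline against which the proposed controller is compared is simple enough to sketch: keep perturbing the array voltage in the direction that last increased power. This is the generic textbook update, not the dissertation's backstepping or incremental-conductance algorithm; the step size and names are illustrative.

```python
def perturb_and_observe(v: float, p: float, v_prev: float, p_prev: float,
                        step: float = 0.5) -> float:
    """One iteration of the classic P&O maximum power point tracker.
    Returns the next array-voltage reference."""
    if p > p_prev:                       # last move increased power: keep going
        direction = 1.0 if v > v_prev else -1.0
    else:                                # last move decreased power: reverse
        direction = -1.0 if v > v_prev else 1.0
    return v + direction * step

# Example: power rose after the voltage was raised, so keep raising it.
print(perturb_and_observe(v=30.5, p=102.0, v_prev=30.0, p_prev=100.0))  # 31.0
```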
Integration of Visual and Joint Information to Enable Linear Reaching Motions
NASA Astrophysics Data System (ADS)
Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu
2017-01-01
A new dynamics-driven control law was developed for a robot arm, based on the feedback control law which uses a linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix to generate straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, the feedforward planning of the reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, alternative to the dominant inverse-method-based explanations, avoiding the expensive computation of inverse kinematics and point-to-point control along desired trajectories.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
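A minimal sketch of the transformation to equivalent dimensions: each parameter column is replaced by its cumulative distribution value (here the raw empirical CDF rather than the paper's kernel estimate), after which Euclidean distances across mixed units are meaningful. The toy catalogue and names are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist

def to_equivalent_dimensions(catalog: np.ndarray) -> np.ndarray:
    """Map each earthquake parameter (column) to its empirical CDF value,
    so all transformed parameters live on a linear [0, 1] scale."""
    ranks = catalog.argsort(axis=0).argsort(axis=0)  # rank of each value per column
    return (ranks + 1.0) / catalog.shape[0]

# Toy catalogue: occurrence time (days), magnitude, depth (km) for five events.
catalog = np.array([[ 1.0, 2.1, 5.0],
                    [ 3.5, 3.0, 7.2],
                    [ 4.0, 2.4, 3.1],
                    [ 9.9, 4.2, 6.6],
                    [12.0, 2.2, 4.0]])
ed = to_equivalent_dimensions(catalog)
print(pdist(ed))  # Euclidean inter-event distances, meaningful across mixed units
```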