Sample records for point extraction system

  1. GPU surface extraction using the closest point embedding

    NASA Astrophysics Data System (ADS)

    Kim, Mark; Hansen, Charles

    2015-01-01

    Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known method for conformal multi-material mesh extraction. The resulting speed-ups can reduce the time from labeled data to mesh from hours to minutes, benefiting users, such as bioengineers, who employ triangular and tetrahedral meshes.
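
    A minimal sketch of the closest point lookup at the heart of this approach, assuming the surface is available as a dense point sample; the grid construction and names are illustrative, not the authors' GPU implementation:

      import numpy as np
      from scipy.spatial import cKDTree

      def closest_point_embedding(surface_pts, grid_res=64, pad=0.1):
          """Store, for every grid node, its closest point on the surface."""
          lo, hi = surface_pts.min(0) - pad, surface_pts.max(0) + pad
          axes = [np.linspace(lo[i], hi[i], grid_res) for i in range(3)]
          nodes = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
          dist, idx = cKDTree(surface_pts).query(nodes)
          # a particle snaps back onto the surface with one embedding lookup
          return nodes, surface_pts[idx]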

  2. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology for generating them. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between the trajectory data and the road surface, and then the full set of road points is extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of the road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from the road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
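
    A minimal sketch of the per-scan-line extraction described in step (3), assuming intensity is the ordered intensity profile of one scan line; the window size and jump threshold are illustrative (the paper varies the median window dynamically and constrains edges by expected marking width):

      import numpy as np
      from scipy.signal import medfilt

      def marking_spans(intensity, window=9, jump=12.0):
          """Median-smooth one scan line, then pair rising/falling intensity edges."""
          smoothed = medfilt(intensity, kernel_size=window | 1)  # odd window size
          grad = np.diff(smoothed)
          rising = np.flatnonzero(grad > jump)    # dark road -> bright marking
          falling = np.flatnonzero(grad < -jump)  # bright marking -> dark road
          # edge-constraint pairing: each rising edge with the next falling edge
          return [(r, f[0]) for r in rising
                  if (f := falling[falling > r]).size]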

  3. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extractable in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points, plus the pseudorandomness of the projected pattern, equips each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy than with existing pseudorandomly encoded structured light systems.
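
    A hedged sketch of subpixel feature localization in this spirit, using a generic corner detector in place of the authors' rhombic grid point detector; the OpenCV calls are real, while the image path and thresholds are illustrative:

      import cv2
      import numpy as np

      img = cv2.imread("captured_pattern.png", cv2.IMREAD_GRAYSCALE)
      corners = cv2.goodFeaturesToTrack(img, maxCorners=2000,
                                        qualityLevel=0.02, minDistance=7)
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 1e-3)
      # iterative refinement localizes each candidate grid point with
      # subpixel accuracy
      subpix = cv2.cornerSubPix(img, np.float32(corners), (5, 5), (-1, -1),
                                criteria)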

  4. Palmprint verification using Lagrangian decomposition and invariant interest points

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.

    2011-06-01

    This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction, applied to the region of interest (ROI) that is extracted from the wider palm texture at the preprocessing stage, to obtain invariant interest points. Finally, identity is established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and the experimental results reveal the effectiveness and robustness of the system.
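
    A minimal sketch of the SIFT stage, assuming ref_roi and probe_roi are preprocessed grayscale palm ROIs; the graph construction and permutation-matrix matching are not shown:

      import cv2

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(ref_roi, None)    # reference palm ROI
      kp2, des2 = sift.detectAndCompute(probe_roi, None)  # probe palm ROI
      matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
      # Lowe's ratio test keeps only distinctive correspondences
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]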

  5. An In vitro Comparison and Evaluation of Sealing Ability of Newly Introduced C-point System, Cold Lateral Condensation, and Thermoplasticized Gutta-Percha Obturating Technique: A Dye Extraction Study.

    PubMed

    Sinhal, Tapati Manohar; Shah, Ruchi Rani Purvesh; Jais, Pratik Subhas; Shah, Nimisha Chinmay; Hadwani, Krupali Dhirubhai; Rothe, Tushar; Sinhal, Neha Nilesh

    2018-01-01

    The aim of this study is to compare and evaluate the sealing ability of the newly introduced C-point system, cold lateral condensation, and the thermoplasticized gutta-percha obturating technique using a dye extraction method. Sixty extracted maxillary central incisors were decoronated below the cementoenamel junction. The working length was established, and biomechanical preparation was done using K3 rotary files with a standard irrigation protocol. Teeth were divided into three groups according to the obturation protocol: Group I, cold lateral condensation; Group II, thermoplasticized gutta-percha; and Group III, the C-point obturating system. After obturation, all samples were subjected to microleakage assessment using the dye extraction method. The obtained scores were statistically analyzed using one-way ANOVA and post hoc Tukey's tests. One-way analysis of variance revealed a significant difference among the three groups (P < 0.05). Tukey's HSD post hoc multiple-comparison tests showed that Groups II and III performed significantly better than Group I; Group III performed better than Group II, with no significant difference between them. All the obturating techniques showed some degree of microleakage. Root canals filled with the C-point system showed the least microleakage, followed by the thermoplasticized obturating technique, with no significant difference between them. The C-point obturation system could be an alternative to the cold lateral condensation technique.

  6. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry, and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system is more portable, accurate, and robust.
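
    A hedged sketch of the coarse-to-fine alignment, using Open3D's FPFH/RANSAC registration as a stand-in for PCL's SAC-IA, followed by ICP refinement; the voxel size is illustrative:

      import open3d as o3d

      reg = o3d.pipelines.registration

      def align(src, tgt, voxel=5.0):
          s, t = src.voxel_down_sample(voxel), tgt.voxel_down_sample(voxel)
          for pc in (s, t):
              pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(voxel * 2, 30))
          fs, ft = (reg.compute_fpfh_feature(
                        pc, o3d.geometry.KDTreeSearchParamHybrid(voxel * 5, 100))
                    for pc in (s, t))
          coarse = reg.registration_ransac_based_on_feature_matching(
              s, t, fs, ft, True, voxel * 1.5,
              reg.TransformationEstimationPointToPoint(False), 3, [],
              reg.RANSACConvergenceCriteria(100000, 0.999))
          fine = reg.registration_icp(src, tgt, voxel, coarse.transformation,
                                      reg.TransformationEstimationPointToPoint())
          return fine.transformation  # refined rigid transform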

  7. Analysis of separation test for automatic brake adjuster based on linear Radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability of anti-noise and anti-interference because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
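
    A minimal sketch of line fitting via the Radon transform, assuming the test curve has been rasterized into a binary image; intersecting the lines fitted to the curve segments on either side of a candidate then yields the inflection point:

      import numpy as np
      from skimage.transform import radon

      def dominant_line(curve_img):
          """Return (offset, angle in degrees) of the strongest line."""
          theta = np.linspace(0.0, 180.0, 360, endpoint=False)
          sinogram = radon(curve_img.astype(float), theta=theta, circle=False)
          r, t = np.unravel_index(np.argmax(sinogram), sinogram.shape)
          return r - sinogram.shape[0] // 2, theta[t]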

  8. Extractive biodegradation and bioavailability assessment of phenanthrene in the cloud point system by Sphingomonas polyaromaticivorans.

    PubMed

    Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing

    2016-01-01

    The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing the cloud point system. The cloud point system is composed of a mixture (40 g/L) of the nonionic surfactants Brij 30 and Tergitol TMN-3 in equal proportions. After phenanthrene degradation, a higher wet cell weight and a lower phenanthrene residue were obtained in the cloud point system than in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene partitioned preferentially from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) is lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute-phase detoxification was achieved, indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP of the cloud point system. The apparent logP decreased significantly, indicating that the bioavailability of phenanthrene increased remarkably in the system. This study provides a potential application of biological treatment in water and soil contaminated by phenanthrene.

  9. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    PubMed

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems (such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and room-temperature ionic liquids) has been studied extensively, this has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of the cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through an analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.

  10. Current Nucleic Acid Extraction Methods and Their Implications to Point-of-Care Diagnostics.

    PubMed

    Ali, Nasir; Rampazzo, Rita de Cássia Pontello; Costa, Alexandre Dias Tavares; Krieger, Marco Aurelio

    2017-01-01

    Nucleic acid extraction (NAE) plays a vital role in molecular biology as the primary step for many downstream applications. Many modifications have been introduced to the original 1869 method. Modern processes are categorized as chemical or mechanical, each with peculiarities that influence their use, especially in point-of-care diagnostics (POC-Dx). POC-Dx is a new approach aiming to replace sophisticated analytical machinery with microanalytical systems able to be used near the patient, at the point of care or point of need. Although notable efforts have been made, a simple and effective extraction method is still a major challenge for the widespread use of POC-Dx. In this review, we dissect the working principle of each of the most common NAE methods, overviewing their advantages and disadvantages, as well as their potential for integration in POC-Dx systems. At present, it seems difficult, if not impossible, to establish a procedure that can be universally applied to POC-Dx. We also discuss the effects of the NAE chemicals upon the main plastic polymers used to mass-produce POC-Dx systems. We end our review discussing the limitations and challenges that should guide the quest for an efficient extraction method that can be integrated in a POC-Dx system.

  11. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from a low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersive effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function than with the synthetic point spread function, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model.
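
    A hedged one-step sketch of the idea, assuming z_stack is the measured through-focus stack of the thin sample and ideal_psf is the idealized PSF derived from it; Richardson-Lucy is used here as a generic non-blind deconvolver, and num_iter is illustrative:

      from skimage import restoration

      # deconvolving the measured stack by the idealized PSF yields the
      # system-specific "extracted PSF" described above
      extracted_psf = restoration.richardson_lucy(z_stack, ideal_psf,
                                                  num_iter=30, clip=False)
      extracted_psf /= extracted_psf.sum()  # normalize for later deconvolution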

  12. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, the development of positioning algorithms and FPGAs has made real-time, fast, and accurate camera positioning feasible. Building on an in-depth study of embedded hardware and dual-camera positioning, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by the FPGA hardware, captures images of three target points marked by visible-light LEDs on the instrument; (2) before the feature point coordinates are extracted, the image is filtered (median filtering is used here) to suppress noise introduced by the platform; (3) marker coordinates are extracted by the FPGA hardware circuit: a new iterative threshold selection method segments the image, the binary image is labeled, and the coordinates of the feature points are computed by the center-of-gravity method; (4) the direct linear transformation (DLT) combined with an epipolar-constraint method is applied to reconstruct three-dimensional space coordinates with the planar-array CMOS system. The SOPC's dual-core architecture lets matching and coordinate operations run separately, increasing processing speed.
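
    A minimal software sketch of step (3), combining an iterative (ISODATA-style) threshold with center-of-gravity localization; all names and the convergence tolerance are illustrative:

      import numpy as np
      from scipy import ndimage

      def marker_centroids(img):
          t = img.mean()                       # iterative threshold selection
          for _ in range(100):
              lo, hi = img[img <= t], img[img > t]
              t_new = 0.5 * (lo.mean() + hi.mean())
              if abs(t_new - t) < 0.5:
                  break
              t = t_new
          labels, n = ndimage.label(img > t)   # label the binary image
          # intensity-weighted center of gravity of each marker blob
          return ndimage.center_of_mass(img, labels, range(1, n + 1))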

  13. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus capturing the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of the data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We use the point cloud density to remove noise and morphological operations to eliminate errors. In application, we tested the method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicate that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point cloud datasets produced by mobile LiDAR.
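
    A minimal sketch of the density-based noise removal step, assuming points is an N x 3 array of marking candidates; the radius and neighbor count are illustrative:

      import numpy as np
      from scipy.spatial import cKDTree

      def density_filter(points, radius=0.15, min_neighbors=8):
          """Keep only candidates with enough neighbors (true markings are dense)."""
          tree = cKDTree(points)
          counts = np.array([len(nb) for nb in
                             tree.query_ball_point(points, radius)])
          return points[counts >= min_neighbors]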

  14. Extractive biodecolorization of triphenylmethane dyes in cloud point system by Aeromonas hydrophila DN322p.

    PubMed

    Pan, Tao; Ren, Suizhou; Xu, Meiying; Sun, Guoping; Guo, Jun

    2013-07-01

    The biological treatment of triphenylmethane dyes is an important issue. Most microbes have limited practical application because they cannot completely detoxify these dyes. In this study, the extractive biodecolorization of triphenylmethane dyes by Aeromonas hydrophila DN322p was carried out by introducing the cloud point system. The cloud point system is composed of a mixture (20 g/L) of the nonionic surfactants Brij 30 and Tergitol TMN-3 in equal proportions. After the decolorization of crystal violet, a higher wet cell weight was obtained in the cloud point system than in the control system. Based on the results of thin-layer chromatography, the residual crystal violet and its decolorized product, leuco crystal violet, partitioned preferentially into the coacervate phase. Therefore, detoxification of the dilute phase was achieved, which indicated that the dilute phase could be discharged without causing dye pollution. The extractive biodecolorization of three other triphenylmethane dyes was also examined in this system. The decolorization of malachite green and brilliant green was similar to that of crystal violet. Only ethyl violet showed a poor decolorization rate, because DN322p decolorized it via adsorption but did not convert it into its leuco form. This study provides a potential application of biological treatment for triphenylmethane dye wastewater.

  15. Light extraction block with curved surface

    DOEpatents

    Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.

    2016-03-22

    Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light-emitting surface may be optically coupled to the light source. The three-dimensional light-emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.

  16. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information on the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth, so points with small elevation differences from their neighbors are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with the laser range. The separated points are used as seeds for intensity-based region growing, which recovers complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provided a promising solution to road marking extraction from MLS data.
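
    A minimal sketch of the intensity-based region growing step, assuming pts (N x 3), per-point intensity, and seed indices from the histogram analysis; the radius and intensity tolerance are illustrative:

      import numpy as np
      from collections import deque
      from scipy.spatial import cKDTree

      def grow_markings(pts, intensity, seeds, radius=0.1, tol=10.0):
          tree, keep, queue = cKDTree(pts), set(seeds), deque(seeds)
          while queue:
              i = queue.popleft()
              for j in tree.query_ball_point(pts[i], radius):
                  if j not in keep and abs(intensity[j] - intensity[i]) < tol:
                      keep.add(j)       # neighbor of similar intensity joins
                      queue.append(j)   # ... and seeds further growth
          return np.fromiter(keep, dtype=int)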

  17. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  18. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components that encode the shape and intensity information of the 3D point clouds; the resulting features are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.

  19. Applications of 3D-Edge Detection for ALS Point Cloud

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    Edge detection has been one of the major issues in the fields of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D edges can be detected from these point clouds, and many edge and feature line extraction methods have been proposed. Among these methods, an easy-to-use 3D edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), detects edges based on an analysis of the geometric properties of a query point's neighbourhood. The AGPN method detects two kinds of 3D edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
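
    A hedged sketch of an AGPN-style boundary test, assuming p is a query point and nbrs its neighborhood (N x 3): a large angular gap among the neighbors projected onto the local tangent plane marks a boundary element; the gap threshold is illustrative:

      import numpy as np

      def is_boundary(p, nbrs, gap_deg=90.0):
          d = nbrs - p
          _, _, Vt = np.linalg.svd(d - d.mean(0))   # tangent plane of neighborhood
          ang = np.sort(np.arctan2(d @ Vt[1], d @ Vt[0]))
          gaps = np.diff(np.concatenate([ang, ang[:1] + 2 * np.pi]))
          return np.degrees(gaps.max()) > gap_deg   # big empty sector => boundary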

  20. The effect of antioxidants on quantitative changes of lysine and methionine in linoleic acid emulsions at different pH conditions.

    PubMed

    Hęś, Marzanna; Gliszczyńska-Świgło, Anna; Gramza-Michałowska, Anna

    2017-01-01

    Plants are an important source of phenolic compounds. The antioxidant capacities of green tea, thyme, and rosemary extracts that contain these compounds have been reported earlier. However, there is a lack of accessible information about their activity against lipid oxidation in emulsions and their ability to inhibit the interaction of lipid oxidation products with amino acids. Therefore, the influence of green tea, thyme, and rosemary extracts and BHT (butylated hydroxytoluene) on quantitative changes in lysine and methionine in linoleic acid emulsions, at the pH of the isoelectric point of the amino acids and at a pH below it, was investigated. The total phenolic contents of the plant extracts were determined spectrophotometrically using Folin-Ciocalteu's reagent, and individual phenols by HPLC. The level of oxidation of the emulsion was determined by measuring peroxides and TBARS (thiobarbituric acid reactive substances). Methionine and lysine in the system were reacted with sodium nitroprusside and trinitrobenzenesulphonic acid, respectively, and the absorbance of the complexes was measured. The green tea extract had the highest total polyphenol content. Systems containing both antioxidants and an amino acid protected linoleic acid more efficiently than the addition of antioxidants alone. Lysine and methionine losses in samples without added antioxidants were lower at their isoelectric points than below these points. Antioxidants decreased the loss of amino acids. The protective properties of the antioxidants towards methionine were higher at the isoelectric pH, whereas towards lysine they were higher at a pH below this point. Green tea, thyme, and rosemary extracts exhibit antioxidant activity in linoleic acid emulsions. Moreover, they can be used to inhibit quantitative changes in amino acids in lipid emulsions. However, the antioxidant efficiency of these extracts seems to depend on pH conditions. Further investigations should be carried out to clarify this issue.

  21. Solvent extraction process for the separation of uranium and thorium from protactinium and fission products

    DOEpatents

    Rainey, R.H.; Moore, J.G.

    1962-08-14

    A liquid-liquid extraction process was developed for recovering thorium and uranium values from a neutron-irradiated thorium composition. They are separated in a solvent extraction system comprising a first end extraction stage for introducing an aqueous feed containing thorium and uranium into the system, a plurality of intermediate extraction stages, and a second end extraction stage for introducing a water-immiscible selective organic solvent for thorium and uranium into countercurrent contact with the aqueous feed. A nitrate ion-deficient aqueous feed solution containing thorium and uranium was introduced into the first end extraction stage in countercurrent contact with the organic solvent entering the system from the second end extraction stage, while an aqueous solution of salting nitric acid was introduced into any one of the intermediate extraction stages of the system. The resultant thorium- and uranium-laden organic solvent was removed at a point preceding the first end extraction stage of the system. (AEC)

  22. The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)

    NASA Astrophysics Data System (ADS)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds is widely used in many geomatics engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing point clouds into layers according to their characteristics. The present paper discusses the segmentation of point clouds with K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm. Point clouds generated with the photogrammetric method and with a Terrestrial LiDAR System (TLS) were segmented according to surface normal, intensity, and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, and in the photogrammetric method the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
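
    A minimal sketch of the K-means variant, assuming per-point normals (N x 3), intensity (N,), and curvature (N,) have already been computed; the cluster count is illustrative:

      import numpy as np
      from sklearn.cluster import KMeans

      features = np.column_stack([normals, intensity, curvature])
      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
      # each label is one segment (e.g., ground, facade, furniture, ...)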

  23. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading of the CYBER JAPAN database has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework of this database. Road inventory mapping therefore has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea of the method is extracting lines with a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that the method processes every traffic marking. In this paper, we discuss a highly accurate method, independent of human operators, that applies the following steps: (1) binarizing the LIDAR points by intensity and extracting the higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local intensity-changing points along scan lines from the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
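
    A minimal sketch of step (7), assuming the intensity-changing points of step (6) have been rasterized into a binary image edge_img; the angular resolution and peak count are illustrative:

      import numpy as np
      from skimage.transform import hough_line, hough_line_peaks

      angles = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
      h, theta, dist = hough_line(edge_img, theta=angles)
      _, line_angles, line_dists = hough_line_peaks(h, theta, dist, num_peaks=20)
      # each (angle, dist) pair is one candidate marking edge line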

  24. A digital system for surface reconstruction

    USGS Publications Warehouse

    Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.

    1996-01-01

    A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.

  25. Impact of Surface Active Ionic Liquids on the Cloud Points of Nonionic Surfactants and the Formation of Aqueous Micellar Two-Phase Systems.

    PubMed

    Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P

    2017-09-21

    Aqueous micellar two-phase systems (AMTPS) hold large potential for the cloud point extraction of biomolecules but are as yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs), covering a wide range of molecular properties, upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and a lower critical packing parameter (CPP < 1/2) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and a higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual, as it was previously accepted that cloud point reduction is induced only by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.

  26. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors; registration also requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
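
    A minimal sketch of the baseline comparison, assuming each epoch is a dict mapping feature point names to 3D coordinates; it is registration-free by construction, since only within-scan distances are compared:

      import numpy as np
      from itertools import combinations

      def baseline_changes(epoch0, epoch1):
          changes = {}
          for i, j in combinations(sorted(epoch0), 2):
              b0 = np.linalg.norm(epoch0[i] - epoch0[j])  # baseline in scan 1
              b1 = np.linalg.norm(epoch1[i] - epoch1[j])  # same baseline in scan 2
              changes[(i, j)] = b1 - b0
          return changes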

  27. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns within each layer. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that an accuracy improvement of up to 78.43% can be achieved for the HDL-32E using the proposed calibration method.
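
    A hedged sketch of the per-layer circle extraction, using OpenCV's circular Hough transform as a stand-in for the Generalized Hough Transform; the raster cell size and Hough parameters are illustrative:

      import numpy as np
      import cv2

      def circles_in_layer(layer_pts, cell=0.02):
          """Rasterize one 2D layer of the cloud and detect circular patterns."""
          ij = ((layer_pts[:, :2] - layer_pts[:, :2].min(0)) / cell).astype(int)
          img = np.zeros(ij.max(0) + 1, np.uint8)
          img[ij[:, 0], ij[:, 1]] = 255
          img = cv2.dilate(img, np.ones((3, 3), np.uint8))  # close point gaps
          return cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                  param1=50, param2=12, minRadius=3, maxRadius=40)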

  28. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on scan lines. Then, an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted by adaptive threshold segmentation based on an integral image, without intensity calibration; noise is further reduced by removing small patches of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, is used to classify linear markings, arrow markings, and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
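
    A minimal sketch of the thresholding step, assuming intensity_img is the 8-bit IDW-interpolated intensity raster; OpenCV's mean adaptive threshold is integral-image based, and the block size, offset, and area threshold are illustrative:

      import cv2

      markings = cv2.adaptiveThreshold(intensity_img, 255,
                                       cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, blockSize=51, C=-10)
      # drop small noise patches from the binary image
      n, lab, stats, _ = cv2.connectedComponentsWithStats(markings)
      for k in range(1, n):
          if stats[k, cv2.CC_STAT_AREA] < 30:
              markings[lab == k] = 0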

  29. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design for an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual behavior in response to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images to extract the vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
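
    A hedged sketch of a phase-based disparity estimate in this spirit, assuming left_row and right_row are corresponding binarized image rows; a real system would work on windowed patches:

      import numpy as np

      def phase_disparity(left_row, right_row):
          L, R = np.fft.fft(left_row), np.fft.fft(right_row)
          cross = L * np.conj(R)
          corr = np.fft.ifft(cross / (np.abs(cross) + 1e-9)).real
          shift = int(np.argmax(corr))          # peak of phase correlation
          return shift if shift <= len(corr) // 2 else shift - len(corr)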

  30. Cloud point extraction of Δ9-tetrahydrocannabinol from cannabis resin.

    PubMed

    Ameur, S; Haddou, B; Derriche, Z; Canselier, J P; Gourdon, C

    2013-04-01

    A cloud point extraction method coupled with high-performance liquid chromatography (HPLC/UV) was developed for the determination of Δ9-tetrahydrocannabinol (THC) in the micellar phase. The nonionic surfactant Dowfax 20B102 was used to extract and pre-concentrate THC from cannabis resin prior to its determination with an HPLC-UV system (diode array detector) with isocratic elution. The parameters and variables affecting the extraction were investigated. Under optimum conditions (1 wt.% Dowfax 20B102, 1 wt.% Na2SO4, T = 318 K, t = 30 min), the method yielded a satisfactory recovery rate (~81%). The limit of detection was 0.04 μg/mL, and the relative standard deviation was less than 2%. Compared with conventional solid-liquid extraction, this new method avoids the use of volatile organic solvents and is therefore environmentally safer.

  31. An online handwriting recognition system for Turkish

    NASA Astrophysics Data System (ADS)

    Vural, Esra; Erdogan, Hakan; Oflazer, Kemal; Yanikoglu, Berrin A.

    2004-12-01

    Despite recent developments in Tablet PC technology, there have been no applications for recognizing handwriting in Turkish. In this paper, we present an online handwritten text recognition system for Turkish, developed using the Tablet PC interface. Even though the system is developed for Turkish, the issues addressed are common to online handwriting recognition systems in general. Several dynamic features are extracted from the handwriting data for each recorded point, and Hidden Markov Models (HMMs) are used to train letter and word models. We experimented with various features and HMM model topologies, and report on the effects of these experiments. We started with the first and second derivatives of the x and y coordinates and the relative change in pen pressure as initial features. We found that using two additional features, the number of neighboring points and the relative height of each point with respect to the base-line, improves the recognition rate. In addition, extracting features within strokes and using a skipping-state topology improve system performance as well. The improved system performance is 94% in recognizing handwritten words from a 1000-word lexicon.
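
    A minimal sketch of the per-point feature extraction, assuming x, y, p are the pen coordinates and pressure of one word and baseline_y its estimated base-line; the neighborhood window is illustrative, and the HMM training itself is not shown:

      import numpy as np

      def dynamic_features(x, y, p, baseline_y, win=5.0):
          dx, dy, dp = np.gradient(x), np.gradient(y), np.gradient(p)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          nbrs = np.array([np.sum((np.abs(x - x[i]) < win) &
                                  (np.abs(y - y[i]) < win)) - 1
                           for i in range(len(x))])   # neighboring-point count
          rel_h = baseline_y - y                      # height above the base-line
          return np.column_stack([dx, dy, ddx, ddy, dp, nbrs, rel_h])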

  32. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, the mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3D) voxels. Then, voxel-based upward-growing processing is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The voxel-based filtering algorithm is discussed comprehensively through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
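
    A simplified stand-in for the upward-growing step, keeping only points near the lowest voxel of each 2D column as terrain seeds; the voxel size and height tolerance are illustrative:

      import numpy as np

      def ground_seeds(pts, voxel=0.5, height_tol=0.3):
          cols = np.floor(pts[:, :2] / voxel).astype(int)   # 2D block/column keys
          ground = np.zeros(len(pts), bool)
          for key in np.unique(cols, axis=0):
              idx = np.flatnonzero((cols == key).all(1))
              zmin = pts[idx, 2].min()
              ground[idx[pts[idx, 2] < zmin + height_tol]] = True
          return ground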

  33. Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications

    NASA Astrophysics Data System (ADS)

    Thanaborvornwiwat, N.; Patanukhom, K.

    2018-04-01

    Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that overlays corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variations in camera pose, and variations in handwriting style. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiment shows that only five feature points need to be extracted per image to obtain the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method to some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
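
    A hedged sketch of the feature point extraction, combining MSER with polygon simplification as described; the selection of exactly five points in a real system would follow the paper's ranking, which is not shown:

      import cv2
      import numpy as np

      def marker_feature_points(gray, n_points=5):
          regions, _ = cv2.MSER_create().detectRegions(gray)
          pts = []
          for r in regions:
              hull = cv2.convexHull(r.reshape(-1, 1, 2))
              eps = 0.02 * cv2.arcLength(hull, True)
              poly = cv2.approxPolyDP(hull, eps, True)  # polygon simplification
              pts.extend(poly.reshape(-1, 2).tolist())
          pts = np.unique(np.array(pts), axis=0)
          return pts[:n_points]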

  34. An information extraction framework for cohort identification using electronic health records.

    PubMed

    Liu, Hongfang; Bielinski, Suzette J; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B; Jonnalagadda, Siddhartha R; Ravikumar, K E; Wu, Stephen T; Kullo, Iftikhar J; Chute, Christopher G

    2013-01-01

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at the point of care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high-performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report a knowledge-driven IE framework for cohort identification using EHRs, developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework.

  19. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial uses such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation, and scale differences exist between them. To recover these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation, and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than for the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
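
    The translation/rotation/scale recovery described here is a similarity registration; a standard closed-form solution over given keypoint correspondences (Umeyama's method, shown as a sketch rather than the thesis pipeline) is:

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form estimate of the scale s, rotation R, and translation
    t mapping `src` points onto `dst` (Umeyama's method). Assumes the
    keypoint correspondences are already established."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                             # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t                               # dst ~ s * R @ src + t
```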

  20. Three-Dimensional Reconstruction of the Virtual Plant Branching Structure Based on Terrestrial LIDAR Technologies and L-System

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yang, Y.; Yang, X.

    2018-04-01

    To effectively extract the production rules of specific branching plants and realize their 3D reconstruction, terrestrial LiDAR data were used as the extraction source, and a 3D reconstruction method combining terrestrial LiDAR technologies with the L-system is proposed in this article. The topological structure of the plant architecture was extracted from the point cloud data of the target plant with a space-level segmentation mechanism. Subsequently, L-system productions were obtained, and the structural parameters and production rules of branches fitting the given plant were generated. Finally, a three-dimensional simulation model of the target plant was established using computer visualization algorithms. The results suggest that the method can effectively extract the topology of a given branching plant and describe its productions, realizing the extraction of the topological structure by computer algorithm and simplifying the extraction of branching-plant productions, which would be complex and time-consuming with the L-system alone. It improves the degree of automation in the L-system extraction of productions of specific branching plants, providing a new way to extract branching-plant production rules.
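
    For readers unfamiliar with L-systems, a production rule is a parallel string-rewriting rule; the toy example below uses a hypothetical branching rule, not one derived from the LiDAR data.

```python
def expand(axiom, rules, depth):
    """Minimal L-system rewriting, illustrating the production-rule
    formalism the method extracts. Each pass rewrites every symbol
    according to `rules`, leaving unmatched symbols unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# 'F' = grow segment, '+'/'-' = turn, '[' ']' = push/pop turtle state.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 2))
```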

  1. Application of Magnetic Nanoparticles in Pretreatment Device for POPs Analysis in Water

    NASA Astrophysics Data System (ADS)

    Chu, Dongzhi; Kong, Xiangfeng; Wu, Bingwei; Fan, Pingping; Cao, Xuan; Zhang, Ting

    2018-01-01

    In order to reduce the process time and labour of POPs pretreatment, and to solve the problem of extraction columns easily clogging, this paper proposes a new extraction and enrichment technology using magnetic nanoparticles. The automatic pretreatment system comprises an automatic sampling unit, an extraction/enrichment unit, and an elution/enrichment unit. The paper briefly introduces the preparation of the magnetic nanoparticles and describes in detail the structure and control system of the automatic pretreatment system. Mass-recovery experiments showed that the system is capable of POPs analysis preprocessing, with a magnetic nanoparticle recovery rate over 70%. In conclusion, three optimization recommendations are proposed.

  2. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at a rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA-based machine vision system that we propose has a high frame rate, low latency and a power consumption much lower than commercially available smart camera solutions.

  3. Application of an aqueous two-phase micellar system to extract bromelain from pineapple (Ananas comosus) peel waste and analysis of bromelain stability in cosmetic formulations.

    PubMed

    Spir, Lívia Genovez; Ataide, Janaína Artem; De Lencastre Novaes, Letícia Celia; Moriel, Patrícia; Mazzola, Priscila Gava; De Borba Gurpilhares, Daniela; Silveira, Edgar; Pessoa, Adalberto; Tambourgi, Elias Basile

    2015-01-01

    Bromelain is a set of proteolytic enzymes found in pineapple (Ananas comosus) tissues such as stem, fruit and leaves. Because of its proteolytic activity, bromelain has potential applications in the cosmetic, pharmaceutical, and food industries. The present study focused on the recovery of bromelain from pineapple peel by liquid-liquid extraction in aqueous two-phase micellar systems (ATPMS), using Triton X-114 (TX-114) and McIlvaine buffer, in the absence and presence of the electrolytes CaCl2 and KI; the cloud points of the generated extraction systems were studied by plotting binodal curves. Based on the cloud points, three temperatures were selected for extraction: 30, 33, and 36°C for systems in the absence of salts; 40, 43, and 46°C in the presence of KI; and 24, 27, and 30°C in the presence of CaCl2. Total protein and enzymatic activities were analyzed to monitor bromelain. Employing the ATPMS chosen for extraction (0.5 M KI with 3% TX-114, at pH 6.0 and 40°C), the stability of the bromelain extract was assessed after incorporation into three cosmetic bases: an anhydrous gel, a cream, and a cream-gel formulation. The cream-gel formulation proved the most appropriate base to convey bromelain, and its optimal storage conditions were found to be 4.0 ± 0.5°C. The selected ATPMS enabled the extraction of a biomolecule with high added value from waste and its incorporation into a cosmetic formulation, allowing for exploration of further cosmetic potential. © 2015 American Institute of Chemical Engineers.

  4. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LiDAR system. In this paper, an automated calibration method for estimating the LiDAR system mounting parameters is introduced. The LiDAR direct georeferencing model is used to calculate the systematic errors. Because LiDAR footprints are discretely sampled, exactly corresponding laser points rarely exist across different strips, so the traditional corresponding-point methodology does not apply to LiDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie-point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in the LiDAR strips, and the automatic flow of LiDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  5. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in radio observation systems that use SAR, infra-red, lidar, and television cameras. An intensive non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images: blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, showing evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding, and morphological analysis. A new kind of mask is used, open-ended at one side, so that it is possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results; it includes small-scale objects of different shape, size, and orientation. The method extracts isolated fragments in the binary image and counts the points in those fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the extraction effectiveness for these fragments. The new method for adaptive threshold setting and control maximises this extraction effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
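
    The threshold-selection idea can be sketched as follows, with an illustrative fragment-size cut-off; this is one reading of the abstract, not the author's implementation.

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(img, levels=64):
    """Sketch of the threshold-selection idea described above: for each
    candidate threshold, label the isolated fragments of the binary
    image and score the threshold by the fraction of above-threshold
    points falling into small fragments (candidate objects rather than
    large background regions). The size cut-off is illustrative."""
    best_t, best_score = None, -1.0
    for t in np.linspace(img.min(), img.max(), levels):
        binary = img > t
        total = binary.sum()
        if total == 0:
            continue
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        in_fragments = sizes[sizes < 200].sum()  # small, isolated fragments
        score = in_fragments / total             # extraction effectiveness
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```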

  6. Fast title extraction method for business documents

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Naoi, Satoshi

    1997-04-01

    Conventional electronic document filing systems are inconvenient because the user must specify the keywords in each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for Japanese documents. To develop a practical electronic document filing system, we focused on the extraction of keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are evaluated for title similarity by rating points. We classified these points into four items: character string size, position of character strings, relative position among character strings, and string attribution. Finally, the character string that has the highest rating is selected as the title area. The character recognition process is carried out on the selected area. It is fast because this process must recognize a small number of patterns in the restricted area only, and not throughout the entire document. The mean performance of this method is an accuracy of about 91 percent and a processing time of 1.8 s in an examination of 100 Japanese business documents.

  7. Calibration of Viking imaging system pointing, image extraction, and optical navigation measure

    NASA Technical Reports Server (NTRS)

    Breckenridge, W. G.; Fowler, J. W.; Morgan, E. M.

    1977-01-01

    Pointing control and knowledge accuracy of Viking Orbiter science instruments is controlled by the scan platform. Calibration of the scan platform and the imaging system was accomplished through mathematical models. The calibration procedure and results obtained for the two Viking spacecraft are described. Included are both ground and in-flight scan platform calibrations, and the additional calibrations unique to optical navigation.

  8. Automatic extraction of the mid-sagittal plane using an ICP variant

    NASA Astrophysics Data System (ADS)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
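
    The closed-form building block of such an ICP variant can be written as a Kabsch-style orthogonal Procrustes fit with the determinant constrained to -1; a minimal sketch assuming paired data and model points (the paper's exact formulation may differ):

```python
import numpy as np

def best_reflection(data, model):
    """Closed-form least-squares fit of a reflection mapping paired
    `data` points to `model` points: an orthogonal matrix constrained
    to det = -1, plus a translation. Centering removes the translation,
    and the SVD of the point correlation gives the optimal reflection."""
    mu_d, mu_m = data.mean(0), model.mean(0)
    H = (data - mu_d).T @ (model - mu_m)         # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, -np.linalg.det(Vt.T @ U.T)])
    M = Vt.T @ D @ U.T                           # orthogonal, det(M) = -1
    t = mu_m - M @ mu_d
    return M, t                                  # model ~ M @ data + t
```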

  9. Correlation between quarter-point angle and nuclear radius

    NASA Astrophysics Data System (ADS)

    Ma, Wei-Hu; Wang, Jian-Song; Mukherjee, S.; Wang, Qi; Patel, D.; Yang, Yan-Yun; Ma, Jun-Bing; Ma, Peng; Jin, Shi-Lun; Bai, Zhen; Liu, Xing-Quan

    2017-04-01

    The correlation between quarter-point angle of elastic scattering and nuclear matter radius is studied systematically. Various phenomenological formulae with parameters for nuclear radius are adopted and compared by fitting the experimental data of quarter point angle extracted from nuclear elastic scattering reaction systems. A parameterized formula related to binding energy is recommended, which gives a good reproduction of nuclear matter radii of halo nuclei. It indicates that the quarter-point angle of elastic scattering is quite sensitive to the nuclear matter radius and can be used to extract the nuclear matter radius. Supported by National Natural Science Foundation of China (U1432247, 11575256), National Basic Research Program of China (973 Program)(2014CB845405 and 2013CB83440x) and (SM) Chinese Academy of Sciences President’s International Fellowship Initiative (2015-FX-04)

  10. Wave power focusing due to the Bragg resonance

    NASA Astrophysics Data System (ADS)

    Tao, Ai-feng; Yan, Jin; Wang, Yi; Zheng, Jin-hai; Fan, Jun; Qin, Chuan

    2017-08-01

    Wave energy has drawn much attention as an achievable way to exploit renewable energy. At present, in order to enhance wave energy extraction, most efforts have concentrated on optimizing the wave energy converter and the power take-off system mechanically and electrically. However, focusing the wave power in a specific wave field could also be an alternative way to improve wave energy extraction. In this experimental study, the Bragg resonance effect is applied to focus the wave energy, because the Bragg resonance of a rippled bottom greatly amplifies wave reflection, leading to a significant increase in wave focusing. Using an energy conversion system consisting of a point absorber and a permanent-magnet single-phase linear motor, the wave energy extracted in the wave flume with and without the Bragg resonance effect was measured and compared quantitatively. The experiment shows that energy extraction by a point absorber from a standing wave field resulting from the Bragg resonance effect can be remarkably increased compared with that from a propagating wave field (without the Bragg resonance effect).

  11. An Information Extraction Framework for Cohort Identification Using Electronic Health Records

    PubMed Central

    Liu, Hongfang; Bielinski, Suzette J.; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B.; Jonnalagadda, Siddhartha R.; Ravikumar, K.E.; Wu, Stephen T.; Kullo, Iftikhar J.; Chute, Christopher G

    Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report a knowledge-driven IE framework for cohort identification using EHRs, developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework. PMID:24303255

  12. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we split the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process that provides the feature points for the next frame's tracking.

  13. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets, a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.

  14. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
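
    Phase correlation itself is a standard FFT technique; a generic sketch of the matching step (not the sensor's implementation) follows.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the translational shift between two images via phase
    correlation: the normalized cross-power spectrum has an inverse FFT
    that peaks at the relative displacement."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```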

  15. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645

  16. Study of the Tributyl Phosphate--30 Percent Dodecane Solvent; ETUDE DU SOLVANT PHOSPHATE TRIBUTYLIQUE 30 PERCENT--DODECANE (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroy, P.

    1967-07-01

    This study, originating mainly from a literature survey, gives the principal chemical and physical features of the tributyl phosphate (TBP) agent diluted at 30 volume per cent in dodecane. The mixture is a very commonly used extractant in nuclear fuel processing. The following main points are reported: the components (TBP and diluents); the non-loaded TBP-diluent systems; the TBP-diluent-water systems; the TBP-diluent-water-nitric acid systems; and industrial solvents. (author)

  17. Single shot laser speckle based 3D acquisition system for medical applications

    NASA Astrophysics Data System (ADS)

    Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young

    2018-06-01

    The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts require a series of images/frames, as in laser line profiling or structured light scanning. Movement of the patient during the scanning process often leads to inaccurate measurements due to the sequential image acquisition. Single-shot structured-light techniques are robust to motion, but their prevalent challenges are low point density and algorithm complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that realize stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers, and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
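
    A sketch of the correspondence step under the stated design, locating left-image speckle features in the right image with pyramidal KLT; parameter values are assumptions.

```python
import cv2
import numpy as np

def speckle_correspondences(left, right):
    """Pick strong speckle features in the left image and locate them
    in the right image with pyramidal KLT tracking; surviving pairs can
    then be triangulated into 3D points."""
    pts_l = cv2.goodFeaturesToTrack(left, maxCorners=2000,
                                    qualityLevel=0.01, minDistance=5)
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(left, right, pts_l, None,
                                                winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts_l[ok].reshape(-1, 2), pts_r[ok].reshape(-1, 2)
```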

  18. Alcohol based-deep eutectic solvent (DES) as an alternative green additive to increase rotenone yield

    NASA Astrophysics Data System (ADS)

    Othman, Zetty Shafiqa; Hassan, Nur Hasyareeda; Zubairi, Saiful Irwan

    2015-09-01

    Deep eutectic solvents (DESs) are basically molten salts formed by hydrogen bonding between two components mixed at a ratio where the eutectic mixture reaches a melting point lower than that of each individual component. Their physicochemical properties (similar to those of ionic liquids), together with their green character, low cost, and easy handling, have made them a topic of growing interest in many fields of research. Therefore, the objective of this study is to analyze the potential of an alcohol-based DES as an extraction medium for rotenone extraction from Derris elliptica roots. The DES was prepared by combining choline chloride (ChCl) and 1,4-butanediol at a ratio of 1:5. The structure of the DES was elucidated using FTIR, 1H-NMR and 13C-NMR. Normal soaking extraction (NSE) was carried out for 14 hours using seven different solvent systems: (1) acetone; (2) methanol; (3) acetonitrile; (4) DES; (5) DES + methanol; (6) DES + acetonitrile; and (7) [BMIM] OTf + acetone. Next, the yield of rotenone, % (w/w), and its concentration (mg/ml) in dried roots were quantitatively determined by means of RP-HPLC. The results showed that the binary solvent systems [BMIM] OTf + acetone and DES + acetonitrile were the best solvent combinations compared with the other solvent systems. They contributed the highest rotenone contents of 0.84 ± 0.05% (w/w) (1.09 ± 0.06 mg/ml) and 0.84 ± 0.02% (w/w) (1.03 ± 0.01 mg/ml) after 14 hours of exhaustive extraction. In conclusion, a combination of the DES with a selective organic solvent has been proven to have similar potential and efficiency to ILs in extracting bioactive constituents in the phytochemical extraction process.

  19. Low frequency sonic waves assisted cloud point extraction of polyhydroxyalkanoate from Cupriavidus necator.

    PubMed

    Murugesan, Sivananth; Iyyaswami, Regupathi

    2017-08-15

    Low frequency sonic waves, below 10 kHz, were introduced to assist cloud point extraction of polyhydroxyalkanoate from Cupriavidus necator present within the crude broth. Process parameters, including surfactant system variables and sonication parameters, were studied for their effect on extraction efficiency. The introduction of low frequency sonic waves assists the dissolution of the microbial cell wall by the surfactant micelles and the release of cellular content; the released polyhydroxyalkanoate granules were encapsulated by the micelle core, which was confirmed by crotonic acid assay. In addition, the sonic waves caused the homogeneous surfactant and broth mixture to separate into two distinct phases: a top aqueous phase and a polyhydroxyalkanoate-enriched bottom surfactant-rich phase. Mixed surfactant systems showed higher extraction efficiency than individual Triton X-100 concentrations, owing to an increase in the hydrophobicity of the micellar core and its interaction with polyhydroxyalkanoate. Addition of salts to the mixed surfactant system induces screening of the charged surfactant head groups and reduces inter-micellar repulsion; the presence of ammonium ions leads to electrostatic repulsion, while the weaker sodium cation enhances the formation of a micellar network. Addition of polyethylene glycol 8000 resulted in increasing interaction with the surfactant tails of the micelle core, thereby reducing the purity of the polyhydroxyalkanoate. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Radiative heat transfer enhancement using geometric and spectral control for achieving high-efficiency solar-thermophotovoltaic systems

    NASA Astrophysics Data System (ADS)

    Kohiyama, Asaka; Shimizu, Makoto; Yugami, Hiroo

    2018-04-01

    We numerically investigate radiative heat transfer enhancement using spectral and geometric control of the absorber/emitter. High extraction of radiative heat transfer from the emitter, together with minimization of the optical losses from the absorber, leads to high solar thermophotovoltaic (STPV) system efficiency. The important points for high-efficiency STPV design are discussed for low and high absorber/emitter area ratios. The resulting general guidelines will support the design of various types of STPV systems.

  1. Automatic detection of lung vessel bifurcation in thoracic CT images

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Vikal, Siddharth; Devarakota, Pandu

    2011-03-01

    Computer-aided diagnosis (CAD) systems for the detection of lung nodules have been an active topic of research for the last few years. It is desirable that a CAD system generate very few false positives (FPs) while maintaining high sensitivity. This work aims to reduce the number of false positives occurring at vessel bifurcation points. FPs occur quite frequently at vessel branching points, which can appear locally spherical due to the intrinsic geometry of intersecting tubular vessel structures combined with partial volume effects and soft-tissue attenuation surrounded by parenchyma. We propose a model-based technique for the detection of vessel branching points using skeletonization followed by branch-point analysis. First, we perform vessel structure enhancement using a multi-scale Hessian filter to accurately segment tubular structures of various sizes, followed by thresholding to get a binary vessel segmentation [6]. A modified Reeb graph [7] is applied next to extract the critical points of the structure, and these are joined by a nearest-neighbor criterion to obtain a complete skeletal model of the vessel structure. Finally, the skeletal model is traversed to identify branch points and extract metrics including individual branch length, number of branches, and the angle between branches. Results on 80 sub-volumes consisting of 60 actual vessel branchings and 20 solitary solid nodules show that the algorithm correctly identified vessel branching points for 57 sub-volumes (95% sensitivity) and misclassified 2 nodules as vessel branches. Thus, this technique has potential for the explicit identification of vessel branching points in general vessel analysis, and could be useful for false-positive reduction in a lung CAD system.
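
    A simplified 2D analogue of the enhancement-and-branch-point idea, built from off-the-shelf filters (the paper itself works on 3D sub-volumes with a Reeb graph), might look like this.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import frangi
from skimage.morphology import skeletonize

def branch_points(slice_2d):
    """Multi-scale Hessian ('vesselness') enhancement, thresholding,
    skeletonization, and branch-point detection as skeleton pixels with
    three or more skeleton neighbors. The 0.5*max threshold and sigma
    range are illustrative assumptions."""
    vessels = frangi(slice_2d, sigmas=range(1, 6), black_ridges=False)
    skeleton = skeletonize(vessels > 0.5 * vessels.max())
    neighbors = ndimage.convolve(skeleton.astype(int), np.ones((3, 3)),
                                 mode="constant") - 1
    return np.argwhere(skeleton & (neighbors >= 3))
```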

  2. Sequential cloud-point extraction for toxicological screening analysis of medicaments in human plasma by high pressure liquid chromatography with diode array detector.

    PubMed

    Madej, Katarzyna; Persona, Karolina; Wandas, Monika; Gomółka, Ewa

    2013-10-18

    A complex extraction system using the cloud-point extraction (CPE) technique was developed for the sequential isolation of basic and acidic/neutral medicaments from human plasma/serum, screened by an HPLC/DAD method. Eight model drugs (paracetamol, promazine, chlorpromazine, amitriptyline, salicylic acid, opipramol, alprazolam and carbamazepine) were chosen for the study of optimal CPE conditions. The CPE technique consists in partitioning an aqueous sample, after addition of a surfactant, into two phases: a micelle-rich phase containing the isolated compounds and a water phase containing the surfactant below the critical micellar concentration, mainly under the influence of a temperature change. The proposed extraction system consists of two main steps: isolation of basic compounds (from pH 12) and then isolation of acidic/neutral compounds (from pH 6), using the surfactant Triton X-114 as the extraction medium. Extraction recovery varied from 25.2 to 107.9%, with intra-day and inter-day precision (RSD %) ranging from 0.88 to 10.87 and from 5.32 to 17.96, respectively. The limits of detection for the studied medicaments at λ = 254 nm corresponded to therapeutic or low toxic plasma concentration levels. The usefulness of the proposed CPE-HPLC/DAD method for toxicological drug screening was tested via its application to the analysis of two serum samples taken from patients suspected of drug overdosing. Published by Elsevier B.V.

  3. Automatic River Network Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated, a methodology was developed to automate each production phase (generation of hydrological terrain models with a 2 m grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network)), and finally production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, with the infrastructure to store (up to 40 TB between results and intermediate files) and process them using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) has also been important, as has the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
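
    The hydrological criterion rests on flow accumulation; a minimal D8 sketch is shown below (production code would add pit filling and nodata handling, and would not use a pure-Python loop at this scale).

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Minimal D8 flow accumulation: each cell passes its accumulated
    drainage to its steepest downslope neighbor, processing cells from
    highest to lowest elevation. Thresholding the result yields the
    flow-accumulation river network."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)         # each cell drains itself
    order = np.argsort(dem, axis=None)[::-1]     # highest elevation first
    for flat in order:
        r, c = divmod(flat, cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc
```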

  4. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    USGS Publications Warehouse

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  5. Effect of four different intracanal medicaments on the apical seal of the root canal system: a dye extraction study.

    PubMed

    Tandan, Monika; Hegde, Mithra N; Hegde, Priyadarshini

    2014-01-01

    The aim was to determine the effect of four different intracanal medicaments on the apical seal of the root canal system in vitro. Fifty freshly extracted intact human permanent maxillary central incisors were collected, stored and disinfected. The root canals were prepared to a master apical size of number 50 using the step-back technique. Depending upon the intracanal medicament used, the teeth were divided randomly into five groups of 10 teeth each, including one control group and four experimental groups. Group A: no intracanal medicament. Group B: calcium hydroxide powder mixed with distilled water. Group C: calcium hydroxide gutta-percha points (calcium hydroxide points). Group D: 1% chlorhexidine gel (hexigel). Group E: chlorhexidine gutta-percha points (Roeko Activ Points). The medication was left in the canals for 14 days. Following removal of the intracanal medicament, all the groups were obturated with the lateral compaction technique. The apical leakage was then evaluated using the dye extraction method with the help of a spectrophotometer. Results were statistically analyzed using the Kruskal-Wallis and Mann-Whitney U-tests, which showed a statistically significant difference among the five groups tested. The control group showed the least leakage, whereas the 1% chlorhexidine gel group showed the most; apical leakage was observed in all the experimental groups, with small variations between them. Under the parameters of this study, it can be concluded that the use of intracanal medicaments during endodontic treatment has a definite impact on the apical seal of the root canal system.

  6. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  7. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or industrial robots for replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all three views, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
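
    Optical triangulation reduces to intersecting rays; in homogeneous coordinates, the standard linear (DLT) solution for one matched point across two calibrated views is (a generic sketch, not the authors' code):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given the 3x4 projection matrices of
    two views and one matched pixel (x, y) in each, the 3D point is the
    null vector of the stacked cross-product constraints."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                          # Euclidean 3D point
```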

  8. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the most important core of stereo vision, stereo matching still presents many problems to solve. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and, based on fringe projection techniques, matches corresponding points between the left and right camera images by requiring their extracted phases to be equal, realizing rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich knowledge achievements in the field of optical 3D measurement, but also provide the potential for a commercialized measurement system in practical projects, which has very important scientific research significance and economic value.
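
    The phase-equality criterion presumes a per-pixel phase map; with standard four-step phase shifting (assumed here, as the paper may use a different scheme), the wrapped phase is computed as:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase from four phase-shifted fringe images (shifts of
    0, 90, 180, 270 degrees): with I_k = A + B*cos(phi + k*90deg),
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(I4 - I2, I1 - I3)          # values in (-pi, pi]
```

    Matching a left pixel to the right pixel with the same (unwrapped) phase along the corresponding epipolar line then replaces feature-based correspondence on smooth, textureless surfaces.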

  9. Airborne LiDAR : a new source of traffic flow data.

    DOT National Transportation Integrated Search

    2005-10-01

    LiDAR (or airborne laser scanning) systems became a dominant player in high-precision spatial data acquisition to efficiently create DEM/DSM in the late 90s. With increasing point density, new systems are now able to support object extraction, s...

  10. Airborne LiDAR : a new source of traffic flow data.

    DOT National Transportation Integrated Search

    2005-10-01

    LiDAR (or airborne laser scanning) systems became a dominant player in high-precision spatial data acquisition to efficiently create DEM/DSM in the late 90s. With increasing point density, new systems are now able to support object extraction, ...

  11. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.

    PubMed

    Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan

    2018-06-05

    Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels for further investigation. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.

  12. Development of a point-kinetic verification scheme for nuclear reactor applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.

    In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
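
    For reference, the "simple analytical form" of the point-kinetic response referred to above is, in standard point-kinetics notation (recalled from textbook results, not quoted from the paper), the zero-power reactor transfer function G_0(ω) relating a reactivity perturbation δρ(ω) to the normalized point-kinetic flux component:

    $$\frac{\delta\phi(\omega)}{\phi_0} = G_0(\omega)\,\delta\rho(\omega), \qquad G_0(\omega) = \frac{1}{i\omega\left(\Lambda + \sum_{k=1}^{6}\dfrac{\beta_k}{i\omega + \lambda_k}\right)}$$

    where Λ is the prompt-neutron generation time and β_k, λ_k are the delayed-neutron fractions and decay constants. Comparing an extracted point-kinetic component against this curve is the kind of frequency-domain check the method performs.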

  13. Detection and Classification of Pole-Like Objects from Mobile Mapping Data

    NASA Astrophysics Data System (ADS)

    Fukano, K.; Masuda, H.

    2015-08-01

    Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.

  14. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  15. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  16. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  17. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757

  18. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.

  19. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows, both, to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
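
    A sketch of per-point, multi-scale neighborhood features of the kind the abstract describes (the radii and the exact feature set are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii=(0.25, 0.5, 1.0)):
    """Per-point eigenvalue features at multiple neighborhood scales:
    for each radius, the sorted eigenvalues of the local covariance
    yield linearity / planarity / scattering descriptors."""
    tree = cKDTree(points)
    all_feats = []
    for r in radii:
        feats = np.zeros((len(points), 3))
        for i, p in enumerate(points):
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 3:
                continue
            lam = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]  # l1 >= l2 >= l3
            l1, l2, l3 = np.maximum(lam, 1e-12)
            feats[i] = [(l1 - l2) / l1,          # linearity
                        (l2 - l3) / l1,          # planarity
                        l3 / l1]                 # scattering
        all_feats.append(feats)
    return np.hstack(all_feats)
```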

  20. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    Obtaining accurate three-dimensional (3-D) measurement from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point on the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operating and provides three-dimensional measurements resolution of + or - mm at distances of about 2 m.

  1. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser scanning technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale; however, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the Just-Noticeable-Difference perception metric to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  2. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image that includes the regularly reflected component, by placing a polarizing filter in front of CCD-1, and another image that excludes this component, with no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate the three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and gravity position calculation of the feature points is possible.
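
    The subtraction-and-threshold stage described above is straightforward to express in code. A toy sketch with synthetic frames; the threshold value and image sizes are assumptions, not values from the paper:

```python
import numpy as np

def feature_centroid(img_bright, img_dark, threshold=40):
    """Difference two illumination images, threshold, return the feature centroid."""
    diff = img_bright.astype(np.int16) - img_dark.astype(np.int16)
    ys, xs = np.nonzero(diff > threshold)
    return (xs.mean(), ys.mean()) if xs.size else None

# Synthetic frames: the bright-pupil image contains one high-intensity blob.
bright = np.zeros((240, 320), np.uint8)
bright[100:110, 150:160] = 200
dark = np.zeros_like(bright)
print(feature_centroid(bright, dark))  # approx (154.5, 104.5)
```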

  3. The Role of Outcomes-Based National Qualifications in the Development of an Effective Vocational Education and Training System: The Case of England and Wales

    ERIC Educational Resources Information Center

    Oates, Tim

    2004-01-01

    This article analyses the increasingly diverse and sophisticated critique of "outcomes approaches" in vocational qualifications; critique which has now moved well beyond the early claims of reductivism and behaviourism. Avoiding a naive position on extraction of points of consensus, this article attempts to extract key issues which have…

  4. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazakia, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  5. Using a focal-plane array to estimate antenna pointing errors

    NASA Technical Reports Server (NTRS)

    Zohar, S.; Vilnrotter, V. A.

    1991-01-01

    The use of extra collecting horns in the focal plane of an antenna as a means of determining the Direction of Arrival (DOA) of the signal impinging on it, provided it is within the antenna beam, is considered. Our analysis yields a relatively simple algorithm to extract the DOA from the horns' outputs. An algorithm which, in effect, measures the thermal noise of the horns' signals and determines its effect on the uncertainty of the extracted DOA parameters is developed. Both algorithms were implemented in software and tested on simulated data. Based on these tests, it is concluded that this is a viable approach to DOA determination. Though the results obtained are of general applicability, the particular motivation for the present work is their application to the pointing of a mechanically deformed antenna. It is anticipated that the pointing algorithm developed for a deformed antenna could be obtained as a small perturbation of the algorithm developed for an undeformed antenna. In this context, it should be pointed out that, with a deformed antenna, the array of horns and its associated circuitry constitute the main part of the deformation-compensation system. In this case, the pointing system proposed may be viewed as an additional task carried out by the deformation-compensation hardware.

  6. Research on structural integration of thermodynamic system for double reheat coal-fired unit with CO2 capture

    NASA Astrophysics Data System (ADS)

    Wang, Lanjing; Shao, Wenjing; Wang, Zhiyue; Fu, Wenfeng; Zhao, Wensheng

    2018-02-01

    Taking as an example an MEA chemical-absorption carbon capture system with an 85% capture rate on a 660 MW ultra-supercritical unit, this paper puts forward a new type of turbine dedicated to supplying steam to the carbon capture system. A comparison of the plant's thermal systems under different steam supply schemes, carried out with EBSILON, identified the optimal extraction scheme for the steam extraction system in the carbon capture system. The results show that the cycle thermal efficiency of the unit with the carbon capture turbine is higher than that of the usual scheme without it. With the introduction of the carbon capture turbine, the scheme that extracts steam from the high-pressure cylinder's steam input point shows the highest cycle thermal efficiency. Its indexes are superior to those of the other schemes, making it more suitable for existing coal-fired power plants integrated with post-combustion carbon dioxide capture.

  7. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    NASA Astrophysics Data System (ADS)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    A biometric system is a pattern recognition system used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed a Multi Resolution Active Shape Models (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement of ASM, the Active Appearance Models (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and present the results of experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate feature point localization than AAM, but AAM achieves a better match to the texture.

  8. [Extraction and recognition of attractors in three-dimensional Lorenz plot].

    PubMed

    Hu, Min; Jang, Chengfan; Wang, Suxia

    2018-02-01

    The Lorenz plot (LP) method, which gives a global view of long-time electrocardiogram signals, is an efficient and simple visualization tool for analyzing cardiac arrhythmias, and the morphologies and positions of the extracted attractors may reveal the underlying mechanisms of the onset and termination of arrhythmias. However, automatic diagnosis has remained impossible because no method for extracting the attractors has been available. We present here a methodology for attractor extraction and recognition based upon homogeneously statistical properties of the location parameters of scatter points in the three-dimensional LP (3DLP), which is constructed from three successive RR intervals as the X, Y and Z axes in a Cartesian coordinate system. Validation experiments were run on a group of RR-interval time series and tag data with frequent unifocal premature complexes exported from a 24-hour Holter system. The results showed that this method was highly effective not only for the extraction of attractors, but also for the automatic recognition of attractors by location parameters such as the azimuth of the point of peak frequency (APF) of eccentric attractors after stereographic projection of the 3DLP along the space diagonal. Besides, APF proved a powerful index for the differential diagnosis of atrial and ventricular extrasystoles. Additional experiments showed that this method is also applicable to several other arrhythmias. Moreover, there are extremely close relationships between 3DLP and two-dimensional LPs, which indicates that conventional LP techniques can be transplanted into 3DLP. It would have broad application prospects to integrate this method into conventional long-time electrocardiogram monitoring and analysis systems.
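
    The construction of the 3DLP itself, embedding each triple of successive RR intervals as one point, and the azimuth computation after projection along the space diagonal can be sketched as follows. The projection axes chosen here are one valid orthonormal pair, not necessarily the authors':

```python
import numpy as np

def lorenz_plot_3d(rr_ms):
    """Embed an RR-interval series as points (RR_n, RR_n+1, RR_n+2)."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.column_stack([rr[:-2], rr[1:-1], rr[2:]])

def azimuth_after_diagonal_projection(points):
    """Azimuth of each point after projection onto the plane normal to (1,1,1)."""
    n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
    proj = points - np.outer(points @ n, n)
    u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # one in-plane axis
    v = np.cross(n, u)                              # the orthogonal in-plane axis
    return np.degrees(np.arctan2(proj @ v, proj @ u))

rr = [800, 810, 795, 430, 1150, 805, 800]  # toy series with one premature beat
print(azimuth_after_diagonal_projection(lorenz_plot_3d(rr)))
```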

  9. Iris recognition using possibilistic fuzzy matching on local features.

    PubMed

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, a nonlinear normalization model is adopted to provide more accurate positioning before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries into smooth curves. For feature extraction, Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than that of systems based on local features and is comparable to that of typical systems.

  10. Contour matching for a fish recognition and migration-monitoring system

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Schoenberger, Robert B.; Shiozawa, Dennis; Xu, Xiaoqian; Zhan, Pengcheng

    2004-12-01

    Fish migration is being monitored year round to provide valuable information for the study of behavioral responses of fish to environmental variations. However, currently all monitoring is done by human observers. An automatic fish recognition and migration monitoring system is more efficient and can provide more accurate data. Such a system includes automatic fish image acquisition, contour extraction, fish categorization, and data storage. Shape is a very important characteristic and shape analysis and shape matching are studied for fish recognition. Previous work focused on finding critical landmark points on fish shape using curvature function analysis. Fish recognition based on landmark points has shown satisfying results. However, the main difficulty of this approach is that landmark points sometimes cannot be located very accurately. Whole shape matching is used for fish recognition in this paper. Several shape descriptors, such as Fourier descriptors, polygon approximation and line segments, are tested. A power cepstrum technique has been developed in order to improve the categorization speed using contours represented in tangent space with normalized length. Design and integration including image acquisition, contour extraction and fish categorization are discussed in this paper. Fish categorization results based on shape analysis and shape matching are also included.
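
    Among the shape descriptors mentioned above, Fourier descriptors are the easiest to illustrate: the boundary is read as a complex signal, and its normalized FFT magnitudes give a translation-, rotation-, scale- and start-point-invariant signature. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def fourier_descriptors(contour_xy, n_keep=16):
    """Invariant Fourier descriptors of a closed contour."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                  # drop the DC term: translation invariance
    mags = np.abs(coeffs)            # magnitudes: rotation/start-point invariance
    return mags[1:n_keep + 1] / mags[1]  # first-harmonic scaling: scale invariance

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.column_stack([2 * np.cos(t), np.sin(t)])
print(fourier_descriptors(ellipse)[:4])
```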

  11. Shape Effect of Electrochemical Chloride Extraction in Structural Reinforced Concrete Elements Using a New Cement-Based Anodic System

    PubMed Central

    Carmona, Jesús; Climent, Miguel-Ángel; Antón, Carlos; de Vera, Guillem; Garcés, Pedro

    2015-01-01

    This article presents research carried out by the authors on how the shape of structural reinforced concrete elements treated with electrochemical chloride extraction can affect the efficiency of this process. Given the current use of different anode systems, the present study compares results between conventional anodes based on Ti-RuO2 wire mesh and a cement-based anodic system, namely a graphite-cement paste. Reinforced concrete elements one meter in length, with circular and rectangular sections, were molded to serve as laboratory specimens that closely represent authentic structural supports. The results confirm almost equal performance for both types of anode system when electrochemical chloride extraction is applied to isotropic structural elements. In the case of anisotropic ones, such as rectangular sections with non-uniformly distributed rebar, differences in electrical flow density were detected during the treatment, and these differences were more extreme for the Ti-RuO2 mesh anode system. This particular shape effect is evidenced by measuring the efficiency of electrochemical chloride extraction at different points of the specimens.

  12. Alcohol based-deep eutectic solvent (DES) as an alternative green additive to increase rotenone yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Othman, Zetty Shafiqa; Hassan, Nur Hasyareeda; Zubairi, Saiful Irwan

    Deep eutectic solvents (DESs) are basically molten salts that interact by forming hydrogen bonds between two added components at a ratio where the eutectic point reaches a melting point lower than that of each individual component. Their remarkable physicochemical properties (similar to those of ionic liquids), together with their green character, low cost and easy handling, make them of growing interest in many fields of research. Therefore, the objective of this study is to analyze the potential of an alcohol-based DES as an extraction medium for rotenone extraction from Derris elliptica roots. The DES was prepared by a combination of choline chloride (ChCl) and 1,4-butanediol at a ratio of 1:5. The structural elucidation of the DES was performed using FTIR, ¹H-NMR and ¹³C-NMR. The normal soaking extraction (NSE) method was carried out for 14 hours using seven different solvent systems: (1) acetone; (2) methanol; (3) acetonitrile; (4) DES; (5) DES + methanol; (6) DES + acetonitrile; and (7) [BMIM]OTf + acetone. Next, the yield of rotenone, % (w/w), and its concentration (mg/ml) in dried roots were quantitatively determined by means of RP-HPLC. The results showed that the binary solvent systems [BMIM]OTf + acetone and DES + acetonitrile were the best solvent combinations compared to the other solvent systems, contributing the highest rotenone contents of 0.84 ± 0.05% (w/w) (1.09 ± 0.06 mg/ml) and 0.84 ± 0.02% (w/w) (1.03 ± 0.01 mg/ml) after 14 hours of exhaustive extraction. In conclusion, a combination of the DES with a selective organic solvent has been proven to have a potential and efficiency similar to those of ILs in extracting bioactive constituents in the phytochemical extraction process.

  13. Combining active learning and semi-supervised learning techniques to extract protein interaction sentences.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2011-11-24

    Protein-protein interaction (PPI) extraction has been a focal point of much biomedical research and many database curation tools. Both active learning (AL) and semi-supervised SVMs have recently been applied to extract PPIs automatically. In this paper, we explore combining AL with semi-supervised learning (SSL) to improve the performance of the PPI task. We propose a novel PPI extraction technique called PPISpotter, which combines deterministic annealing-based SSL with an AL technique to extract protein-protein interactions. In addition, we extract a comprehensive set of features from MEDLINE records using Natural Language Processing (NLP) techniques, which further improve the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of the text are incorporated, which boosts the system performance significantly. By conducting experiments with three different PPI corpora, we show that PPISpotter is superior to other techniques incorporated into semi-supervised SVMs, such as Random Sampling, Clustering, and Transductive SVMs, in terms of precision, recall, and F-measure. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
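
    The active learning half of such a combination can be sketched generically. The loop below uses plain uncertainty sampling with a scikit-learn SVM on synthetic data; it is not PPISpotter's deterministic-annealing SSL, just the standard AL skeleton it builds on:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # seed set
pool = [i for i in range(len(y)) if i not in labeled]

for _ in range(5):
    clf = SVC(probability=True, random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])          # small margin = uncertain
    query = [pool[i] for i in np.argsort(margin)[:10]]  # ask the "oracle" to label
    labeled += query
    pool = [i for i in pool if i not in query]

print(clf.score(X, y))
```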

  14. Use of Dimethyl Pimelimidate with Microfluidic System for Nucleic Acids Extraction without Electricity.

    PubMed

    Jin, Choong Eun; Lee, Tae Yoon; Koo, Bonhan; Choi, Kyung-Chul; Chang, Suhwan; Park, Se Yoon; Kim, Ji Yeun; Kim, Sung-Han; Shin, Yong

    2017-07-18

    The isolation of nucleic acids in a lab-on-a-chip is crucial to achieving the maximal effectiveness of point-of-care testing for detection in clinical applications. Here, we report on the use of a simple and versatile single-channel microfluidic platform that employs dimethyl pimelimidate (DMP) for nucleic acid (both RNA and DNA) extraction without electricity using a thin-film system. The system is based on adapting DMP as a non-chaotropic nucleic acid capture reagent within a low-cost thin-film platform for use as a microfluidic total analysis system, which can be utilized for sample processing in clinical diagnostics. Moreover, we assessed the use of the DMP system for the extraction of nucleic acids from various samples, including mammalian cells, bacterial cells, and viruses from human disease, and we also confirmed that the quality and quantity of the nucleic acids extracted were sufficient to allow for the robust detection of biomarkers and/or pathogens in downstream analysis. Furthermore, this DMP system does not require any instruments or electricity, and has improved time efficiency, portability, and affordability. Thus, we believe that the DMP system may change the paradigm of sample processing in clinical diagnostics.

  15. A Bayesian framework for extracting human gait using strong prior knowledge.

    PubMed

    Zhou, Ziheng; Prügel-Bennett, Adam; Damper, Robert I

    2006-11-01

    Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.

  16. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    NASA Astrophysics Data System (ADS)

    Antonya, C.

    2017-12-01

    Optical tracking of users and various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capture devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers or coordinates of points of interest. These data can be used for retrieving information related to the geometry of the objects, but also for extracting parameters for an analytical model of the system, useful in a variety of computer-aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least-squares method was used for fitting the data to different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
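
    Least-squares circle fitting of marker trajectories, the core of recovering a revolute joint's position, has a simple algebraic (Kasa) formulation. A sketch on synthetic marker data, not the paper's implementation:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)  # centre, radius

# Markers sweeping half a revolution around a joint at (3, 1), radius 2.
t = np.linspace(0, np.pi, 50)
pts = np.column_stack([3 + 2 * np.cos(t), 1 + 2 * np.sin(t)])
pts += np.random.default_rng(1).normal(0, 0.01, pts.shape)
print(fit_circle(pts))
```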

  17. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
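
    The first two stages, Canny edges followed by linking into straight segments, have close off-the-shelf analogues. The sketch below uses OpenCV's Canny operator and probabilistic Hough transform as stand-ins for the paper's custom linking stage, on a synthetic test image:

```python
import cv2
import numpy as np

# Synthetic grey-level image containing one straight edge.
img = np.zeros((200, 200), np.uint8)
cv2.line(img, (20, 30), (180, 150), 255, 2)

edges = cv2.Canny(img, 50, 150)  # low-level edge points
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
for x1, y1, x2, y2 in segments[:, 0]:
    print("segment:", (x1, y1), "->", (x2, y2))
```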

  18. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval. The results of the proposed method show a clear superiority over related methods.
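
    Generating a top-view depth image from a point cloud is essentially a rasterization that keeps the highest return per grid cell. A minimal sketch; the cell size and the max-height rule are assumptions, not the paper's exact parameters:

```python
import numpy as np

def topview_depth_image(points, cell=0.5):
    """Rasterize a point cloud into a top-view depth image (max height per cell)."""
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    img = np.full((ij[:, 1].max() + 1, ij[:, 0].max() + 1), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(img[j, i]) or z > img[j, i]:
            img[j, i] = z  # keep the highest return, i.e. the roof surface
    return img

cloud = np.random.rand(2000, 3) * [20.0, 20.0, 8.0]
print(topview_depth_image(cloud).shape)
```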

  19. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from the input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network for the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features (a feature vector). The feature vector is then input into the feature normalization module to normalize and balance it before being fed to the neural network classifier. The neural network can be trained with actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the network has been trained, training resumes until the network has incrementally learned the new data. The associative memory capability of the neural network enables this incremental learning. The back propagation algorithm or a support vector machine can be utilized for the classification and recognition.
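
    The singular-value feature extraction of the first module can be illustrated directly: the singular values of a centered cluster summarize whether it is elongated, planar, or volumetric. A sketch on synthetic clusters, assuming simple sum-normalization (the paper's exact normalization is not specified here):

```python
import numpy as np

def singular_value_features(cluster):
    """Normalized singular values of a centered cluster as a shape signature."""
    s = np.linalg.svd(cluster - cluster.mean(axis=0), compute_uv=False)
    return s / s.sum()  # normalization balances clusters of different extent

rng = np.random.default_rng(2)
pole = rng.normal(0.0, [0.05, 0.05, 2.0], size=(300, 3))  # elongated cluster
roof = rng.normal(0.0, [2.0, 2.0, 0.05], size=(300, 3))   # planar cluster
print(singular_value_features(pole))  # one dominant value: linear shape
print(singular_value_features(roof))  # two dominant values: planar shape
```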

  20. Construction of Gallium Point at NMIJ

    NASA Astrophysics Data System (ADS)

    Widiatmo, J. V.; Saito, I.; Yamazawa, K.

    2017-03-01

    Two open-type gallium point cells were fabricated using ingots whose nominal purities are 7N. Measurement systems for realizing the melting point of gallium using these cells were built, and the melting point of gallium was realized repeatedly with these systems to evaluate repeatability. Measurements were also performed to evaluate the effect of the hydrostatic pressure from the molten gallium present during the melting process and the effect of the gas pressure filling the cell. Direct cell comparisons between the two cells were conducted, aimed at evaluating the consistency of each cell, especially in relation to nominal purity. A direct cell comparison between an open-type and a sealed-type gallium point cell was also conducted. Chemical analysis was performed on samples extracted from the ingots used in both newly built open-type gallium point cells, from which the effect of impurities in the ingots was evaluated.

  1. A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit

    PubMed Central

    Dang, Quoc Khanh; Chee, Youngjoon; Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    A virtual blind cane system for indoor application, including a camera, a line laser and an inertial measurement unit (IMU), is proposed in this paper. Working as a blind cane, the proposed system helps a blind person find the type of obstacle and the distance to it. The distance from the user to the obstacle is estimated by extracting the laser coordinate points on the obstacle, as well as tracking the system pointing angle. The paper provides a simple method to classify the obstacle’s type by analyzing the laser intersection histogram. Real experimental results are presented to show the validity and accuracy of the proposed system. PMID:26771618

  2. United States Air Force Environmental Restoration Program. Guidance on Soil Vapor Extraction Optimization

    DTIC Science & Technology

    2001-06-01

    Pump ... Exposed Capillary Fringe ... SVE System ... Pneumatic/Hydraulic Fracturing Points ... Increased Advective Flow ... propagate further from the extraction well, increasing the advective flow zone around the well. Pneumatic and hydraulic fracturing are the primary methods ... enhancing existing fractures and increasing the secondary fracture network. Hydraulic fracturing involves the injection of water or slurry into the

  3. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency, two indicators calculated automatically by comparing the method's output with defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and, moreover, outperforms an existing method.

  4. High-power ultrasonic system for the enhancement of mass transfer in supercritical CO2 extraction processes

    NASA Astrophysics Data System (ADS)

    Riera, Enrique; Blanco, Alfonso; García, José; Benedito, José; Mulet, Antonio; Gallego-Juárez, Juan A.; Blasco, Miguel

    2010-01-01

    Oil is an important component of almonds and other vegetable substrates and can influence human health. In this work, the development and validation of an innovative, robust, stable, reliable and efficient pilot-scale ultrasonic system to assist supercritical CO2 extraction of oils from different substrates is presented. In the extraction procedure, ultrasonic energy is an efficient way of producing deep agitation that enhances mass transfer through several mechanisms (radiation pressure, streaming, agitation, high-amplitude vibration, etc.). Work prior to this research pointed out the feasibility of integrating an ultrasonic field inside a supercritical extractor without losing a significant volume fraction. This pioneering method made it possible to accelerate mass transfer and thus improve supercritical extraction times. To develop the new procedure commercially, fulfilling industrial requirements, a new device configuration has been designed, implemented, tested and successfully validated for the supercritical fluid extraction of oil from different vegetable substrates.

  5. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  6. Cloud point extraction thermospray flame quartz furnace atomic absorption spectrometry for determination of ultratrace cadmium in water and urine

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng

    2006-12-01

    A simple, low-cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration, combined with thermospray flame quartz furnace atomic absorption spectrometry, was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the cadmium-PDC complex entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results for cadmium in water and urine samples agreed well with those obtained by ICP-MS.

  7. Accuracy assessment of airborne LIDAR data and automated extraction of features

    NASA Astrophysics Data System (ADS)

    Cetin, Ali Fuat

    Airborne LIDAR technology is becoming more widely used since it provides fast and dense, irregularly spaced 3D point clouds. The coordinates produced as a result of calibrating the system are used for surface modeling and information extraction. In this research, a new idea of LIDAR-detectable targets is introduced. In the second part of this research, a new technique to delineate the edge of road pavements automatically using only LIDAR is presented. The accuracy of LIDAR data should be determined before exploitation for any information extraction to support a Geographic Information System (GIS) database. Until recently there was no definitive research providing a methodology for the common and practical assessment of both the horizontal and vertical accuracy of LIDAR data for end users. The idea used in this research was to use targets of such a size and design that the position of each target can be determined using the least squares image matching technique. The technique used in this research gives end users and data providers an easy way to evaluate the quality of the product, especially when there are accessible hard surfaces on which to install the targets. The results of the technique are determined to be in a reasonable range when the point spacing of the data is sufficient. To delineate the edge of pavements, trees and buildings are removed from the point cloud, and the road surfaces are segmented from the remaining terrain data. This is accomplished using the homogeneous nature of road surfaces in intensity and height. There are few studies on delineating the edge of the road pavement after the road surfaces are extracted. In this research, template matching techniques are used with criteria computed from Gray Level Co-occurrence Matrix (GLCM) properties in order to locate seed pixels in the image. The seed pixels are then used for placement of the matched templates along the road. The accuracy of the delineated edge of pavement is determined by comparing the coordinates of reference points collected via photogrammetry with the coordinates of the nearest points along the delineated edge.

  8. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  9. New method of paired thyrotropin assay as a screening test for neonatal hypothyroidism [¹²⁵I tracer technique].

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyai, K.; Oura, T.; Kawashima, M.

    1978-11-01

    A simple and reliable method of paired TSH assay was developed and used in screening for neonatal primary hypothyroidism. In this method, a paired assay is first done. Equal parts of the extracts of dried blood spots on filter paper (9 mm diameter) from two infants 4 to 7 days old are combined and assayed for TSH by double antibody RIA. If the value obtained is over the cut-off point, the extracts are assayed separately for TSH in a second assay to identify the abnormal sample. Two systems, A and B, with different cut-off points were tested. On the basis of reference blood samples (serum levels of TSH, 80 μU/ml in system A and 40 μU/ml in system B), the cut-off point was selected as follows: upper 5 (A) or 4 (B) percentile in the paired assay, and values of reference blood samples in the second individual assay. Four cases (2 in A and 2 in B) of neonatal primary hypothyroidism were found among 25 infants (23 in A and 2 in B) who were recalled from a general population of 41,400 infants (24,200 in A and 17,200 in B) by 22,700 assays. This paired TSH assay system saves labor and expense in screening for neonatal hypothyroidism.

  10. Method for contour extraction for object representation

    DOEpatents

    Skourikhine, Alexei N.; Prasad, Lakshman

    2005-08-30

    Contours are extracted to represent a pixelated object in a background pixel field. An object pixel that is the start of a new contour is located and identified as the first pixel of the new contour. A first contour point is then located at the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points at the mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is encountered again, completing the tracing of the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
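
    The mid-point-of-transition-edge idea corresponds closely to marching-squares contouring at iso-level 0.5. scikit-image's find_contours, used below as an analogue rather than the patented method itself, traces exactly such sub-pixel contours:

```python
import numpy as np
from skimage import measure

# A pixelated object in a background pixel field.
obj = np.zeros((12, 12), dtype=float)
obj[3:9, 4:10] = 1.0

# find_contours traces the 0.5 iso-level, i.e. the mid-points of transition
# edges between object and background pixels, returning closed contours.
for contour in measure.find_contours(obj, level=0.5):
    print("closed contour with", len(contour), "points; first:", contour[0])
```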

  11. Object extraction in photogrammetric computer vision

    NASA Astrophysics Data System (ADS)

    Mayer, Helmut

    This paper discusses the state and promising directions of automated object extraction in photogrammetric computer vision, considering also practical aspects arising for digital photogrammetric workstations (DPW). A review of the state of the art shows that there are only a few practically successful systems on the market. Therefore, important issues for the practical success of automated object extraction are identified. A sound and, most importantly, powerful theoretical background is the basis; here, we particularly point to statistical modeling. Testing makes clear which of the approaches are best suited and how useful they are in practice. A key to the commercial success of a practical system is efficient user interaction. As the means for data acquisition are changing, new promising application areas such as extremely detailed three-dimensional (3D) urban models for virtual television or mission rehearsal evolve.

  12. Preparation and use of Xenopus egg extracts to study DNA replication and chromatin associated proteins

    PubMed Central

    Gillespie, Peter J.; Gambus, Agnieszka; Blow, J. Julian

    2012-01-01

    The use of cell-free extracts prepared from eggs of the South African clawed toad, Xenopus laevis, has led to many important discoveries in cell cycle research. These egg extracts recapitulate the key nuclear transitions of the eukaryotic cell cycle in vitro under apparently the same controls that exist in vivo. DNA added to the extract is first assembled into a nucleus and is then efficiently replicated. Progression of the extract into mitosis then allows the separation of paired sister chromatids. The Xenopus cell-free system is therefore uniquely suited to the study of the mechanisms, dynamics and integration of cell cycle regulated processes at a biochemical level. In this article we describe methods currently in use in our laboratory for the preparation of Xenopus egg extracts and demembranated sperm nuclei for the study of DNA replication in vitro. We also detail how DNA replication can be quantified in this system. In addition, we describe methods for isolating chromatin and chromatin-bound protein complexes from egg extracts. These recently developed and revised techniques provide a practical starting point for investigating the function of proteins involved in DNA replication. PMID:22521908

  13. Extracting cross sections and water levels of vegetated ditches from LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Roelens, Jennifer; Dondeyne, Stefaan; Van Orshoven, Jos; Diels, Jan

    2016-12-01

    The hydrologic response of a catchment is sensitive to the morphology of the drainage network. The dimensions of bigger channels are usually well known; however, geometrical data for man-made ditches are often missing, as they are numerous and small. Aerial LiDAR data offer the possibility to extract these small geometrical features, and analysing the three-dimensional point clouds directly maintains the highest degree of information. A longitudinal and a cross-sectional buffer were used to extract the cross-sectional profile points from the LiDAR point cloud. The profile was represented by spline functions fitted through the minimum envelope of the extracted points. The cross-sectional ditch profiles were classified for the presence of water and vegetation based on the normalized difference water index and the spatial characteristics of the points along the profile. The normalized difference water index was created using the RGB and intensity data coupled to the LiDAR points. The mean vertical deviation of 0.14 m found between the extracted and reference cross sections could mainly be attributed to the occurrence of water and partly to vegetation on the banks. In contrast to the cross-sectional area, the extracted width was not influenced by the environment (coefficient of determination R² = 0.87). Water and vegetation influenced the extracted ditch characteristics, but the proposed method is nevertheless robust and therefore facilitates input data acquisition and improves the accuracy of spatially explicit hydrological models.
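
    The minimum-envelope spline fit can be sketched as follows: bin the profile points by distance across the ditch, keep the lowest return per bin, and fit a smoothing spline. The bin count and smoothing factor below are assumptions, not the paper's values:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def cross_section_spline(dist, z, n_bins=20, smooth=0.05):
    """Fit a smoothing spline through the minimum envelope of profile points."""
    edges = np.linspace(dist.min(), dist.max(), n_bins + 1)
    centers, lows = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (dist >= lo) & (dist < hi)
        if sel.any():  # lowest return per bin approximates the ground surface
            centers.append(0.5 * (lo + hi))
            lows.append(z[sel].min())
    return UnivariateSpline(centers, lows, s=smooth)

d = np.random.rand(400) * 4.0                          # distance across the ditch
zz = 0.3 * (d - 2.0) ** 2 + np.random.rand(400) * 0.3  # parabolic bed + vegetation
print(float(cross_section_spline(d, zz)(2.0)))         # elevation near the bottom
```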

  14. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires; the steel wires in question are prestressing steel reinforcement wires for concrete members, and the geometry of the wire is critical to the performance of the overall concrete structure. For this research, a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities for cost savings in both equipment protection and waste minimization.
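
    The downhill simplex plus template idea can be illustrated with a one-dimensional profile: a parametric template is fitted to measured data by Nelder-Mead minimization of the squared residual. The template shape below is hypothetical, not the wire geometry used in the research:

```python
import numpy as np
from scipy.optimize import minimize

def template(x, depth, center, radius=0.5):
    """Hypothetical template: a circular-arc indent of fixed radius in a profile."""
    u = np.clip(1.0 - ((x - center) / radius) ** 2, 0.0, None)
    return -depth * np.sqrt(u)

x = np.linspace(0.0, 10.0, 2000)
measured = template(x, 0.2, 4.3) + np.random.normal(0.0, 0.005, x.size)

# Downhill simplex (Nelder-Mead) search for the template parameters.
cost = lambda p: np.sum((measured - template(x, p[0], p[1])) ** 2)
res = minimize(cost, x0=[0.1, 4.5], method="Nelder-Mead")
print(res.x)  # recovered (depth, center), approximately (0.2, 4.3)
```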

  15. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process is implemented in Matlab. The results demonstrate that a passive image of higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.

  16. Cloud point extraction: an alternative to traditional liquid-liquid extraction for lanthanides(III) separation.

    PubMed

    Favre-Réguillon, Alain; Draye, Micheline; Lebuzit, Gérard; Thomas, Sylvie; Foos, Jacques; Cote, Gérard; Guy, Alain

    2004-06-17

    Cloud point extraction (CPE) was used to extract and separate lanthanum(III) and gadolinium(III) nitrate from an aqueous solution. The methodology used is based on the formation of lanthanide(III)-8-hydroxyquinoline (8-HQ) complexes soluble in a micellar phase of non-ionic surfactant. The lanthanide(III) complexes are then extracted into the surfactant-rich phase at a temperature above the cloud point temperature (CPT). The structure of the non-ionic surfactant and the chelating agent-to-metal molar ratio are identified as factors determining the extraction efficiency and selectivity. In an aqueous solution containing equimolar concentrations of La(III) and Gd(III), the extraction efficiency for Gd(III) can reach 96% with a Gd(III)/La(III) selectivity higher than 30 using Triton X-114. Under those conditions, a Gd(III) decontamination factor of 50 is obtained.

  17. ADS's Dexter Data Extraction Applet

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Accomazzi, A.; Eichhorn, G.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.

    The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template. This contribution both describes the operation of Dexter from a user's point of view and discusses some of the architectural issues we faced during implementation.
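
    Dexter's basic operation, mapping marked pixels back to data coordinates once calibration points per axis are given, amounts to a linear (or log-linear) transform. A sketch of the idea in Python, not Dexter's actual Java code; the pixel and tick values are hypothetical:

```python
import numpy as np

def make_axis_map(pix0, val0, pix1, val1, log=False):
    """Map a pixel coordinate to a data value, given two axis calibration points."""
    v0, v1 = (np.log10([val0, val1]) if log else (val0, val1))
    def to_data(pix):
        v = v0 + (pix - pix0) * (v1 - v0) / (pix1 - pix0)
        return 10.0 ** v if log else v
    return to_data

# Hypothetical plot: x-axis ticks "1" and "100" sit at pixels 50 and 450 (log axis).
x_of = make_axis_map(50, 1.0, 450, 100.0, log=True)
print(x_of(250))  # a pixel midway between the ticks maps to 10.0
```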

  18. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important, and obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become the mainstream. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for that subset. A cluster analysis (a point subset that satisfies certain conditions forms a cluster) is performed on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system based on this method is developed to extract rock discontinuity points from a 3D point cloud. A comparison with existing software shows that the method is feasible and can provide a reference for rock mechanics, 3D geological modelling and other related fields.
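
    Computing dip and dip direction from a point subset follows directly from the best-fit plane normal. A sketch using an SVD plane fit; the axis convention (+Y = north) is an assumption:

```python
import numpy as np

def dip_and_direction(points):
    """Best-fit plane via SVD, then dip and dip direction from its normal."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]  # right singular vector, smallest s.v.
    if normal[2] < 0:
        normal = -normal                     # force an upward-pointing normal
    dip = np.degrees(np.arccos(normal[2]))   # angle from vertical
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0  # from +Y = north
    return dip, dip_dir

# Synthetic discontinuity dipping about 30 degrees toward the east (+X).
x, y = np.meshgrid(np.linspace(0, 5, 10), np.linspace(0, 5, 10))
pts = np.column_stack([x.ravel(), y.ravel(), (-np.tan(np.radians(30)) * x).ravel()])
print(dip_and_direction(pts))  # approximately (30.0, 90.0)
```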

  19. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of surveyed points in the mapping frame. A UAV platform is no exception, but its payload has to be lighter than that installed on board conventional aircraft, so manufacturers need alternatives to heavy sensors and navigation systems. For georeferencing these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed photogrammetrically to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley that carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for calibrating the external parameters and computing the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and those already known.

  20. Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis

    PubMed Central

    Lee, Jonguk; Choi, Heesu; Park, Daihee; Chung, Yongwha; Kim, Hee-Young; Yoon, Sukhan

    2016-01-01

    Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences, so early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that uses audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system extracts mel-frequency cepstral coefficients (MFCCs) from audio data, reduces the feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective detection and diagnosis of faults using a cheap microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods. PMID:27092509
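
    As an illustration of this MFCC-plus-SVM pipeline (not the authors' code), here is a minimal sketch using librosa and scikit-learn. `wav_paths` and `labels` are placeholders, and the paper's attribute-subset-selection step is omitted:

    ```python
    import numpy as np
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def mfcc_features(wav_path, n_mfcc=13):
        """Mean MFCC vector of one recording: a simple fixed-length summary."""
        y, sr = librosa.load(wav_path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
        return mfcc.mean(axis=1)

    # wav_paths and labels (e.g. "normal", "fault") are placeholders.
    X = np.vstack([mfcc_features(p) for p in wav_paths])
    y = np.asarray(labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```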

  1. Nonstarch polysaccharides in wheat flour wire-cut cookie making.

    PubMed

    Guttieri, Mary J; Souza, Edward J; Sneller, Clay

    2008-11-26

    Nonstarch polysaccharides in wheat flour have significant capacity to affect the processing quality of wheat flour dough and the finished quality of wheat flour products. Most research has focused on the effects of arabinoxylans (AX) in bread making. This study found that water-extractable AX and arabinogalactan peptides can predict variation in pastry wheat quality as captured by the wire-cut cookie model system. The sum of water-extractable AX plus arabinogalactan was highly predictive of cookie spread factor. The combination of cookie spread factor and the ratio of water-extractable arabinose to xylose predicted peak force of the three-point bend test of cookie texture.

  2. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for the automatic extraction of feature points from a three-dimensional model of the human vertebrae is presented. Firstly, a statistical model of vertebral feature points is established based on the results of manual feature point extraction. Then an anatomical axial analysis of the vertebral model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from this analysis, a projection relationship between the statistical model and the vertebral model to be processed is established. According to this projection relationship, the statistical model is matched with the vertebral model to obtain estimated positions of the feature points. Finally, by analyzing the curvature in a spherical neighborhood around each estimated position, the final positions of the feature points are obtained. On a benchmark of multiple test models, the mean relative errors of the feature point positions are less than 5.98%; at more than half of the positions the error is below 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
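
    The curvature analysis in a spherical neighborhood can be approximated by the common PCA-based "surface variation" measure. A sketch of one such computation, not necessarily the authors' exact formulation:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_variation(points, center, radius):
        """PCA-based curvature proxy for the points inside a sphere at `center`.

        Returns lambda_min / (lambda_1 + lambda_2 + lambda_3): 0 on a perfect
        plane, growing toward 1/3 at sharp, fully three-dimensional features.
        Assumes the sphere contains at least a handful of points.
        """
        tree = cKDTree(points)
        idx = tree.query_ball_point(center, r=radius)
        nbrs = points[idx] - points[idx].mean(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending eigenvalues
        return eigvals[0] / eigvals.sum()
    ```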

  3. Optimisation of phenolic extraction from Averrhoa carambola pomace by response surface methodology and its microencapsulation by spray and freeze drying.

    PubMed

    Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata

    2015-03-15

    The extraction of polyphenols from star fruit (Averrhoa carambola) pomace was optimised using response surface methodology. Two variables, temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to build the optimisation model with a central composite rotatable design, where -1.414 and +1.414 are the axial points, -1 and +1 the factorial points, and 0 the centre point of the design. A temperature of 40 °C and an ethanol concentration of 65% were the optimised conditions for the response variables total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse phase-high pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency (78-97%) was obtained in the freeze dried encapsulates. The optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. [Preliminary application of scripting in RayStation TPS system].

    PubMed

    Zhang, Jianying; Sun, Jing; Wang, Yun

    2013-07-01

    To discuss the basic application of scripting in the RayStation TPS. On the RayStation 3.0 platform, the programming methods, and the points to be considered during basic scripting, were explored with the help of utility scripts. Typical planning problems in beam arrangement and plan output were used as examples, written in the IronPython language. The necessary properties and functions of the patient object for script writing can be extracted from the RayStation system. With the help of .NET controls, planning functions such as interactive parameter input, treatment planning control and plan export were realized by scripts. With the help of the demo scripts, scripts can be developed in RayStation and the system performance can be upgraded.

  5. Csf Based Non-Ground Points Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Shen, A.; Zhang, W.; Shi, H.

    2017-09-01

    Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which the selection of seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. Experiments show that this method obtains a more reliable set of seed points than traditional methods, and it represents a new attempt at seed point extraction.
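
    For context, a compact sketch of the region-growing stage that such seed points would feed, assuming per-point unit normals are available; the radius and angle thresholds are illustrative, not values from the paper:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def region_grow(points, normals, seeds, radius=0.5, max_angle_deg=15.0):
        """Grow a region from seed indices: a neighbor joins the region when
        its normal deviates by less than `max_angle_deg` from the normal of
        the point that reached it. Returns the indices of the grown region."""
        tree = cKDTree(points)
        cos_thr = np.cos(np.radians(max_angle_deg))
        in_region = np.zeros(len(points), dtype=bool)
        in_region[list(seeds)] = True
        stack = list(seeds)
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], r=radius):
                if not in_region[j] and abs(normals[i] @ normals[j]) >= cos_thr:
                    in_region[j] = True
                    stack.append(j)
        return np.flatnonzero(in_region)
    ```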

  6. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

    This paper takes the method of Shepard’s to deal with the original LIDAR point clouds data, and generate regular grid data DSM, filters the ground point cloud and non ground point cloud through double least square method, and obtains the rules of DSM. By using region growing method for the segmentation of DSM rules, the removal of non building point cloud, obtaining the building point cloud information. Uses the Canny operator to extract the image segmentation is needed after the edges of the building, uses Hough transform line detection to extract the edges of buildings rules of operation based on the smooth and uniform. At last, uses E3De3 software to establish the 3D model of buildings.

  7. Ultrasound-directed robotic system for thermal ablation of liver tumors: a preliminary report

    NASA Astrophysics Data System (ADS)

    Zheng, Jian; Tian, Jie; Dai, Yakang; Zhang, Xing; Dong, Di; Xu, Min

    2010-03-01

    Thermal ablation has proved safe and effective as a treatment for liver tumors that are not suitable for resection. Currently, manually performed thermal ablation depends greatly on the surgeon's puncture skill and is subject to hand tremor, and inaccurate or inappropriate placement of the applicator directly reduces the treatment effect. In order to reduce the influence of hand tremor and provide accurate and appropriate guidance for better treatment, we have developed an ultrasound-directed robotic system for thermal ablation of liver tumors. This paper gives a brief preliminary report of the system. In particular, three innovative techniques are proposed to solve its critical problems: accurate ultrasound calibration in the presence of artifacts, realtime reconstruction and visualization using Graphics Processing Unit (GPU) acceleration, and 2D-3D ultrasound image registration. To reduce the error of point extraction in the presence of artifacts, we propose a novel point extraction method that minimizes an error function defined from the geometric properties of our N-fiducial phantom. Realtime reconstruction with GPU-accelerated visualization then provides fast 3D ultrasound volume acquisition with a dynamic display of reconstruction progress. Finally, coarse 2D-3D ultrasound image registration is performed based on landmark point correspondences, followed by accurate 2D-3D registration based on the Euclidean distance transform (EDT). The effectiveness of the proposed techniques is demonstrated in phantom experiments.

  8. Studies on the antioxidant activities of natural vanilla extract and its constituent compounds through in vitro models.

    PubMed

    Shyamala, B N; Naidu, M Madhava; Sulochanamma, G; Srinivas, P

    2007-09-19

    Vanilla extract was prepared by extraction of cured vanilla beans with aqueous ethyl alcohol (60%). The extract was profiled by HPLC, wherein major compounds, viz., vanillic acid, 4-hydroxybenzyl alcohol, 4-hydroxy-3-methoxybenzyl alcohol, 4-hydroxybenzaldehyde and vanillin, could be identified and separated. Extract and pure standard compounds were screened for antioxidant activity using beta-carotene-linoleate and DPPH in vitro model systems. At a concentration of 200 ppm, the extract showed 26% and 43% of antioxidant activity by beta-carotene-linoleate and DPPH methods, respectively, in comparison to corresponding values of 93% and 92% for BHA. Interestingly, 4-hydroxy-3-methoxybenzyl alcohol and 4-hydroxybenzyl alcohol exhibited antioxidant activity of 65% and 45% by beta-carotene-linoleate method and 90% and 50% by DPPH methods, respectively. In contrast, pure vanillin exhibited much lower antioxidant activity. The present study points toward the potential use of vanilla extract components as antioxidants for food preservation and in health supplements as nutraceuticals.

  9. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more of them, a process based on image information. The algorithm obtains not only the road centerline but also the road sides. An initial simple model is deformed using region-growing techniques to obtain a rough road approximation, which is then refined by region competition. The approach delivers the simplest possible output vector information, fully recovering the road details as they appear in the image, without performing any kind of symbolization. We thus refine a general road model using a reliable method for detecting transitions between regions, in order to obtain information suitable for feeding large-scale Geographic Information Systems.

  10. Separation and recycling of nanoparticles using cloud point extraction with non-ionic surfactant mixtures.

    PubMed

    Nazar, Muhammad Faizan; Shah, Syed Sakhawat; Eastoe, Julian; Khan, Asad Muhammad; Shah, Afzal

    2011-11-15

    A viable cost-effective approach employing mixtures of non-ionic surfactants Triton X-114/Triton X-100 (TX-114/TX-100), and subsequent cloud point extraction (CPE), has been utilized to concentrate and recycle inorganic nanoparticles (NPs) in aqueous media. Gold Au- and palladium Pd-NPs have been pre-synthesized in aqueous phases and stabilized by sodium 2-mercaptoethanesulfonate (MES) ligands, then dispersed in aqueous non-ionic surfactant mixtures. Heating the NP-micellar systems induced cloud point phase separations, resulting in concentration of the NPs in lower phases after the transition. For the Au-NPs UV/vis absorption has been used to quantify the recovery and recycle efficiency after five repeated CPE cycles. Transmission electron microscopy (TEM) was used to investigate NP size, shape, and stability. The results showed that NPs are preserved after the recovery processes, but highlight a potential limitation, in that further particle growth can occur in the condensed phases. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Spectrophotometric determination of paracetamol in urine with tetrahydroxycalix[4]arene as a coupling reagent and preconcentration with triton X-114 using cloud point extraction.

    PubMed

    Filik, Hayati; Sener, Izzet; Cekiç, Sema Demirci; Kiliç, Emine; Apak, Reşat

    2006-06-01

    In the present paper, conventional spectrophotometry in conjunction with cloud point extraction-preconcentration was investigated as an alternative method for paracetamol (PCT) assay in urine samples. Cloud point extraction (CPE) was employed for the preconcentration of p-aminophenol (PAP) prior to spectrophotometric determination, using the non-ionic surfactant Triton X-114 (TX-114) as an extractant. The developed methods were based on acidic hydrolysis of PCT to PAP, which reacted at room temperature with 25,26,27,28-tetrahydroxycalix[4]arene (CAL4) in the presence of an oxidant (KIO(4)) to form a blue colored product. The PAP-CAL4 blue dye formed was subsequently entrapped in the surfactant micelles of Triton X-114. Cloud point phase separation with the aid of Triton X-114, induced by the addition of Na(2)SO(4) solution, was performed at room temperature, an advantage over other CPE assays requiring elevated temperatures. The 580 nm absorbance maximum of the formed product was shifted bathochromically to 590 nm with CPE. The working range of 1.5-12 microg ml(-1) achieved by conventional spectrophotometry was reduced to 0.14-1.5 microg ml(-1) with cloud point extraction, which was lower than those of most literature flow-through assays that also suffer from nonspecific absorption in the UV region. By preconcentrating 10 ml of sample solution, a detection limit as low as 40.0 ng ml(-1) was obtained after a single-step extraction, achieving a preconcentration factor of 10. The stoichiometric composition of the dye was found to be 1 : 4 (PAP : CAL4). The impact of a number of parameters, such as the concentrations of CAL4, KIO(4), Triton X-100 (TX-100), and TX-114, the extraction temperature, the time periods for incubation and centrifugation, and the sample volume, was investigated in detail. The determination of PAP in the presence of paracetamol in micellar systems under these conditions is limited. The established procedures were successfully adopted for the determination of PCT in urine samples. Since the drug is rapidly absorbed and excreted largely in urine, and its high doses have been associated with lethal hepatic necrosis and renal failure, the development of a rapid, sensitive and selective assay of PCT is of vital importance for fast urinary screening and antidote administration before applying more sophisticated, but costly and laborious, hyphenated instrumental techniques such as HPLC-SPE-NMR-MS.

  12. Optimisation of gelatin extraction from Unicorn leatherjacket (Aluterus monoceros) skin waste: response surface approach.

    PubMed

    Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu

    2015-02-01

    Physical properties of gelatin extracted from Unicorn leatherjacket (Aluterus monoceros) skin, which is generated as a waste product by fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on responses such as yield, gel strength and melting point of the gelatin. The optimum conditions derived by RSM for the yield (10.58%) were 0.2 M H3PO4, an extraction time of 9.01 h and a hot water extraction temperature of 45.83 °C. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable, with a positive coefficient on yield and negative coefficients on gel strength and melting point. The results indicate that Unicorn leatherjacket skins can be a source of gelatin with mild gel strength and melting point.

  13. Micelle-mediated extraction and cloud point preconcentration for the analysis of aesculin and aesculetin in Cortex fraxini by HPLC.

    PubMed

    Shi, Zhihong; Zhu, Xiaomin; Zhang, Hongyi

    2007-08-15

    In this paper, a micelle-mediated extraction and cloud point preconcentration method was developed for the determination of the less hydrophobic compounds aesculin and aesculetin in Cortex fraxini by HPLC. The non-ionic surfactant oligoethylene glycol monoalkyl ether (Genapol X-080) was employed as the extraction solvent. Various experimental conditions were investigated to optimize the extraction process. Under optimum conditions, i.e. 5% Genapol X-080 (w/v), pH 1.0, a liquid/solid ratio of 400:1 (ml/g) and ultrasonic-assisted extraction for 30 min, the extraction yield reached its highest value. For the preconcentration of aesculin and aesculetin by cloud point extraction (CPE), the solution was incubated in a thermostatic water bath at 55 degrees C for 30 min, and 20% NaCl (w/v) was added to facilitate phase separation and increase the preconcentration factor during the CPE process. Compared with methanol, which is specified in the Chinese Pharmacopoeia (2005 edition) for the extraction of C. fraxini, 5% Genapol X-080 achieved a higher extraction efficiency.

  14. Urban Terrain Modeling for Augmented Reality Applications

    DTIC Science & Technology

    2001-01-01

    pointing (Maybank-92). Almost all such systems are designed to extract the geometry of buildings and to texture these to provide models that can be... Maybank, S. and Faugeras, O. (1992). A Theory of Self-Calibration of a Moving Camera, International Journal of Computer Vision, 8(2):123-151

  15. LiDAR Vegetation Investigation and Signature Analysis System (LVISA)

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia

    2015-04-01

    Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, for example in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have improved our process understanding by allowing us to quantify the structural properties of Earth surface features and their changes over time. This advancement has been fostered by technological developments and the increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and the geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has an XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud thus contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized datasets and large data volumes (frequently millions of XYZ coordinates), a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed. However, the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users so far lags behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), to enhance the sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository to support the development and benchmarking of LiDAR-based object information extraction. LVISA uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and analysis (e.g., OGC Web Processing Services). This will allow the research community to identify plant-object-specific vegetation features from LiDAR data while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. The goal of LVISA is to develop generic 3D information extraction approaches that can be seamlessly transferred to other datasets, timestamps and extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa, and video tutorials provide a quick overview of and entry into its functionality. We will present the current advances of LVISA and highlight future research and extensions, such as integrating low-cost LiDAR data and datasets acquired by highly temporal scanning of vegetation (e.g., continuous measurements). Everybody is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.

  16. Establishment of Application Guidance for OTC non-Kampo Crude Drug Extract Products in Japan

    PubMed Central

    Somekawa, Layla; Maegawa, Hikoichiro; Tsukada, Shinsuke; Nakamura, Takatoshi

    2017-01-01

    Currently, there are no standardized regulatory systems for herbal medicinal products worldwide. Communication and sharing of knowledge between different regulatory systems will lead to mutual understanding and might help identify topics that deserve further discussion in the establishment of common standards. Regulatory information on traditional herbal medicinal products in Japan has been updated by the establishment of the Application Guidance for over-the-counter non-Kampo Crude Drug Extract Products, and we report on this updated regulatory information here. The guidance indicates methods for comparing a Crude Drug Extract formulation with the standard decoction, the criteria for application, and the key points to consider for each criterion. Establishment of the guidance contributes to improvements in public health. We hope that this regulatory information about traditional herbal medicinal products in Japan will contribute to tackling the challenging task of regulating traditional herbal products worldwide. PMID:28894633

  17. Spectral analysis of stellar light curves by means of neural networks

    NASA Astrophysics Data System (ADS)

    Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.

    1999-06-01

    Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well on unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components, which in turn are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and also works well with few points in the sequence. We benchmark the system on synthetic and real signals against the Periodogram and the Cramer-Rao lower bound. This work has been partially supported by IIASS, by MURST 40% and by the Italian Space Agency.
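
    The MUSIC step can be sketched compactly: form a sample correlation matrix, split off the noise subspace, and scan a pseudospectrum whose peaks mark the frequencies. A simplified sketch for a real signal, in which the paper's neural PCA stage is replaced by a plain eigendecomposition and uniform sampling of the interpolated signal is assumed:

    ```python
    import numpy as np

    def music_spectrum(x, n_sines, m=40, freqs=np.linspace(0.0, 0.5, 2000)):
        """MUSIC pseudospectrum of a uniformly (re)sampled real signal x.

        m: correlation-matrix order; n_sines: number of real sinusoids
        sought (each occupies two signal-space dimensions). Peaks of the
        returned spectrum mark the frequencies, in cycles per sample."""
        windows = np.lib.stride_tricks.sliding_window_view(x, m)
        R = windows.T @ windows / len(windows)       # sample correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R)         # ascending eigenvalues
        En = eigvecs[:, : m - 2 * n_sines]           # noise subspace
        spec = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f * np.arange(m))  # steering vector
            spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        return freqs, spec
    ```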

  18. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-03-12

    Asset management is an important component of an infrastructure project, and a significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of the Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually, using techniques implementing RANSAC plane fitting and edge extraction algorithms. Finally, the scope of such extraction techniques is discussed, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory.
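
    A minimal sketch of the RANSAC plane-fitting step used in the asset-extraction framework; the thresholds and iteration count are illustrative, not the authors' settings:

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.05, seed=0):
        """RANSAC plane fit for an (N, 3) cloud: returns (normal, d, inliers)
        for the model n . p + d = 0, with tol the inlier distance in the
        units of the point cloud."""
        rng = np.random.default_rng(seed)
        best_inliers, best_model = np.array([], dtype=int), None
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-12:      # degenerate (collinear) sample
                continue
            n /= np.linalg.norm(n)
            d = -n @ p0
            inliers = np.flatnonzero(np.abs(points @ n + d) < tol)
            if len(inliers) > len(best_inliers):
                best_inliers, best_model = inliers, (n, d)
        return best_model[0], best_model[1], best_inliers
    ```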

  20. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also have some disadvantages for object extraction, such as the irregular distribution of the point cloud and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering, separating road points from ground points; (2) local principal component analysis with least squares fitting, extracting the primitives of the road centerlines; and (3) hierarchical grouping, connecting the primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same road extraction performance from LiDAR data in less time.

  1. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle and a model of the measurement system are presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras based on a nonlinear coordinate mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of the foot projections. The characteristic points can thus be located, and the foot parameters extracted automatically, using the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted, and the results for male and female participants are presented. Measurement of 3-D foot coordinates and parameters makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

  2. Use of Assisted Photogrammetry for Indoor and Outdoor Navigation Purposes

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Cazzaniga, N. E.; Pinto, L.

    2015-05-01

    Nowadays, devices and applications that require navigation solutions are continuously growing in number; consider, for instance, the increasing demand for mapping information or the development of applications based on users' locations. In some cases an approximate solution (e.g., at room level) may be sufficient, but in most cases a better solution is required. The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS). However, GNSS can be useless in obstructed areas, such as urban canyons or inside buildings. An interesting low cost solution is photogrammetry, assisted by additional information to scale the photogrammetric problem and to recover a solution in situations that are critical for image-based methods (e.g., poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. The outdoor navigation problem was addressed by developing a positioning system that uses Ground Control Points extracted from urban maps as constraints and tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation, a solution was devised that integrates the data delivered by the Microsoft Kinect, identifying interesting features on the RGB images and re-projecting them onto the point clouds generated from the delivered depth maps. These points were then used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with an error of a few centimeters.
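
    The rotation between subsequent point clouds from matched 3D features is classically solved in closed form (the Kabsch/Procrustes solution). A sketch, assuming the correspondences have already been established:

    ```python
    import numpy as np

    def kabsch_rotation(P, Q):
        """Least-squares rotation R with R @ P[i] ~ Q[i] for matched (N, 3)
        point sets, e.g. features re-projected onto successive depth-map
        point clouds. Translation is removed by centering."""
        Pc = P - P.mean(axis=0)
        Qc = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        # Reflection guard: keep a proper rotation (determinant +1).
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return (U @ D @ Vt).T
    ```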

  3. Comparative study of adaptive controller using MIT rules and Lyapunov method for MPPT standalone PV systems

    NASA Astrophysics Data System (ADS)

    Tariba, N.; Bouknadel, A.; Haddou, A.; Ikken, N.; Omari, Hafsa El; Omari, Hamid El

    2017-01-01

    The photovoltaic generator (PVG) has a nonlinear characteristic relating current to voltage, I = f(U), which depends on the variation of solar irradiation and temperature; in addition, its operating point depends directly on the load that it supplies. To overcome this drawback, and to extract the maximum power available at the terminals of the generator, an adaptation stage is introduced between the generator and the load to couple the two elements as well as possible. The adaptation stage is driven by a command called MPPT (Maximum Power Point Tracking), which forces the PVG to operate at the MPP (Maximum Power Point) under varying climatic conditions and load. This paper presents a comparative study of adaptive controllers for PV systems using the MIT rule and the Lyapunov method to regulate the PV voltage. The Incremental Conductance (IC) algorithm is used to extract the maximum power from the PVG by calculating the reference voltage Vref, and the adaptive controller is used to regulate and quickly track the PV voltage. The two adaptive-controller methods are compared, and their performance is demonstrated using the PSIM tools and experimental tests; the mathematical model of the step-up converter with the PVG model is also presented.
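
    A minimal sketch of one Incremental Conductance update step as commonly formulated (the step size is illustrative, and the panel voltage is assumed positive):

    ```python
    def ic_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.01):
        """One Incremental Conductance update of the reference voltage v_ref.

        At the MPP dP/dV = 0, i.e. dI/dV = -I/V. Left of the MPP the
        incremental conductance exceeds -I/V (raise v_ref); right of it,
        the opposite. v is assumed positive (panel under illumination)."""
        dv, di = v - v_prev, i - i_prev
        if dv == 0.0:                 # voltage unchanged: use current change
            if di > 0.0:
                v_ref += step
            elif di < 0.0:
                v_ref -= step
        elif di / dv > -i / v:        # left of the MPP
            v_ref += step
        elif di / dv < -i / v:        # right of the MPP
            v_ref -= step
        return v_ref                  # at the MPP both tests fail: hold v_ref
    ```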

  4. Cloud point extraction and determination of trace trichlorfon by high performance liquid chromatography with ultraviolet-detection based on its catalytic effect on benzidine oxidizing.

    PubMed

    Zhu, Hai-Zhen; Liu, Wei; Mao, Jian-Wei; Yang, Ming-Min

    2008-04-28

    4-Amino-4'-nitrobiphenyl, which is formed through the catalytic effect of trichlorfon on the oxidation of benzidine by sodium perborate, is extracted with a cloud point extraction method and then detected using high performance liquid chromatography with ultraviolet detection (HPLC-UV). Under the optimum experimental conditions, there was a linear relationship between the trichlorfon concentration in the range of 0.01-0.2 mgL(-1) and the peak areas of 4-amino-4'-nitrobiphenyl (r=0.996). The limit of detection was 2.0 microgL(-1), and recoveries from spiked water and cabbage samples ranged between 95.4-103% and 85.2-91.2%, respectively. The cloud point extraction (CPE) method proved simpler, cheaper and more environmentally friendly than extraction with organic solvents, and gave a more effective extraction yield.

  5. Applications of UAS-SfM for coastal vulnerability assessment: Geomorphic feature extraction and land cover classification from fine-scale elevation and imagery data

    NASA Astrophysics Data System (ADS)

    Sturdivant, E. J.; Lentz, E. E.; Thieler, E. R.; Remsen, D.; Miner, S.

    2016-12-01

    Characterizing the vulnerability of coastal systems to storm events, chronic change and sea-level rise can be improved with high-resolution data that capture timely snapshots of biogeomorphology. Imagery acquired with unmanned aerial systems (UAS), coupled with structure from motion (SfM) photogrammetry, can produce high-resolution topographic and visual reflectance datasets that rival or exceed lidar and orthoimagery. Here we compare SfM-derived data to lidar and visual imagery for their utility in a) geomorphic feature extraction and b) land cover classification for coastal habitat assessment. At a beach and wetland site on Cape Cod, Massachusetts, we used UAS to capture photographs over a 15-hectare coastal area with a resulting pixel resolution of 2.5 cm. We used standard SfM processing in Agisoft PhotoScan to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM). The SfM-derived products have a horizontal uncertainty of +/- 2.8 cm. Using the point cloud in an extraction routine developed for lidar data, we determined the positions of shorelines, dune crests, and dune toes. We used the output imagery and DEM to map land cover with a pixel-based supervised classification. The dense and highly precise SfM point cloud enabled extraction of geomorphic features in greater detail than with lidar; the feature positions are reported with near-continuous coverage and sub-meter accuracy. The orthomosaic image produced with SfM provides visual reflectance at higher resolution than is available from aerial flight surveys, which enables visual identification of small features and thus aids the training and validation of the automated classification. We find that the high resolution and correspondingly high density of UAS data require some simple modifications to existing measurement techniques and processing workflows, and that the types and quality of data provided are equivalent to, and in some cases surpass, those of data collected using other methods.

  6. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for surface damage on high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative differencing, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation, and the total volume and mass of the damage region are then obtained by superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.

  7. Ultracompact/ultralow power electron cyclotron resonance ion source for multipurpose applications.

    PubMed

    Sortais, P; Lamy, T; Médard, J; Angot, J; Latrasse, L; Thuillier, T

    2010-02-01

    In order to drastically reduce the power consumption of a microwave ion source, we have studied specific discharge cavity geometries that reduce the operating point below 1 W of microwave power (at 2.45 GHz). We show that it is possible to drive an electron cyclotron resonance ion source with transmitter technology similar to that used in cellular phones. Through the reduction in size and in the required microwave power, we have developed a new type of ultralow cost ion source. This microwave discharge system (called COMIC, for COmpact MIcrowave and Coaxial) can be used as a source of light, plasma or ions. We show geometries of conductive cavities in which, in a 20 mm diameter chamber, plasma ignition can be obtained below 100 mW, with typical operating points around 5 W. Inside a simple vacuum chamber it is easy to place the source and its extraction system anywhere, fully under vacuum. In that case, current densities from 0.1 to 10 mA/cm(2) (Ar, extraction 4 mm, 1 mAe, 20 kV) have been observed. Preliminary measurements and calculations show the possibility, with a two-electrode system, of extracting beams with a low emittance. The first applications of these ion sources are ion injection for charge breeding, surface analysis systems and surface treatment. For this purpose, a very small extraction hole is used (typically 3/10 mm for a 3 microA extracted current with 2 W of HF power). Mass spectra and emittance measurements are presented. Under these conditions, values down to 1 π mm mrad at 15 kV (1σ) are observed, very close to those currently observed for a surface ionization source. A major interest of this approach is the possibility of connecting several COMIC devices together. We also introduce some new on-going developments, such as sources for high voltage implantation platforms, a fully quartz radioactive ion source at ISOLDE, and large plasma generators for plasma immersion and broad or ribbon beam generation.

  8. Substitution of carcinogenic solvent dichloromethane for the extraction of volatile compounds in a fat-free model food system.

    PubMed

    Cayot, Nathalie; Lafarge, Céline; Bou-Maroun, Elias; Cayot, Philippe

    2016-07-22

    Dichloromethane is known as a very efficient solvent but, like other halogenated solvents, is recognized as a hazardous product (a CMR substance). The objective of the present work is to propose substitution solvents for the extraction of volatile compounds. The most important physico-chemical parameters in the choice of an appropriate extraction solvent for volatile compounds are reviewed, and various solvents are selected on this basis and on their hazard characteristics. The selected solvents, safer than dichloromethane, are compared through their efficiency in extracting volatile compounds from a model food product able to interact with volatile compounds; volatile compounds with different hydrophobicities are used. High extraction yields were positively correlated with high boiling points and high log Kow values of the volatile compounds. Mixtures of solvents such as the azeotrope propan-2-one/cyclopentane, the azeotrope ethyl acetate/ethanol, and the mixture ethyl acetate/ethanol (3:1, v/v) gave higher extraction yields than dichloromethane. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method that combines Hu-invariant-moment contour information with feature point detection, aiming to solve problems of traditional image stitching algorithms such as a time-consuming feature point extraction process, an overload of redundant invalid information, and general inefficiency. First, contour information is extracted from pixel neighborhoods, and the Hu invariant moments are used as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
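
    A sketch of the contour/Hu-moment comparison idea using OpenCV; the Canny thresholds are illustrative, and the paper's Hellinger-kernel SIFT matching stage is not shown:

    ```python
    import cv2
    import numpy as np

    def hu_signature(gray_img):
        """Log-scaled Hu invariant moments of the largest contour in an image."""
        edges = cv2.Canny(gray_img, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)              # dominant contour
        hu = cv2.HuMoments(cv2.moments(c)).ravel()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # compress range

    def contour_similarity(img_a, img_b):
        """Smaller value = more similar contours (candidate overlap region)."""
        return np.linalg.norm(hu_signature(img_a) - hu_signature(img_b))
    ```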

  10. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from the information extracted by point cloud data mining allows contextual adaptation and fast decision making. To achieve this perceptive level, however, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud obtained from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor bias data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and an outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and the classes of its neighbours. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.

  11. Iron Compounds and the Color of Soils in the Sakhalin Island

    NASA Astrophysics Data System (ADS)

    Vodyanitskii, Yu. N.; Kirillova, N. P.; Manakhov, D. V.; Karpukhin, M. M.

    2018-02-01

    Numerical parameters of soil color were studied according to the CIE-L*a*b* color system before and after the Tamm and Mehra-Jackson treatments; we also determined the total Fe content in samples from the main genetic horizons of an alluvial gray-humus soil, two profiles of burozems, and two profiles of podzols on Sakhalin Island. In the analyzed samples, the numerical color parameters L* (lightness), a* (redness) and b* (yellowness) vary within 46-73, 3-11, and 8-28, respectively. A linear relationship is revealed between the a* values and the Fe content in the Mehra-Jackson extracts; the regression equations are derived with determination coefficients (R²) of 0.49 (typical burozem), 0.79 (podzolized burozem), 0.96 (shallow-podzolic mucky podzol), and 0.98 (gray-humus gley alluvial soil). For the surface-podzolic mucky podzol contaminated with petroleum hydrocarbons, R² was equal to only 0.03. In the gray-humus (AY) and structural-metamorphic (BM) horizons of the studied soils, the a* and b* parameters decrease after treatment with the Tamm reagent by 2 points on average. After the Mehra-Jackson treatment, the a* parameter decreased by 6 (AY) and 8 (BM) points, and the b* parameter by 10 and 15 points, respectively. In the E horizons of the podzols, the Tamm treatment increased the a* and b* parameters by 1 point, whereas the Mehra-Jackson treatment decreased these parameters by only 1 and 3 points, respectively. The redness (a*) decreased most in the lower gley horizon of the alluvial gray-humus soil: by 6 points (in the Tamm extract) and 10 points (in the Mehra-Jackson extract); the yellowness (b*) decreased by 12 and 17 points, respectively. The revealed color characteristics of the untreated samples, and the color transformation under the impact of the reagents in the studied soils and horizons, may serve as an additional parameter that quantitatively characterizes the object of investigation in reference databases.

  12. Changes in PGE2 signaling after submandibulectomy alter post-tooth extraction socket healing.

    PubMed

    Mohn, Claudia Ester; Troncoso, Gastón Rodolfo; Bozzini, Clarisa; Conti, María Inés; Fernandez Solari, Javier; Elverdin, Juan Carlos

    2018-03-10

    Saliva is very important to oral health, and a salivary deficit has been shown to cause serious oral health problems. There is scant information about the mechanisms through which the salivary glands participate in post-tooth extraction socket healing. The aim of the present study was therefore to investigate the effect of submandibulectomy (SMx), consisting of the ablation of the submandibular and sublingual glands (SMG and SLG, respectively), on PGE2 signaling and other bone regulatory molecules, such as OPG and RANKL, involved in tooth extraction socket healing. Male Wistar rats, 70 g body weight, were assigned to an experimental (subjected to SMx) or a control group (sham operated). One week later, the animals in both groups underwent bilateral extraction of the first mandibular molars. The effect of SMx on different stages of socket healing after tooth extraction (7, 14, and 30 days) was studied by evaluating parameters of inflammation, including PGE2 and its receptors, and of bone metabolism, as well as by performing bone biomechanical studies. SMx increased TNFα and PGE2 content as well as cyclooxygenase-II (COX-II) expression in tooth socket tissue at almost all the studied time points. SMx also affected the mRNA expression of the PGE2 receptors at the different time points, but did not significantly alter osteoprotegerin (OPG) and RANKL mRNA expression at any of the studied time points. In addition, an increase in bone mass density was observed in SMx rats compared with matched controls, and the structural and mechanical properties of the mandibular socket bone were also affected by SMx. Our results suggest that the SMG/SLG complex regulates cellular activation and differentiation by modulating the production of molecules involved in tooth extraction socket repair, including the PGE2 signaling system, which would account for the higher density and resistance of the newly formed bone in SMx rats. © 2018 by the Wound Healing Society.

  13. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired over the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Owing to limited computing capacity, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and that a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
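
    The grid filter of step (i) is commonly implemented as "keep the lowest point per cell". A numpy sketch of that idea (the cell size is illustrative):

    ```python
    import numpy as np

    def grid_lowest_points(xyz, cell=1.0):
        """Indices of the lowest point in every occupied grid cell, a common
        way to pick ground seed points before interpolating a DEM."""
        ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
        keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]   # one id per cell
        order = np.lexsort((xyz[:, 2], keys))               # by cell, then height
        keys_sorted = keys[order]
        first = np.ones(len(keys_sorted), dtype=bool)
        first[1:] = keys_sorted[1:] != keys_sorted[:-1]     # first = lowest z
        return order[first]
    ```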

  14. Fingerprint recognition system by use of graph matching

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics for identifying or verifying persons by physiological characteristics, and has found wide application in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal processing steps in our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system has been implemented and tested on many fingerprint images, and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without a prior coarse category classification. The system works well for both one-to-few and one-to-many problems.

  15. Automatic extraction of protein point mutations using a graph bigram association.

    PubMed

    Lee, Lawrence C; Horn, Florence; Cohen, Fred E

    2007-02-02

    Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations, and the biological impacts of the changes, are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from the biomedical literature. The principal problem in point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method differs from other models of point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature from these three protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with 0.73 for the word distance metric. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction, applicable to any text-mining task requiring the association of words.
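
    Mutation GraB's graph-bigram association is beyond a short example, but its front end, recognizing point-mutation mentions in text, is typically a pattern over wild-type residue, position and mutant residue. A minimal, purely illustrative sketch:

    ```python
    import re

    # One- and three-letter point-mutation mentions, e.g. "W99A" or
    # "Asp120Gly". Purely illustrative: a real extractor needs far more
    # context to reject false hits (cell lines, identifiers, etc.).
    AA1 = "ACDEFGHIKLMNPQRSTVWY"
    AA3 = ("Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|"
           "Pro|Ser|Thr|Trp|Tyr|Val")
    MUTATION = re.compile(
        rf"\b(?:[{AA1}]\d+[{AA1}]|(?:{AA3})\d+(?:{AA3}))\b")

    text = "The W99A and Asp120Gly substitutions abolished ligand binding."
    print([m.group(0) for m in MUTATION.finditer(text)])
    # -> ['W99A', 'Asp120Gly']
    ```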

  16. Alignment and bit extraction for secure fingerprint biometrics

    NASA Astrophysics Data System (ADS)

    Nagar, A.; Rane, S.; Vetro, A.

    2010-01-01

    Security of biometric templates stored in a system is important because a stolen template can compromise system security as well as user privacy. Therefore, a number of secure biometrics schemes have been proposed that facilitate matching of feature templates without the need for a stored biometric sample. However, most of these schemes suffer from poor matching performance owing to the difficulty of designing biometric features that remain robust over repeated biometric measurements. This paper describes a scheme to extract binary features from fingerprints using minutia points and fingerprint ridges. The features are amenable to direct matching based on binary Hamming distance, but are especially suitable for use in secure biometric cryptosystems that use standard error correcting codes. Given all binary features, a method for retaining only the most discriminable features is presented which improves the Genuine Accept Rate (GAR) from 82% to 90% at a False Accept Rate (FAR) of 0.1% on a well-known public database. Additionally, incorporating singular points such as a core or delta feature is shown to improve the matching tradeoff.

  17. Communication Needs Assessment for Distributed Turbine Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Behbahani, Alireza R.

    2008-01-01

    Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed, to fully distributed control systems.

  18. Classification of building infrastructure and automatic building footprint delineation using airborne laser swath mapping data

    NASA Astrophysics Data System (ADS)

    Caceres, Jhon

    Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date to extract building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating building footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high dimensional feature space that yields good class separability. Features used for classification are based on the generation of local mappings, from three-dimensional space to two-dimensional space, known as "spin images" for each ALSM point to be classified. The method discriminates ALSM returns in compact spaces and even where the classes are very close together or overlapping spatially. A modified Hough Transform algorithm is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. Supported by the spin image analysis for obtaining an initial classification, an automatic approach for delineating accurate building footprints is presented. The physical fact that laser pulses that happen to strike building edges can produce very different 1st and last return elevations has long been recognized. However, in older generation ALSM systems (<50 kHz pulse rates) such points were too few and far between to delineate building footprints precisely. Furthermore, without the robust separation of nearby trees and vegetation from the buildings, simply extracting ALSM shots where the elevation of the first return was much higher than the elevation of the last return was not a reliable means of identifying building footprints. However, with the advent of ALSM systems with pulse rates in excess of 100 kHz, and by using spin-image-based segmentation, it is now possible to extract building edges from the point cloud. A refined classification resulting from incorporating "on-edge" information is developed for obtaining quadrangular footprints. The footprint fitting process involves line generalization, least-squares-based clustering, and dominant point finding for segmenting individual building edges. In addition, an algorithm for fitting complex footprints using the segmented edges and data inside footprints is also proposed.
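
    The spin-image mapping at the heart of the classification is compact enough to sketch. Assuming each ALSM point carries an estimated surface normal, every neighboring point maps to an in-plane radius alpha and a signed height beta, which are histogrammed into a small 2D image; the bin size and image width below are illustrative.

    ```python
    import numpy as np

    def spin_image(p, n, neighbors, bin_size=0.5, width=10):
        """2D (alpha, beta) histogram of `neighbors` around oriented point (p, n)."""
        d = neighbors - p
        beta = d @ n                                            # height along the normal
        alpha = np.sqrt(np.maximum((d * d).sum(1) - beta**2, 0.0))
        row = np.floor((width * bin_size / 2.0 - beta) / bin_size).astype(int)
        col = np.floor(alpha / bin_size).astype(int)
        img = np.zeros((width, width))
        ok = (row >= 0) & (row < width) & (col >= 0) & (col < width)
        np.add.at(img, (row[ok], col[ok]), 1.0)                 # accumulate point counts
        return img
    ```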

  19. Can cloud point-based enrichment, preservation, and detection methods help to bridge gaps in aquatic nanometrology?

    PubMed

    Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan

    2016-11-01

    Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.

  20. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only for the data but also for the software involved. We use the unified modeling language (UML) to describe the object-oriented modeling of the system at different levels of detail. We distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules in their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  1. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  2. Point Cloud Analysis for Uav-Borne Laser Scanning with Horizontally and Vertically Oriented Line Scanners - Concept and First Results

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. To this end, we use a downsampled version of the original point cloud and different neighborhood types to extract fundamental geometric features, which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
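
    Such geometric features are commonly derived from the eigenvalues of each point's local covariance matrix; a minimal sketch of that computation, assuming a k-nearest-neighbor definition of the neighborhood, follows.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def geometric_features(points, k=20):
        """Linearity/planarity/sphericity per point from the eigenvalues of the
        local k-neighborhood covariance (one common choice of such features)."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        feats = np.empty((len(points), 3))
        for i, nb in enumerate(idx):
            cov = np.cov(points[nb].T)                       # 3x3 covariance matrix
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12
            feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
        return feats  # columns: linearity, planarity, sphericity
    ```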

  3. CHARACTERIZATION OF POLED SINGLE-LAYER PZT FOR PIEZO STACK IN FUEL INJECTION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Matsunaga, Tadashi; Lin, Hua-Tay

    2010-01-01

    Poled single-layer PZT has been characterized in the as-extracted and as-received states. PZT plate specimens in the former were extracted from a stack. The flexure strength of the PZT was evaluated by using ball-on-ring and 4-point bend tests. Fractography showed that intergranular fracture dominated the fracture surface and that volume pores were the primary strength-limiting flaws. The electric field effect was investigated by testing the PZT at open-circuit and coercive field levels. An asymmetrical response of the biaxial flexure strength with respect to the electric field direction was observed. These experimental results will assist the reliability design of the piezo stack that is being considered in fuel injection systems.

  4. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data

    PubMed Central

    Qin, Xinyan; Wu, Gongping; Fan, Fei

    2018-01-01

    Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, the workload and difficulty of power line inspection are increasing. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, an optimal elevation threshold is constructed to remove ground points without resorting to existing filtering algorithms, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on POS data (SPPD) algorithm from “layer” to “block” according to the power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. Local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LiDAR systems in our study. The experimental results demonstrate that an average accuracy of 90.6% and an average precision of 98.2% can be achieved at the point cloud level. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection. PMID:29690560

  5. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.

    PubMed

    Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui

    2018-04-22

    Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, the workload and difficulty of power line inspection are increasing. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, an optimal elevation threshold is constructed to remove ground points without resorting to existing filtering algorithms, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on POS data (SPPD) algorithm from "layer" to "block" according to the power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. Local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LiDAR systems in our study. The experimental results demonstrate that an average accuracy of 90.6% and an average precision of 98.2% can be achieved at the point cloud level. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection.
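
    The paper's construction of the optimal elevation threshold is its own contribution, but the general shape of histogram-based ground removal can be sketched as follows; the assumption that the ground forms the dominant low-elevation mode, and the bin width and safety margin, are illustrative.

    ```python
    import numpy as np

    def remove_ground(points: np.ndarray, bin_width: float = 0.2, margin: float = 1.0):
        """Keep points whose z lies above the dominant low-elevation mode plus a margin."""
        z = points[:, 2]
        hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_width, bin_width))
        ground_z = edges[np.argmax(hist)]          # modal elevation, taken as ground level
        return points[z > ground_z + margin]
    ```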

  6. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun

    2017-04-01

    In recent years, updating the inventory of road infrastructures based on field work has been labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework by combining multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into a SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single level (point, segment, or object) based features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
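
    The final labeling stage is a standard supervised classification; a minimal sketch with scikit-learn, using random placeholder vectors in place of the paper's multi-level and contextual features, looks like this.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Placeholder per-object feature vectors and class labels (e.g. lamp, sign, car, ...).
    X_train = np.random.rand(200, 12)
    y_train = np.random.randint(0, 5, 200)

    # Scale features, then fit an RBF-kernel SVM; hyperparameters are illustrative.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    labels = clf.predict(np.random.rand(10, 12))   # label new candidate objects
    ```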

  7. Application of a Terrestrial LIDAR System for Elevation Mapping in Terra Nova Bay, Antarctica.

    PubMed

    Cho, Hyoungsig; Hong, Seunghwan; Kim, Sangmin; Park, Hyokeun; Park, Ilsuk; Sohn, Hong-Gyoo

    2015-09-16

    A terrestrial Light Detection and Ranging (LIDAR) system has high productivity and accuracy for topographic mapping, but the harsh conditions of Antarctica make LIDAR operation difficult. Low temperatures cause malfunctioning of the LIDAR system, and unpredictable strong winds can deteriorate data quality by irregularly shaking co-registration targets. For stable and efficient LIDAR operation in Antarctica, this study proposes and demonstrates the following practical solutions: (1) a lagging cover with a heating pack to maintain the temperature of the terrestrial LIDAR system; (2) co-registration using square planar targets and two-step point-merging methods based on extracted feature points and the Iterative Closest Point (ICP) algorithm; and (3) a georeferencing module consisting of an artificial target and a Global Navigation Satellite System (GNSS) receiver. The solutions were used to produce a topographic map for construction of the Jang Bogo Research Station in Terra Nova Bay, Antarctica. Co-registration and georeferencing precision reached 5 and 45 mm, respectively, and the accuracy of the Digital Elevation Model (DEM) generated from the LIDAR scanning data was ±27.7 cm.
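
    The ICP stage of the two-step point-merging method follows the standard loop: match each point to its nearest neighbor in the reference cloud, then solve the rigid alignment in closed form. A minimal point-to-point version, without the feature-based initialization described in the paper or any outlier rejection, is sketched below.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        """Align `src` (Nx3) to `dst` (Mx3) with point-to-point ICP."""
        tree = cKDTree(dst)
        src = src.copy()
        for _ in range(iters):
            nn = dst[tree.query(src)[1]]             # closest dst point per src point
            cs, cd = src.mean(0), nn.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (nn - cd))
            if np.linalg.det(U @ Vt) < 0:            # guard against reflections
                Vt[-1] *= -1
            R = (U @ Vt).T                           # Kabsch rotation
            src = (src - cs) @ R.T + cd              # apply the rigid transform
        return src
    ```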

  8. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-04

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection in the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and are vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus, the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction is dependent on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and that of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available as a point-of-care testing in clinics as well as general hospitals. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Liquid-liquid equilibria for water + ethanol + 2-methylpropyl ethanoate and water + ethanol + 1,2-dibromoethane at 298. 15 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solimo, H.N.; Barnes de Arreguez, N.G.

    1994-01-01

    Liquid-liquid equilibrium, distribution coefficients, and selectivities of the systems water + ethanol + 2-methylpropyl ethanoate or + 1,2-dibromoethane have been determined at 298.15 K in order to evaluate their suitability in preferentially extracting ethanol from aqueous solution. Tie-line data were satisfactorily correlated by the Othmer and Tobias method, and the plait point coordinates for the two systems were estimated. The experimental data were compared with the values calculated by the NRTL and UNIQUAC models. The water + ethanol + 2-methylpropyl ethanoate system was also compared with the values predicted by the UNIFAC model. Poor qualitative agreement was obtained with these models. From the experimental results, it can be concluded that both solvents are inappropriate for ethanol extraction processes from aqueous solutions.
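
    For reference, one common form of the Othmer-Tobias correlation used for such tie-line consistency checks is ln((1 - w_s)/w_s) = a + b ln((1 - w_w)/w_w), with w_s the solvent mass fraction in the solvent-rich phase and w_w the water mass fraction in the aqueous phase; the sketch below fits placeholder tie-line data (not the paper's values) to this form.

    ```python
    import numpy as np

    w_s = np.array([0.90, 0.85, 0.80, 0.75])    # hypothetical solvent-rich-phase fractions
    w_w = np.array([0.95, 0.92, 0.88, 0.84])    # hypothetical aqueous-phase water fractions

    x = np.log((1 - w_w) / w_w)
    y = np.log((1 - w_s) / w_s)
    b, a = np.polyfit(x, y, 1)                  # slope b and intercept a of the correlation
    r2 = np.corrcoef(x, y)[0, 1] ** 2           # near-unit r2 indicates consistent tie lines
    ```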

  10. Personal authentication using hand vein triangulation and knuckle shape.

    PubMed

    Kumar, Ajay; Prathyusha, K Venkata

    2009-09-01

    This paper presents a new approach to authenticate individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for the image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate the individuals. The experimental results from the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.
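
    Weighted score-level fusion of the two matchers reduces to a convex combination of their similarity scores; the weight and decision threshold below are illustrative, not the tuned values from the paper.

    ```python
    def fuse(vein_score: float, knuckle_score: float, w: float = 0.6) -> float:
        """Combine two similarity scores in [0, 1]; higher means a better match."""
        return w * vein_score + (1 - w) * knuckle_score

    # Accept or reject against a global threshold (values are placeholders).
    accept = fuse(0.82, 0.67) > 0.7
    ```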

  11. Inertial navigation sensor integrated obstacle detection system

    NASA Technical Reports Server (NTRS)

    Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)

    1992-01-01

    A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.

  12. Application of a Two Camera Video Imaging System to Three-Dimensional Vortex Tracking in the 80- by 120-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1993-01-01

    A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
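
    The epipolar constraint behind the first enhancement can be stated directly: a candidate match x2 for a point x1 must lie near the epipolar line F x1, where F is the fundamental matrix implied by the calibrated two-camera geometry. The function names and tolerance in this sketch are assumptions.

    ```python
    import numpy as np

    def epipolar_distance(F, x1, x2):
        """Distance from x2 to the epipolar line F @ x1 (both points homogeneous)."""
        l = F @ x1
        return abs(x2 @ l) / np.hypot(l[0], l[1])

    def best_match(F, x1, candidates, tol=1.5):
        """Pick the candidate in image 2 closest to the epipolar line of x1."""
        d = [epipolar_distance(F, x1, c) for c in candidates]
        i = int(np.argmin(d))
        return candidates[i] if d[i] < tol else None   # reject if no candidate is close
    ```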

  13. Effect of increasing doses of saw palmetto extract on lower urinary tract symptoms: a randomized trial.

    PubMed

    Barry, Michael J; Meleth, Sreelatha; Lee, Jeannette Y; Kreder, Karl J; Avins, Andrew L; Nickel, J Curtis; Roehrborn, Claus G; Crawford, E David; Foster, Harris E; Kaplan, Steven A; McCullough, Andrew; Andriole, Gerald L; Naslund, Michael J; Williams, O Dale; Kusek, John W; Meyers, Catherine M; Betz, Joseph M; Cantor, Alan; McVary, Kevin T

    2011-09-28

    Saw palmetto fruit extracts are widely used for treating lower urinary tract symptoms attributed to benign prostatic hyperplasia (BPH); however, recent clinical trials have questioned their efficacy, at least at standard doses (320 mg/d). To determine the effect of saw palmetto extract (Serenoa repens, from saw palmetto berries) at up to 3 times the standard dose on lower urinary tract symptoms attributed to BPH. A double-blind, multicenter, placebo-controlled randomized trial at 11 North American clinical sites conducted between June 5, 2008, and October 10, 2010, of 369 men aged 45 years or older, with a peak urinary flow rate of at least 4 mL/s, an American Urological Association Symptom Index (AUASI) score of between 8 and 24 at 2 screening visits, and no exclusions. One, 2, and then 3 doses (320 mg/d) of saw palmetto extract or placebo, with dose increases at 24 and 48 weeks. Difference in AUASI score between baseline and 72 weeks. Secondary outcomes included measures of urinary bother, nocturia, peak uroflow, postvoid residual volume, prostate-specific antigen level, participants' global assessments, and indices of sexual function, continence, sleep quality, and prostatitis symptoms. Between baseline and 72 weeks, mean AUASI scores decreased from 14.42 to 12.22 points (-2.20 points; 95% CI, -3.04 to -1.36) [corrected] with saw palmetto extract and from 14.69 to 11.70 points (-2.99 points; 95% CI, -3.81 to -2.17) with placebo. The group mean difference in AUASI score change from baseline to 72 weeks between the saw palmetto extract and placebo groups was 0.79 points favoring placebo (upper bound of the 1-sided 95% CI most favorable to saw palmetto extract was 1.77 points, 1-sided P = .91). Saw palmetto extract was no more effective than placebo for any secondary outcome. No clearly attributable adverse effects were identified. Increasing doses of a saw palmetto fruit extract did not reduce lower urinary tract symptoms more than placebo. clinicaltrials.gov Identifier: NCT00603304.

  14. Semantic data association for planar features in outdoor 6D-SLAM using lidar

    NASA Astrophysics Data System (ADS)

    Ulas, C.; Temeltas, H.

    2013-05-01

    Simultaneous Localization and Mapping (SLAM) is a fundamental problem for autonomous systems in GPS (Global Positioning System) denied environments. Traditional probabilistic SLAM methods use point features as landmarks and hold all the feature positions in their state vector in addition to the robot pose. The bottleneck of point-feature based SLAM methods is the data association problem, which is mostly solved with a statistical measure. The data association performance is critical for a robust SLAM method since all the filtering strategies are applied after a known correspondence. With point features, two different but very close landmarks in the same scene might be confused in the correspondence decision when only their positions and error covariance matrices are taken into account. Instead of point features, planar features can be considered as an alternative landmark model in the SLAM problem to provide a more consistent data association. Planes contain rich information for the solution of the data association problem and can be distinguished more easily than point features. In addition, planar maps are very compact since an environment has only a very limited number of planar structures. The planar features do not have to be large structures like building walls or roofs; small plane segments can also be used as landmarks, such as billboards, traffic posts, and some parts of bridges in urban areas. In this paper, a probabilistic plane-feature extraction method from 3D LiDAR data and a data association based on the extracted semantic information of the planar features are introduced. The experimental results show that the semantic data association provides very satisfactory results in outdoor 6D-SLAM.
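
    The paper's probabilistic plane extraction is specific to its method, but a standard RANSAC plane fit conveys the basic landmark-extraction step; the iteration count and inlier tolerance below are placeholders.

    ```python
    import numpy as np

    def ransac_plane(points, iters=200, tol=0.05, rng=np.random.default_rng(0)):
        """Fit a plane (n, d) with n.p + d = 0 to an Nx3 point set by RANSAC."""
        best_inliers = np.zeros(len(points), bool)
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:                                 # degenerate (collinear) sample
                continue
            n /= norm
            inliers = np.abs((points - p0) @ n) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        c = points[best_inliers].mean(0)                    # refine on inliers via SVD
        n = np.linalg.svd(points[best_inliers] - c)[2][-1]
        return n, -n @ c, best_inliers
    ```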

  15. Synergistic cloud point extraction behavior of aluminum(III) with 2-methyl-8-quinolinol and 3,5-dichlorophenol.

    PubMed

    Ohashi, Akira; Tsuguchi, Akira; Imura, Hisanori; Ohashi, Kousaburo

    2004-07-01

    The cloud point extraction behavior of aluminum(III) with 8-quinolinol (HQ) or 2-methyl-8-quinolinol (HMQ) and Triton X-100 was investigated in the absence and presence of 3,5-dichlorophenol (Hdcp). Aluminum(III) was almost completely extracted with HQ and 4(v/v)% Triton X-100 above pH 5.0, but was not extracted with HMQ-Triton X-100. However, in the presence of Hdcp, it was almost quantitatively extracted with HMQ-Triton X-100. The synergistic effect of Hdcp on the extraction of aluminum(III) with HMQ and Triton X-100 may be caused by the formation of a mixed-ligand complex, Al(dcp)(MQ)2.

  16. Comparison of results from simple expressions for MOSFET parameter extraction

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Lin, Y.-S.

    1988-01-01

    In this paper results are compared from a parameter extraction procedure applied to the linear, saturation, and subthreshold regions for enhancement-mode MOSFETs fabricated in a 3-micron CMOS process. The results indicate that the extracted parameters differ significantly depending on the extraction algorithm and the distribution of I-V data points. It was observed that KP values vary by 30 percent, VT values differ by 50 mV, and Delta L values differ by 1 micron. Thus for acceptance of wafers from foundries and for modeling purposes, the extraction method and data point distribution must be specified. In this paper measurement and extraction procedures that will allow a consistent evaluation of measured parameters are discussed.
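
    As a point of comparison, the most elementary linear-region extraction works as follows: with a small fixed V_DS, I_D is approximately KP (W/L)(V_GS - V_T) V_DS, so a straight-line fit of I_D versus V_GS yields V_T from the intercept and KP from the slope. The sketch below uses hypothetical data; the paper compares several more elaborate algorithms.

    ```python
    import numpy as np

    W_over_L, V_DS = 10.0, 0.1            # assumed device geometry and drain bias
    V_GS = np.array([1.0, 1.5, 2.0, 2.5, 3.0])                # gate voltages (V)
    I_D  = np.array([0.02, 0.07, 0.12, 0.17, 0.22]) * 1e-3    # hypothetical currents (A)

    slope, intercept = np.polyfit(V_GS, I_D, 1)
    V_T = -intercept / slope              # x-axis intercept of the fitted line (V)
    KP  = slope / (W_over_L * V_DS)       # transconductance parameter (A/V^2)
    ```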

  17. Evaluation of plasmid and genomic DNA calibrants used for the quantification of genetically modified organisms.

    PubMed

    Caprioara-Buda, M; Meyer, W; Jeynov, B; Corbisier, P; Trapmann, S; Emons, H

    2012-07-01

    The reliable quantification of genetically modified organisms (GMOs) by real-time PCR requires, besides thoroughly validated quantitative detection methods, sustainable calibration systems. The latter establish the anchor points for the measured value and the measurement unit, respectively. In this paper, the suitability of two types of DNA calibrants, i.e. plasmid DNA and genomic DNA extracted from plant leaves, for the certification of the GMO content in reference materials as copy number ratio between two targeted DNA sequences was investigated. The PCR efficiencies and coefficients of determination of the calibration curves as well as the measured copy number ratios for three powder certified reference materials (CRMs), namely ERM-BF415e (NK603 maize), ERM-BF425c (356043 soya), and ERM-BF427c (98140 maize), originally certified for their mass fraction of GMO, were compared for both types of calibrants. In all three systems investigated, the PCR efficiencies of plasmid DNA were slightly closer to the PCR efficiencies observed for the genomic DNA extracted from seed powders rather than those of the genomic DNA extracted from leaves. Although the mean DNA copy number ratios for each CRM overlapped within their uncertainties, the DNA copy number ratios were significantly different using the two types of calibrants. Based on these observations, both plasmid and leaf genomic DNA calibrants would be technically suitable as anchor points for the calibration of the real-time PCR methods applied in this study. However, the most suitable approach to establish a sustainable traceability chain is to fix a reference system based on plasmid DNA.

  18. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    NASA Astrophysics Data System (ADS)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage and difference from the existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient visualization system of data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from the spectrum extraction for several stars which are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. In the current version, the optimal extraction of 2D spectra with an aperture and the Gaussian function as PSF (point spread function) is proposed. In the future, the package will be supplemented with the possibility to build a PSF based on a Moffat function. We present the details of the GUI, illustrate main features of the package, and show extraction results for several interesting objects observed with different telescopes.
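
    Gaussian-PSF optimal extraction follows the familiar variance-weighted scheme (in the style of Horne 1986): each cross-dispersion cut is collapsed with weights proportional to the spatial profile divided by the noise variance. A minimal sketch, assuming the profile center and width are already known, is given below.

    ```python
    import numpy as np

    def extract_column(data, var, center, sigma):
        """Variance-weighted flux estimate for one cross-dispersion cut.

        data, var : 1D arrays of pixel values and their noise variances
        center, sigma : assumed Gaussian profile parameters (pixels)
        """
        y = np.arange(data.size)
        P = np.exp(-0.5 * ((y - center) / sigma) ** 2)
        P /= P.sum()                                   # normalized spatial profile
        w = P / var
        return (w * data).sum() / (w * P).sum()        # optimal flux estimate
    ```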

  19. Exact extraction method for road rutting laser lines

    NASA Astrophysics Data System (ADS)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and pavement administration. The shortcomings of the existing rutting detection methods are discussed, and a new rutting line-laser extraction method based on the peak intensity characteristic and peak continuity is proposed. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, based on peak intensity calculation over the whole road image, is obtained to determine the seed point of the rutting laser line. Starting from the seed point, the light points of the rutting line-laser are extracted based on the features of peak continuity, providing accurate basic data for the subsequent calculation of pavement rutting depths.

  20. Highly efficient maximum power point tracking using DC-DC coupled inductor single-ended primary inductance converter for photovoltaic power systems

    NASA Astrophysics Data System (ADS)

    Quamruzzaman, M.; Mohammad, Nur; Matin, M. A.; Alam, M. R.

    2016-10-01

    Solar photovoltaics (PVs) have nonlinear voltage-current characteristics, with a distinct maximum power point (MPP) depending on factors such as solar irradiance and operating temperature. To extract maximum power from the PV array at any environmental condition, DC-DC converters are usually used as MPP trackers. This paper presents the performance analysis of a coupled inductor single-ended primary inductance converter for maximum power point tracking (MPPT) in a PV system. A detailed model of the system has been designed and developed in MATLAB/Simulink. The performance evaluation has been conducted on the basis of stability, current ripple reduction and efficiency at different operating conditions. Simulation results show considerable ripple reduction in the input and output currents of the converter. Both the MPPT and converter efficiencies are significantly improved. The obtained simulation results validate the effectiveness and suitability of the converter model in MPPT and show reasonable agreement with the theoretical analysis.
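
    The paper evaluates the converter itself rather than a specific tracking law, but a perturb-and-observe loop is the classic MPPT companion to such a converter; the step size and the measurement/actuation interface below are assumptions.

    ```python
    def perturb_and_observe(read_pv, set_duty, d0=0.5, step=0.01, iters=1000):
        """Climb the P-V curve by nudging the converter duty cycle.

        read_pv  : callable returning the measured PV (voltage, current)
        set_duty : callable applying a duty cycle to the converter
        """
        d, p_prev, direction = d0, 0.0, 1
        for _ in range(iters):
            set_duty(d)
            v, i = read_pv()
            p = v * i
            if p < p_prev:                   # power dropped: reverse the perturbation
                direction = -direction
            d = min(max(d + direction * step, 0.05), 0.95)   # clamp the duty cycle
            p_prev = p
        return d
    ```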

  1. Three-dimensional tracking for efficient fire fighting in complex situations

    NASA Astrophysics Data System (ADS)

    Akhloufi, Moulay; Rossi, Lucile

    2009-05-01

    Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools that permit the prediction of fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into different homogeneous zones using clustering techniques. Contours are then extracted and a feature detection algorithm is used to detect interest points like local maxima and corners. Extracted points from stereo images are then used to compute the 3D shape of the fire front. The resulting data are used to build the fire volume. The final model is used to compute important spatial and temporal fire characteristics, such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme. This scheme is being integrated with a fire spread mathematical model in order to predict and anticipate the fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
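
    The combined YUV/RGB decision can be sketched as a set of per-pixel color rules; the specific inequalities and luma threshold below are illustrative stand-ins for the rules developed in the paper.

    ```python
    import numpy as np

    def fire_mask(rgb: np.ndarray) -> np.ndarray:
        """Boolean mask of fire-colored pixels from an HxWx3 float RGB image in [0, 1]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b        # luma
        u = b - y                                    # blue-difference chroma (unscaled)
        v = r - y                                    # red-difference chroma (unscaled)
        # Flames are bright, red-dominant pixels with strong red chroma.
        return (r > g) & (g > b) & (v > u) & (y > 0.4)
    ```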

  2. Method for separating water soluble organics from a process stream by aqueous biphasic extraction

    DOEpatents

    Chaiko, David J.; Mego, William A.

    1999-01-01

    A method for separating water-miscible organic species from a process stream by aqueous biphasic extraction is provided. An aqueous biphase system is generated by contacting a process stream comprised of water, salt, and organic species with an aqueous polymer solution. The organic species transfer from the salt-rich phase to the polymer-rich phase, and the phases are separated. Next, the polymer is recovered from the loaded polymer phase by selectively extracting the polymer into an organic phase at an elevated temperature, while the organic species remain in a substantially salt-free aqueous solution. Alternatively, the polymer is recovered from the loaded polymer by a temperature induced phase separation (cloud point extraction), whereby the polymer and the organic species separate into two distinct solutions. The method for separating water-miscible organic species is applicable to the treatment of industrial wastewater streams, including the extraction and recovery of complexed metal ions from salt solutions, organic contaminants from mineral processing streams, and colorants from spent dye baths.

  3. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data consisted of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method consists of the following steps: the ground points are determined first. As a second step, cylinders are fitted in a vertical slice at 1-1.5 m relative height above ground, which is used to determine the potential location of each single tree's trunk and of cylinder-like objects. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists and city planners, and for planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. The MLS data used in this project had been measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
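
    Fitting circles to incompletely scanned trunk slices is a small least-squares problem; the algebraic (Kasa) fit below is one common choice, shown here only as an illustration of that step.

    ```python
    import numpy as np

    def fit_circle(xy: np.ndarray):
        """Return (cx, cy, r) minimizing the algebraic circle residual for Nx2 points.

        Uses the linearization x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
        """
        A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
        b = (xy ** 2).sum(axis=1)
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return cx, cy, r                       # trunk diameter is then 2 * r
    ```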

  4. Self-powered switch-controlled nucleic acid extraction system.

    PubMed

    Han, Kyungsup; Yoon, Yong-Jin; Shin, Yong; Park, Mi Kyoung

    2016-01-07

    Over the past few decades, lab-on-a-chip (LOC) technologies have played a great role in revolutionizing the way in vitro medical diagnostics are conducted and transforming bulky and expensive laboratory instruments and labour-intensive tests into easy to use, cost-effective miniaturized systems with faster analysis time, which can be used for near-patient or point-of-care (POC) tests. Fluidic pumps and valves are among the key components for LOC systems; however, they often require on-line electrical power or batteries and make the whole system bulky and complex, therefore limiting its application to POC testing especially in low-resource setting. This is particularly problematic for molecular diagnostics where multi-step sample processing (e.g. lysing, washing, elution) is necessary. In this work, we have developed a self-powered switch-controlled nucleic acid extraction system (SSNES). The main components of SSNES are a powerless vacuum actuator using two disposable syringes and a switchgear made of PMMA blocks and an O-ring. In the vacuum actuator, an opened syringe and a blocked syringe are bound together and act as a working syringe and an actuating syringe, respectively. The negative pressure in the opened syringe is generated by a restoring force of the compressed air inside the blocked syringe and utilized as the vacuum source. The Venus symbol shape of the switchgear provides multiple functions including being a reagent reservoir, a push-button for the vacuum actuator, and an on-off valve. The SSNES consists of three sets of vacuum actuators, switchgears and microfluidic components. The entire system can be easily fabricated and is fully disposable. We have successfully demonstrated DNA extraction from a urine sample using a dimethyl adipimidate (DMA)-based extraction method and the performance of the DNA extraction has been confirmed by genetic (HRAS) analysis of DNA biomarkers from the extracted DNAs using the SSNES. Therefore, the SSNES can be widely used as a powerless and disposable system for DNA extraction and the syringe-based vacuum actuator would be easily utilized for diverse applications with various microchannels as a powerless fluidic pump.

  5. Transmembrane myosin chitin synthase involved in mollusc shell formation produced in Dictyostelium is active

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenitzer, Veronika; Universitaet Regensburg, Biochemie I, Universitaetsstrasse 31, D-93053 Regensburg; Eichner, Norbert

    Highlights: ► Dictyostelium produces the 264 kDa myosin chitin synthase of the bivalve mollusc Atrina. ► Chitin synthase activity releases chitin, partly associated with the cell surface. ► Membrane extracts of transgenic slime molds produce radiolabeled chitin in vitro. ► Chitin-producing Dictyostelium cells can be characterized by atomic force microscopy. ► This model system enables us to study initial processes of chitin biomineralization. -- Abstract: Several mollusc shells contain chitin, which is formed by a transmembrane myosin motor enzyme. This protein could be involved in sensing mechanical and structural changes of the forming, mineralizing extracellular matrix. Here we report the heterologous expression of the transmembrane myosin chitin synthase Ar-CS1 of the bivalve mollusc Atrina rigida (2286 amino acid residues, M.W. 264 kDa/monomer) in Dictyostelium discoideum, a model organism for myosin motor proteins. Confocal laser scanning immunofluorescence microscopy (CLSM), chitin-binding GFP detection of chitin on cells and released to the cell culture medium, and a radiochemical activity assay of membrane extracts revealed expression and enzymatic activity of the mollusc chitin synthase in transgenic slime mold cells. First high-resolution atomic force microscopy (AFM) images of Ar-CS1-transformed, cellulose synthase-deficient D. discoideum dcsA⁻ cell lines are shown.

  6. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
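
    Once corresponding points are registered, ranging in a rectified stereo pair reduces to Z = f B / d for disparity d, focal length f (in pixels), and baseline B; the numbers below are illustrative.

    ```python
    def depth_from_disparity(x_left: float, x_right: float,
                             focal_px: float, baseline_m: float) -> float:
        """Range to a scene point from its horizontal pixel offset between cameras."""
        disparity = x_left - x_right          # pixels; positive for finite range
        return focal_px * baseline_m / disparity

    # Example: 15.5 px disparity, 800 px focal length, 0.3 m baseline -> ~15.5 m range.
    z = depth_from_disparity(412.0, 396.5, focal_px=800.0, baseline_m=0.3)
    ```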

  7. Ionic liquid-based aqueous biphasic systems as a versatile tool for the recovery of antioxidant compounds.

    PubMed

    Santos, João H; e Silva, Francisca A; Ventura, Sónia P M; Coutinho, João A P; de Souza, Ranyere L; Soares, Cleide M F; Lima, Álvaro S

    2015-01-01

    This work focuses on the comparative evaluation of distinct types of ionic liquid-based aqueous biphasic systems (IL-ABS) and of more conventional polymer/salt-based ABS for the extraction of two antioxidants, eugenol and propyl gallate. In a first approach, IL-ABS composed of ILs and potassium citrate (C6H5K3O7/C6H8O7) buffer at pH 7 were applied to the extraction of the two antioxidants, enabling the assessment of the impact of the IL cation core on the extraction. The second approach uses ABS composed of polyethylene glycol (PEG) and potassium phosphate (K2HPO4/KH2PO4) buffer at pH 7 with imidazolium-based ILs as adjuvants. Their application to the extraction of the compounds allowed the investigation of the impact of the presence/absence of the IL, the PEG molecular weight, and the alkyl side-chain length of the imidazolium cation on the partition. It is possible to maximize the extraction performance for both antioxidants up to 100% using both types of IL-ABS. The IL enhances the performance of ABS technology. The data highlight the pivotal role of the appropriate selection of ABS components and design in developing a successful extraction process, from both the environmental and the performance points of view. © 2014 American Institute of Chemical Engineers.

  8. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  9. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.

  10. Design of barrier bucket kicker control system

    NASA Astrophysics Data System (ADS)

    Ni, Fa-Fu; Wang, Yan-Yu; Yin, Jun; Zhou, De-Tai; Shen, Guo-Dong; Zheng, Yang-De; Zhang, Jian-Chuan; Yin, Jia; Bai, Xiao; Ma, Xiao-Li

    2018-05-01

    The Heavy-Ion Research Facility in Lanzhou (HIRFL) contains two synchrotrons: the main cooler storage ring (CSRm) and the experimental cooler storage ring (CSRe). Beams are extracted from CSRm and injected into CSRe. To apply the Barrier Bucket (BB) method to CSRe beam accumulation, a new BB-technology based kicker control system was designed and implemented. The controller of the system is implemented using an Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) chip and a field-programmable gate array (FPGA) chip. Within this architecture, the ARM is responsible for data presetting and floating-point arithmetic processing. The FPGA computes the RF phase point of the two rings and offers more accurate control of the time delay. An online preliminary experiment on HIRFL was also designed to verify the functionalities of the control system. The result shows that the reference trigger point of two different sinusoidal RF signals for an arbitrary phase point was acquired with a matched phase error below 1° (approximately 2.1 ns), and a step delay time better than 2 ns was realized.

  11. Spatiotemporal attention operator using isotropic contrast and regional homogeneity

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman; Lakhssassi, Ahmed; Zaremba, Marek

    2011-04-01

    A multiscale operator for spatiotemporal isotropic attention is proposed to reliably extract attention points during image sequence analysis. Its consecutive local maxima indicate attention points as the centers of image fragments of variable size with high intensity contrast, region homogeneity, regional shape saliency, and temporal change presence. The scale-adaptive estimation of temporal change (motion) and its aggregation with the regional shape saliency contribute to the accurate determination of attention points in image sequences. Multilocation descriptors of an image sequence are extracted at the attention points in the form of a set of multidimensional descriptor vectors. A fast recursive implementation is also proposed to make the operator's computational complexity independent of the spatial scale size, which is the window size in the spatial averaging filter. Experiments on the accuracy of attention-point detection have proved the operator's consistency and its high potential for multiscale feature extraction from image sequences.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.

    Aqueous solutions of nonionic surfactants are known to undergo phase separations at elevated temperatures. This phenomenon is known as "clouding," and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-β-cyclodextrin (PMHP-β-CD) was synthesized, and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-β-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2′-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-β-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.

  13. On-line lab-in-syringe cloud point extraction for the spectrophotometric determination of antimony.

    PubMed

    Frizzarin, Rejane M; Portugal, Lindomar A; Estela, José M; Rocha, Fábio R P; Cerdà, Victor

    2016-02-01

    Most procedures for antimony determination require time-consuming sample preparation (e.g. liquid-liquid extraction with organic solvents), which is harmful to the environment. Because of the high toxicity of antimony, a rapid, sensitive and greener procedure for its determination is necessary. The goal of this work was to develop an analytical procedure exploiting, for the first time, cloud point extraction in a lab-in-syringe flow system for the spectrophotometric determination of antimony. The procedure was based on the formation of an ion pair between the antimony-iodide complex and H⁺, followed by extraction with Triton X-114. The factorial design showed that the concentrations of ascorbic acid, H2SO4 and Triton X-114, as well as second and third order interactions, were significant at the 95% confidence level. A Box-Behnken design was applied to obtain the response surfaces and to identify the critical values. The system is robust at the 95% confidence level. A linear response was observed from 5 to 50 µg L⁻¹, described by the equation A = 0.137 + 0.050C(Sb) (r = 0.998). The detection limit (99.7% confidence level), the coefficient of variation (n = 5; 15 µg L⁻¹) and the sampling rate were estimated at 1.8 µg L⁻¹, 1.6% and 16 h⁻¹, respectively. The procedure allows quantification of antimony at the concentrations established by environmental legislation (6 µg L⁻¹) and was successfully applied to the determination of antimony in freshwater samples and antileishmanial drugs, yielding results in agreement with those obtained by HGFAAS at the 95% confidence level. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. TEES 2.2: Biomedical Event Extraction for Diverse Corpora

    PubMed Central

    2015-01-01

    Background The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. Results The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event types, allowing immediate adaptation to the various subtasks and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, making a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. Conclusions The TEES system was introduced for the BioNLP'09 Shared Task and has since demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks, an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction is presented. PMID:26551925

  15. TEES 2.2: Biomedical Event Extraction for Diverse Corpora.

    PubMed

    Björne, Jari; Salakoski, Tapio

    2015-01-01

    The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event types, allowing immediate adaptation to the various subtasks and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, making a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. The TEES system was introduced for the BioNLP'09 Shared Task and has since demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks, an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction is presented.
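
    Both TEES records above mention a scikit-learn ensemble used to analyze feature importances. A generic illustration of that kind of analysis, with placeholder data rather than TEES's actual feature set or pipeline:

        import numpy as np
        from sklearn.ensemble import ExtraTreesClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))                  # 200 examples, 5 placeholder features
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # labels driven by features 0 and 2

        clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
        for i in np.argsort(clf.feature_importances_)[::-1]:
            print(f"feature {i}: importance {clf.feature_importances_[i]:.3f}")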

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierret, C.; Maunoury, L.; Biri, S.

    The goal of this article is to present simulations of extraction from an electron cyclotron resonance ion source (ECRIS). The aim of this work is to find an extraction system that reduces the emittances and increases the current of the extracted ion beam at the focal point of the analyzing dipole. First, however, suitable software able to reproduce the specific physics of an ion beam had to be identified. To perform the simulations, the following software packages were tested: SIMION 3D, AXCEL, CPO 3D and, especially for the magnetic field calculation, MATHEMATICA coupled with the RADIA module. Emittance calculations have been done with two types of ECRIS, one with a hexapole and one without, and the difference is discussed.

  17. Extracting nursing practice patterns from structured labor and delivery data sets.

    PubMed

    Hall, Eric S; Thornton, Sidney N

    2007-10-11

    This study was designed to demonstrate the feasibility of a computerized care process model that provides real-time case profiling and outcome forecasting. A methodology was defined for extracting nursing practice patterns from structured point-of-care data collected using the labor and delivery information system at Intermountain Healthcare. Data collected during January 2006 were retrieved from Intermountain Healthcare's enterprise data warehouse for use in the study. The knowledge discovery in databases process provided a framework for data analysis including data selection, preprocessing, data-mining, and evaluation. Development of an interactive data-mining tool and construction of a data model for stratification of patient records into profiles supported the goals of the study. Five benefits of the practice pattern extraction capability, which extend to other clinical domains, are listed with supporting examples.

  18. Integration of carboxyl modified magnetic particles and aqueous two-phase extraction for selective separation of proteins.

    PubMed

    Gai, Qingqing; Qu, Feng; Zhang, Tao; Zhang, Yukui

    2011-07-15

    Both magnetic particle adsorption and aqueous two-phase extraction (ATPE) are simple, fast and low-cost methods for protein separation. Selective protein adsorption by carboxyl modified magnetic particles was investigated as a function of protein isoelectric point, solution pH and ionic strength. A PEG/sulphate aqueous two-phase system exhibited selective separation and extraction of proteins before and after magnetic adsorption. The two combined workflows, magnetic adsorption followed by ATPE and ATPE followed by magnetic adsorption, were discussed and compared for the separation of a protein mixture of lysozyme, bovine serum albumin, trypsin, cytochrome c and myoglobin. Magnetic adsorption followed by ATPE was also applied to human serum separation. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. A new dispersive liquid-liquid microextraction using ionic liquid based microemulsion coupled with cloud point extraction for determination of copper in serum and water samples.

    PubMed

    Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem

    2016-04-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for the preconcentration of copper (Cu²⁺) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method, a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu²⁺ in real water and acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. Phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters, such as pH, oxine concentration, and centrifugation time and rate, was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with a relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between the obtained and certified values of Cu²⁺. The developed procedure was successfully applied for the preconcentration and determination of trace levels of Cu²⁺ in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.

  1. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    PubMed Central

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  2. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To achieve the analysis of complex 3D urban scenes, the acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, the task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed and shown to be effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method with respect to different methods is analyzed. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.

  3. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe, compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract focuses on the tools developed for the automated extraction of morphological features from terrestrial lidar data, while the response of the dune is presented by Brodie and Spore in an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to the time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and bare earth points are then interpolated onto a rectilinear 0.5 m grid, creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe; beach and berm morphology is then extracted seaward of the dune toe, and foredune morphology is extracted landward of it. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face, and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
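
    A simplified sketch of the curvature-based dune-toe pick on one gridded cross-shore transect: curvature is approximated by the second derivative of the elevation profile, and the toe is taken at the strongest concave-up bend. The 0.5 m spacing matches the DEM grid described above; the profile values are synthetic.

        import numpy as np

        def dune_toe_index(z: np.ndarray, dx: float = 0.5) -> int:
            """Index of maximum concave-up curvature along an elevation transect."""
            curvature = np.gradient(np.gradient(z, dx), dx)  # approximate d2z/dx2
            return int(np.argmax(curvature))

        z = np.array([1.0, 1.1, 1.2, 1.3, 1.6, 2.4, 3.6, 5.0, 6.2, 7.0])
        print(dune_toe_index(z))  # where the beach turns up into the dune face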

  4. A quantitative study of gully erosion based on object-oriented analysis techniques: a case study in Beiyanzikou catchment of Qixia, Shandong, China.

    PubMed

    Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo

    2014-01-01

    This paper took a subregion of a small watershed gully system in the Beiyanzikou catchment of Qixia, China, as a study area and, using object-orientated image analysis (OBIA), extracted the shoulder lines of gullies from high spatial resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder lines of the gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract information on gullies; the average distance between points field-measured along the gully edges and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m², 5074.1790 m³ and 1316.1250 m², 1591.5784 m³, respectively. The results of the study provide a new method for the quantitative study of small gully erosion.

  5. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
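
    The extraction step described above amounts to weighted linear least squares: each pixel is modeled as a linear combination of the wavelength-bin contents, and minimizing the chi-square over the bin contents is a standard solve. A sketch with a stand-in response matrix (in the real pipeline, R would come from the fast simulator's tabulated pixel responses):

        import numpy as np

        def extract_spectrum(R: np.ndarray, d: np.ndarray, sigma: np.ndarray) -> np.ndarray:
            """Minimize chi^2 = sum(((d - R @ s) / sigma)**2) over bin contents s."""
            w = 1.0 / sigma                       # per-pixel weights
            s, *_ = np.linalg.lstsq(R * w[:, None], d * w, rcond=None)
            return s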

  6. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  7. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots will require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions can be extracted as feature points by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark provided sufficient image measurement accuracy for space robots to approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.

  8. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
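
    The clustering feature described above is the triple (N, LS, SS); two clusters merge by adding their triples, and the centroid and radius follow without revisiting raw points. A minimal sketch of that data structure:

        import numpy as np

        class ClusteringFeature:
            """CF = (N, LS, SS): count, linear sum, and square sum of the points."""

            def __init__(self, point):
                p = np.asarray(point, dtype=float)
                self.n, self.ls, self.ss = 1, p.copy(), float(p @ p)

            def merge(self, other):
                # Merging is additive, so the tree absorbs new points cheaply.
                self.n += other.n
                self.ls += other.ls
                self.ss += other.ss

            def centroid(self):
                return self.ls / self.n

            def radius(self):
                # RMS distance of member points from the centroid
                c = self.centroid()
                return np.sqrt(max(self.ss / self.n - c @ c, 0.0))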

  9. Chemical and biological extraction of metals present in E waste: A hybrid technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pant, Deepak, E-mail: deepakpant1@rediffmail.com; Joshi, Deepika; Upreti, Manoj K.

    2012-05-15

    Highlights: ► Hybrid methodology for E-waste management. ► Efficient extraction of metals. ► Trace metal extraction is possible. - Abstract: Metal pollution associated with E-waste is widespread across the globe. Currently used techniques for the extraction of metals from E-waste, using either chemical or biological leaching, have their own limitations. Chemical leaching is rapid and efficient but has its own environmental consequences, and even the future prospects of associated nanoremediation are uncertain. Biological leaching, on the other hand, is comparatively cost-effective, but at the same time it is time-consuming, and in most cases complete recovery of the metal by biological leaching alone is not possible. The current review addresses the individual issues related to chemical and biological extraction techniques and proposes a hybrid methodology which incorporates both, along with safer chemicals and compatible microbes, for better and more efficient extraction of metals from E-waste.

  10. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
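
    A sketch of the local paraboloid step, assuming the neighborhood has already been expressed in a frame with z along the surface normal: fit z = ax² + bxy + cy² + dx + ey + f by least squares and take the eigenvalues of the fitted Hessian as the two principal curvatures. Plain least squares stands in here for the weighted moving-least-squares fit used in the paper.

        import numpy as np

        def principal_curvatures(pts: np.ndarray):
            """pts: (n, 3) neighborhood in a local frame; returns (k1, k2)."""
            x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
            A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
            a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
            H = np.array([[2 * a, b], [b, 2 * c]])   # Hessian of the fitted surface
            k1, k2 = np.linalg.eigvalsh(H)
            return k1, k2   # candidate valley/ridge points show large |k| of one sign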

  11. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
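
    The geometry underlying such segments is the intersection of two fitted planes: the line direction is the cross product of the normals, and a point on the line solves both plane equations. A sketch of just that building block (not the paper's LSHP fitting itself):

        import numpy as np

        def plane_intersection(n1, d1, n2, d2):
            """Planes are n . x = d. Returns (point, unit direction) or None if parallel."""
            n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
            direction = np.cross(n1, n2)
            if np.linalg.norm(direction) < 1e-12:
                return None                      # parallel or coincident planes
            A = np.vstack([n1, n2, direction])   # third row picks the point nearest the origin
            p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
            return p, direction / np.linalg.norm(direction)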

  12. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for the classification of multispectral scanner images is reported that uses data point averaging to extract and compute estimated proportions for a single averaged data point, in order to classify spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.
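
    Proportion estimation for a single averaged data point can be illustrated as linear unmixing: the average spectrum is modeled as a non-negative mixture of known class signatures, then normalized to sum to one. The signature values below are placeholders, not the report's data.

        import numpy as np
        from scipy.optimize import nnls

        # columns = classes (bare soil, weeds, alfalfa); rows = 4 spectral bands
        signatures = np.array([[0.2, 0.5, 0.9, 0.4],
                               [0.4, 0.8, 0.3, 0.2],
                               [0.6, 0.7, 0.5, 0.3]]).T

        avg_pixel = signatures @ np.array([0.5, 0.3, 0.2])   # synthetic mixed pixel
        coeffs, _ = nnls(signatures, avg_pixel)              # non-negative least squares
        print(coeffs / coeffs.sum())                         # recovers ~(0.5, 0.3, 0.2)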

  13. Influence of different extraction methods on the yield and linalool content of the extracts of Eugenia uniflora L.

    PubMed

    Galhiane, Mário S; Rissato, Sandra R; Chierice, Gilberto O; Almeida, Marcos V; Silva, Letícia C

    2006-09-15

    This work was developed using a wild fruit tree native to the Brazilian forest, Eugenia uniflora L., of the Myrtaceae family. The main goal of the analytical study was focused on the extraction methods themselves. Method development pointed to Clevenger extraction as giving the best yield relative to SFE and Soxhlet. The SFE method presented a good yield but showed a large number of components in the final extract, demonstrating low selectivity. The extracted essential oil was analyzed by GC/FID, revealing compounds over a wide range of polarities and boiling points, among which linalool, a widely used compound, was identified. Furthermore, an analytical solid phase extraction method was used to clean up the extract and obtain separated classes of compounds, which were fractionated and studied by GC/FID and GC/MS.

  14. Diagnostic Performance of Electronic Syndromic Surveillance Systems in Acute Care

    PubMed Central

    Kashiouris, M.; O’Horo, J.C.; Pickering, B.W.; Herasevich, V.

    2013-01-01

    Context Healthcare Electronic Syndromic Surveillance (ESS) is the systematic collection, analysis and interpretation of ongoing clinical data with subsequent dissemination of results, which aid clinical decision-making. Objective To evaluate, classify and analyze the diagnostic performance, strengths and limitations of existing acute care ESS systems. Data Sources All studies available to us in the Ovid MEDLINE, Ovid EMBASE, CINAHL and Scopus databases, from as early as January 1972 through the first week of September 2012. Study Selection Prospective and retrospective trials examining the diagnostic performance of inpatient ESS and providing objective diagnostic data, including sensitivity, specificity, and positive and negative predictive values. Data Extraction Two independent reviewers extracted diagnostic performance data on ESS systems, including clinical area, number of decision points, sensitivity and specificity. Positive and negative likelihood ratios were calculated for each healthcare ESS system. A likelihood matrix summarizing the performance of the various ESS systems was created. Results The described search strategy yielded 1639 articles. Of these, 1497 were excluded on abstract information. After full-text review, abstraction and arbitration with a third reviewer, 33 studies met the inclusion criteria, reporting 102,611 ESS decision points. The I² value was high (98.8%), precluding meta-analysis. Performance was variable, with sensitivities ranging from 21%–100% and specificities ranging from 5%–100%. Conclusions There is significant heterogeneity in the diagnostic performance of the available ESS implementations in acute care, stemming from the wide spectrum of different clinical entities and ESS systems. Based on the results, we introduce a conceptual framework using a likelihood-ratio matrix for the evaluation and meaningful application of future frontline clinical decision support systems. PMID:23874359
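
    The likelihood ratios used for the matrix follow directly from sensitivity and specificity; a minimal sketch:

        def likelihood_ratios(sensitivity: float, specificity: float):
            """LR+ and LR- for a binary alert; assumes 0 < specificity < 1."""
            lr_pos = sensitivity / (1.0 - specificity)   # odds multiplier for a positive alert
            lr_neg = (1.0 - sensitivity) / specificity   # odds multiplier for a negative result
            return lr_pos, lr_neg

        print(likelihood_ratios(0.85, 0.90))  # LR+ = 8.5, LR- ~ 0.17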

  15. Advanced Optimal Extraction for the Spitzer/IRS

    NASA Astrophysics Data System (ADS)

    Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.

    2010-02-01

    We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
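
    The principle of PSF-weighted ("optimal") extraction can be sketched for a single wavelength row: each pixel is weighted by the normalized PSF profile over its variance, the classic Horne-style estimator. This illustrates the idea only, not the internals of the AdOpt plug-in.

        import numpy as np

        def optimal_flux(d: np.ndarray, P: np.ndarray, v: np.ndarray) -> float:
            """d: pixel data, P: normalized PSF profile (sums to 1), v: pixel variances."""
            return float(np.sum(P * d / v) / np.sum(P * P / v))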

  16. Description of borehole geophysical and geologist logs, Berks Sand Pit Superfund Site, Longswamp Township, Berks County, Pennsylvania

    USGS Publications Warehouse

    Low, Dennis J.; Conger, Randall W.

    2003-01-01

    Between October 2002 and January 2003, geophysical logging was conducted in six boreholes at the Berks Sand Pit Superfund Site, Longswamp Township, Berks County, Pa., to determine (1) the water-producing zones, water-receiving zones, zones of vertical borehole flow, orientation of fractures, and borehole and casing depth; and (2) the hydraulic interconnection between the six boreholes and the site extraction well. The boreholes range in depth from 61 to 270 feet. Geophysical logging included collection of caliper, natural-gamma, single-point-resistance, fluid-temperature, fluid-flow, and acoustic-televiewer logs. Caliper and acoustic-televiewer logs were used to locate fractures, joints, and weathered zones. Inflections on fluid-temperature and single-point-resistance logs indicated possible water-bearing fractures, and flowmeter measurements verified these locations. Single-point-resistance, natural-gamma, and geologist logs provided information on stratigraphy. Flowmeter measurements were conducted while the site extraction well was pumping and when it was inactive to determine the hydraulic connections between the extraction well and the boreholes. Borehole geophysical logging and heatpulse flowmetering indicate active flow in the boreholes. Two of the boreholes are in ground-water discharge areas, two are in ground-water recharge areas, and one is in an intermediate regime. Flow was not determined in one borehole. Heatpulse flowmetering, in conjunction with the geologist logs, indicates that highly weathered zones in the granitic gneiss can be permeable and effective transmitters of water, confirming the presence of a two-tiered ground-water-flow system. The effort to determine a hydraulic connection between the site extraction well and the six logged boreholes was not conclusive. Three boreholes showed decreases in depth to water after pumping of the site extraction well; in two boreholes, the depth to water increased. One borehole was cased for its entire depth and was not revisited after it was logged with the caliper tool. No substantial change in flow rates or direction of borehole flow was observed in any of the three wells logged with the heatpulse flowmeter when the site extraction well was pumping or when it was inactive.

  17. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points, and the point clouds located in the blocks were then separated from the original point clouds. Each point in the blocks was used to find the corresponding pixel in the relevant panoramic image via a collinearity function and the position and orientation relationships among the different sensors. A search strategy is proposed for the correspondence between laser scanners and lenses of the panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (99.7% on average) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization among the different sensors, system positioning and vehicle speed, are discussed.
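
    Once a point has been transformed into the panoramic camera frame, the final pixel lookup can be sketched with an ideal spherical (equirectangular) model; the real system applies the calibrated sensor-constellation geometry, which this simplification omits.

        import numpy as np

        def point_to_pano_pixel(p_cam, width: int, height: int):
            """Map a 3D point in the panoramic camera frame to (u, v) pixel coordinates."""
            x, y, z = p_cam
            azimuth = np.arctan2(y, x)                        # [-pi, pi)
            elevation = np.arcsin(z / np.linalg.norm(p_cam))  # [-pi/2, pi/2]
            u = (azimuth + np.pi) / (2 * np.pi) * width
            v = (np.pi / 2 - elevation) / np.pi * height      # v = 0 at the zenith row
            return u, v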

  18. Extracting the Information Backbone in Online System

    PubMed Central

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society, and many solutions, such as recommender systems, have been proposed to filter out irrelevant information. In the literature, researchers have mainly been dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms, while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. Given this “less can be more” feature, we design algorithms that improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining time-aware and topology-aware link removal algorithms to extract the backbone that contains the essential information for the recommender systems. From a practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency. PMID:23690946

  19. Sustained subconjunctival protein delivery using a thermosetting gel delivery system.

    PubMed

    Rieke, Erin R; Amaral, Juan; Becerra, S Patricia; Lutz, Robert J

    2010-02-01

    An effective treatment modality for posterior eye diseases would provide prolonged delivery of therapeutic agents, including macromolecules, to eye tissues using a safe and minimally invasive method. The goal of this study was to assess the ability of a thermosetting gel to deliver a fluorescently labeled protein, Alexa 647 ovalbumin, to the choroid and retina of rats following a single subconjunctival injection of the gel. Additional experiments were performed to compare in vitro to in vivo ovalbumin release rates from the gel. The ovalbumin content of the eye tissues was monitored by spectrophotometric assays of tissue extracts of Alexa 647 ovalbumin from dissected sclera, choroid, and retina at time points ranging from 2 h to 14 days. At the same time points, fluorescence microscopy images of tissue samples were also obtained. Measurement of intact ovalbumin was verified by LDS-PAGE analysis of the tissue extract solutions. In vitro release of Alexa 488 ovalbumin into 37 degrees C PBS solutions from ovalbumin-loaded gel pellets was also monitored over time by spectrophotometric assay. In vivo ovalbumin release rates were determined by measurement of residual ovalbumin extracted from gel pellets removed from rat eyes at various time intervals. Our results indicate that ovalbumin concentrations can be maintained at measurable levels in the sclera, choroid, and retina of rats for up to 14 days using the thermosetting gel delivery system. The concentration of ovalbumin exhibited a gradient that decreased from sclera to choroid and to retina. The in vitro release rate profiles were similar to the in vivo release profiles. Our findings suggest that the thermosetting gel system may be a feasible method for safe and convenient sustained delivery of proteins to choroidal and retinal tissue in the posterior segments of the eye.

  20. Sustained Subconjunctival Protein Delivery Using a Thermosetting Gel Delivery System

    PubMed Central

    2010-01-01

    Purpose: An effective treatment modality for posterior eye diseases would provide prolonged delivery of therapeutic agents, including macromolecules, to eye tissues using a safe and minimally invasive method. The goal of this study was to assess the ability of a thermosetting gel to deliver a fluorescently labeled protein, Alexa 647 ovalbumin, to the choroid and retina of rats following a single subconjunctival injection of the gel. Additional experiments were performed to compare in vitro to in vivo ovalbumin release rates from the gel. Methods: The ovalbumin content of the eye tissues was monitored by spectrophotometric assays of tissue extracts of Alexa 647 ovalbumin from dissected sclera, choroid, and retina at time points ranging from 2 h to 14 days. At the same time points, fluorescence microscopy images of tissue samples were also obtained. Measurement of intact ovalbumin was verified by LDS-PAGE analysis of the tissue extract solutions. In vitro release of Alexa 488 ovalbumin into 37°C PBS solutions from ovalbumin-loaded gel pellets was also monitored over time by spectrophotometric assay. In vivo ovalbumin release rates were determined by measurement of residual ovalbumin extracted from gel pellets removed from rat eyes at various time intervals. Results: Our results indicate that ovalbumin concentrations can be maintained at measurable levels in the sclera, choroid, and retina of rats for up to 14 days using the thermosetting gel delivery system. The concentration of ovalbumin exhibited a gradient that decreased from sclera to choroid and to retina. The in vitro release rate profiles were similar to the in vivo release profiles. Conclusions: Our findings suggest that the thermosetting gel system may be a feasible method for safe and convenient sustained delivery of proteins to choroidal and retinal tissue in the posterior segments of the eye. PMID:20148655

  1. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual correction before successive steps can be performed. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance-type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method scores the manually refined CATs higher than the automatically extracted CATs. On a 100-point scale, the average scores for automatically extracted and manually refined CATs are 82.0 (±15.8) and 88.9 (±5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts containing both RD and LD cases. To the best of our knowledge, this is the first time a general quality score for an extracted CAT has been presented.

  2. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw 7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35–65%) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
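
    The least-cost-path core can be sketched as Dijkstra's algorithm over a raster whose per-cell cost is low where a feature is likely. The paper's specially tailored cost functions are replaced here by a generic precomputed cost grid, and start/goal stand for two manually digitized control points.

        import heapq
        import numpy as np

        def least_cost_path(cost: np.ndarray, start: tuple, goal: tuple):
            """4-connected Dijkstra; returns the list of cells from start to goal."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = 0.0
            pq = [(0.0, start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue            # stale queue entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                        dist[nr, nc] = d + cost[nr, nc]
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
            path, node = [goal], goal   # assumes the goal was reached
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]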

  3. Equilibrium-point control of human elbow-joint movement under isometric environment by using multichannel functional electrical stimulation

    PubMed Central

    Matsui, Kazuhiro; Hishii, Yasuo; Maegaki, Kazuya; Yamashita, Yuto; Uemura, Mitsunori; Hirai, Hiroaki; Miyazaki, Fumio

    2014-01-01

    Functional electrical stimulation (FES) is considered an effective technique for aiding quadriplegic persons. However, the human musculoskeletal system is highly nonlinear and redundant, so it is difficult to control limbs stably and accurately using FES. In this paper, we propose a simple FES method that is consistent with the motion-control mechanism observed in humans. We focus on joint motion produced by a pair of agonist-antagonist muscles of the musculoskeletal system, and define the “electrical agonist-antagonist muscle ratio (EAA ratio)” and “electrical agonist-antagonist muscle activity (EAA activity)”, in light of the agonist-antagonist muscle ratio and agonist-antagonist muscle activity, to extract the equilibrium point and joint stiffness from electromyography (EMG) signals. These notions, the agonist-antagonist muscle ratio and agonist-antagonist muscle activity, are based on the hypothesis that the equilibrium point and stiffness of the agonist-antagonist motion system are controlled by the central nervous system. We derived the transfer function between the input EAA ratio and the force output of the end-point, and performed experiments in an isometric environment with six subjects. This transfer-function model is expressed as a cascade-coupled dead-time element and a second-order system. High-speed, high-precision, smooth control of the hand force was achieved through the agonist-antagonist muscle stimulation pattern determined by this transfer-function model. PMID:24987326

  4. Equilibrium-point control of human elbow-joint movement under isometric environment by using multichannel functional electrical stimulation.

    PubMed

    Matsui, Kazuhiro; Hishii, Yasuo; Maegaki, Kazuya; Yamashita, Yuto; Uemura, Mitsunori; Hirai, Hiroaki; Miyazaki, Fumio

    2014-01-01

    Functional electrical stimulation (FES) is considered an effective technique for aiding quadriplegic persons. However, the human musculoskeletal system is highly nonlinear and redundant, so it is difficult to control limbs stably and accurately using FES. In this paper, we propose a simple FES method that is consistent with the motion-control mechanism observed in humans. We focus on joint motion produced by a pair of agonist-antagonist muscles of the musculoskeletal system, and define the "electrical agonist-antagonist muscle ratio (EAA ratio)" and "electrical agonist-antagonist muscle activity (EAA activity)", in light of the agonist-antagonist muscle ratio and agonist-antagonist muscle activity, to extract the equilibrium point and joint stiffness from electromyography (EMG) signals. These notions, the agonist-antagonist muscle ratio and agonist-antagonist muscle activity, are based on the hypothesis that the equilibrium point and stiffness of the agonist-antagonist motion system are controlled by the central nervous system. We derived the transfer function between the input EAA ratio and the force output of the end-point, and performed experiments in an isometric environment with six subjects. This transfer-function model is expressed as a cascade-coupled dead-time element and a second-order system. High-speed, high-precision, smooth control of the hand force was achieved through the agonist-antagonist muscle stimulation pattern determined by this transfer-function model.
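
    The abstracts do not spell out the EAA formulas, but a common agonist-antagonist formulation (assumed here) expresses the ratio and activity of a muscle pair with stimulation or EMG levels a and b as follows; the ratio shifts the equilibrium point while the activity sets the joint stiffness.

        import numpy as np

        def eaa_ratio(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """Assumed form: agonist share of total drive; relates to the equilibrium point."""
            return a / (a + b + 1e-12)   # epsilon guards against division by zero

        def eaa_activity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """Assumed form: total drive of the pair; relates to joint stiffness."""
            return a + b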

  5. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies the remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive, discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, together with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new flexible three-block framework built around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  6. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    Modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities for gathering 3D data. In this paper, an automated approach for the tacheometer measurements needed in the dimensional control of industrial large-scale objects is proposed. The approach makes two new contributions: the automated extraction of the vital points (i.e., the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps. First, the coordinates of the vital points are automatically extracted from the computer-aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of each point, one after another. However, due to deviations between the designed and actual locations of the points, the aiming needs to be adjusted. An automated, dynamic, image-based look-and-move servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be automatically measured using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to that obtained with manual control, and the reliability of the image processing step was found to be high in the laboratory experiments.
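
    The fine-aiming loop can be sketched as image-based proportional correction: the detected offset of the target from the image centre is converted to angular increments until it falls within tolerance. All names here are hypothetical placeholders; real instruments expose vendor-specific interfaces (e.g. GeoCOM), not these functions.

        def aim_at_point(station, detect_offset_px, rad_per_px: float,
                         tol_px: float = 2.0, max_iter: int = 20) -> bool:
            """Iteratively center the target; station.rotate() is a hypothetical API."""
            for _ in range(max_iter):
                dx, dy = detect_offset_px()          # target offset from image centre, pixels
                if abs(dx) <= tol_px and abs(dy) <= tol_px:
                    return True                      # aimed; ready to measure reflectorless
                station.rotate(d_hz=dx * rad_per_px, d_v=dy * rad_per_px)
            return False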

  7. Triton X-114 based cloud point extraction: a thermoreversible approach for separation/concentration and dispersion of nanomaterials in the aqueous phase.

    PubMed

    Liu, Jing-fu; Liu, Rui; Yin, Yong-guang; Jiang, Gui-bin

    2009-03-28

    Capable of preserving the sizes and shapes of nanomaterials during the phase transferring, Triton X-114 based cloud point extraction provides a general, simple, and cost-effective route for reversible concentration/separation or dispersion of various nanomaterials in the aqueous phase.

  8. 77 FR 19282 - Draft NPDES General Permit for Discharges From the Oil and Gas Extraction Point Source Category...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9650-8] Draft NPDES General Permit for Discharges From the Oil and Gas Extraction Point Source Category to Coastal Waters in Texas (TXG330000) AGENCY: Environmental Protection Agency (EPA). ACTION: Proposal of NPDES General Permit Renewal. SUMMARY: EPA Region 6...

  9. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientations from rock mass surface 3D point clouds. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity planes. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against orientations measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those from the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and can meet engineering needs.
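
    Step (3) can be illustrated with a generic RANSAC plane fit: sample three points, form a candidate plane, count inliers within a distance threshold, and keep the best candidate. The parameters are illustrative; dip and dip direction then follow from the winning normal.

        import numpy as np

        def ransac_plane(pts: np.ndarray, n_iter: int = 500, thresh: float = 0.02, seed: int = 0):
            """pts: (n, 3). Returns ((normal, d), inlier_mask) for the plane n . x = d."""
            rng = np.random.default_rng(seed)
            best_plane, best_inliers = None, None
            for _ in range(n_iter):
                s = pts[rng.choice(len(pts), 3, replace=False)]
                n = np.cross(s[1] - s[0], s[2] - s[0])
                norm = np.linalg.norm(n)
                if norm < 1e-12:
                    continue                          # degenerate (collinear) sample
                n = n / norm
                inliers = np.abs(pts @ n - n @ s[0]) < thresh
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_plane, best_inliers = (n, float(n @ s[0])), inliers
            return best_plane, best_inliers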

  10. Monitoring Aircraft Motion at Airports by LIDAR

    NASA Astrophysics Data System (ADS)

    Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.

    2016-06-01

    Improving sensor performance, combined with better affordability, provides better object-space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geometrically and temporally rich geospatial data, such that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improving aviation safety at airports. This paper reports on initial experiences using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
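
    A standard building block for such motion estimation is least-squares rigid alignment of corresponding points from two consecutive scans (the Kabsch/SVD solution). Finding the correspondences (e.g. by ICP) is omitted here, and this is a generic sketch rather than the authors' pipeline.

        import numpy as np

        def rigid_motion(A: np.ndarray, B: np.ndarray):
            """R, t minimizing sum ||R a_i + t - b_i||^2 for corresponding (n, 3) arrays."""
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ S @ U.T
            return R, cb - R @ ca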

  11. The Design of Case Products’ Shape Form Information Database Based on NURBS Surface

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Liu, Guo-zhong; Xu, Nuo-qi; Zhang, Wei-she

    2017-07-01

    In order to improve the computer-aided design of product shapes, applying Non-Uniform Rational B-Spline (NURBS) curves and surfaces to the representation of product shape helps designers design products effectively. On the basis of typical product image contour extraction, and using Pro/Engineer (Pro/E) to extract the geometric features of a scanned mold, this paper puts forward a unified method that uses NURBS curves and surfaces to describe a product's geometric shape and MATLAB to simulate it when products have the same or similar function, in order to structure an information database of value points, control points, and knot vector parameters. A case study of an electric vehicle's front cover illustrates the retrieval of geometric shape information for a case product. This method can not only greatly reduce the volume of stored information, but also improve the effectiveness of computer-aided geometric innovation modeling.
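
    As a minimal illustration of how such stored control-point, weight, and knot-vector records define a shape, the following sketch (an assumption-laden example, not the paper's MATLAB code) evaluates a NURBS curve C(u) = Σ N_i(u) w_i P_i / Σ N_i(u) w_i using SciPy's B-spline basis, with a quarter circle as the classic exact-conic test case.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def nurbs_curve(u, ctrl_pts, weights, knots, degree):
        """Evaluate a rational B-spline curve at parameter values u."""
        ctrl_pts = np.asarray(ctrl_pts, float)        # (n, dim) control points
        w = np.asarray(weights, float)                # (n,) weights
        num = BSpline(knots, ctrl_pts * w[:, None], degree)(u)  # weighted numerator
        den = BSpline(knots, w, degree)(u)                      # rational denominator
        return num / den[..., None]

    # Quarter circle as a degree-2 NURBS (middle weight sqrt(2)/2 makes it exact).
    P = [[1, 0], [1, 1], [0, 1]]
    w = [1, np.sqrt(2) / 2, 1]
    t = [0, 0, 0, 1, 1, 1]                            # clamped knot vector
    pts = nurbs_curve(np.linspace(0, 1, 5), P, w, t, 2)
    print(np.linalg.norm(pts, axis=1))                # all radii are 1.0
    ```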

  12. Ultrasonically Modified Amended-Cloud Point Extraction for Simultaneous Pre-Concentration of Neonicotinoid Insecticide Residues.

    PubMed

    Kachangoon, Rawikan; Vichapong, Jitlada; Burakham, Rodjana; Santaladchaiyakit, Yanawath; Srijaranai, Supalax

    2018-05-12

    An effective pre-concentration method, namely amended cloud point extraction (CPE), has been developed for the extraction and pre-concentration of neonicotinoid insecticide residues. The studied analytes, including clothianidin, imidacloprid, acetamiprid, thiamethoxam, and thiacloprid, were chosen as model compounds. The amended-CPE procedure included two cloud point processes. Triton™ X-114 was used to extract neonicotinoid residues into the surfactant-rich phase, and the analytes were then transferred into an alkaline solution with the help of ultrasound energy. The extracts were analyzed by high-performance liquid chromatography (HPLC) coupled with a monolithic column. Several factors influencing the extraction efficiency were studied, such as the kind and concentration of surfactant, the type and content of salts, the kind and concentration of the back-extraction agent, and the incubation temperature and time. Enrichment factors (EFs) were found in the range of 20–333-fold. The limits of detection of the studied neonicotinoids were in the range of 0.0003–0.002 µg mL⁻¹, which is below the maximum residue limits (MRLs) established by the European Union (EU). Good repeatability was obtained, with relative standard deviations lower than 1.92% and 4.54% for retention time (tR) and peak area, respectively. The developed extraction method was successfully applied to the analysis of water samples. No detectable residues of neonicotinoids were found in the studied samples.

  13. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Jr., Edward J.; Parrish, Robert H.; Russell, James R.

    2008-06-24

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  14. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R.

    2000-01-01

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  15. Polymer-based alternative method to extract bromelain from pineapple peel waste.

    PubMed

    Novaes, Letícia Celia de Lencastre; Ebinuma, Valéria de Carvalho Santos; Mazzola, Priscila Gava; Pessoa, Adalberto

    2013-01-01

    Bromelain is a mixture of proteolytic enzymes present in all tissues of the pineapple (Ananas comosus Merr.), and it is known for its clinical therapeutic applications, food processing, and as a dietary supplement. The use of pineapple waste for bromelain extraction is interesting from both an environmental and a commercial point of view, because the protease has relevant clinical potential. We aimed to study the optimization of bromelain extraction from pineapple waste, using the aqueous two-phase system formed by polyethylene glycol (PEG) and poly(acrylic acid). In this work, bromelain partitioned preferentially to the top/PEG-rich phase and, in the best condition, achieved a yield of 335.27% with a purification factor of 25.78. The statistical analysis showed that all variables analyzed were significant to the process. © 2013 International Union of Biochemistry and Molecular Biology, Inc.

  16. On-chip wavelength multiplexed detection of cancer DNA biomarkers in blood

    PubMed Central

    Cai, H.; Stott, M. A.; Ozcelik, D.; Parks, J. W.; Hawkins, A. R.; Schmidt, H.

    2016-01-01

    We have developed an optofluidic analysis system that processes biomolecular samples starting from whole blood and then analyzes and identifies multiple targets on a silicon-based molecular detection platform. We demonstrate blood filtration, sample extraction, target enrichment, and fluorescent labeling using programmable microfluidic circuits. We detect and identify multiple targets using a spectral multiplexing technique based on wavelength-dependent multi-spot excitation on an antiresonant reflecting optical waveguide chip. Specifically, we extract two types of melanoma biomarkers, the mutated cell-free nucleic acids BRAF V600E and NRAS, from whole blood. We detect and identify these two targets simultaneously using the spectral multiplexing approach with up to a 96% success rate. These results point the way toward a full front-to-back chip-based optofluidic compact system for high-performance analysis of complex biological samples. PMID:28058082

  17. Thermo Dynamics and Economics Evaluations: Substitution of the Extraction Steam with the Wasted Heat of Flue Gas

    NASA Astrophysics Data System (ADS)

    Hao, Lifen; Qiu, Lixia; Li, Jinping; Li, Dongxiong

    2018-01-01

    A new heat-supplying system is proposed that uses the boiler's exhaust gas, instead of extraction steam from the turbine, as the driving force for an adsorption heat pump that recovers the condensation heat of a power plant. The system is not subject to the low efficiency of waste-heat utilization caused by the low temperature of the flue gas, and it therefore achieves a higher coefficient of performance (COP) in heat utilization than conventional flue-gas techniques, so the amount of steam extracted from the turbine can be reduced and the power generation rate enhanced. A detailed evaluation of the performance of this system from the points of view of thermodynamics and economics is presented in this work. For the case of a 330 MW heat-supply unit, five sample cities are chosen to demonstrate and confirm the economic analysis. It is shown that when the heating coefficient of the heat pump is 1.8, the investment payback periods for these five cities are within the range of 2.4 to 4.8 years, far below the service life of the heat pump, demonstrating remarkable economic benefits for the system.

  18. "Bligh and Dyer" and Folch Methods for Solid-Liquid-Liquid Extraction of Lipids from Microorganisms. Comprehension of Solvatation Mechanisms and towards Substitution with Alternative Solvents.

    PubMed

    Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid

    2017-03-27

    Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are "gold standards" for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid-liquid extraction of yeast (Yarrowia lipolytica IFP29) and subsequent liquid-liquid partition—the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid-liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipids classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvents pair is found to be almost as effective as the classic methanol-chloroform couple in terms of efficiency and selectivity of lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity.

  19. “Bligh and Dyer” and Folch Methods for Solid–Liquid–Liquid Extraction of Lipids from Microorganisms. Comprehension of Solvatation Mechanisms and towards Substitution with Alternative Solvents

    PubMed Central

    Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid

    2017-01-01

    Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are “gold standards” for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid–liquid extraction of yeast (Yarrowia lipolytica IFP29) and subsequent liquid–liquid partition—the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid–liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipids classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvents pair is found to be almost as effective as the classic methanol–chloroform couple in terms of efficiency and selectivity of lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity. PMID:28346372

  20. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers get the key points on a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary, using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.

  1. ANL/RBC: A computer code for the analysis of Rankine bottoming cycles, including system cost evaluation and off-design performance

    NASA Technical Reports Server (NTRS)

    Mclennan, G. A.

    1986-01-01

    This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.

  2. A photogrammetry-based system for 3D surface reconstruction of prosthetics and orthotics.

    PubMed

    Li, Guang-kun; Gao, Fan; Wang, Zhi-gang

    2011-01-01

    The objective of this study is to develop an innovative close range digital photogrammetry (CRDP) system using commercial digital SLR cameras to measure and reconstruct the 3D surface of prosthetics and orthotics. This paper describes the instrumentation, techniques, and preliminary results of the proposed system. The technique works by taking pictures of the object from multiple view angles. The series of pictures is then post-processed via feature point extraction, point matching, and 3D surface reconstruction. In comparison with traditional methods such as laser scanning, the major advantages of our instrument include lower cost; compact, easy-to-use hardware; satisfactory measurement accuracy; and significantly less measurement time. Besides its potential applications in prosthetic and orthotic surface measurement, the simple setup and ease of use make it suitable for various 3D surface reconstructions.
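
    The feature extraction, matching, and reconstruction pipeline described here can be prototyped in a few lines. The sketch below is a generic illustration under stated assumptions (placeholder image paths and intrinsics K; not the authors' code): SIFT keypoints are matched with Lowe's ratio test, relative pose is recovered from the essential matrix, and the matches are triangulated into sparse 3D surface points.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
    img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Assumed (placeholder) camera intrinsics; a real system would calibrate.
    K = np.array([[3000, 0, 2000], [0, 3000, 1500], [0, 0, 1]], float)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate matched points into 3D (up to scale for a two-view setup).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X[:3] / X[3]).T                     # homogeneous -> Euclidean 3D points
    ```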

  3. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.

  4. A new sum parameter to estimate the bioconcentration and baseline-toxicity of hydrophobic compounds in river water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loon, W.M.G.M. van; Hermens, J.L.M.

    1994-12-31

    A large part of all aquatic pollutants can be classified as narcosis-type (baseline toxicity) chemicals. Many chemicals contribute to a joint baseline aquatic toxicity even at trace concentrations. A novel surrogate parameter, which simulates the bioconcentration of hydrophobic substances from water and estimates internal molar concentrations, has been explored by Verhaar et al. These estimated biological concentrations can be used to predict narcosis-type toxic effects, using the Lethal Body Burden (LBB) concept. The authors applied this toxicological-analytical concept to river water, and some recent technological developments and field results are pointed out. The simulation of bioconcentration is performed by extracting water samples with Empore™ disks. The authors developed two extraction procedures, i.e., laboratory extraction and field extraction. Molar concentration measurements are performed using vapor pressure osmometry, GC-FID and GC-MS. Results on the molar concentrations of hydrophobic compounds which can be bioaccumulated from several Dutch river systems will be presented.

  5. Excitation power quantities in phase resonance testing of nonlinear systems with phase-locked-loop excitation

    NASA Astrophysics Data System (ADS)

    Peter, Simon; Leine, Remco I.

    2017-11-01

    Phase resonance testing is one method for the experimental extraction of nonlinear normal modes. This paper proposes a novel method for nonlinear phase resonance testing. Firstly, the issue of appropriate excitation is approached on the basis of excitation power considerations. To this end, power quantities known from nonlinear systems theory in electrical engineering are transferred to nonlinear structural dynamics applications. A new power-based nonlinear mode indicator function is derived, which is generally applicable, reliable, and easy to implement in experiments. Secondly, the tuning of the excitation phase is automated by the use of a Phase-Locked-Loop controller. This method provides a very user-friendly and fast way of obtaining the backbone curve. Furthermore, the method allows specific advantages of phase control to be exploited, such as robustness for lightly damped systems and the stabilization of unstable branches of the frequency response. The reduced tuning time for the excitation makes the commonly used free-decay measurements for the extraction of backbone curves unnecessary. Instead, steady-state measurements for every point of the curve are obtained. In conjunction with the new mode indicator function, the correlation of every measured point with the associated nonlinear normal mode of the underlying conservative system can be evaluated. Moreover, it is shown that the analysis of the excitation power helps to locate sources of inaccuracies in the force appropriation process. The method is illustrated by a numerical example, and its functionality in experiments is demonstrated on a benchmark beam structure.

  6. Total Protein Extraction for Metaproteomics Analysis of Methane Producing Biofilm: The Effects of Detergents

    PubMed Central

    Huang, Hung-Jen; Chen, Wei-Yu; Wu, Jer-Horng

    2014-01-01

    Protein recovery is crucial for shotgun metaproteomics to study the in situ functionality of microbial populations from complex biofilms, but it has been poorly addressed thus far. To fill this knowledge gap, we systematically evaluated sample preparation with extraction buffers comprising four detergents for the metaproteomics analysis of a terephthalate-degrading methanogenic biofilm, using an on-line two-dimensional liquid chromatography tandem mass spectrometry (2D-LC-MS/MS) system. In total, 1018 non-repeated proteins were identified with the four treatments. On the whole, each treatment could recover the biofilm proteins with specific distributions of molecular weight, hydrophobicity, and isoelectric point. The extraction buffers containing zwitterionic and anionic detergents were found to harvest the proteins with better efficiency and quality, allowing identification of up to 76.2% of the total identified proteins in the LC-MS/MS analysis. According to annotation with a relevant metagenomic database, we further observed different taxonomic profiles of bacterial and archaeal members and discriminable patterns of functional expression among the extraction buffers used. Overall, the findings of the present study provide a first insight into the effect of detergents on the characteristics of proteins extractable from biofilm, and the developed protocol combined with nano 2D-LC/MS/MS analysis can improve metaproteomics studies of the microbial functionality of biofilms in wastewater treatment systems. PMID:24914765

  7. Effective recovery of poly-β-hydroxybutyrate (PHB) biopolymer from Cupriavidus necator using a novel and environmentally friendly solvent system.

    PubMed

    Fei, Tao; Cazeneuve, Stacy; Wen, Zhiyou; Wu, Lei; Wang, Tong

    2016-05-01

    This work demonstrates a significant advance in bioprocessing for a high-melting lipid polymer. A novel and environmentally friendly solvent mixture, acetone/ethanol/propylene carbonate (A/E/P, 1:1:1 v/v/v), was identified for extracting poly-hydroxybutyrate (PHB), a high-value biopolymer, from Cupriavidus necator. A set of solubility curves of PHB in various solvents was established. A PHB recovery of 85% and purity of 92% were obtained from defatted dry biomass (DDB) using A/E/P. This solvent mixture is compatible with water, and from non-defatted wet biomass a PHB recovery of 83% and purity of 90% were achieved. Water and hexane were evaluated as anti-solvents to assist PHB precipitation; hexane improved the recovery of PHB from biomass to 92% and the purity to 93%. A scale-up extraction and separation reactor was designed, built, and successfully tested. The properties of the recovered PHB were not significantly affected by the extraction solvent and conditions, as shown by the average molecular weight (1.4 × 10⁶) and melting point (175.2 °C) not differing from those of PHB extracted using chloroform. This biorenewable solvent system was therefore effective and versatile for extracting PHB biopolymers. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:678-685, 2016. © 2016 American Institute of Chemical Engineers.

  8. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points in one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
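
    The PCA-driven local dimensionality analysis can be sketched as follows (an illustrative single-machine version, not the IQmulus Spark implementation; the neighbourhood size and scatter threshold are assumptions): the eigenvalues of each point's local covariance matrix are turned into linearity, planarity, and scattering measures, and points dominated by scattering become tree-class candidates.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dimensionality_features(points, k=20):
        """Per-point linearity, planarity, scattering from local PCA eigenvalues."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)          # k nearest neighbours per point
        feats = np.zeros((len(points), 3))
        for i, nb in enumerate(idx):
            cov = np.cov(points[nb].T)
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
            if l1 > 0:
                feats[i] = (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
        return feats

    # Points whose scattering measure dominates scatter in all 3 directions,
    # e.g. (illustrative threshold):
    # tree_mask = dimensionality_features(cloud)[:, 2] > 0.25
    ```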

  9. Contextual Classification of Point Cloud Data by Exploiting Individual 3D Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing, and computer vision. For reliably extracting objects such as buildings, road inventory, or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been extensively, but separately, investigated in recent years, their connection through sharing the results of crucial tasks across all components has not yet been addressed. This connection encapsulates not only the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.

  10. Research on Methods of High Coherent Target Extraction in Urban Area Based on PSInSAR Technology

    NASA Astrophysics Data System (ADS)

    Li, N.; Wu, J.

    2018-04-01

    PSInSAR technology has been widely applied in ground deformation monitoring. Accurate identification of Persistent Scatterers (PS) is key to the success of PSInSAR data processing. In this paper, the theoretical models and specific algorithms of PS point extraction methods are summarized, and the characteristics and applicable conditions of each method, such as the Coherence Coefficient Threshold, Amplitude Threshold, Dispersion of Amplitude, and Dispersion of Intensity methods, are analyzed. Based on the merits and demerits of the different methods, an improved method for PS point extraction in urban areas is proposed that simultaneously uses backscattering characteristics, amplitude stability, and phase stability to find PS points among all pixels. Shanghai is chosen as an example area for checking the improvements of the new method. The results show that the PS points extracted by the new method have high quality and high stability and exhibit the expected strong scattering characteristics. Based on these high-quality PS points, the deformation rate along the line-of-sight (LOS) in the central urban area of Shanghai is obtained using 35 COSMO-SkyMed X-band SAR images acquired from 2008 to 2010; it varies from -14.6 mm/year to 4.9 mm/year. There is a large subsidence funnel at the boundary between Hongkou and Yangpu districts, with a maximum subsidence rate of more than 14 mm per year. The obtained ground subsidence rates are also compared with the results of spirit leveling and show good consistency. Our new method for PS point extraction is more reasonable and can improve the accuracy of the obtained deformation results.
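
    Of the criteria listed above, the Dispersion of Amplitude method is the simplest to illustrate. The sketch below is a generic implementation of the standard amplitude dispersion index; the array shape and the 0.25 threshold are common conventions from the PS-InSAR literature, not values taken from this paper.

    ```python
    import numpy as np

    def ps_candidates(amplitude_stack, d_a_max=0.25):
        """amplitude_stack: (n_images, rows, cols) calibrated SAR amplitudes.

        Returns a boolean mask of PS candidates: pixels whose amplitude is
        stable across the stack (dispersion index D_A = sigma/mean below the
        threshold) are likely dominated by a single strong scatterer.
        """
        mean_a = amplitude_stack.mean(axis=0)
        sigma_a = amplitude_stack.std(axis=0)
        d_a = np.where(mean_a > 0, sigma_a / mean_a, np.inf)
        return d_a < d_a_max
    ```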

  11. Equilibrium Speciation of Select Lanthanides in the Presence of Acidic Ligands in Homo- and Heterogeneous Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Troy A

    2011-08-01

    This dissertation explores lanthanide speciation in liquid solution systems related to separation schemes involving the acidic ligands bis(2-ethylhexyl) phosphoric acid (HDEHP), lactate, and 8-hydroxyquinoline. Equilibrium speciation of neodymium (Nd3+), sodium (Na+), HDEHP, water, and lactate in the TALSPEAK liquid-liquid extraction system was explored under varied Nd3+ loading of HDEHP in the organic phase and through extraction from aqueous HCl and lactate media. System speciation was probed through vapor pressure osmometry, visible and Fourier transform infrared (FTIR) spectroscopy, 22Na and 13C-labeled lactate radiotracer distribution measurements, Karl Fischer titrations, and equilibrium pH measurements. The distributions of Nd3+, Na+, and lactate and the equilibrium pH were modeled using the SXLSQI software to obtain logK(Nd) and logK(Na) extraction constants under selected conditions. Results showed that high Nd3+ loading of the HDEHP led to Nd3+ speciation that departs from the ion-exchange mechanism and includes the formation of highly aggregated, polynuclear [NdLactate(DEHP)2]x (with x > 1). By substituting lanthanum (La3+) for Nd3+ in this system, NMR scoping experiments using 23Na and 31P nuclei and 13C-labeled lactate were performed. Results indicated that this technique is sensitive to changes in system speciation and that further experiments are warranted. In a homogeneous system representing the TALSPEAK aqueous phase, lactate protonation behavior at various temperatures was characterized using a combination of potentiometric titration and modeling with the Hyperquad computer program. The temperature-dependent deprotonation behavior of lactate showed little change with temperature at 2.0 M NaCl ionic strength. Cloud point extraction is a non-traditional separation technique that starts with a homogeneous phase that becomes heterogeneous through the micellization of surfactants as temperature increases. To better understand the behavior of europium (Eu3+) and 8-hydroxyquinoline under cloud point extraction conditions, potentiometric and spectrophotometric titrations coupled with modeling with the Hyperquad and SQUAD computer programs were performed to assess europium (Eu3+) and 8-hydroxyquinoline speciation. Experiments in both water and a 1 wt% Triton X-114/water mixed solvent were compared to understand the effect of Triton X-114 on the system speciation. Results indicated that increased solvation of 8-hydroxyquinoline by the mixed solvent led to more stable complexes involving 8-hydroxyquinoline than in water, whereas competition between hydroxide and Triton X-114 for Eu3+ led to lower-stability hydrolysis complexes in the mixed solvent than in water. Lanthanide speciation is challenging due to the trivalent oxidation state, which leads to multiple ligand complexes, including some mixed complexes. The complexity of the system demands well-designed and precise experiments that capture the nuances of the chemistry. This work increased the understanding of lanthanide speciation in the explored systems, but more work is required to produce a comprehensive understanding of the speciation involved.

  12. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
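
    To give a flavour of the per-point registration step, the sketch below projects a 3D point into an ideal equirectangular panorama. It is a simplified stand-in for the paper's collinear function and sensor-constellation corrections: the point is assumed to be already expressed in the panoramic camera frame (x forward, y left, z up), and lens and mounting calibration are ignored.

    ```python
    import numpy as np

    def project_to_panorama(p_cam, width, height):
        """Map a 3D point in the camera frame to (column, row) pixel coords."""
        x, y, z = p_cam
        r = np.linalg.norm(p_cam)
        theta = np.arctan2(y, x)                  # horizontal angle, 0 = forward
        phi = np.arcsin(z / r)                    # elevation in [-pi/2, pi/2]
        u = (0.5 - theta / (2 * np.pi)) * width   # assumed: theta grows leftward
        v = (0.5 - phi / np.pi) * height          # assumed: top row = +pi/2
        return u, v
    ```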

  13. Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction

    ERIC Educational Resources Information Center

    Khalafi, Lida; Doolittle, Pamela; Wright, John

    2018-01-01

    A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…

  14. 78 FR 72080 - Draft NPDES General Permit Modification for Discharges From the Oil and Gas Extraction Point...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-02

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9903-65-Region-6] Draft NPDES General Permit Modification for Discharges From the Oil and Gas Extraction Point Source Category to Coastal Waters in Texas and Onshore Stripper Well Category East of The 98th Meridian (TXG330000) AGENCY: Environmental Protection Agency (EPA...

  15. Ocular dynamics of systemic aqueous extracts of Xylopia aethiopica (African guinea pepper) seeds on visually active volunteers.

    PubMed

    Igwe, S A; Afonne, J C; Ghasi, S I

    2003-06-01

    Xylopia aethiopica, African guinea pepper, is an angiosperm belonging to the family Annonaceae, used mainly as a spice and in traditional medicine. A study of the ocular dynamics of bolus consumption of a 300 mg total dose was undertaken on visually active volunteers with a view to finding its ocular effects or complications. Results showed that the aqueous extract of X. aethiopica was neither a miotic nor a mydriatic, but it lowered the intraocular pressure (17.48%), reduced the near point of convergence (31.1%), and increased the amplitude of accommodation (8.98%), the latter two being positively correlated (r=0.95). On the other hand, the systemic extract had no effect on visual acuity at far and near, or on the phoria status at the appropriate distances. The convergence excess resulted in esophoria, and the increased amplitude of accommodation placed greater demand on the accommodation mechanism without any discomfort. The nonspecific mechanism of action makes it a safer spice, which can be exploited in the management of exophoria and raised intraocular pressure (glaucoma) in instances where the efficacy of older conventional drugs is insufficient.

  16. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to develop an atrial fibrillation (AF) detector based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces rules of if-then-else type that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy), and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values for sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
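
    The three literature descriptors named above have standard textbook definitions, sketched below over a sliding window of RR intervals. The window length, step, and histogram binning are illustrative assumptions; the paper's exact settings are not reproduced here.

    ```python
    import numpy as np

    def rmssd(rr):
        """Root mean square of successive RR-interval differences."""
        d = np.diff(rr)
        return np.sqrt(np.mean(d ** 2))

    def shannon_entropy(rr, bins=16):
        """Entropy (bits) of the RR-interval histogram."""
        counts, _ = np.histogram(rr, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    def turning_point_ratio(rr):
        """Fraction of samples larger/smaller than both neighbours (random: 2/3)."""
        x0, x1, x2 = rr[:-2], rr[1:-1], rr[2:]
        tp = ((x1 > x0) & (x1 > x2)) | ((x1 < x0) & (x1 < x2))
        return tp.sum() / len(x1)

    def descriptors(rr_series, win=64, step=8):
        """Stack the three descriptors for each sliding window position."""
        windows = (rr_series[i:i + win]
                   for i in range(0, len(rr_series) - win + 1, step))
        return np.array([[rmssd(w), shannon_entropy(w), turning_point_ratio(w)]
                         for w in windows])
    ```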

  17. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning and the creation of geographic information system (GIS) databases and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its use on unstructured objects would negatively affect their actual shapes. The external constraint energy term was removed from the original snake model formulation, thereby giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are illegally established within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance of the system; these were based on visual inspection and on comparing the measured coordinates to reference data, respectively. In the course of this process, a mean area coverage of 98% was achieved for the Dar es Salaam test sites, which globally indicated that the extracted building polygons were close to the ground truth data. Furthermore, the proposed system reduced the time needed to extract a single building by 32%. Although the extracted building polygons are within the perimeter of the ground truth data, some of them were visually somewhat distorted, which implies that an interactive post-editing process is necessary for cartographic representation.

  18. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining an optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of the LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, together with a hybrid classifier, provides a more efficient way of addressing the classification problem for facial palsy type and degree of severity. Combining iris segmentation with a key point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation provides a significant contribution, as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severely palsied side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.

  19. A Quantitative Study of Gully Erosion Based on Object-Oriented Analysis Techniques: A Case Study in Beiyanzikou Catchment of Qixia, Shandong, China

    PubMed Central

    Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo

    2014-01-01

    This paper took a subregion of a small watershed gully system in the Beiyanzikou catchment of Qixia, China, as the study area and, using object-oriented image analysis (OBIA), extracted the shoulder lines of gullies from high-spatial-resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder lines of the gullies. Finally, the original surface was fitted using linear regression in accordance with the elevations of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract gully information; the average range difference between points field-measured along the gully edges and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m2 and 5074.1790 m3, and 1316.1250 m2 and 1591.5784 m3, respectively. The results of the study provide a new method for the quantitative study of small gully erosion. PMID:24616626
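
    The volume computation in the final step amounts to integrating, over the gully's grid cells, the depth between the fitted pre-erosion surface and the present terrain. A minimal sketch, assuming a gridded DEM covering the gully extent and shoulder-line samples given in grid coordinates (the names and the planar least-squares fit are illustrative assumptions):

    ```python
    import numpy as np

    def erosion_volume(dem, cell_area, edge_rc, edge_z):
        """dem: 2D elevation grid; edge_rc: (n, 2) row/col of shoulder-line points;
        edge_z: (n,) elevations sampled along the shoulder lines."""
        # Fit the pre-erosion surface z = a*row + b*col + c to the edge samples.
        A = np.c_[edge_rc, np.ones(len(edge_rc))]
        coef, *_ = np.linalg.lstsq(A, edge_z, rcond=None)
        rows, cols = np.indices(dem.shape)
        z_fit = coef[0] * rows + coef[1] * cols + coef[2]
        depth = np.clip(z_fit - dem, 0, None)   # only cells below the old surface
        return depth.sum() * cell_area          # volume in cubic map units
    ```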

  20. An evaluation of the bioaccessibility of arsenic in corn and rice samples based on cloud point extraction and hydride generation coupled to atomic fluorescence spectrometry.

    PubMed

    Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor

    2016-08-01

    A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using the o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant, prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of the total As. The limit of detection was 1.34 µg kg⁻¹ and 1.90 µg kg⁻¹ for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing the certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Spatio-Temporal Pattern Mining on Trajectory Data Using ARM

    NASA Astrophysics Data System (ADS)

    Khoshahval, S.; Farnaghi, M.; Taleai, M.

    2017-09-01

    The mobile phone was initially conceived as a device to make human connections easier, but its use has since evolved into a platform for gaming, web surfing, and GPS-enabled applications. Embedding GPS in handheld devices turned them into significant trajectory data gathering facilities. Raw GPS trajectory data is a series of points containing hidden information, and revealing it requires trajectory data analysis. One of the most beneficial kinds of concealed information in trajectory data is the user activity pattern. In each pattern there are multiple stops and moves, which identify the places users visited and their tasks. This paper proposes an approach to discover user daily activity patterns from GPS trajectories using association rules. Finding user patterns requires extracting users' visited places from the stops and moves of GPS trajectories. In order to locate stops and moves, we implemented a place recognition algorithm. After extraction of visited points, an advanced association rule mining algorithm called Apriori was used to extract user activity patterns, as sketched below. This study showed that there are useful patterns in each trajectory that can be extracted from raw GPS data using association rule mining techniques, in order to learn about multiple users' behaviour in a system, and that these patterns can be utilized in various location-based applications.
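
    As sketched below, once stops have been mapped to named places, each day's set of visited places can be treated as a transaction and mined with an off-the-shelf Apriori implementation. The example data and the use of the mlxtend library are assumptions for illustration, not the setup used in the paper.

    ```python
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    # Hypothetical daily "transactions": the set of places visited each day.
    daily_visits = [["home", "office", "cafe"],
                    ["home", "office", "gym"],
                    ["home", "office", "gym", "cafe"],
                    ["home", "gym"]]

    # One-hot encode the transactions for the Apriori implementation.
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(daily_visits).transform(daily_visits),
                          columns=te.columns_)

    # Frequent itemsets, then rules such as {office} -> {gym}.
    itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
    print(rules[["antecedents", "consequents", "support", "confidence"]])
    ```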

  2. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and the ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets, and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 ± 0.2 mm for the fastigial point and 1.1 ± 0.3 mm for the base point. It took less than 6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Damien P.; Mooij, Sander; Postma, Marieke, E-mail: dpg39@cam.ac.uk, E-mail: sander.mooij@ing.uchile.cl, E-mail: mpostma@nikhef.nl

    We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the beta functions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small, mid and large field regimes. In the large field regime our results differ slightly from those found in the literature, due to a different treatment of the Goldstone bosons.

  4. Thermal power systems, point-focusing distributed receiver technology project. Volume 2: Detailed report

    NASA Technical Reports Server (NTRS)

    Lucas, J.

    1979-01-01

    Thermal or electrical power from the sun's radiated energy through Point-Focusing Distributed Receiver technology is the goal of this Project. The energy thus produced must be economically competitive with other sources. The Project supports the industrial development of technology and hardware for extracting energy from solar power to achieve the stated goal. Present studies are working to concentrate solar energy, through mirrors or lenses, onto a working fluid or gas and, through a power converter, change it into an energy source useful to man. Rankine-cycle and Brayton-cycle engines are currently being developed as the most promising energy converters for our near-future needs.

  5. Haughton-Mars Project/NASA 2006 Lunar Medical Contingency Simulation: Equipment and Methods for Medical Evacuation of an Injured Crewmember

    NASA Technical Reports Server (NTRS)

    Chappell, S. P.; Scheuring, R. A.; Jones, J. A.; Lee, P.; Comtois, J. M.; Chase, T.; Gernhardt M.; Wilkinson, N.

    2007-01-01

    Introduction: Achieving the scientific objectives of NASA's Space Exploration Vision will require human access to cratered and uneven terrain for sample acquisition, to assess geological and perhaps even biological features and experiments. Operational risk management is critical to safely conduct the anticipated tasks. This strategy, along with associated contingency plans, will be a driver of EVA system requirements. Therefore, a medical contingency EVA scenario was performed with the Haughton-Mars Project/NASA to develop belay and medical evacuation techniques for exploration and rescue, respectively. Methods: A rescue system allowing two rescuer astronauts to evacuate one incapacitated astronaut was evaluated. The system's main components were a hard-bottomed rescue litter, a hand-operated winch, rope, ground picket anchors, and a rover-winch attachment adapter. Evaluation was performed on 15-25 deg slopes of dirt with embedded rock. The winch was anchored either by the adapter to the rover or by pickets hammered into the ground. The litter was pulled over the surface by a rope attached to the winch. Results: The rescue system was used effectively to extract the injured astronaut up a slope and to a waiting rover for transport to a simulated habitat for advanced medical care, although several challenges to implementation were identified and overcome. Rotational stabilization of the winch was found to be important for maximizing the mechanical advantage of the extraction system. Discussion: Further research and testing need to be performed to fully consider synergies with other Exploration surface systems in conducting contingency operations. Structural attachment points on the surface EVA suits may be critical for evacuating an incapacitated crewmember; such attachment points could also be helpful for incapacitated crewmember transport in microgravity. Wheeled utility carts, or wheels attachable to a litter, may also aid in extraction and transport. Using parts of the rover (e.g., seats) deployed as a litter may be considered. Testing in simulated 1/6-g to determine the feasibility of winch operation and anchor establishment will further reduce implementation uncertainties.

  6. CORFIG- CORRECTOR SURFACE DESIGN SOFTWARE

    NASA Technical Reports Server (NTRS)

    Dantzler, A.

    1994-01-01

    Corrector Surface Design Software, CORFIG, calculates the optimum figure of a corrector surface for an optical system based on real ray traces. CORFIG generates the corrector figure in the form of a spline data point table and/or a list of polynomial coefficients. The number of spline data points as well as the number of coefficients is user specified. First, the optical system's parameters (thickness, radii of curvature, etc.) are entered. CORFIG will trace the outermost axial real ray through the uncorrected system to determine approximate radial limits for all rays. Then, several real rays are traced backwards through the system from the image to the surface that originally followed the object, within these radial limits. At this first surface, the local curvature is adjusted on a small scale to direct the rays toward the object, thus removing any accumulated aberrations. For each ray traced, this adjustment will be different, so that at the end of this process the resultant surface is made up of many local curvatures. The equations that describe these local surfaces, expressed as high order polynomials, are then solved simultaneously to yield the final surface figure, from which data points are extracted. Finally, a spline table or list of polynomial coefficients is extracted from these data points. CORFIG is intended to be used in the late stages of optical design. The system's design must have at least a good paraxial foundation. Preferably, the design should be at a stage where traditional methods of Seidel aberration correction will not bring about the required image spot size specification. CORFIG will read the system parameters of such a design and calculate the optimum figure for the first surface such that all of the original parameters remain unchanged. Depending upon the system, CORFIG can reduce the RMS image spot radius by a factor of 5 to 25. The original parameters (magnification, back focal length, etc.) are maintained because all rays upon which the corrector figure is based are traced within the bounds of the original system's outermost ray. For this reason the original system must have a certain degree of integrity. CORFIG optimizes the corrector surface figure for on-axis images at a single wavelength only. However, it has been demonstrated many times that CORFIG's method also significantly improves the quality of field images and images formed from wavelengths other than the center wavelength. CORFIG is written completely in VAX FORTRAN. It has been implemented on a DEC VAX series computer under VMS with a central memory requirement of 55 K bytes. This program was developed in 1986.

  7. Geometric correction and digital elevation extraction using multiple MTI datasets

    USGS Publications Warehouse

    Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.

    2007-01-01

    Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collects is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
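
    The geometric core of multi-look elevation extraction, intersecting three or more image rays in object space, can be sketched as a small least-squares problem. The ray origins and directions below are hypothetical; the paper's actual MTI sensor model is considerably more involved.

      import numpy as np

      def intersect_rays(origins, directions):
          """Least-squares 3D point closest to a set of rays (origin + t*dir)."""
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(origins, directions):
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
              A += P
              b += P @ o
          return np.linalg.solve(A, b)

      # Three hypothetical looks at the same ground point from different passes.
      origins = [np.array([0., 0., 500e3]), np.array([100e3, 0., 500e3]),
                 np.array([0., 120e3, 500e3])]
      target = np.array([10e3, 20e3, 1.2e3])
      dirs = [target - o for o in origins]
      print(intersect_rays(origins, dirs))   # recovers the target, here exactly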

  8. Asymptotic safety of gravity-matter systems

    NASA Astrophysics Data System (ADS)

    Meibohm, J.; Pawlowski, J. M.; Reichert, M.

    2016-04-01

    We study the ultraviolet stability of gravity-matter systems for general numbers of minimally coupled scalars and fermions. This is done within the functional renormalization group setup put forward in [N. Christiansen, B. Knorr, J. Meibohm, J. M. Pawlowski, and M. Reichert, Phys. Rev. D 92, 121501 (2015)] for pure gravity. It includes full dynamical propagators and a genuine dynamical Newton's coupling, which is extracted from the graviton three-point function. We find ultraviolet stability of general gravity-fermion systems. Gravity-scalar systems are also found to be ultraviolet stable within validity bounds for the chosen generic class of regulators, based on the size of the anomalous dimension. Remarkably, the ultraviolet fixed points for the dynamical couplings are found to be significantly different from those of their associated background counterparts, once matter fields are included. In summary, the asymptotic safety scenario does not put constraints on the matter content of the theory within the validity bounds for the chosen generic class of regulators.

  9. Novel imaging closed loop control strategy for heliostats

    NASA Astrophysics Data System (ADS)

    Bern, Gregor; Schöttl, Peter; Heimsath, Anna; Nitz, Peter

    2017-06-01

    Central Receiver Systems use up to thousands of heliostats to concentrate solar radiation. The precise control of heliostat aiming points is crucial not only for efficiency but also for reliable plant operation. Besides the calibration of open loop control systems, closed loop tracking strategies are being developed to achieve precise and efficient aiming. The need for cost reductions in the heliostat field intensifies the motivation for economical closed loop control systems. This work introduces an approach for a closed loop heliostat tracking strategy using image analysis and signal modulation. The approach aims at extracting the heliostat focal spot position within the receiver domain by means of a centralized remote vision system decoupled from the rough conditions close to the focal area. By taking an image sequence of the receiver while modulating a signal onto different heliostats, their aiming points are retrieved. The work describes the methodology and shows first results from simulations and practical tests performed at small scale, motivating further investigation and deployment.
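
    One plausible reading of the signal-modulation idea is lock-in style demodulation: each heliostat's beam is modulated with a known code, and correlating the receiver image sequence with that code isolates that heliostat's focal spot. The sketch below uses synthetic frames and is an assumption about the implementation, not the authors' algorithm.

      import numpy as np

      T, H, W = 64, 40, 60
      rng = np.random.default_rng(0)

      # Known +/-1 modulation code assigned to one heliostat.
      code = rng.choice([-1.0, 1.0], size=T)

      # Synthetic receiver images: static background plus a small spot whose
      # brightness follows the code, centred near pixel (12, 30).
      frames = 100.0 + rng.normal(0, 1, (T, H, W))
      frames[:, 10:15, 28:33] += 5.0 * code[:, None, None]

      # Correlate each pixel's time series with the code (zero-mean lock-in).
      demod = np.tensordot(code - code.mean(), frames, axes=(0, 0)) / T
      y, x = np.unravel_index(np.argmax(demod), demod.shape)
      print("recovered spot centre:", y, x)   # ~ (12, 30)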

  10. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experimental results indicate that the vanishing point refinement process can significantly improve camera calibration parameters, and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
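
    The estimation step can be illustrated with a standard least-squares construction: each detected segment gives a homogeneous line vector, and the vanishing point minimizing the sum of squared incidences is the smallest right singular vector of the stacked line matrix. The segments below are made up, and this is not the paper's refinement algorithm.

      import numpy as np

      def vanishing_point(segments):
          """segments: list of ((x1, y1), (x2, y2)) from roughly parallel 3D lines."""
          L = []
          for (x1, y1), (x2, y2) in segments:
              # Homogeneous line through the two endpoints: l = p1 x p2.
              l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
              L.append(l / np.linalg.norm(l))
          # v minimizes ||L v||: smallest right singular vector of L.
          _, _, Vt = np.linalg.svd(np.asarray(L))
          v = Vt[-1]
          return v[:2] / v[2]                  # back to pixel coordinates

      segs = [((0, 0), (100, 10)), ((0, 50), (100, 55)), ((0, 100), (100, 100.5))]
      print(vanishing_point(segs))             # segments converge near (1000, 100)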

  11. Molecular Biogeochemistry of Modern and Ancient Marine Microbes

    DTIC Science & Technology

    2010-02-01

    number distributions in the late Archean bitumens fall within the range of compositions of Phanerozoic petroleum ( gray line in Fig. 7), suggesting that...bitumen extracts. The gray line indicates the range of compositions observed in Phanerozoic petroleum systems, from the GeoMark Reservoir Fluid Database...than that of mRNA are attributable to noisy, non-cycling protein timecourses ( gray points above 1:1 line). For clarity, only genes whose protein

  12. Coley's Lessons Remembered: Augmenting Mistletoe Therapy.

    PubMed

    Orange, Maurice; Reuter, Uwe; Hobohm, Uwe

    2016-12-01

    The following four observations point in the same direction, namely that there is an unleveraged potential for stimulating the innate immune system against cancer: (1) experimental treatments with bacterial extracts more than 100 years ago by Coley and contemporaries, (2) a positive correlation between spontaneous regressions and febrile infection, (3) epidemiological data suggesting an inverse correlation between a history of infection and the likelihood of developing cancer, and (4) our recent finding that a cocktail of pattern recognition receptor ligands (PRRLs) can eradicate solid tumors in mouse cancer models if applied metronomically. Because the main immunostimulating component of mistletoe extract (ME), mistletoe lectin, has been shown to be a PRRL as well, we suggest applying ME in combination with additional PRRLs. Additional PRRLs can be found in approved drugs already on the market. Therefore, augmentation of ME might be feasible, with the aim of reattaining the old successes using approved drugs rather than bacterial extracts. © The Author(s) 2016.

  13. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    NASA Astrophysics Data System (ADS)

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-03-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.
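
    The key ingredient, assigning individual data points to multiple states simultaneously, can be loosely illustrated by a soft clustering in which membership probabilities follow Boltzmann weights at a temperature set by the measurement error. The toy two-state trace below is synthetic, and this simplification is only in the spirit of the rate-distortion treatment, not the authors' method.

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic two-state time series with Gaussian measurement noise.
      states = np.array([0.3, 0.7])
      seq = rng.choice(2, size=500)
      x = states[seq] + rng.normal(0, 0.05, size=500)

      centers = np.array([0.2, 0.8])          # initial guesses for state values
      beta = 1.0 / (2 * 0.05**2)              # inverse temperature from the error
      for _ in range(50):
          # Soft membership: p(state | x_i) proportional to exp(-beta*(x_i - c)^2).
          d = (x[:, None] - centers[None, :])**2
          w = np.exp(-beta * d)
          w /= w.sum(axis=1, keepdims=True)
          # Re-estimate state values from the weighted averages.
          centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
      print(centers)                          # approximately [0.3, 0.7]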

  14. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as the multiplicity, the measurement precision and the distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  15. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    PubMed Central

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-01-01

    For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS based method, the sampling time can be decreased by 40% by acquiring nearly one-third of the original sampling points. Several experiments were carried out to validate the performance of the CS based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
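
    The recovery side of such a CS scheme can be sketched with plain ISTA under an l1 penalty, reconstructing a synthetic sparse signal from roughly one-third of the samples; the paper's sensing matrix and reconstruction algorithm may differ.

      import numpy as np

      rng = np.random.default_rng(2)
      n, m, k = 128, 43, 5                    # signal length, ~1/3 measurements
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

      A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
      y = A @ x_true                              # compressed measurements

      # ISTA: iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*|x|_1.
      lam, step = 0.01, 1.0 / np.linalg.norm(A, 2)**2
      x = np.zeros(n)
      for _ in range(500):
          g = x - step * A.T @ (A @ x - y)            # gradient step
          x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
      print("max error:", np.max(np.abs(x - x_true)))  # approximate recovery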

  16. A cost-effective laser scanning method for mapping stream channel geometry and roughness

    NASA Astrophysics Data System (ADS)

    Lam, Norris; Nathanson, Marcus; Lundgren, Niclas; Rehnström, Robin; Lyon, Steve

    2015-04-01

    In this pilot project, we combine an Arduino Uno and a SICK LMS111 outdoor laser ranging camera to acquire high resolution topographic area scans for a stream channel. The microprocessor and imaging system were installed in a custom gondola and suspended from a wire cable system. To demonstrate the system's capabilities for capturing stream channel topography, a small stream (< 2 m wide) in the Krycklan Catchment Study was temporarily diverted and scanned. Area scans along the stream channel resulted in a point spacing of 4 mm and a point cloud density of 5600 points/m2 for the 5 m by 2 m area. A grain size distribution of the streambed material was extracted from the point cloud using a moving window, local maxima search algorithm. The median, 84th and 90th percentiles (common metrics to describe channel roughness) of this distribution were found to be within the range of measured values, while the largest modelled element was approximately 35% smaller than its measured counterpart. The laser scanning system captured grain sizes between 30 mm and 255 mm (coarse gravel/pebbles and boulders based on the Wentworth (1922) scale). This demonstrates that our system was capable of resolving both large-scale geometry (e.g. bed slope and stream channel width) and small-scale channel roughness elements (e.g. coarse gravel/pebbles and boulders) for the study area. We further show that the point cloud resolution is suitable for estimating ecohydraulic parameters such as Manning's n and hydraulic radius. Although more work is needed to fine-tune our system's design, these preliminary results are encouraging, specifically for those with a limited operational budget.
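
    The moving-window, local-maxima grain search can be sketched with scipy.ndimage on a gridded elevation model; the window size and prominence threshold below are placeholders, not the values used in the study.

      import numpy as np
      from scipy.ndimage import maximum_filter

      rng = np.random.default_rng(3)
      # Synthetic 4 mm-resolution elevation grid of a streambed (metres).
      z = rng.normal(0, 0.005, (500, 250))
      z[200:220, 100:120] += 0.08             # a boulder-sized bump

      # A cell is a grain-top candidate if it is the maximum of its moving
      # window and stands out from the local background by a minimum prominence.
      win = 9                                  # ~36 mm moving window
      local_max = (z == maximum_filter(z, size=win))
      prominent = z > (np.median(z) + 0.02)
      rows, cols = np.nonzero(local_max & prominent)
      print(len(rows), "candidate grain tops")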

  17. Assessment of metal(loid)s phytoavailability in intensive agricultural soils by the application of single extractions to rhizosphere soil.

    PubMed

    Pinto, Edgar; Almeida, Agostinho A; Ferreira, Isabel M P L V O

    2015-03-01

    The influence of soil properties on the phytoavailability of metal(loid)s in a soil-plant system was evaluated. The content of extractable metal(loid)s obtained by using different extraction methods was also compared. To perform this study, a test plant (Lactuca sativa) and rhizosphere soil were sampled at 5 different time points (2, 4, 6, 8 and 10 weeks of plant growth). Four extraction methods (Mehlich 3, DTPA, NH4NO3 and CaCl2) were used. Significant positive correlations between the soil extractable content and lettuce shoot content were obtained for several metal(loid)s. The extraction with NH4NO3 showed the highest number of strong positive correlations, indicating the suitability of this method for estimating metal(loid) phytoavailability. The soil CEC, OM, pH, texture and oxides content significantly influenced the distribution of metal(loid)s between the phytoavailable and non-phytoavailable fractions. A reliable prediction model for Cr, V, Ni, As, Pb, Co, Cd, and Sb phytoavailability was obtained considering the amount of metal(loid) extracted by the NH4NO3 method and the main soil properties. This work shows that the analysis of rhizosphere soil by single extraction methods is a reliable approach to estimate metal(loid) phytoavailability. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery

    NASA Astrophysics Data System (ADS)

    Cherkauer, Keith; Hearst, Anthony

    2015-04-01

    Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS), is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were setup in the fields before flights and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
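
    Extracting a plot by its map coordinates from correctly geo-registered imagery reduces to inverting the mosaic's affine geotransform; a minimal sketch with a GDAL-style geotransform and hypothetical coordinates, not the study's data.

      import numpy as np

      def map_to_pixel(gt, x, y):
          """Invert a GDAL-style geotransform (no rotation) to pixel indices."""
          col = int((x - gt[0]) / gt[1])
          row = int((y - gt[3]) / gt[5])
          return row, col

      # Hypothetical mosaic: 2 cm pixels, origin at (500000, 4400000) in UTM.
      gt = (500000.0, 0.02, 0.0, 4400000.0, 0.0, -0.02)
      mosaic = np.zeros((1000, 1000, 3), dtype=np.uint8)

      # A 6.0 m2 plot (3 m x 2 m) given by its corner map coordinates.
      r0, c0 = map_to_pixel(gt, 500010.0, 4399992.0)   # upper-left corner
      r1, c1 = map_to_pixel(gt, 500013.0, 4399990.0)   # lower-right corner
      plot = mosaic[r0:r1, c0:c1]
      print(plot.shape)                                 # (100, 150, 3)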

  19. Dynamic Gate Product and Artifact Generation from System Models

    NASA Technical Reports Server (NTRS)

    Jackson, Maddalena; Delp, Christopher; Bindschadler, Duane; Sarrel, Marc; Wollaeger, Ryan; Lam, Doris

    2011-01-01

    Model Based Systems Engineering (MBSE) is gaining acceptance as a way to formalize systems engineering practice through the use of models. The traditional method of producing and managing a plethora of disjointed documents and presentations ("Power-Point Engineering") has proven both costly and limiting as a means to manage the complex and sophisticated specifications of modern space systems. We have developed a tool and method to produce sophisticated artifacts as views and by-products of integrated models, allowing us to minimize the practice of "Power-Point Engineering" from model-based projects and demonstrate the ability of MBSE to work within and supersede traditional engineering practices. This paper describes how we have created and successfully used model-based document generation techniques to extract paper artifacts from complex SysML and UML models in support of successful project reviews. Use of formal SysML and UML models for architecture and system design enables production of review documents, textual artifacts, and analyses that are consistent with one-another and require virtually no labor-intensive maintenance across small-scale design changes and multiple authors. This effort thus enables approaches that focus more on rigorous engineering work and less on "PowerPoint engineering" and production of paper-based documents or their "office-productivity" file equivalents.

  20. Structure learning in action

    PubMed Central

    Braun, Daniel A.; Mehring, Carsten; Wolpert, Daniel M.

    2010-01-01

    ‘Learning to learn’ phenomena have been widely investigated in cognition, perception and more recently also in action. During concept learning tasks, for example, it has been suggested that characteristic features are abstracted from a set of examples with the consequence that learning of similar tasks is facilitated—a process termed ‘learning to learn’. From a computational point of view such an extraction of invariants can be regarded as learning of an underlying structure. Here we review the evidence for structure learning as a ‘learning to learn’ mechanism, especially in sensorimotor control where the motor system has to adapt to variable environments. We review studies demonstrating that common features of variable environments are extracted during sensorimotor learning and exploited for efficient adaptation in novel tasks. We conclude that structure learning plays a fundamental role in skill learning and may underlie the unsurpassed flexibility and adaptability of the motor system. PMID:19720086

  1. Contaminants in ventilated filling boxes

    NASA Astrophysics Data System (ADS)

    Bolster, D. T.; Linden, P. F.

    While energy efficiency is important, the adoption of energy-efficient ventilation systems still requires the provision of acceptable indoor air quality. Many low-energy systems, such as displacement or natural ventilation, rely on temperature stratification within the interior environment, always extracting the warmest air from the top of the room. Understanding buoyancy-driven convection in a confined ventilated space is key to understanding the flow that develops with many of these modern low-energy ventilation schemes. In this work we study the transport of an initially uniformly distributed passive contaminant in a displacement-ventilated space. Representing a heat source as an ideal source of buoyancy, analytical and numerical models are developed that allow us to compare the average efficiency of contaminant removal between traditional mixing and modern low-energy systems. A set of small-scale analogue laboratory experiments was also conducted to further validate our analytical and numerical solutions. We find that on average traditional and low-energy ventilation methods are similar with regard to pollutant flushing efficiency. This is because the concentration being extracted from the system at any given time is approximately the same for both systems. However, very different vertical concentration gradients exist. For the low-energy system, a peak in contaminant concentration occurs at the temperature interface that is established within the space. This interface is typically designed to sit at some intermediate height in the space. Since this peak does not coincide with the extraction point, displacement ventilation does not offer the same benefits for pollutant flushing as it does for buoyancy removal.

  2. Evolution of Information Management at the GSFC Earth Sciences (GES) Data and Information Services Center (DISC): 2006-2007

    NASA Technical Reports Server (NTRS)

    Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen

    2009-01-01

    Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution was focused on replacing the EOSDIS Core System (ECS) by reusing the in-house developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. The transition was completed in December 2007.

  3. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

    This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie point pairs according to geographic characteristics from such heterogeneous images. Since there are big differences between such heterogeneous images with respect to texture and corner features, more detailed analyses are performed to find similarities and differences between high resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Based on these linear features, crossings and corners extracted from them are chosen as GCPs. On the other hand, a similar method is used to find the same features in DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the above two GCP groups. Experiments showed that the method can extract GCPs from such images with a reasonable RMS error.
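
    The final matching step can be sketched with SciPy's directed Hausdorff distance between the two candidate GCP sets; the points below are synthetic, and the paper's exact matching rule may differ.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      rng = np.random.default_rng(4)
      # Candidate GCPs extracted from the image and from the DRG (pixel coords).
      gcp_img = rng.uniform(0, 1000, (40, 2))
      gcp_drg = gcp_img + rng.normal(0, 1.5, (40, 2))   # same points, small noise

      # Symmetric Hausdorff distance: max of the two directed distances.
      d = max(directed_hausdorff(gcp_img, gcp_drg)[0],
              directed_hausdorff(gcp_drg, gcp_img)[0])
      print("Hausdorff distance (px):", round(d, 2))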

  4. Bedside wellness--development of a virtual forest rehabilitation system.

    PubMed

    Ohsuga, M; Tatsuno, Y; Shimono, F; Hirasawa, K; Oyama, H; Okamura, H

    1998-01-01

    The present study aims at the development of a new-concept system that will contribute toward improving the quality of life of bedridden patients and the elderly. The results of a basic study showed the possibility of a virtual reality system reducing stress and pain, provided VR sickness does not occur. A Bedside Wellness System, which lets a person experience a virtual forest walk and provides a rehabilitation facility, was proposed based on the basic study and developed. An experiment to assess the developed system using healthy subjects was conducted. The data suggested positive effects of the system; however, some points to be improved were also identified. After a few improvements, the system will be available for clinical use.

  5. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and is widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
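
    A minimal CD1 sketch for the Gaussian-visible/Bernoulli-hidden case (unit visible variance, biases omitted, synthetic data) illustrates numerically that the learned weights align with the large principal component; this is a toy, not the paper's analytical derivation.

      import numpy as np

      rng = np.random.default_rng(5)
      n_v, n_h, lr = 8, 4, 0.01
      W = rng.normal(0, 0.1, (n_h, n_v))      # weights; biases omitted for brevity

      def sigmoid(a):
          return 1.0 / (1.0 + np.exp(-a))

      # Synthetic Gaussian visible data with one dominant principal component u.
      u = rng.normal(0, 1, n_v); u /= np.linalg.norm(u)
      data = rng.normal(0, 1, (2000, n_v)) + 3.0 * rng.normal(0, 1, (2000, 1)) * u

      for v0 in data:                          # one CD1 step per sample
          p_h0 = sigmoid(W @ v0)               # hidden probabilities given data
          h0 = (rng.random(n_h) < p_h0) * 1.0  # sample hidden units
          v1 = W.T @ h0 + rng.normal(0, 1, n_v)   # Gaussian visible reconstruction
          p_h1 = sigmoid(W @ v1)               # hidden probs given reconstruction
          W += lr * (np.outer(p_h0, v0) - np.outer(p_h1, v1))
      # Rows of W should align with the dominant principal direction u.
      print(np.abs(W @ u))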

  6. Quantum Quench Dynamics

    NASA Astrophysics Data System (ADS)

    Mitra, Aditi

    2018-03-01

    Quench dynamics is an active area of study encompassing condensed matter physics and quantum information, with applications to cold-atomic gases and pump-probe spectroscopy of materials. Recent theoretical progress in studying quantum quenches is reviewed. Quenches in interacting one-dimensional systems as well as systems in higher spatial dimensions are covered. The appearance of nontrivial steady states following a quench in exactly solvable models is discussed, and the stability of these states to perturbations is described. Proper conserving approximations needed to capture the onset of thermalization at long times are outlined. The appearance of universal scaling for quenches near critical points and the role of the renormalization group in capturing the transient regime are reviewed. Finally, the effect of quenches near critical points on the dynamics of entanglement entropy and entanglement statistics is discussed. The extraction of critical exponents from the entanglement statistics is outlined.

  7. Automated endoscopic navigation and advisory system from medical image

    NASA Astrophysics Data System (ADS)

    Kwoh, Chee K.; Khan, Gul N.; Gillies, Duncan F.

    1999-05-01

    In this paper, we present a review of the research conducted by our group to design an automatic endoscope navigation and advisory system. The whole system can be viewed as a two-layer system. The first layer is at the signal level, which consists of the processing that will be performed on a series of images to extract all the identifiable features. The information is purely dependent on what can be extracted from the 'raw' images. At the signal level, the first task is performed by detecting a single dominant feature, the lumen. A few methods of identifying the lumen are proposed. The first method uses contour extraction. Contours are extracted by edge detection, thresholding and linking. This method requires images to be divided into overlapping squares (8 by 8 or 4 by 4) from which line segments are extracted by using a Hough transform. Perceptual criteria such as proximity, connectivity, similarity in orientation, contrast and edge pixel intensity are used to group edges, both strong and weak. This approach is called perceptual grouping. The second method is based on region extraction using a split-and-merge approach on spatial domain data. An n-level (for a 2^n by 2^n image) quadtree-based pyramid structure is constructed to find the most homogeneous large dark region, which in most cases corresponds to the lumen. The algorithm constructs the quadtree from the bottom (pixel) level upward, recursively, and computes the mean and variance of the image regions corresponding to quadtree nodes. On reaching the root, the largest uniform seed region whose mean corresponds to the lumen is selected and grown by merging it with its neighboring regions. In addition to the use of two-dimensional information in the form of regions and contours, three-dimensional shape can provide additional information that will enhance the system's capabilities. Shape or depth information from an image is estimated by various methods. A particular technique suitable for endoscopy is shape from shading, which is developed to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) will be a function of the direction in which the majority of the normal vectors of the shape are pointing. The second layer is the control layer, and at this level a decision model must be built for the endoscope navigation and advisory system. The system that we built uses models of probabilistic networks to create a basic artificial intelligence system for navigation in the colon. We have constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that variables starting from the same parent are conditionally independent. However, this may not hold and would give rise to incorrect inferences. In these cases, we propose the creation of a hidden node to modify the network topology, which in effect models the dependency of correlated variables, to solve the problem. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method which minimizes an objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With the above two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.
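
    The bottom-up quadtree statistics used in the second lumen-detection method can be sketched with 2x2 mean/variance pooling; the homogeneity and darkness thresholds below are placeholders, and the merge step is omitted.

      import numpy as np

      def quadtree_stats(img):
          """Bottom-up per-level block means and variances for a 2^n x 2^n image."""
          levels = [(img.astype(float), img.astype(float)**2)]
          while levels[-1][0].shape[0] > 1:
              s, s2 = levels[-1]
              # Average the four 2x2 children into each parent node.
              s = 0.25 * (s[0::2, 0::2] + s[0::2, 1::2] + s[1::2, 0::2] + s[1::2, 1::2])
              s2 = 0.25 * (s2[0::2, 0::2] + s2[0::2, 1::2] + s2[1::2, 0::2] + s2[1::2, 1::2])
              levels.append((s, s2))
          return [(m, sq - m**2) for m, sq in levels]   # (mean, variance) per level

      # Synthetic 64x64 endoscopic-like frame: bright wall, dark lumen patch.
      img = np.full((64, 64), 180.0)
      img[8:40, 16:48] = 30.0
      stats = quadtree_stats(img)
      # Largest homogeneous dark block: scan from the coarsest level down.
      for lvl in range(len(stats) - 1, -1, -1):
          mean, var = stats[lvl]
          ok = (var < 25.0) & (mean < 60.0)
          if ok.any():
              print("seed block at level", lvl, "index", np.argwhere(ok)[0])
              break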

  8. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationship and height variation analysis is adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationship and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation and road classes.

  9. Modelisation et optimisation des systemes energetiques a l'aide d'algorithmes evolutifs

    NASA Astrophysics Data System (ADS)

    Hounkonnou, Sessinou M. William

    Optimization of thermal and nuclear plants has economic as well as environmental advantages. Therefore, the search for new operating points and the use of new tools to achieve this kind of optimization are the subject of many studies. In this context, this project is intended to optimize energy systems, specifically the secondary loop of the Gentilly-2 nuclear plant, using the extractions of the high- and low-pressure turbines as well as the extraction of the mixture coming from the steam generator. A detailed thermodynamic model of the various pieces of equipment of the secondary loop, such as the feedwater heaters, the moisture separator-reheater, the deaerator, the condenser and the turbine, is carried out. We use the Matlab software (version R2007b, 2007) with a library for the thermodynamic properties of water and steam (XSteam for Matlab, Holmgren, 2006). A model of the secondary loop is then obtained by assembling the different pieces of equipment. Simulation of the equipment and the complete cycle enabled us to derive two objective functions, namely the net output and the efficiency, which evolve in opposite directions as the extractions vary. Due to the complexity of the problem, we use a method based on genetic algorithms for the optimization. More precisely, we used a tool developed at the Institut de genie nucleaire named BEST (Boundary Exploration Search Technique), implemented in VBA* (Visual Basic for Applications), for its ability to converge more quickly and to carry out a more exhaustive search at the border of the optimal solutions. The use of DDE (Dynamic Data Exchange) enables us to link the simulator and the optimizer. The results obtained show that there still exist several combinations of extractions which make it possible to obtain a better operating point for improving the performance of the Gentilly-2 power station secondary loop. *Trademark of Microsoft

  10. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  11. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.

    PubMed

    Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height of the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
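
    The DBH step, RANSAC circle fitting of the points in a height bin, can be sketched as follows; the stem slice is synthetic and the inlier tolerance is a placeholder, not the study's setting.

      import numpy as np

      def fit_circle_3pts(p1, p2, p3):
          """Circle through three 2D points via the perpendicular-bisector system."""
          A = 2 * np.array([p2 - p1, p3 - p1])
          b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
          c = np.linalg.solve(A, b)            # centre
          return c, np.linalg.norm(p1 - c)     # centre, radius

      def ransac_circle(pts, n_iter=200, tol=0.01, rng=np.random.default_rng(6)):
          best, best_inliers = None, 0
          for _ in range(n_iter):
              s = pts[rng.choice(len(pts), 3, replace=False)]
              try:
                  c, r = fit_circle_3pts(*s)
              except np.linalg.LinAlgError:    # collinear sample, skip it
                  continue
              inl = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
              if inl > best_inliers:
                  best, best_inliers = (c, r), inl
          return best

      # Synthetic height-bin slice of stem points: radius 0.15 m plus outliers.
      rng = np.random.default_rng(7)
      t = rng.uniform(0, 2 * np.pi, 300)
      pts = np.c_[0.15 * np.cos(t), 0.15 * np.sin(t)] + rng.normal(0, 0.003, (300, 2))
      pts = np.vstack([pts, rng.uniform(-0.5, 0.5, (30, 2))])   # noise points
      c, r = ransac_circle(pts)
      print("DBH ~", round(2 * r, 3), "m")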

  13. Platform development for merging various information sources for water management: methodological, technical and operational aspects

    NASA Astrophysics Data System (ADS)

    Galvao, Diogo

    2013-04-01

    As a result of various economic, social and environmental factors, the importance of water resources is increasing at a global scale. As a consequence, there is also an increasing need for methods and systems capable of efficiently managing and combining the rich and heterogeneous data available that concern, directly or indirectly, these water resources, such as in-situ monitoring station data, Earth Observation images and measurements, meteorological model forecasts and hydrological modeling. Under the scope of the MyWater project, we developed a water management system capable of satisfying just such needs, built on a flexible platform capable of accommodating future challenges, not only in terms of sources of data but also of the applicable models to extract information from them. From a methodological point of view, the MyWater platform obtains data from distinct sources and in distinct formats, be they satellite images or meteorological model forecasts, and transforms and combines them in ways that allow them to be fed to a variety of hydrological models (such as MOHID Land, SIMGRO, etc.), which themselves can also be combined, using approaches such as those advocated by the OpenMI standard, to extract information in an automated and time-efficient manner. Such an approach brings its own challenges, and further research was developed under this project on the best ways to combine such data and on novel approaches to hydrological modeling (like the PriceXD model). From a technical point of view, the MyWater platform is structured according to a classical SOA architecture, with a flexible, object-oriented, modular backend service responsible for all the model process management and data treatment, while the extracted information can be accessed through a variety of frontends, from a web portal and a desktop client down to mobile phone and tablet applications. From an operational point of view, a user can not only view the model results in graphically rich user interfaces, but also interact with them in ways that allow extraction of their own information. The platform was applied to a variety of case studies in the Netherlands, Greece, Portugal, Brazil and Africa, to verify the practicality, accuracy and value that it brings to end users and stakeholders.

  14. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete in geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). This fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology that avoids the difficulties of access and guarantees safe data survey conditions. Such a methodology will give clearer answers to the needs of rock mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these parameters are comparable to the manually extracted ones, their quality is inferior to that of the parameters extracted from the TLS point cloud.

  15. Continuously Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Tuo, L.; Zlatanova, S.

    2012-07-01

    The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that common control points can be used by each station and error accumulation is thus avoided within a section. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete; a vertical section is therefore computed via quadric fitting of the vicinity of interest, instead of fitting the whole tunnel model, and is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC in order to filter out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computational efficiency. The experimental fitting accuracy analysis shows that the maximum deviation between interpolated points and real points is 1.5 mm, and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparing adjacent fitting radii. The maximum error is 6 mm, while the minimum is 1 mm. The computational cost of vertical section extraction is within 3 seconds per section, which proves high efficiency.

  16. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  17. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature region. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time consumed in computing energy functions. Another part of the FAPs, the 3D rigid head motion vectors, is estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.

  18. Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.

    2018-04-01

    A new method is proposed in this paper for instantaneous waterline extraction, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating LiDAR data and the surface area of the image; initial waterlines are extracted by the α-shape algorithm; a region growing algorithm is then applied for coastline refinement, with a growth rule integrating the intensity and topography of the LiDAR data; finally, the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.

  19. Extraction and Separation Modeling of Orion Test Vehicles with ADAMS Simulation

    NASA Technical Reports Server (NTRS)

    Fraire, Usbaldo, Jr.; Anderson, Keith; Cuthbert, Peter A.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) project has increased efforts to demonstrate the performance of fully integrated parachute systems at both higher dynamic pressures and in the presence of wake fields using a Parachute Compartment Drop Test Vehicle (PCDTV) and a Parachute Test Vehicle (PTV), respectively. Modeling the extraction and separation events has proven challenging, and an understanding of the physics is required to reduce the risk of separation malfunctions. The need for extraction and separation modeling is critical to a successful CPAS test campaign. Current PTV-alone simulations, such as the Decelerator System Simulation (DSS), require accurate initial conditions (ICs) drawn from a separation model. Automatic Dynamic Analysis of Mechanical Systems (ADAMS), a Commercial Off the Shelf (COTS) tool, was employed to provide insight into the multi-body six degree of freedom (DOF) interaction between parachute test hardware and external and internal forces. Components of the model include a composite extraction parachute, the primary vehicle (PTV or PCDTV), a platform cradle, a release mechanism, the aircraft ramp, and a programmer parachute with attach points. Independent aerodynamic forces were applied to the mated test vehicle/platform cradle and to the separated test vehicle and platform cradle. The aero coefficients were determined from real-time lookup tables as functions of both angle of attack (α) and sideslip (β). The atmospheric properties were also determined from a real-time lookup table characteristic of the Yuma Proving Ground (YPG) atmosphere for the planned test month. Representative geometries were constructed in ADAMS with measured mass properties generated for each independent vehicle. Derived smart separation parameters were included in ADAMS as sensors, with defined pitch and pitch rate criteria used to refine inputs to analogous avionics systems for optimal separation conditions. Key design variables were dispersed in a Monte Carlo analysis to provide the maximum expected range of the state variables at programmer deployment, to be used as ICs in DSS. Extensive comparisons were made with the Decelerator System Simulation Application (DSSA) to validate the mated portion of the ADAMS extraction trajectory. Results of the comparisons improved the fidelity of ADAMS with a ramp pitch profile update from DSSA. Post-test reconstructions resulted in improvements to extraction parachute drag area knock-down factors, extraction line modeling, and the inclusion of ball-to-socket attachments used as a release mechanism on the PTV. Modeling of two extraction parachutes was based on United States Air Force (USAF) tow test data and integrated into ADAMS for nominal and Monte Carlo trajectory assessments. Video overlay of ADAMS animations and actual C-12 chase plane test videos supported analysis and observation of extraction and separation events. The COTS ADAMS simulation has been integrated with NASA-based simulations to provide complete end-to-end trajectories with a focus on the extraction, separation, and programmer deployment sequence. The flexibility of modifying ADAMS inputs has proven useful for sensitivity studies and extraction/separation modeling efforts.

  20. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    PubMed

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe patterns and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcams and then used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were then merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach.
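
    The phase-recovery core of an N-step phase-shifting scheme (here nine frames with a step of 2π/9, as above) reduces to an arctangent of sine- and cosine-weighted sums; a compact sketch on synthetic fringes, not the authors' calibration or unwrapping code.

      import numpy as np

      N = 9                                    # nine frames, step 2*pi/9
      deltas = 2 * np.pi * np.arange(N) / N

      # Synthetic wrapped phase map and the corresponding fringe images.
      yy, xx = np.mgrid[0:200, 0:200]
      phi_true = 0.05 * xx + 0.5 * np.sin(yy / 30.0)
      frames = [100 + 50 * np.cos(phi_true + d) for d in deltas]

      # N-step phase-shifting formula: arctangent of weighted frame sums.
      num = -sum(f * np.sin(d) for f, d in zip(frames, deltas))
      den = sum(f * np.cos(d) for f, d in zip(frames, deltas))
      phi = np.arctan2(num, den)               # wrapped to (-pi, pi]

      err = np.angle(np.exp(1j * (phi - phi_true)))
      print("max wrapped-phase error:", np.abs(err).max())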

  1. Ion energy distribution near a plasma meniscus with beam extraction for multi element focused ion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, Jose V.; Paul, Samit; Bhattacharjee, Sudeep

    2010-05-15

    An earlier study of the axial ion energy distribution in the extraction region (plasma meniscus) of a compact microwave plasma ion source showed that the axial ion energy spread near the meniscus is small (~5 eV) and comparable to that of a liquid metal ion source, making it a promising candidate for focused ion beam (FIB) applications [J. V. Mathew and S. Bhattacharjee, J. Appl. Phys. 105, 96101 (2009)]. In the present work we have investigated the radial ion energy distribution (IED) under the influence of beam extraction. Initially a single Einzel lens system was used for beam extraction with potentials up to -6 kV for obtaining parallel beams. In situ measurements of the IED with extraction voltages up to -5 kV indicate that beam extraction has a weak influence on the energy spread (±0.5 eV), which is of significance from the point of view of FIB applications. It is found that by reducing the geometrical acceptance angle at the ion energy analyzer probe, a close-to-unidirectional distribution can be obtained with a spread that is smaller by at least 1 eV.

  2. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336

  3. Automated extraction and semantic analysis of mutation impacts from the biomedical literature

    PubMed Central

    2012-01-01

    Background: Mutations as sources of evolution have long been the focus of attention in the biomedical literature. Accessing mutational information and the impacts of mutations on protein properties facilitates research in various domains, such as enzymology and pharmacology. However, manually curating the rich and fast-growing repository of biomedical literature is expensive and time-consuming. As a solution, text mining approaches have increasingly been deployed in the biomedical domain. While the detection of single-point mutations is well covered by existing systems, challenges still exist in grounding impacts to their respective mutations and in recognizing the affected protein properties, in particular kinetic and stability properties together with physical quantities. Results: We present an ontology model for mutation impacts, together with a comprehensive text mining system for extracting and analysing mutation impact information from full-text articles. Organisms, as sources of proteins, are extracted to help disambiguate genes and proteins. Our system then detects mutation series to correctly ground detected impacts using novel heuristics. It also extracts the affected protein properties, in particular kinetic and stability properties, as well as the magnitude of the effects, and validates these relations against the domain ontology. The output of our system can be provided in various formats, in particular by populating an OWL-DL ontology, which can then be queried to provide structured information. The performance of the system is evaluated on our manually annotated corpora. In the impact detection task, our system achieves a precision of 70.4%-71.1%, a recall of 71.3%-71.5%, and grounds the detected impacts with an accuracy of 76.5%-77%. The developed system, including resources, evaluation data, and end-user and developer documentation, is freely available under an open source license at http://www.semanticsoftware.info/open-mutation-miner. Conclusion: We present Open Mutation Miner (OMM), the first comprehensive, fully open-source approach to automatically extract impacts and related relevant information from the biomedical literature. We assessed the performance of our work on manually annotated corpora, and the results show the reliability of our approach. The representation of the extracted information in a structured format facilitates knowledge management and aids in database curation and correction. Furthermore, access to the analysis results is provided through multiple interfaces, including web services for automated data integration and desktop-based solutions for end-user interactions. PMID:22759648

  4. Revisiting the quantum Szilard engine with fully quantum considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hai; School of Information and Electronics Engineering, Shandong Institute of Business and Technology, Yantai 264000; Zou, Jian, E-mail: zoujian@bit.edu.cn

    2012-12-15

    By considering level shifting during the insertion process, we revisit the quantum Szilard engine (QSZE) with fully quantum considerations. We derive general expressions for the heat absorbed from the thermal bath and the total work done by the system on the environment in a cycle, for two different cyclic strategies. We find that only the quantum information contributes to the absorbed heat, while the classical information acts like a feedback controller and has no direct effect on the absorbed heat. This is the first demonstration of the different effects of quantum information and classical information for extracting heat from the bath in the QSZE. Moreover, when the well width L → ∞ or the bath temperature T → ∞, the QSZE reduces to the classical Szilard engine (CSZE), and the total work satisfies the relation W_tot = k_B T ln 2 as obtained by Sang Wook Kim et al. [S.W. Kim, T. Sagawa, S. De Liberato, M. Ueda, Phys. Rev. Lett. 106 (2011) 070401] for the one-particle case. Highlights: (i) the QSZE is analyzed for the first time with energy level shifts taken into account; (ii) classical and quantum information are found to play different roles in the QSZE; (iii) the amount of work extracted depends on the cyclic strategy of the QSZE; (iv) the QSZE is verified to reduce to the CSZE in the classical limits.

  5. Study on the traditional pattern retrieval method of minorities in Gansu province

    NASA Astrophysics Data System (ADS)

    Zheng, Gang; Wang, Beizhan; Sun, Yuchun; Xu, Jin

    2018-03-01

    The traditional patterns of the ethnic minorities in Gansu province are folk arts with strong ethnic characteristics, the crystallization of the hard work and wisdom of these minority nationalities and of the region's geographical environment. Using the SURF feature-point identification algorithm, the feature-point extractor in OpenCV is applied to extract feature points, and these feature points are then compared against pattern features to find patterns with similar artistic features. This method can quickly and efficiently retrieve pattern information from a database.
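
    A minimal sketch of the extract-and-compare step described above. The abstract names SURF, which in current OpenCV builds lives in the opencv-contrib package (cv2.xfeatures2d); this sketch substitutes ORB from core OpenCV so it runs everywhere, and the similarity score is an illustrative heuristic, not the paper's:

```python
import cv2

def pattern_similarity(query_path, candidate_path, max_matches=50):
    """Detect keypoints/descriptors in a query pattern and a database
    pattern, match them, and score similarity by mean match distance
    (smaller = more similar)."""
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    assert img1 is not None and img2 is not None, "images must load"
    detector = cv2.ORB_create(nfeatures=1000)   # stand-in for SURF
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    best = matches[:max_matches]
    return sum(m.distance for m in best) / max(len(best), 1)
```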

  6. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings over a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, using comparatively few empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
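
    The roof-versus-tree criterion above (near-constant height change along a roof slope versus random height change in canopy) can be illustrated with a local statistic on the height gradient. A minimal sketch, not the authors' implementation; the window size and the variance statistic are assumptions:

```python
import numpy as np
from scipy.ndimage import generic_filter

def gradient_regularity(height_image, win=5):
    """Local variance of the height-gradient magnitude: low values suggest
    a planar roof (regular height change), high values suggest vegetation
    (random height change)."""
    gy, gx = np.gradient(height_image.astype(float))
    magnitude = np.hypot(gx, gy)
    return generic_filter(magnitude, np.var, size=win)

# Thresholding the returned map (low = roof-like, high = tree-like) is one
# simple stand-in for the paper's gradient analysis.
```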

  7. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings over a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, using comparatively few empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  8. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment

    PubMed Central

    Lin, Fan; Xiao, Bin

    2017-01-01

    Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm that uses a Micro-Electromechanical Systems (MEMS) sensor to overcome the limited computing performance and memory resources of mobile devices and to further improve the reality interaction experience of clients through digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to a feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation, and improves the accuracy of feature extraction. For registration by matching and tracking natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm is used to eliminate abnormal matches, and a Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor is used for tracking registration of the targets and for improving the robustness of the tracking registration algorithm in a mobile environment. PMID:29088228
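
    The orientation assignment described above replaces the per-keypoint gray-gradient computation with a single angle derived from the device's gravity vector. A minimal sketch under the assumption that the image plane coincides with the sensor x-y plane; the actual mapping from device axes to image axes is device-specific and not given in the abstract:

```python
import numpy as np

def gravity_orientation(gravity_xyz):
    """Project the MEMS gravity vector onto the (assumed) image plane and
    return its angle; every keypoint in the frame shares this orientation,
    so no per-keypoint gradient computation is needed."""
    gx, gy, _ = gravity_xyz          # e.g. from iPhone CMDeviceMotion.gravity
    return np.arctan2(gy, gx)        # radians, one value per frame

theta = gravity_orientation((0.12, -0.98, 0.05))
```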

  9. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    NASA Astrophysics Data System (ADS)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (persons who do tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any drop.
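
    Once per-point header logs with synchronized timestamps exist, computing a node's holding duration is a join on packet identity. A minimal sketch of that post-processing step; the CSV layout, column order, and packet-ID scheme are illustrative assumptions, not the tool's actual format:

```python
import csv

def holding_durations(ingress_csv, egress_csv):
    """Given two logs of (packet_id, timestamp_seconds) recorded at the
    ingress and egress observation points with synchronized clocks, return
    how long each packet was held inside the node between the two points."""
    def load(path):
        with open(path) as f:
            return {pid: float(ts) for pid, ts in csv.reader(f)}
    t_in, t_out = load(ingress_csv), load(egress_csv)
    # Only packets seen at both points contribute a duration.
    return {pid: t_out[pid] - t_in[pid] for pid in t_in.keys() & t_out.keys()}
```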

  10. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment.

    PubMed

    Hong, Zhiling; Lin, Fan; Xiao, Bin

    2017-01-01

    Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm that uses a Micro-Electromechanical Systems (MEMS) sensor to overcome the limited computing performance and memory resources of mobile devices and to further improve the reality interaction experience of clients through digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to a feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation, and improves the accuracy of feature extraction. For registration by matching and tracking natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm is used to eliminate abnormal matches, and a Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor is used for tracking registration of the targets and for improving the robustness of the tracking registration algorithm in a mobile environment.

  11. Practical low-cost visual communication using binary images for deaf sign language.

    PubMed

    Manoranjan, M D; Robinson, J A

    2000-03-01

    Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low-bandwidth telephone lines, provide very low temporal resolution, less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points, an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.

  12. The Raptor Real-Time Processing Architecture

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  13. Raptor -- Mining the Sky in Real Time

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.

    2004-06-01

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  14. Multistate metadynamics for automatic exploration of conical intersections

    NASA Astrophysics Data System (ADS)

    Lindner, Joachim O.; Röhr, Merle I. S.; Mitrić, Roland

    2018-05-01

    We introduce multistate metadynamics for the automatic exploration of conical intersection seams between adiabatic Born-Oppenheimer potential energy surfaces in molecular systems. By choosing the energy gap between the electronic states as a collective variable, the metadynamics drives the system from an arbitrary ground-state configuration toward the intersection seam. Upon reaching the seam, the multistate electronic Hamiltonian is extended by introducing biasing potentials into the off-diagonal elements, and the molecular dynamics is continued on a modified potential energy surface obtained by diagonalization of the latter. The off-diagonal bias serves to locally open the energy gap and push the system to the next intersection point. In this way, the conical intersection energy landscape can be explored, identifying minimum energy crossing points and the barriers separating them. We illustrate the method on the example of furan, a prototype organic molecule exhibiting rich photophysics. The multistate metadynamics reveals plateaus on the conical intersection energy landscape from which the minimum energy crossing points with characteristic geometries can be extracted. The method can be combined with a broad spectrum of electronic structure methods and represents a generally applicable tool for the exploration of photophysics and photochemistry in complex molecules and materials.
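
    The driving step can be summarized with the standard metadynamics bias built on the energy-gap collective variable. A worked form of the equations, with symbols chosen here for illustration (the paper's exact notation may differ):

```latex
% Energy-gap collective variable and history-dependent Gaussian bias
% (standard metadynamics form):
\[
  s(\mathbf{R}) = E_1(\mathbf{R}) - E_0(\mathbf{R}), \qquad
  V_{\mathrm{bias}}(s,t) = \sum_{t_i < t} w \,
    \exp\!\left(-\frac{\bigl[s - s(t_i)\bigr]^2}{2\sigma^2}\right),
\]
% where w and sigma are the height and width of the deposited Gaussians;
% accumulating bias along s pushes the trajectory toward s ~ 0, i.e. the
% intersection seam.
```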

  15. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    NASA Astrophysics Data System (ADS)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by an MLS mounted on a bogie, and rail positions are determined by matching the shape of an ideal rail head to the point cloud with the ICP algorithm. A clearance check is then executed automatically with a virtual clearance model laid along the extracted rail. In evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and those related to the contact line are successfully detected, as verified by visual confirmation.
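
    The rail-fitting step, matching an ideal rail-head template into the scan, is a textbook rigid registration problem. A minimal sketch using Open3D's point-to-point ICP; the library choice and correspondence threshold are assumptions, since the paper does not name an implementation:

```python
import numpy as np
import open3d as o3d

def fit_rail_template(template_pts, scan_pts, threshold=0.02):
    """Align an ideal rail-head template (N x 3 array, metres) to the MLS
    point cloud around the track and return its 4x4 pose in the scan."""
    src = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(template_pts, dtype=np.float64)))
    dst = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.asarray(scan_pts, dtype=np.float64)))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # rail-head pose; drives the clearance model
```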

  16. Design and fabrication of a duoplasmatron extraction geometry and LEBT for the LANSCE H+ RFQ project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortgang, C. M., E-mail: cfortgang@lanl.gov; Batygin, Y. K.; Draganic, I. N.

    The 750-keV H+ Cockcroft-Walton at LANSCE will be replaced with a recently fabricated 4-rod Radio Frequency Quadrupole (RFQ) with an injection energy of 35 keV. The existing duoplasmatron source extraction optics need to be modified to produce up to 35 mA of H+ current with an emittance <0.02 π-cm-mrad (rms, norm) for injection into the RFQ. Parts for the new source have been fabricated and assembly is in process. We will use the existing duoplasmatron source with a newly designed extraction system and low energy beam transport (LEBT) for beam injection into the RFQ. In addition to source modifications, we need a new LEBT for transport and matching into the RFQ. The LEBT uses two magnetic solenoids with enough drift space between them to accommodate diagnostics and a beam deflector. The LEBT is designed to work over a range of space-charge neutralized currents and emittances, and is optimized in the sense that it minimizes the beam size in both solenoids for a point design of a given neutralized current and emittance. Special attention has been given to estimating emittance growth due to source extraction optics and solenoid aberrations. Examples of source-to-RFQ matching and emittance growth (due to both non-linear space charge and solenoid aberrations) are presented over a range of currents and emittances about the design point. A mechanical layout drawing is presented along with the status of the source and LEBT design and fabrication.

  17. Discriminative Features Mining for Offline Handwritten Signature Verification

    NASA Astrophysics Data System (ADS)

    Neamah, Karrar; Mohamad, Dzulkifli; Saba, Tanzila; Rehman, Amjad

    2014-03-01

    Signature verification is an active research area in the field of pattern recognition. It is employed to identify a particular person with the help of signature characteristics such as pen pressure, the shape of loops, writing speed, and the up-down motion of the pen. In the entire process, the feature extraction and selection stage is of prime importance, since several signatures have similar strokes, characteristics and sizes. Accordingly, this paper presents a combination of skeleton orientation and gravity centre point features to extract accurate patterns from signature data in an offline signature verification system. Promising results have proved the success of the integration of the two methods.

  18. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed, and it has been widely applied in various computer vision systems. Two thresholds have to be set before edges can be separated from the background; usually, two static values are chosen as the thresholds based on developer experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
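
    The paper's cross-zero-point interpolation is not spelled out in the abstract, so the sketch below shows the widely used median-based stand-in for deriving both Canny thresholds automatically from image statistics; the sigma value is a conventional choice, not the paper's:

```python
import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    """Pick both Canny thresholds from the image's median intensity instead
    of hard-coding static values (common median-based heuristic)."""
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)
```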

  19. Determination of parabens using two microextraction methods coupled with capillary liquid chromatography-UV detection.

    PubMed

    Chen, Chen-Wen; Hsu, Wen-Chan; Lu, Ya-Chen; Weng, Jing-Ru; Feng, Chia-Hsien

    2018-02-15

    Parabens are common preservatives and environmental hormones. As such, possible detrimental health effects could be amplified through their widespread use in foods, cosmetics, and pharmaceutical products. Thus, the determination of parabens in such products is of particular importance. This study explored vortex-assisted dispersive liquid-liquid microextraction techniques based on the solidification of a floating organic drop (VA-DLLME-SFO) and salt-assisted cloud point extraction (SA-CPE) for paraben extraction. Microanalysis was performed using a capillary liquid chromatography-ultraviolet detection system. These techniques were modified successfully to determine four parabens in 19 commercial products. The regression equations of these parabens exhibited good linearity (r² = 0.998, 0.1-10 μg/mL), good precision (RSD < 5%) and accuracy (RE < 5%), reduced reagent consumption and reaction times (<6 min), and excellent sample versatility. VA-DLLME-SFO was also particularly convenient due to the use of a solidified extract. Thus, the VA-DLLME-SFO technique was better suited to the extraction of parabens from complex matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Feature extraction using first and second derivative extrema (FSDE) for real-time and hardware-efficient spike sorting.

    PubMed

    Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G

    2013-04-30

    Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
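
    The feature computation described above reduces to two passes of discrete differencing, which is where the 2N-3 operation count comes from: (N-1) first differences plus (N-2) second differences. A minimal sketch of such first/second-derivative-extrema features; the exact feature set retained by the paper may differ:

```python
import numpy as np

def fsde_features(spike):
    """First and Second Derivative Extrema (FSDE) features for one spike
    waveform of N samples: 2N-3 subtractions, no calibration or training."""
    d1 = np.diff(spike)   # N-1 first differences
    d2 = np.diff(d1)      # N-2 second differences
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

# Each spike maps to a small feature vector that a lightweight clustering
# stage (e.g. k-means) can then sort on-chip.
```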

  1. Landfill mining: A critical review of two decades of research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krook, Joakim, E-mail: joakim.krook@liu.se; Svensson, Niclas; Eklund, Mats

    Highlights: (i) Two decades of landfill mining research are analyzed with respect to trends and topics. (ii) So far, landfill mining has mainly been used to solve waste management issues. (iii) A new perspective on landfills as resource reservoirs is emerging. (iv) The potential of resource extraction from landfills is significant. (v) Several key challenges for the realization of resource extraction from landfills are outlined. - Abstract: Landfills have historically been seen as the ultimate solution for storing waste at minimum cost. It is now a well-known fact that such deposits have related implications such as long-term methane emissions, local pollution concerns, settling issues and limitations on urban development. Landfill mining has been suggested as a strategy to address such problems, and in principle means the excavation, processing, treatment and/or recycling of deposited materials. This study involves a literature review on landfill mining covering a meta-analysis of the main trends, objectives, topics and findings in 39 research papers published during the period 1988-2008. The results show that, so far, landfill mining has primarily been seen as a way to solve traditional management issues related to landfills, such as lack of landfill space and local pollution concerns. Although most initiatives have involved some recovery of deposited resources, mainly cover soil and in some cases waste fuel, recycling efforts have often been largely secondary. Typically, simple soil excavation and screening equipment have therefore been applied, often demonstrating moderate performance in obtaining marketable recyclables. Several worldwide changes and recent research findings indicate the emergence of a new perspective on landfills as reservoirs for resource extraction. Although the potential of this approach appears significant, it is argued that facilitating implementation involves a number of research challenges in terms of technology innovation, clarifying the conditions for realization and developing standardized frameworks for evaluating economic and environmental performance from a systems perspective. In order to address these challenges, a combination of applied and theoretical research is required.

  2. Ion-pair cloud-point extraction: a new method for the determination of water-soluble vitamins in plasma and urine.

    PubMed

    Heydari, Rouhollah; Elyasi, Najmeh S

    2014-10-01

    A novel, simple, and effective ion-pair cloud-point extraction coupled with a gradient high-performance liquid chromatography method was developed for the determination of thiamine (vitamin B1), niacinamide (vitamin B3), pyridoxine (vitamin B6), and riboflavin (vitamin B2) in plasma and urine samples. The extraction and separation of the vitamins were achieved based on an ion-pair formation approach between these ionizable analytes and 1-heptanesulfonic acid sodium salt as an ion-pairing agent. Influential variables on the ion-pair cloud-point extraction efficiency, such as the ion-pairing agent concentration, ionic strength, pH, volume of Triton X-100, extraction temperature, and incubation time, were fully evaluated and optimized. Water-soluble vitamins were successfully extracted by 1-heptanesulfonic acid sodium salt (0.2% w/v) as ion-pairing agent with Triton X-100 (4% w/v) as surfactant phase at 50°C for 10 min. The calibration curves showed good linearity (r² > 0.9916) and precision in the concentration ranges of 1-50 μg/mL for thiamine and niacinamide, 5-100 μg/mL for pyridoxine, and 0.5-20 μg/mL for riboflavin. The recoveries were in the range of 78.0-88.0% with relative standard deviations ranging from 6.2 to 8.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    NASA Astrophysics Data System (ADS)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term, high-quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of the raw data. The EC H2020 project, EarthServer2, aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative on-line tool-kit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through standards-based interfaces, the Open Geospatial Consortium Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded; these points are extracted and returned in a CSV file, allowing researchers to work with the extract locally, for example in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web based GUI. These APIs allow the selection of single points and areas for extraction. The extracted data are returned as structured data (for instance a Python array) which can then be passed directly to local processing code. We will highlight how the libraries can be used by the community and integrated into existing systems, for instance through Jupyter notebooks that share Python code examples which other researchers can use as a basis for their own work.
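
    A minimal sketch of the kind of server-side point extraction the WCPS interface enables. The endpoint URL and coverage name below are hypothetical placeholders, not the Marine Data Service's actual identifiers; the request pattern follows the standard OGC WCS 2.0 Processing Extension:

```python
import requests

# Hypothetical endpoint and coverage name, for illustration only.
ENDPOINT = "https://earthserver.example.org/rasdaman/ows"

def chlor_a_timeseries(lat, lon):
    """Ask the server to slice a single (lat, lon) column out of a
    chlorophyll coverage and return the whole time series as CSV, so no
    bulk download of the multi-terabyte dataset is needed."""
    query = (f'for c in (CCI_V2_monthly_chlor_a) '
             f'return encode(c[Lat({lat}), Long({lon})], "csv")')
    r = requests.get(ENDPOINT, params={
        "service": "WCS", "version": "2.0.1",
        "request": "ProcessCoverages", "query": query})
    r.raise_for_status()
    return r.text
```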

  4. Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems

    NASA Technical Reports Server (NTRS)

    Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.

    2016-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time-varying nature of the dynamics of a launch vehicle in flight. An alternative technique for evaluating the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward, and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag, and the faithfulness of the margins calculated from the linear analysis to the nonlinear system will be demonstrated.
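
    The core idea, sweeping an added loop gain in a time-domain simulation until the response diverges, can be illustrated on a toy plant. A minimal sketch: the second-order statically unstable plant and gains below are arbitrary stand-ins, not the SLS model or SAVANT:

```python
import numpy as np

def grows(k, t_end=60.0, dt=0.001):
    """Close a loop around a statically unstable second-order plant, scale
    the loop gain by k (k = 1 is nominal), and report whether the response
    diverges over the run; forward-Euler integration for simplicity."""
    x = np.array([0.1, 0.0])                   # [attitude error, rate]
    A = np.array([[0.0, 1.0], [2.0, -0.5]])    # open-loop unstable plant
    K = np.array([8.0, 3.0])                   # nominal PD-style controller
    for _ in range(int(t_end / dt)):
        u = -k * (K @ x)                       # scaled feedback command
        x = x + dt * (A @ x + np.array([0.0, u]))
    return abs(x[0]) > 1.0                     # grew well past the 0.1 start

# Sweep k downward until the loop goes unstable. For this plant the neutral
# point is k = 0.25, i.e. a low-frequency gain margin of 20*log10(4) ~ 12 dB.
print({round(0.20 + 0.01 * i, 2): grows(0.20 + 0.01 * i) for i in range(11)})
```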

  5. Tele-autonomous control involving contact: object localization (Final Report, Thesis)

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching: line range sensors are used to extract line-segment features from an object, and the extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: featured points (point to point matching) and featured unit direction vectors (vector to vector matching) can also be used as inputs, and there is no upper limit on the number of input features. The algorithm allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties encountered when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
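
    The point-to-point matching problem described above (find the rigid pose minimizing summed position error over matched pairs) has a closed-form least-squares solution. The report uses dual number quaternions; the sketch below shows the standard SVD (Kabsch) solution of the same problem as a well-known alternative, not the report's method:

```python
import numpy as np

def rigid_fit(model_pts, measured_pts):
    """Least-squares rigid localization from matched 3D point pairs:
    returns rotation R and translation t with measured ~ R @ model + t."""
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(measured_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```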

  6. Roads Data Conflation Using Update High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Abdollahi, A.; Riyahi Bakhtiari, H. R.

    2017-11-01

    Urbanization, industrialization and modernization are proceeding rapidly in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support the growth, which has led to the expansion and development of the road network. A great deal of road network data was produced using traditional methods in past years. Over time, a large amount of descriptive information has been attached to these map data, but their geometric accuracy and precision are no longer adequate for today's needs. In this regard, it is necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them, and to update the existing geodatabases. Given the size and extent of the country, updating the road network maps using traditional methods is time consuming and costly. Conversely, using remote sensing technology and geographic information systems can reduce costs, save time and increase accuracy and speed. With the increasing availability of high resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation is presented, integrating several image-based and vector-based algorithms. The SVM method was used for image classification and the Level Set method to extract the roads; the different types of road intersections were then extracted from the imagery using morphological operators. To find corresponding points between the datasets, a matching function based on the nearest-neighbour method was applied. Finally, after identifying the matching points, the rubber-sheeting method was used to align the two datasets. Residual and RMSE criteria were used to evaluate accuracy. The results demonstrated excellent performance: the average root-mean-square error decreased from 11.8 to 4.1 m.
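
    The nearest-neighbour matching step described above is straightforward to sketch with a k-d tree. A minimal illustration, not the paper's code; the distance cutoff is an assumed parameter:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_intersections(vector_pts, image_pts, max_dist=15.0):
    """Pair each road-intersection point from the legacy vector data with
    its nearest intersection extracted from imagery, rejecting pairs farther
    apart than max_dist (metres; value illustrative). The surviving pairs
    are the control points that drive rubber-sheeting."""
    tree = cKDTree(np.asarray(image_pts, dtype=float))
    dist, idx = tree.query(np.asarray(vector_pts, dtype=float))
    return [(i, j) for i, (d, j) in enumerate(zip(dist, idx)) if d <= max_dist]
```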

  7. [Study on extracting and separating curcuminoids from Curcuma longa rhizome using ultrasound strengthen by microemulsion].

    PubMed

    Yue, Chun-Hua; Zheng, Li-Tao; Guo, Qi-Ming; Li, Kun-Ping

    2014-05-01

    To establish a new method for the extraction and separation of curcuminoids from Curcuma longa rhizome by cloud-point preconcentration using microemulsions as the solvent. Spectrophotometry was used to determine the solubility of curcumin in different oil phases, emulsifiers and auxiliary emulsifiers, and the microemulsion prescription was optimized using a pseudo-ternary phase diagram. The extraction process was optimized by uniform experiment design, and the curcuminoids were separated from the microemulsion extract by cloud-point preconcentration. The oil phase was oleic acid ethyl ester; the emulsifier was OP emulsifier; the auxiliary emulsifier was polyethylene glycol (PEG) 400; the ratio of emulsifier to auxiliary emulsifier was 5:1; the microemulsion prescription was water-oleic acid ethyl ester-mixed emulsifier (0.45:0.1:0.45). The optimum extraction process was: time of 12.5 min, temperature of 52 °C, power of 360 W, frequency of 400 kHz, and a liquid-solid ratio of 40:1. The extraction rate of curcuminoids was 92.17% and 86.85% in the microemulsion and oil phase, respectively. Curcuminoids are soluble in this microemulsion prescription with a good extraction rate. The method is simple and suitable for curcuminoid extraction from Curcuma longa rhizome.

  8. Improvements for extending the time between maintenance periods for the Heidelberg ion beam therapy center (HIT) ion sources.

    PubMed

    Winkelmann, Tim; Cee, Rainer; Haberer, Thomas; Naas, Bernd; Peters, Andreas; Schreiner, Jochen

    2014-02-01

    The clinical operation at the Heidelberg Ion Beam Therapy Center (HIT) started in November 2009; since then more than 1600 patients have been treated. In a 24/7 operation scheme, two 14.5 GHz electron cyclotron resonance ion sources are routinely used to produce protons and carbon ions. The modification of the low energy beam transport line and the integration of a third ion source into the therapy facility will be shown. In the last year we implemented a new extraction system at all three sources to enhance the lifetime of extraction parts and reduce preventive and corrective maintenance. The new four-electrode design provides electron suppression as well as lower beam emittance. Unwanted beam sputtering effects, which typically lead to contamination of the insulator ceramics and subsequent high-voltage breakdowns, are minimized by the beam guidance of the new extraction system. By this measure the service interval can be increased significantly. As a side effect, the beam emittance can be reduced, allowing a less challenging working point for the ion sources without reducing the effective beam performance. This paper also gives an outlook on further enhancements at the HIT ion source testbench.

  9. Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface

    PubMed Central

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in blade geometry may significantly affect the engine's dynamic performance. Rapid advancements in 3D scanning techniques have enabled the inspection of blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point cloud representation. Mathematical morphology is extended and applied to restrain the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance functionals are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade was machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the applicability of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization. PMID:25551467

  10. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    PubMed

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in blade geometry may significantly affect the engine's dynamic performance. Rapid advancements in 3D scanning techniques have enabled the inspection of blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point cloud representation. Mathematical morphology is extended and applied to restrain the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance functionals are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade was machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the applicability of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  11. Digging deeper into noise. Reply to comment on "Extracting physics of life at the molecular level: A review of single-molecule data analyses"

    NASA Astrophysics Data System (ADS)

    Colomb, Warren; Sarkar, Susanta K.

    2015-06-01

    We would like to thank all the commentators for their constructive comments on our paper. The commentators agree that a proper analysis of noisy single-molecule data is important for extracting meaningful and accurate information about the system. We concur with their views; indeed, motivating an accurate analysis of experimental data is precisely the point of our paper. After a model of the system of interest is constructed based on the experimental single-molecule data, it is very helpful to simulate the model to generate theoretical single-molecule data and analyze it in exactly the same way. In our experience, such a self-consistent approach involving experiments, simulations, and analyses often forces us to revise our model and make experimentally testable predictions. In light of the comments from commentators with different expertise, we would also like to point out that a single model should be able to connect different experimental techniques, because the underlying science does not depend on the experimental techniques used. Wohland [1] has made a strong case for fluorescence correlation spectroscopy (FCS) as an important experimental technique to bridge single-molecule and ensemble experiments. FCS is a very powerful technique that can measure ensemble parameters with single-molecule sensitivity. Therefore, it is logical to simulate any proposed model, predict both single-molecule data and FCS data, and confirm with experimental data. Fitting the diffraction-limited point spread function (PSF) of an isolated fluorescent marker to localize a labeled biomolecule is a critical step in many single-molecule tracking experiments. Flyvbjerg et al. [2] have rigorously pointed out some important drawbacks of the prevalent practice of fitting the diffraction-limited PSF with a 2D Gaussian. As we try to achieve more accurate and precise localization of biomolecules, we need to consider the subtle points mentioned by Flyvbjerg et al. Shepherd [3] has mentioned specific examples of PSFs that have been used for localization and has rightly noted the importance of detector noise in single-molecule localization. Meroz [4] has pointed out more clearly that the signal itself can be noisy and that it is necessary to distinguish the noise of interest from the background noise. Krapf [5] has pointed out different origins of fluctuations in biomolecular systems and commented on their possible Gaussian and non-Gaussian nature. The importance of noise, along with the possibility that the noise itself can be the signal of interest, has been discussed in our paper [6]; however, Meroz [4] and Krapf [5] have provided specific examples to guide readers in a better way. Sachs et al. [7] have discussed kinetic analysis in the presence of indistinguishable states and have pointed to the free software for general kinetic analysis that originated from their research.
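
    The 2D Gaussian PSF fit under discussion is short to write down. A minimal sketch of the prevalent practice that Flyvbjerg et al. critique (symmetric Gaussian, least-squares fit; parameter names and initial guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian PSF model, flattened for curve_fit."""
    x, y = xy
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
            + offset).ravel()

def localize(spot):
    """Fit the Gaussian to a small camera ROI around one fluorophore and
    return the sub-pixel centre estimate (x0, y0)."""
    ny, nx = spot.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (spot.max() - spot.min(), nx / 2, ny / 2, 2.0, spot.min())
    popt, _ = curve_fit(gauss2d, (x, y), spot.ravel(), p0=p0)
    return popt[1], popt[2]
```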

  12. Managing interoperability and complexity in health systems.

    PubMed

    Bouamrane, M-M; Tao, C; Sarkar, I N

    2015-01-01

    In recent years, we have witnessed substantial progress in the use of clinical informatics systems to support clinicians during episodes of care, manage specialised domain knowledge, perform complex clinical data analysis and improve the management of health organisations' resources. However, the vision of fully integrated health information eco-systems, which provide relevant information and useful knowledge at the point-of-care, remains elusive. This journal Focus Theme reviews some of the enduring challenges of interoperability and complexity in clinical informatics systems. Furthermore, a range of approaches are proposed in order to address, harness and resolve some of the many remaining issues towards a greater integration of health information systems and extraction of useful or new knowledge from heterogeneous electronic data repositories.

  13. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of such data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system comprising laser scanners and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis then allows recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
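
    The 3D-to-2D reprojection step described above is a standard pinhole camera projection. A minimal sketch using OpenCV; the calibration inputs (rvec, tvec, camera matrix, distortion coefficients) are assumed to come from the mapping system's camera calibration, and the function name is illustrative:

```python
import numpy as np
import cv2

def project_signs(points_3d, rvec, tvec, K, dist):
    """Map detected 3D sign positions onto a spatially/temporally synced 2D
    image so each sign face can be cropped for semantic recognition."""
    pts = np.asarray(points_3d, dtype=np.float64).reshape(-1, 1, 3)
    pixels, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    return pixels.reshape(-1, 2)   # (u, v) pixel coordinates, one per sign
```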

  14. Assessing the efficacy of citrus aurantifolia extract on smear layer removal with scanning electron microscope.

    PubMed

    Bolhari, Behnam; Sharifian, Mohammad Reza; Aminsobhani, Mohsen; Monsef Esfehani, Hamid Reza; Tavakolian, Pardis

    2012-01-01

    The purpose of this study was to determine the effects of citrus aurantifolia (CA) extract on smear layer removal in different parts of root canals. Thirty-nine single-rooted human teeth were randomly divided into three experimental (n=12) and one control (n=3) groups. Teeth were instrumented using MTwo rotary instruments. Root canals were irrigated with NaOCl during instrumentation. The canals in each group were irrigated with one of the following: the completed mixture of citrus aurantifolia extracts, 17% EDTA, or the alcoholic extract of CA. Distilled water was used for the control group. The irrigants were left within the canal for 20 minutes and then rinsed with normal saline solution. Teeth were subsequently split longitudinally into two halves, and the canals were examined by a scanning electron microscope. Cleanliness was evaluated using a five-point scoring system. A statistically significant difference was found between groups (P<0.05). The smear layer was more effectively removed with 17% EDTA than with the alcoholic CA extract. However, both were able to remove the smear layer in the coronal segment. The completed CA extract removed more smear layer in the coronal and middle parts than the alcoholic extract (P=0.001); however, there was no significant difference in the apical part. Neither the alcoholic nor the completed mixture of citrus aurantifolia extract was able to remove the smear layer as effectively as 17% EDTA during root canal therapy.

  15. Extraction of maxillary canines: Esthetic perceptions of patient smiles among dental professionals and laypeople.

    PubMed

    Thiruvenkatachari, Badri; Javidi, Hanieh; Griffiths, Sarah Elizabeth; Shah, Anwar A; Sandler, Jonathan

    2017-10-01

    Maxillary canines are generally considered important both cosmetically and functionally. Most claims about the importance of maxillary canines, however, have been based on expert opinions and clinician-based studies. There are no scientific studies in the literature reporting on their cosmetic importance or on how laypeople perceive a smile treated by maxillary canine extractions. Our objective was to investigate whether there is any difference in the perceptions of patients' smiles treated by extracting either maxillary canines or first premolars, as judged by orthodontists, dentists, and laypeople. This retrospective study included 24 participants who had unilateral or bilateral extraction of maxillary permanent canines and fixed appliances in the maxillary and mandibular arches to comprehensively correct the malocclusion, selected from orthodontic patients treated at Chesterfield Royal Hospital NHS Trust in the United Kingdom over the last 20 years. The control group of patients had extraction of maxillary first premolars followed by fixed appliances and was finished to an extremely high standard, as evidenced by the requirement that the cases had been submitted for the Membership in Orthodontics examination. The finished Peer Assessment Rating scores for this group were less than 5. The end-of-treatment frontal extraoral smiling and frontal intraoral views were presented for both groups. The photographs were blinded for extraction choice and standardized for size and brightness using computer software (Adobe Photoshop CC version 14.0; Adobe Systems, San Jose, Calif). The work file was converted to an editable PDF file and e-mailed to the assessors. The assessor panel consisted of 30 members (10 orthodontists, 10 dentists, and 10 laypeople), who were purposively selected. The measures were rated on a 10-point Likert scale. The attractiveness ratings were not statistically significantly different between the canine extraction and premolar extraction groups, with a mean difference of 0.33 (SD, 0.29) points. A 1-way repeated-measures analysis of variance testing the difference in scores among the laypeople, orthodontists, and dentists (n = 30) showed no statistically significant difference (Wilks lambda = 0.835; P = 0.138), and the Bonferroni test indicated that no pair-wise difference was statistically significant. No statistically significant difference was found in smile attractiveness between canine extraction and premolar extraction patients as assessed by general dentists, laypeople, and orthodontists. Further high-quality studies are required to evaluate the effect of canine extraction and premolar substitution on functional occlusion. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  16. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    PubMed

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    The aim of this work was to determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen, and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse-phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions were applied for the pretreatment of the water samples and for the chromatographic separation. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 μg/L to 0.50 μg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements for the simultaneous determination of multiple biphenyl ether herbicides in natural waters.

  17. Extracting topographic structure from digital elevation data for geographic information-system analysis

    USGS Publications Warehouse

    Jenson, Susan K.; Domingue, Julia O.

    1988-01-01

    The first phase of analysis is a conditioning phase that generates three data sets: the original DEM with depressions filled, a data set indicating the flow direction for each cell, and a flow accumulation data set in which each cell receives a value equal to the total number of cells that drain to it. The original DEM and these three derivative data sets can then be processed in a variety of ways to optionally delineate drainage networks, overland paths, watersheds for user-specified locations, sub-watersheds for the major tributaries of a drainage network, or pour point linkages between watersheds. The computer-generated drainage lines and watershed polygons and the pour point linkage information can be transferred to vector-based geographic information systems for further analysis. Comparisons between these computer-generated features and their manually delineated counterparts generally show close agreement, indicating that these software tools will save analyst time spent in manual interpretation and digitizing.
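
    The conditioning phase lends itself to a compact illustration. The following sketch computes D8 flow directions (steepest descent among the eight neighbors) and a simple flow accumulation on an already depression-filled DEM; it is a minimal reimplementation of the idea, not the USGS code.

    ```python
    import numpy as np

    # 8 neighbour offsets (D8) and their centre-to-centre distances.
    OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    DIST = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

    def d8_flow_direction(dem):
        """Index (0-7) of the steepest-descent neighbour; -1 for pits/outlets."""
        rows, cols = dem.shape
        fdir = -np.ones((rows, cols), dtype=int)
        for r in range(rows):
            for c in range(cols):
                best, best_k = 0.0, -1
                for k, (dr, dc) in enumerate(OFFS):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        slope = (dem[r, c] - dem[rr, cc]) / DIST[k]
                        if slope > best:
                            best, best_k = slope, k
                fdir[r, c] = best_k
        return fdir

    def flow_accumulation(fdir):
        """Number of upstream cells draining into each cell (simple version)."""
        rows, cols = fdir.shape
        acc = np.zeros((rows, cols), dtype=int)
        for r in range(rows):
            for c in range(cols):
                rr, cc = r, c
                while fdir[rr, cc] >= 0:          # follow the flow path downslope
                    dr, dc = OFFS[fdir[rr, cc]]
                    rr, cc = rr + dr, cc + dc
                    acc[rr, cc] += 1
        return acc

    dem = np.array([[5., 4., 3.],
                    [4., 3., 2.],
                    [3., 2., 1.]])  # assumed already depression-filled
    print(flow_accumulation(d8_flow_direction(dem)))  # outlet cell receives 8
    ```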

  18. Chromatic dispersive confocal technology for intra-oral scanning: first in-vitro results

    NASA Astrophysics Data System (ADS)

    Ertl, T.; Zint, M.; Konz, A.; Brauer, E.; Hörhold, H.; Hibst, R.

    2015-02-01

    Various test objects, plaster models (partially equipped with extracted teeth), and pig jaws representing various clinical situations of tooth preparation were used for in-vitro scanning tests with an experimental intra-oral scanning system based on chromatic-dispersive confocal technology. Scanning results were compared against data sets of the same objects captured by an industrial μCT measuring system. Compared to the μCT data, an average error of 18-30 μm was achieved for a single-tooth scan area, and an error of less than 40-60 μm was measured over the restoration plus the neighboring teeth and pontic areas of up to 7 units. The mean error for a full jaw is within 100-140 μm. The length error for a 3-4 unit bridge situation from contact point to contact point is below 100 μm, and excellent interproximal surface coverage and prep margin clarity were achieved.

  19. Large-area photogrammetry based testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the elimination of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades to extract distributed strain, deflection, and modal parameters. This paper addresses only some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method in combining two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed using the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations of a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) to measure 3D displacement and extract structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
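
    The stitching of two stereo-vision fields of view rests on estimating a rigid transformation between point sets observed in both views. A minimal sketch of one standard way to do this, the Kabsch/Procrustes solution over shared target points, is shown below; the authors' dynamic stitching procedure is more involved, and the correspondences here are synthetic.

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Rigid rotation R and translation t minimizing ||(R @ p + t) - q||.

        P, Q: Nx3 corresponding points (e.g. optical targets visible to both
        stereo systems in an overlap region). A sketch of one way to stitch
        two fields of view; not the authors' exact procedure.
        """
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        H = Pc.T @ Qc                        # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t

    # Synthetic check: rotate/translate a point set and recover the transform.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(20, 3))
    a = np.deg2rad(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
    R_est, t_est = kabsch(P, Q)
    print(np.allclose(R_est, R_true), np.round(t_est, 3))
    ```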

  20. Automatic multiscale enhancement and segmentation of pulmonary vessels in CT pulmonary angiography images for CAD applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Chuan; Chan, H.-P.; Sahiner, Berkman

    2007-12-15

    The authors are developing a computerized pulmonary vessel segmentation method for a computer-aided pulmonary embolism (PE) detection system on computed tomographic pulmonary angiography (CTPA) images. Because PE only occurs inside pulmonary arteries, an automatic and accurate segmentation of the pulmonary vessels in 3D CTPA images is an essential step for the PE CAD system. To segment the pulmonary vessels within the lung, the lung regions are first extracted using expectation-maximization (EM) analysis and morphological operations. The authors developed a 3D multiscale filtering technique to enhance the pulmonary vascular structures based on the analysis of eigenvalues of the Hessian matrix at multiple scales. A new response function of the filter was designed to enhance all vascular structures, including the vessel bifurcations, and to suppress nonvessel structures such as the lymphoid tissues surrounding the vessels. An EM estimation is then used to segment the vascular structures by extracting the high-response voxels at each scale. The vessel tree is finally reconstructed by integrating the segmented vessels at all scales based on a 'connected component' analysis. Two CTPA cases containing PEs were used to evaluate the performance of the system. One of these two cases also contained pleural effusion disease. Two experienced thoracic radiologists provided the gold standard of pulmonary vessels, including both arteries and veins, by manually tracking the arterial tree and marking the center of the vessels using a computer graphical user interface. The accuracy of vessel tree segmentation was evaluated by the percentage of the 'gold standard' vessel center points overlapping with the segmented vessels. The results show that 96.2% (2398/2494) and 96.3% (1910/1984) of the manually marked center points in the arteries overlapped with segmented vessels for the cases without and with other lung disease, respectively. For the manually marked center points in all vessels, including arteries and veins, the segmentation accuracies were 97.0% (4546/4689) and 93.8% (4439/4732) for the cases without and with other lung disease, respectively. Because of the lack of ground truth for the vessels, in addition to quantitative evaluation of the vessel segmentation performance, visual inspection was conducted to evaluate the segmentation. The results demonstrate that vessel segmentation using our method can extract the pulmonary vessels accurately and is not degraded by PE occlusion of the vessels in these test cases.
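
    The core of the multiscale enhancement, rewarding voxels whose Hessian eigenvalues match a bright tubular profile, can be sketched at a single scale as follows. The response function below is an ad hoc simplification for illustration; the paper's actual response function is its own design, and a multiscale filter would take the maximum response over several sigma values.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tube_enhance(vol, sigma):
        """Single-scale Hessian eigenvalue tube filter (simplified sketch).

        Bright tubular structures have two strongly negative eigenvalues and
        one near-zero eigenvalue; the response below rewards that pattern.
        """
        # Second derivatives via Gaussian smoothing (scale-normalized by sigma^2).
        H = np.empty(vol.shape + (3, 3))
        for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            d = sigma**2 * gaussian_filter(vol, sigma, order=order)
            H[..., i, j] = d
            H[..., j, i] = d
        lam = np.linalg.eigvalsh(H)          # ascending: lam1 <= lam2 <= lam3
        l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
        # Ad hoc response: large when l1, l2 are negative and l3 is small.
        return np.where((l1 < 0) & (l2 < 0),
                        np.abs(l1 * l2) / (1.0 + np.abs(l3)), 0.0)

    # Toy volume with a bright line along the z-axis.
    vol = np.zeros((21, 21, 21))
    vol[10, 10, :] = 1.0
    v = tube_enhance(vol, sigma=2.0)
    print(v[10, 10, 10] > v[0, 0, 10])  # line voxel responds more strongly
    ```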

  1. Application of aqueous two-phase micellar system to improve extraction of adenoviral particles from cell lysate.

    PubMed

    Molino, João Vitor Dutra; Lopes, André Moreni; Viana Marques, Daniela de Araújo; Mazzola, Priscila Gava; da Silva, Joas Lucas; Hirata, Mario Hiroyuki; Hirata, Rosário Dominguez Crespo; Gatti, Maria Silvia Viccari; Pessoa, Adalberto

    2017-12-04

    Viral vectors are important in medical approaches, such as disease prevention and gene therapy, and their production depends on efficient prepurification steps. In the present study, an aqueous two-phase micellar system (ATPMS) was evaluated to extract human adenovirus type 5 particles from a cell lysate. Adenovirus was cultured in human embryonic kidney 293 (HEK-293) cells to a concentration of 1.4 × 10^10 particles/mL. Cells were lysed, and the system was formed by direct addition of Triton X-114 in a 2^3 full factorial design with center points. The systems were formed with Triton X-114 at a final concentration of 1.0, 6.0, and 11.0% (w/w), a cell lysate pH of 6.0, 6.5, and 7.0, and incubation temperatures of 33, 35, and 37 °C. Adenovirus particles recovered from the partition phases were measured by qPCR. The best system condition was 11.0% (w/w) Triton X-114, a cell lysate pH of 7.0, and an incubation temperature of 33 °C, yielding 3.51 × 10^10 adenovirus particles/mL, increasing the initial adenovirus particle concentration 2.3-fold, purifying it 2.2-fold from the cell lysate, and removing cell debris. In conclusion, these results demonstrated that the use of an aqueous two-phase micellar system in the early steps of downstream processing could improve viral particle extraction from cultured cells while integrating the clarification, concentration, and prepurification steps. © 2017 International Union of Biochemistry and Molecular Biology, Inc.
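
    The 2^3 full factorial design with center points described above is easy to enumerate programmatically. The sketch below lists the 8 corner runs plus center points using the factor levels from the abstract; the number of center-point replicates is an assumption.

    ```python
    from itertools import product

    # Factor levels taken from the abstract: (low, center, high).
    levels = {
        "triton_x114_pct": (1.0, 6.0, 11.0),
        "lysate_pH":       (6.0, 6.5, 7.0),
        "temperature_C":   (33.0, 35.0, 37.0),
    }

    # 2^3 factorial: all low/high combinations of the three factors...
    runs = [dict(zip(levels, combo))
            for combo in product(*[(lo, hi) for lo, _, hi in levels.values()])]
    # ...plus replicated center points to estimate pure error and curvature.
    center = {name: mid for name, (_, mid, _) in levels.items()}
    runs += [center] * 3   # replicate count is an assumption, not from the paper

    for i, run in enumerate(runs, 1):
        print(i, run)      # 8 corner runs + 3 center points = 11 runs
    ```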

  2. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.

    2013-10-01

    This article presents a generic and efficient method to register terrestrial mobile data with imperfect location on a geographic database with better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling, fine façade texturing, and object extraction such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause a significant drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).
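
    The data attachment term, minimizing point-to-plane distances between laser points and model primitives, can be illustrated with one linearized least-squares step. The sketch below solves for a small rigid motion only, given matched correspondences; the paper's semi-rigid formulation additionally estimates the drift as a piecewise linear function of time.

    ```python
    import numpy as np

    def point_to_plane_step(src, dst, normals):
        """One linearized point-to-plane step: solve for a small rotation
        (rx, ry, rz) and translation (tx, ty, tz) minimizing
        sum(((R @ s + t - d) . n)^2) over matched pairs.

        src, dst: Nx3 matched points; normals: Nx3 unit normals of the
        planar model primitives. Illustrative rigid step only; the drift
        model of the paper is omitted here.
        """
        A = np.hstack([np.cross(src, normals), normals])    # Nx6 Jacobian
        b = np.einsum("ij,ij->i", dst - src, normals)       # residual along n
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        rx, ry, rz, tx, ty, tz = x
        R = np.array([[1.0, -rz,  ry],
                      [rz,  1.0, -rx],
                      [-ry, rx,  1.0]])                     # small-angle rotation
        return R, np.array([tx, ty, tz])
    ```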

  3. [Deconvolution of overlapped peaks in total ion chromatogram of essential oil from citri reticulatae pericarpium viride by automated mass spectral deconvolution & identification system].

    PubMed

    Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing

    2013-05-01

    This experiment shows how to use the Automated Mass Spectral Deconvolution and Identification System (AMDIS) to deconvolve the overlapped peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation. Its TIC was obtained by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through its run function. Then, by comparing the extracted spectrum at the corresponding scan point of a detected component, the original spectrum at that scan point, and their counterpart spectra in the reference MS library, researchers can accurately ascertain the component's structure or rule out compounds that are not actually present. Furthermore, the previous outcome can be confirmed again by examining the variability of the characteristic fragment-ion peaks of the identified compounds. The results demonstrated that AMDIS could efficiently deconvolve the overlapped peaks in the TIC by extracting the spectrum at the matching scan point of each discerned component, leading to exact identification of the component's structure.

  4. The Effect of Hydraulic Gradient and Pattern of Conduit Systems on Tracing Tests: Bench-Scale Modeling.

    PubMed

    Mohammadi, Zargham; Gharaat, Mohammad Javad; Field, Malcolm

    2018-03-13

    Tracer breakthrough curves provide valuable information about the traced media, especially in inherently heterogeneous karst aquifers. In order to study the effect of variations in hydraulic gradient and conduit systems on breakthrough curves, a bench-scale karst model was constructed. The bench-scale karst model contains both matrix and a conduit. Eight tracing tests were conducted under a wide range of hydraulic gradients, from 1 to greater than 5, for branchwork and network-conduit systems. Sampling points at varying distances from the injection point were utilized. Results demonstrate that mean tracer velocities, tracer mass recovery and the linear rising slope of the breakthrough curves were directly controlled by the hydraulic gradient. As the hydraulic gradient increased, both one half the time for peak concentration and one fifth the time for peak concentration decreased. The results demonstrate that the variations in one half the time for peak concentration and one fifth the time for peak concentration of the descending limb for different sampling points under differing hydraulic gradients are mainly controlled by the interactions of advection with dispersion. The results are discussed from three perspectives: different conduit systems, different hydraulic-gradient conditions, and different sampling points. The research confirmed the undeniable role of the hydrogeological setting (i.e., hydraulic gradient and conduit system) in the shape of the breakthrough curve. The extracted parameters (mobile-fluid velocity, tracer-mass recovery, linear rising limb, one half the time for peak concentration, and one fifth the time for peak concentration) allow for differentiating hydrogeological settings and enhance interpretation of tracer tests in karst aquifers. © 2018, National Ground Water Association.
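
    The shape parameters named above can be read off a sampled breakthrough curve by interpolation. The sketch below extracts the peak time, the half-peak time on the rising limb, and the one-fifth-peak time on the descending limb from a synthetic curve; the paper's exact operational definitions may differ slightly.

    ```python
    import numpy as np

    def btc_metrics(t, c):
        """Extract simple breakthrough-curve shape parameters.

        Returns the peak time plus the times at which concentration passes
        half the peak on the rising limb and one fifth of the peak on the
        descending limb (assumed definitions for illustration).
        """
        k = int(np.argmax(c))
        c_peak, t_peak = c[k], t[k]
        # Rising limb: crossing of c_peak/2 before the peak (monotone rise assumed).
        t_half = np.interp(c_peak / 2.0, c[:k + 1], t[:k + 1])
        # Descending limb: crossing of c_peak/5 after the peak (reversed for interp).
        t_fifth = np.interp(c_peak / 5.0, c[k:][::-1], t[k:][::-1])
        return t_peak, t_half, t_fifth

    t = np.linspace(0, 10, 201)
    c = np.exp(-((t - 3.0) / 1.2) ** 2)       # synthetic breakthrough curve
    print(btc_metrics(t, c))
    ```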

  5. Microgap Evaluation of Novel Hydrophilic and Hydrophobic Obturating System: A Scanning Electron Microscope Study.

    PubMed

    Hegde, Vibha; Murkey, Laxmi Suresh

    2017-05-01

    The purpose of an endodontic obturation is to obtain a fluid-tight hermetic seal of the entire root canal system. Different materials and techniques have evolved to achieve this desired gap-free, fluid-tight seal, owing to the anatomic complexity of the root canal system. The aim was to compare the microgap occurring in root canals obturated with hydrophilic versus hydrophobic systems using a scanning electron microscope. Sixty extracted human single-rooted premolars were decoronated and instrumented using NiTi rotary instruments. The samples were divided into three groups (n=20 each) and obturated with: Group A (control), gutta-percha with AH Plus; Group B, C-point with Smartpaste Bio; and Group C, gutta-percha with GuttaFlow 2. The samples were split longitudinally into two halves, and the microgap was observed under a scanning electron microscope in the apical 3 mm of the root canal. Group A (control) showed a mean difference of 8.54, compared with 5.76 in Group C. Group B showed the lowest mean difference of 0.83, suggesting that the hydrophilic system (C-point/Smartpaste Bio) produced the least microgap compared with the hydrophobic groups. The novel hydrophilic obturating system (C-point/Smartpaste Bio) showed a better seal and the least microgap compared with gutta-percha/GuttaFlow 2 and gutta-percha/AH Plus, which showed gaps at the sealer-dentin interface due to less penetration and bonding of these hydrophobic obturating systems.

  6. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text-mining algorithms are not very effective on online course documents, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
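
    The TF-IDF scoring at the heart of the ranking step can be sketched as follows. This is plain TF-IDF over whitespace-tokenized English text for illustration only; AECKP additionally applies Chinese word segmentation, POS tagging, and VSM-similarity-based weight optimization.

    ```python
    import math
    from collections import Counter

    def tfidf_top_terms(docs, top_n=3):
        """Rank candidate knowledge-point terms per document by plain TF-IDF."""
        tokenized = [d.lower().split() for d in docs]
        df = Counter(term for toks in tokenized for term in set(toks))
        n_docs = len(docs)
        results = []
        for toks in tokenized:
            tf = Counter(toks)
            # tf * idf; terms appearing in every document score zero.
            scores = {term: (cnt / len(toks)) * math.log(n_docs / df[term])
                      for term, cnt in tf.items()}
            results.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
        return results

    docs = ["pointer arithmetic and pointer types in c",
            "arrays and strings in c",
            "recursion and the call stack"]
    print(tfidf_top_terms(docs))
    ```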

  7. Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images

    PubMed Central

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from the laser data. Then strong lines in the images are extracted using the Canny edge detector and the Hough transform, and compared with the current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
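
    The line extraction step maps directly onto standard OpenCV calls: Canny edges followed by a probabilistic Hough transform, with a simple orientation filter favoring the near-horizontal and near-vertical segments typical of facade edges. The file name, thresholds, and angle tolerance below are arbitrary placeholder choices, not values from the paper.

    ```python
    import cv2
    import numpy as np

    # Load a close-range facade image (path is a placeholder).
    img = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, threshold1=50, threshold2=150)

    # Probabilistic Hough transform: returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)

    # Keep near-vertical and near-horizontal segments, the usual candidates
    # for facade model edges (the 10-degree tolerance is an arbitrary choice).
    kept = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
            if min(angle, abs(angle - 90), abs(angle - 180)) < 10:
                kept.append((x1, y1, x2, y2))
    print(len(kept), "candidate model edges")
    ```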

  8. Change Detection Based on Persistent Scatterer Interferometry - a New Method of Monitoring Building Changes

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Kenduiywo, B. K.; Soergel, U.

    2016-06-01

    Persistent Scatterer Interferometry (PSI) is a technique to detect a network of extracted persistent scatterer (PS) points which feature temporal phase stability and a strong radar signal throughout a time series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures substantially alter or even vanish due to a big change such as construction, their PS points are discarded without additional exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g. image difference, image ratio) to highlight scene modifications at a coarser level rather than at the detail level. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to synergize the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method adopting the change index to extract BC points along with an indication of the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site north of Berlin's main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.

  9. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
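
    The core of the filter, declaring a steady-state point when every monitored parameter's windowed standard deviation falls below its constraint and archiving the window means, can be sketched generically. The parameter names and limits below are assumptions, and the flight implementation adds domain-specific outlier logic on top.

    ```python
    from collections import deque
    import statistics

    def steady_state_points(stream, window=50, limits=None):
        """Yield the mean of each monitored parameter whenever every
        parameter's windowed standard deviation falls below its limit.

        stream: iterable of dicts, e.g. {"N1": ..., "T45": ...} per sample
        (hypothetical parameter names). limits: per-parameter standard-
        deviation constraints; the defaults here are assumed values.
        """
        limits = limits or {}
        buf = {}
        for sample in stream:
            for name, value in sample.items():
                buf.setdefault(name, deque(maxlen=window)).append(value)
            if all(len(q) == window for q in buf.values()):
                if all(statistics.pstdev(buf[n]) < limits.get(n, 1.0)
                       for n in buf):
                    # Archive the window means for condition monitoring.
                    yield {n: statistics.fmean(buf[n]) for n in buf}
    ```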

  10. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated based on a direction search over the control point set, utilizing the spatial relationship between the two control points that users provide in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool for the points when optimizing their shortest-path values, thus reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, with the advantage of the optimal path search based on a direction search over the control point set, which reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All the methods mentioned above play a significant role in improving the execution efficiency and the robustness of the algorithm.
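
    The Haar decomposition used in the first step can be sketched directly: one transform level halves the image resolution for fast boundary tracing, and the stored detail subbands allow exact reconstruction afterwards. This is a minimal averaging/differencing formulation for illustration, not the paper's code.

    ```python
    import numpy as np

    def haar2d(img):
        """One level of the 2D Haar transform (image sides must be even).

        Returns the low-resolution approximation LL (usable for fast boundary
        tracing) plus the detail subbands LH, HL, HH needed to invert exactly.
        """
        a = img.astype(float)
        # Rows: averages and differences of adjacent pixel pairs.
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
        # Columns: the same split applied to both row subbands.
        ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
        lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
        hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
        hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
        return ll, lh, hl, hh

    def ihaar2d(ll, lh, hl, hh):
        """Invert haar2d exactly (maps the low-resolution result back)."""
        lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
        hi = np.empty_like(lo)
        lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
        hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
        out = np.empty((lo.shape[0], lo.shape[1] * 2))
        out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
        return out

    img = np.arange(16.0).reshape(4, 4)
    print(np.allclose(ihaar2d(*haar2d(img)), img))  # True: exact reconstruction
    ```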

  11. Sasquatch Footprint Tool

    NASA Technical Reports Server (NTRS)

    Bledsoe, Kristin

    2013-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) is the parachute system for NASA's Orion spacecraft. The test program consists of numerous drop tests, wherein a test article rigged with parachutes is extracted or released from an aircraft. During such tests, range safety is paramount, as is the recoverability of the parachutes and test article. It is crucial to establish an aircraft release point that will ensure that the article and all items released from it will land in safe locations. A new footprint predictor tool, called Sasquatch, was created in MATLAB. This tool takes in a simulated trajectory for the test article, information about all released objects, and atmospheric wind data (simulated or actual) to calculate the trajectories of the released objects. Dispersions are applied to the landing locations of those objects, taking into account the variability of winds, aircraft release point, and object descent rate. Sasquatch establishes a payload release point (e.g., where the payload will be extracted from the carrier aircraft) that will ensure that the payload and all objects released from it will land in a specified cleared area. The landing locations (the final points in the trajectories) are plotted on a map of the test range. Sasquatch was originally designed for CPAS drop tests and includes extensive information about both the CPAS hardware and the primary test range used for CPAS testing. However, it can easily be adapted for more complex CPAS drop tests, other NASA projects, and commercial partners. CPAS has developed the Sasquatch footprint tool to ensure range safety during parachute drop tests. Sasquatch correlates well with test data and continues to ensure the safety of test personnel as well as the safe recovery of all equipment. The tool will continue to be modified based on new test data, improving predictions and providing added capability to meet the requirements of more complex testing.

  12. Cloud point extraction of iron(III) and vanadium(V) using 8-quinolinol derivatives and Triton X-100 and determination of 10^-7 mol dm^-3 level iron(III) in a riverine water reference by graphite furnace atomic absorption spectroscopy.

    PubMed

    Ohashi, Akira; Ito, Hiromi; Kanai, Chikako; Imura, Hisanori; Ohashi, Kousaburo

    2005-01-30

    The cloud point extraction behavior of iron(III) and vanadium(V) using 8-quinolinol derivatives (HA) such as 8-quinolinol (HQ), 2-methyl-8-quinolinol (HMQ), 5-butyloxymethyl-8-quinolinol (HO4Q), 5-hexyloxymethyl-8-quinolinol (HO6Q), and 2-methyl-5-octyloxymethyl-8-quinolinol (HMO8Q) in Triton X-100 solution was investigated. Iron(III) was extracted with HA and 4% (v/v) Triton X-100 in the pH range of 1.70-5.44. Above pH 4.0, more than 95% of iron(III) was extracted with HQ, HMQ, and HMO8Q. Vanadium(V) was also extracted with HA and 4% (v/v) Triton X-100, in the pH range of 2.07-5.00, and the extractability increased in the order HMQ < HQ < HO4Q < HO6Q. The cloud point extraction was applied to the determination of iron(III) in a riverine water reference by graphite furnace atomic absorption spectroscopy. When 1.25 × 10^-3 M HMQ and 1% (v/v) Triton X-100 were used, the values found showed good agreement with the certified values, within 2% R.S.D. Moreover, the effect of the alkyl group on the solubility of 5-alkyloxymethyl-8-quinolinol and 2-methyl-5-alkyloxymethyl-8-quinolinol in 4% (v/v) Triton X-100 at 25 °C was also investigated.

  13. Design of a Customized Multipurpose Nano-Enabled Implantable System for In-Vivo Theranostics

    PubMed Central

    Juanola-Feliu, Esteve; Miribel-Català, Pere Ll.; Páez Avilés, Cristina; Colomer-Farrarons, Jordi; González-Piñero, Manel; Samitier, Josep

    2014-01-01

    The first part of this paper reviews the current development and key issues of implantable multi-sensor devices for in vivo theranostics. Afterwards, the authors propose an innovative biomedical multisensory system for in vivo biomarker monitoring that could be suitable for customized theranostics applications. Findings suggest that cross-cutting Key Enabling Technologies (KETs) could improve the overall performance of the system, given that the convergence of nanotechnology, biotechnology, micro- and nanoelectronics and advanced materials permits the development of new medical devices of small dimensions, using biocompatible materials, and embedding reliable and targeted biosensors, high-speed data communication, and even energy autonomy. Therefore, this article deals with new research and market challenges of implantable sensor devices, from the point of view of the pervasive system and of time-to-market. The remote clinical monitoring approach introduced in this paper could be based on an array of biosensors to extract information from the patient. A key contribution of the authors is that the general architecture introduced in this paper would require only minor modifications for the final customized bio-implantable medical device. PMID:25325336

  14. Hybrid Automatic Building Interpretation System

    NASA Astrophysics Data System (ADS)

    Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.

    2011-09-01

    HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work to a high degree automatically. The hybrid method uses different sources, intending to exploit the advantages of each particular source. 3D point clouds usually provide good height and surface data, whereas spatially high-resolution aerial images provide important information for edges and detail information for roof objects like dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS works with a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. It then continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is done. In addition, some roof objects like dormers and chimneys are also extracted based on aerial images and added to the models. In this paper the methods used are described and some results are presented.

  15. Studies of flerovium and element 115 homologs with macrocyclic extractants

    NASA Astrophysics Data System (ADS)

    Despotopulos, John Dustin

    Study of the chemistry of the heaviest elements, Z ≥ 104, poses a unique challenge due to their low production cross-sections and short half-lives. Chemistry must also be studied on the one-atom-at-a-time scale, requiring automated, fast, and very efficient chemical schemes. Recent studies of the chemical behavior of copernicium (Cn, element 112) and flerovium (Fl, element 114), together with the discovery of isotopes of these elements with half-lives suitable for chemical studies, have spurred a renewed interest in the development of rapid systems designed to study the chemical properties of elements with Z ≥ 114. This dissertation explores both extraction chromatography and solvent extraction as methods for development of a rapid chemical separation scheme for the homologs of flerovium (Pb, Sn, Hg) and element 115 (Bi, Sb), with the goal of developing a chemical scheme that, in the future, can be applied to on-line chemistry of both Fl and element 115. Macrocyclic extractants, specifically crown ethers and their derivatives, were chosen for these studies. The carrier-free radionuclides of the homologs of Fl and element 115 used in these studies were obtained by proton activation of high-purity metal foils at the Lawrence Livermore National Laboratory (LLNL) Center for Accelerator Mass Spectrometry (CAMS): natIn(p,n)113Sn, natSn(p,n)124Sb, and Au(p,n)197m,gHg. The carrier-free activity was separated from the foils by novel separation schemes based on ion exchange and extraction chromatography techniques. Carrier-free Pb and Bi isotopes were obtained from the development of a novel generator based on cation exchange chromatography using the 232U parent to generate 212Pb and 212Bi. Crown ethers show high selectivity for metal ions based on their size compared to the negatively charged cavity of the ether. Extraction by crown ethers occurs through electrostatic ion-dipole interactions between the negatively charged ring atoms (oxygen, sulfur, etc.) and the positively charged metal cations. Extraction chromatography resins produced by Eichrom Technologies, specifically the Pb resin based on di-t-butylcyclohexano-18-crown-6, were chosen as a starting point for these studies. Simple chemical systems based solely on HCl matrices were explored to determine the extent of extraction of Pb, Sn and Hg on the resin. The kinetics and mechanism of extraction were also explored to determine suitability for a Fl chemistry experiment. Systems based on KI/HCl and KI/HNO3 were explored for Bi and Sb. In both cases suitable separations, with high separation factors, were achieved with vacuum flow columns containing the Pb resin. Unfortunately, the kinetics of Hg uptake on the traditional crown ether are far too slow to perform a Fl experiment and determine whether or not Fl has true Hg-like character. However, the kinetics for Pb and Sn are more than sufficient for a Fl experiment to differentiate between Pb- and Sn-like character. To address this kinetic issue, a novel macrocyclic extractant based on sulfur donors was synthesized. Hexathia-18-crown-6, the sulfur analog of 18-crown-6, was synthesized by a template reaction using high-dilution techniques. The replacement of oxygen ring atoms with sulfur should give the extractant a softer character, which should allow for far greater affinity toward soft metals such as Hg and Pb. From HCl matrices, hexathia-18-crown-6 showed far greater kinetics and affinity for Hg than the Pb resin; however, no affinity for Pb or Sn was seen.
This is presumably because the charge density of sulfur crown ethers does not point toward the center of the ring; future synthesis of a substituted sulfur crown ether that forces the charge density to mimic that of the traditional crown ether should enable extraction of Pb and Sn to a greater extent than with the Pb resin. Initial studies show promise for the separation of Bi and Sb from HCl matrices using hexathia-18-crown-6. Other macrocyclic extractants, including 2,2,2-cryptand, calix[6]arene and tetrathia-12-crown-4, were also investigated for comparison to the crown ethers. These extractants proved inferior to the crown and thiacrown ethers for the extraction of Fl and element 115 homologs. A potential chemical system for Fl was established based on the Eichrom Pb resin, and insight into an improved system based on thiacrown ethers is presented.

  16. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
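
    The keypoint-extraction-and-matching pipeline can be sketched with standard computer vision building blocks: detect and describe keypoints in each pseudo-image, match descriptors, and refine with RANSAC. The sketch below uses ORB features and a homography model purely for illustration; the paper's choice of features and geometric model may differ, and its trajectory-based heuristics are omitted.

    ```python
    import cv2
    import numpy as np

    def match_pseudo_images(img_a, img_b, min_inliers=10):
        """Detect a correspondence event between two sonar pseudo-images.

        ORB keypoints + brute-force matching, refined by RANSAC on a
        homography; treat this as an illustrative stand-in for the
        paper's pipeline rather than its actual implementation.
        """
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return False, 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        if len(matches) < min_inliers:
            return False, 0
        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # RANSAC rejects geometrically inconsistent matches.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        inliers = int(mask.sum()) if mask is not None else 0
        return inliers >= min_inliers, inliers
    ```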

  17. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on the system's instantaneous performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, the air excess ratio and the stack temperature. The results show that the proposed optimal robust control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.

  18. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in recent decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and a specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using simple custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing and provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed on both synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
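
    Steps (i) and (ii), estimating local plane normals via k-nearest-neighbor PCA and grouping them into orientation sets, can be sketched as below. DBSCAN over normal vectors is used here as a simple stand-in for the tool's kernel density estimation and vector-similarity clustering; k, eps, and min_samples are assumptions to tune per dataset.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.cluster import DBSCAN

    def knn_pca_normals(pts, k=20):
        """Per-point unit normals from PCA of the k nearest neighbours."""
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=k)
        normals = np.empty_like(pts)
        for i, nb in enumerate(idx):
            q = pts[nb] - pts[nb].mean(axis=0)
            # Eigenvector of the smallest eigenvalue = local plane normal.
            _, vecs = np.linalg.eigh(q.T @ q)
            n = vecs[:, 0]
            normals[i] = n if n[2] >= 0 else -n   # consistent hemisphere
        return normals

    def cluster_discontinuity_sets(normals, eps=0.1, min_samples=50):
        """Group points into discontinuity sets by normal orientation.

        Returns one label per point; -1 marks unassigned (noise) points.
        """
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(normals)
    ```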

  19. Reconstruction 3-dimensional image from 2-dimensional image of status optical coherence tomography (OCT) for analysis of changes in retinal thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinilhaq,; Widita, Rena

    2014-09-30

    Optical Coherence Tomography is often used in medical image acquisition for diagnosis because it is easy to use and low in price. Unfortunately, this type of examination produces a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into a three-dimensional image to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. In the data acquisition step, Optical Coherence Tomography produced six *.jpg images for each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.

  20. Terrestrial laser scanning for geometry extraction and change monitoring of rubble mound breakwaters

    NASA Astrophysics Data System (ADS)

    Puente, I.; Lindenbergh, R.; González-Jorge, H.; Arias, P.

    2014-05-01

    Rubble mound breakwaters are coastal defense structures that protect harbors and beaches from the impacts of both littoral drift and storm waves. They occasionally break, leading to catastrophic damage to surrounding human populations and resulting in huge economic and environmental losses. Ensuring their stability is considered to be of vital importance and is the major reason for setting up breakwater monitoring systems. Terrestrial laser scanning has been recognized as a monitoring technique for existing infrastructure. Its capability for measuring large numbers of accurate points in a short period of time is also well proven. In this paper we first introduce a method for the automatic extraction of the face geometry of concrete cubic blocks, as typically used in breakwaters. Point clouds are segmented based on their orientation and location. Then we compare corresponding cuboids of three co-registered point clouds to estimate their transformation parameters over time. The first method is demonstrated on scan data from the Baiona breakwater (Spain), while the change detection is demonstrated on repeated scan data of concrete bricks, where the changing scenario was simulated. The application of the presented methodology has verified its effectiveness for outlining the 3D breakwater units and analyzing their changes at the millimeter level. Breakwater management activities could benefit from this initial version of the method in order to improve their productivity.

  1. Designing and Implementation of Fuzzy Case-based Reasoning System on Android Platform Using Electronic Discharge Summary of Patients with Chronic Kidney Diseases

    PubMed Central

    Tahmasebian, Shahram; Langarizadeh, Mostafa; Ghazisaeidi, Marjan; Mahdavi-Mazdeh, Mitra

    2016-01-01

    Introduction: Case-based reasoning (CBR) systems are one of the effective methods for finding the nearest solution to a current problem. These systems are used in various spheres, such as industry, business, and economics. The medical field is no exception, and these systems are nowadays used in various aspects of diagnosis and treatment. Methodology: In this study, the effective parameters were first extracted from the structured discharge summaries prepared for patients with chronic kidney diseases using data mining methods. Then, through a meeting with experts in nephrology and using data mining methods, the weights of the parameters were extracted. Finally, a fuzzy system was employed to compare the similarity of the current case with previous cases, and the system was implemented on the Android platform. Discussion: The data from electronic discharge records of patients with chronic kidney diseases were entered into the system. The measure of similarity was assessed using the algorithm provided in the system, and then compared with other known methods in CBR systems. Conclusion: The developed clinical fuzzy CBR system can be used within a knowledge management framework for registering specific therapeutic methods, as a knowledge-sharing environment for experts in a specific domain, and as a powerful tool at the point of care. PMID:27708490

  2. A 'digital' technique for manual extraction of data from aerial photography

    NASA Technical Reports Server (NTRS)

    Istvan, L. B.; Bondy, M. T.

    1977-01-01

    The interpretation procedure described uses a grid-cell approach. In addition, a random point is located in each cell. The procedure requires that the cell/point grid be established on a base map, and that identical grids be made to precisely match the scale of the photographic frames. The grid is then positioned on the photography by visual alignment to obvious features. Several alignments on one frame are sometimes required to make a precise match of all points to be interpreted. This system inherently corrects for distortions in the photography. Interpretation is then done cell by cell. In order to meet the time constraints, first-order interpretation should be maintained. The data are put onto coding forms, along with other appropriate data if desired. This 'digital' manual interpretation technique has proven to be efficient, and time- and cost-effective, while meeting strict requirements for data format and accuracy.

  3. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile-phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane, while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffé-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least squares equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  4. a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which high-quality Digital Elevation Models (DEMs) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to distinguish terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and large computing resources due to the high point density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithm's efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantity.
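
    The Map/Reduce decomposition of DEM gridding is straightforward to sketch: the map phase keys each terrain point by its grid cell, and the reduce phase aggregates the elevations per cell (here by simple averaging rather than a full interpolation). This local pure-Python sketch only mimics the structure of the Hadoop job; in Hadoop, the shuffle between the two phases is distributed across nodes.

    ```python
    from collections import defaultdict

    def map_phase(points, cell=1.0):
        """Map: emit (grid-cell key, elevation) for each terrain point."""
        for x, y, z in points:
            yield (int(x // cell), int(y // cell)), z

    def reduce_phase(pairs):
        """Reduce: average the elevations that share a grid-cell key."""
        cells = defaultdict(list)
        for key, z in pairs:
            cells[key].append(z)
        return {key: sum(zs) / len(zs) for key, zs in cells.items()}

    # Toy point set standing in for classified terrain points (x, y, z).
    points = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.4, 9.0)]
    dem = reduce_phase(map_phase(points, cell=1.0))
    print(dem)   # {(0, 0): 11.0, (1, 0): 9.0}
    ```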

  5. The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.

    PubMed

    Watanabe, Eiju; Satoh, Makoto; Konno, Takehiko; Hirai, Masahiro; Yamaguchi, Takashi

    2016-03-01

    The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to rotate the orientation of the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed with corresponding angles as viewed from the camera direction, thus giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and the localization of superficial tumors. The main advantage of the present system is that it achieves volumetric navigation in contrast to conventional point-to-point navigation. It overlays augmented reality images directly onto the real surgical image, thus helping the surgeon to integrate these 2 dimensions intuitively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the following three stages: interest point detection, clustering of those points, and similarity search. In the first stage, discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on stable local features related to brightness and color, which are extracted using Gabor filters. Then the points belonging to each partition are re-clustered, depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used to match unknown signs against the known reference road signs stored in the database. The presented framework provides a novel way to detect road signs in natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.
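
    A compact sketch of the two-step grouping described above: points are first clustered by appearance features, and each group is then re-clustered by position. The feature arrays and fixed cluster counts are illustrative; the paper's criteria are adaptive.

    ```python
    # Two-step clustering sketch: appearance first, then position.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    color_feats = rng.random((300, 8))       # stand-in Gabor/color responses
    positions = rng.random((300, 2)) * 100   # stand-in point coordinates

    step1 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(color_feats)
    rois = []
    for c in range(4):
        idx = np.where(step1 == c)[0]
        if idx.size < 2:
            continue
        # Step 2: split dispersed groups into compact candidate ROIs.
        step2 = KMeans(n_clusters=2, n_init=10,
                       random_state=0).fit_predict(positions[idx])
        rois += [idx[step2 == k] for k in range(2)]
    ```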

  7. A sensitive analytical procedure for monitoring acrylamide in environmental water samples by offline SPE-UPLC/MS/MS.

    PubMed

    Togola, Anne; Coureau, Charlotte; Guezennec, Anne-Gwenaëlle; Touzé, Solène

    2015-05-01

    The presence of acrylamide in natural systems is of concern from both environmental and health points of view. We developed an accurate and robust analytical procedure (offline solid phase extraction combined with UPLC/MS/MS) with a limit of quantification (20 ng L−1) compatible with toxicity threshold values. The solid phase extraction (SPE), optimized with respect to the nature of the extraction phases, the sampling volumes and the elution solvent, was validated according to ISO Standard ISO/IEC 17025 on groundwater, surface water, and industrial process water samples. Acrylamide is highly polar, which induces high variability during the SPE step, therefore requiring the use of 13C-labeled acrylamide as an internal standard to guarantee the accuracy and robustness of the method (uncertainty about 25% (k = 2) at the limit of quantification level). The specificity of the method and the stability of acrylamide were studied for these environmental media, and it was shown that the method is suitable for measuring acrylamide in environmental studies.

  8. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geo-spatial objects and military establishments, airports are always key targets in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method achieves good results.
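
    Since the feature extraction builds on a modified Otsu algorithm, the sketch below shows the textbook Otsu threshold as the baseline being modified; it is not the authors' variant.

    ```python
    # Plain Otsu: choose the threshold maximizing between-class variance.
    import numpy as np

    def otsu_threshold(gray):
        """Threshold for a uint8 grayscale image."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_t, best_var = t, var_between
        return best_t
    ```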

  9. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.

    PubMed

    Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify the human-visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider; the output was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. The analysis shows that the colors cover a small region of the visible spectrum, that they are not spatially homogeneously distributed over the patterns, and that, from an entropic point of view, colors covering a smaller region of the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems for extracting valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology.
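
    As a small illustration of the entropy measure mentioned, the sketch below computes the Shannon entropy of a map of color-class labels; the label array is a placeholder for the groups found by the unsupervised algorithm.

    ```python
    # Shannon entropy of a color-class label map (placeholder input).
    import numpy as np

    def shannon_entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    labels = np.random.default_rng(0).integers(0, 5, size=(64, 64))
    print(shannon_entropy(labels))  # bits per pixel over 5 color classes
    ```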

  10. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders

    PubMed Central

    Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify the human-visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider; the output was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. The analysis shows that the colors cover a small region of the visible spectrum, that they are not spatially homogeneously distributed over the patterns, and that, from an entropic point of view, colors covering a smaller region of the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems for extracting valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology. PMID:27902724

  11. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages in which three types of primitives are utilized: smooth surfaces, rough surfaces, and individual points. In the first stage, the input ALS data are divided into smooth surfaces and rough surfaces by a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of the rough surfaces are extracted. Then, points in the rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.
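
    A minimal sketch of the second-stage step that separates vegetation from vehicles with Random Forests follows; the three stand-in features and the synthetic labels are assumptions, since the abstract does not list the exact feature set.

    ```python
    # Random Forest over per-segment features (names/values are stand-ins).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # e.g. height spread, planarity, echo ratio per rough-surface segment
    X = rng.random((200, 3))
    y = (X[:, 0] > 0.5).astype(int)   # 0 = vegetation, 1 = vehicle (synthetic)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))
    ```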

  12. Supercritical-Fluid Extraction of Oil From Tar Sands

    NASA Technical Reports Server (NTRS)

    Compton, L. E.

    1982-01-01

    New supercritical solvent mixtures have been laboratory-tested for extraction of oil from tar sands. The mixture is circulated through the sand at high pressure and at a temperature above the critical point, dissolving organic matter into the compressed gas. The extract is recovered from the sand residues. Low-temperature supercritical solvents reduce energy consumption and waste-disposal problems.

  13. Soxhlet Extraction of Caffeine from Beverage Plants

    NASA Astrophysics Data System (ADS)

    Adam, D. J.; Mainwaring, J.; Quigley, Michael N.

    1996-12-01

    A simple procedure is described for the extraction of caffeine from coffee beans or granules, tea leaves, maté leaves, etc. Since dichloromethane and several other hazardous substances are used, the procedure is best performed in a fume hood. Following extraction, melting-point determination of the crystalline precipitate establishes its positive identity. Includes 33 references.

  14. Investigation of cloud point extraction for the analysis of metallic nanoparticles in a soil matrix

    PubMed Central

    Hadri, Hind El; Hackley, Vincent A.

    2017-01-01

    The characterization of manufactured nanoparticles (MNPs) in environmental samples is necessary to assess their behavior, fate and potential toxicity. Several techniques are available, but the limit of detection (LOD) is often too high for environmentally relevant concentrations. Therefore, pre-concentration of MNPs is an important component in the sample preparation step, in order to apply analytical tools with a LOD higher than the ng kg−1 level. The objective of this study was to explore cloud point extraction (CPE) as a viable method to pre-concentrate gold nanoparticles (AuNPs), as a model MNP, spiked into a soil extract matrix. To that end, different extraction conditions and surface coatings were evaluated in a simple matrix. The CPE method was then applied to soil extract samples spiked with AuNPs. Total gold, determined by inductively coupled plasma mass spectrometry (ICP-MS) following acid digestion, yielded a recovery greater than 90 %. The first known application of single particle ICP-MS and asymmetric flow field-flow fractionation to evaluate the preservation of the AuNP physical state following CPE extraction is demonstrated. PMID:28507763

  15. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  16. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas have become available for 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same center as their nearest point of higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Then, candidate trunks are extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts respectively and to obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
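
    The trunk-center rule described above (a local density peak combined with a large distance to any denser point) matches the density-peaks clustering idea of Rodriguez and Laio; a generic sketch is shown below, with the cutoff radius and center count as illustrative parameters.

    ```python
    # Density-peaks sketch: score each point by density * distance-to-denser.
    import numpy as np
    from scipy.spatial.distance import cdist

    def density_peak_centers(points, dc=0.5, n_centers=2):
        d = cdist(points, points)
        rho = (d < dc).sum(axis=1) - 1          # density within radius dc
        delta = np.empty(len(points))
        for i in range(len(points)):
            denser = np.where(rho > rho[i])[0]  # points of higher density
            delta[i] = d[i, denser].min() if denser.size else d[i].max()
        return np.argsort(rho * delta)[-n_centers:]  # likely center indices
    ```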

  17. Innovations in the Analysis of Chandra-ACIS Observations

    NASA Astrophysics Data System (ADS)

    Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.

    2010-05-01

    As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.

  18. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information. The system is intended to control the laparoscopic robot da Vinci. The audio signal is processed using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
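
    For the audio front end, the MFCC parametrization named in the abstract is standard; a minimal sketch with librosa follows, where the file path, sample rate and coefficient count are illustrative defaults rather than the authors' settings.

    ```python
    # MFCC extraction sketch (path and parameters are illustrative).
    import librosa

    signal, sr = librosa.load("command.wav", sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    print(mfcc.shape)  # (13, n_frames): 13 cepstral coefficients per frame
    ```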

  19. Zero Gravity Cryogenic Vent System Concepts for Upper Stages

    NASA Technical Reports Server (NTRS)

    Flachbart, Robin H.; Holt, James B.; Hastings, Leon J.

    2001-01-01

    The capability to vent in zero gravity without resettling is a technology need that involves practically all uses of sub-critical cryogenics in space, and would extend cryogenic orbital transfer vehicle capabilities. However, the lack of definition regarding liquid/ullage orientation, coupled with the somewhat random nature of the thermal stratification and the resulting pressure rise rates, leads to significant technical challenges. Typically a zero gravity vent concept, termed a thermodynamic vent system (TVS), consists of a tank mixer to destratify the propellant, combined with a Joule-Thomson (J-T) valve to extract thermal energy from the propellant. Marshall Space Flight Center's (MSFC's) Multipurpose Hydrogen Test Bed (MHTB) was used to test both spray-bar and axial jet TVS concepts. The axial jet system consists of a recirculation pump and heat exchanger unit. The spray-bar system consists of a recirculation pump, a parallel-flow concentric tube heat exchanger, and a spray-bar positioned close to the longitudinal axis of the tank. The operation of both concepts is similar. In the mixing mode, the recirculation pump withdraws liquid from the tank and sprays it onto the tank liquid, ullage, and exposed tank surfaces. When energy extraction is required, a small portion of the recirculated liquid is passed sequentially through the J-T expansion valve and the heat exchanger, and is vented overboard. The vented vapor cools the circulated bulk fluid, thereby removing thermal energy and reducing tank pressure. The pump operates alone, cycling on and off, to destratify the tank liquid and ullage until the liquid vapor pressure reaches the lower set point. At that point, the J-T valve begins to cycle on and off with the pump. Thus, for short-duration missions, only the mixer may operate, minimizing or even eliminating boil-off losses.

  20. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and large error margins, so clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without considering the continuity of physical coordinates. Besides, outages of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance in radio map clustering, and also shows better robustness and adaptability with respect to the asymmetric matching problem. PMID:24451470

  1. An accurate on-site calibration system for electronic voltage transformers using a standard capacitor

    NASA Astrophysics Data System (ADS)

    Hu, Chen; Chen, Mian-zhou; Li, Hong-bin; Zhang, Zhu; Jiao, Yang; Shao, Haiming

    2018-05-01

    Ordinarily, electronic voltage transformers (EVTs) are calibrated off-line, and the calibration procedure requires complex switching operations, which influence the reliability of the power grid and induce large economic losses. To overcome this problem, this paper investigates a 110 kV on-site calibration system for EVTs, comprising a standard channel, a calibrated channel and a PC equipped with the LabView environment. The standard channel employs a standard capacitor and an analogue integrating circuit to reconstruct the primary voltage signal. Moreover, an adaptive full-phase discrete Fourier transform (DFT) algorithm is proposed to extract the electrical parameters. The algorithm involves extracting the grid frequency, adjusting the operation points, and calculating the results using the DFT. In addition, an insulated automatic lifting device, driven by a wireless remote controller, is designed to realize live connection of the standard capacitor. A performance test of the capacitor verifies the accuracy of the standard capacitor. A system calibration test shows that the system ratio error is less than 0.04% and the phase error is below 2′, which meets the requirement of the 0.2 accuracy class. Finally, the developed calibration system was used in a substation, and the field test data validate the availability of the system.
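
    A generic illustration of the adaptive DFT idea follows: estimate the grid frequency, trim the record to a whole number of cycles, and read the fundamental's magnitude and phase from a single DFT bin. This is a sketch of the general technique, not the authors' exact algorithm.

    ```python
    # Estimate the fundamental's frequency, amplitude and phase from samples.
    import numpy as np

    def fundamental(signal, fs, f_guess=50.0):
        # Coarse frequency estimate from the FFT peak near the nominal value.
        spec = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        band = (freqs > 0.8 * f_guess) & (freqs < 1.2 * f_guess)
        f_est = freqs[band][np.argmax(spec[band])]
        # Adjust the analysis window to an integer number of periods.
        periods = round(len(signal) * f_est / fs)
        n = int(fs / f_est * periods)
        x = signal[:n]
        # Single-bin DFT at the estimated fundamental.
        k = round(f_est * n / fs)
        bin_k = np.fft.rfft(x)[k]
        return f_est, 2.0 * np.abs(bin_k) / n, np.angle(bin_k)
    ```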

  2. Octet baryon magnetic moments from lattice QCD: Approaching experiment from a three-flavor symmetric point

    DOE PAGES

    Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.; ...

    2017-06-23

    We used lattice QCD calculations with background magnetic fields to determine the magnetic moments of the octet baryons. Computations are performed at the physical value of the strange quark mass, and at two values of the light quark mass, one corresponding to the SU(3) flavor-symmetric point, where the pion mass is mπ ~ 800 MeV, and the other corresponding to a pion mass mπ ~ 450 MeV. The moments are found to exhibit only mild pion-mass dependence when expressed in terms of appropriately chosen magneton units: the natural baryon magneton. This suggests that simple extrapolations can be used to determine magnetic moments at the physical point, and extrapolated results are found to agree with experiment within uncertainties. A curious pattern is revealed among the anomalous baryon magnetic moments which is linked to the constituent quark model; however, careful scrutiny exposes additional features. Relations expected to hold in the large-Nc limit of QCD are studied; and, in one case, the quark model prediction is significantly closer to the extracted values than the large-Nc prediction. The magnetically coupled Λ-Σ0 system is treated in detail at the SU(3)F point, with the lattice QCD results comparing favorably with predictions based on SU(3)F symmetry. Our analysis enables the first extraction of the isovector transition magnetic polarizability. The possibility that large magnetic fields stabilize strange matter is explored, but such a scenario is found to be unlikely.

  3. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new road projects or improvements. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing datasets and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially inaccurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with the estimated road width, average grade along the road, and cross-sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent. 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  4. Liquid-liquid extraction of ethanol from aqueous solutions with amyl acetate, benzyl alcohol, and methyl isobutyl ketone at 298.15 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solimo, H.N.; Martinez, H.E.; Riggio, R.

    1989-04-01

    Experimental mutual solubility and tie-line data were determined for three ternary liquid-liquid systems containing water, ethanol, and amyl acetate, benzyl alcohol, or methyl isobutyl ketone at 298.15 K in order to obtain their complete phase diagrams and to determine which is the most suitable solvent for extraction of ethanol from aqueous solutions. Tie lines were determined by correlating the density of the binodal curve as a function of composition, and the plait points using the Othmer and Tobias method. The experimental data were also correlated with the UNIFAC group contribution method. A qualitative agreement was obtained. Experimental results show that amyl acetate is a better solvent than methyl isobutyl ketone and benzyl alcohol.

  5. Utilization of non-conventional systems for conversion of biomass to food components: Recovery optimization and characterizations of algal proteins and lipids

    NASA Technical Reports Server (NTRS)

    Karel, M.; Nakhost, Z.

    1986-01-01

    Protein isolate obtained from green algae (Scenedesmus obliquus) cultivated under controlled conditions was characterized. Molecular weight determination of fractionated algal proteins using SDS-polyacrylamide gel electrophoresis revealed a wide spectrum of molecular weights ranging from 15,000 to 220,000. Isoelectric points of dissociated proteins were in the range of 3.95 to 6.20. Amino acid composition of protein isolate compared favorably with FAO standards. High content of essential amino acids leucine, valine, phenylalanine and lysine makes algal protein isolate a high quality component of closed environment life support system (CELSS) diets. To optimize the removal of algal lipids and pigments supercritical carbon dioxide extraction (with and without ethanol as a co-solvent) was used. Addition of ethanol to supercritical CO2 resulted in more efficient removal of algal lipids and produced protein isolate with a good yield and protein recovery. The protein isolate extracted by the above mixture had an improved water solubility.

  6. Distributed Scene Analysis For Autonomous Road Vehicle Guidance

    NASA Astrophysics Data System (ADS)

    Mysliwetz, Birger D.; Dickmanns, E. D.

    1987-01-01

    An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision. Ongoing work described here is directed at improving the robustness of the road boundary detection process in the presence of shadows, ill-defined edges and other disturbing real-world effects. The system structure and the techniques applied for real-time scene analysis are presented along with experimental results. All subfunctions of road boundary detection for vehicle guidance, such as edge extraction, feature aggregation and camera pointing control, are executed in parallel by an onboard multiprocessor system. On the image-processing level, local oriented edge extraction is performed in multiple 'windows', tightly controlled from a hierarchically higher, model-based level. The interpretation process, involving a geometric road model and the observer's position relative to the road boundaries, is capable of coping with ambiguity in the measurement data. By using only selected measurements to update the model parameters, even high noise levels can be dealt with and misleading edges rejected.

  7. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallelize the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.

  8. Complementary experimental-simulational study of surfactant micellar phase in the extraction process of metallic ions: Effects of temperature and salt concentration

    NASA Astrophysics Data System (ADS)

    Soto-Ángeles, Alan Gustavo; Rodríguez-Hidalgo, María del Rosario; Soto-Figueroa, César; Vicente, Luis

    2018-02-01

    The thermoresponsive micellar phase behaviour exhibited by Triton X-100 micelles under the effects of temperature and the addition of salt in the extraction of metallic ions was explored from mesoscopic and experimental points of view. In the theoretical study, we analyse the formation of Triton X-100 micelles, the loading and stabilization of dithizone molecules, and the extraction of metallic ions inside the micellar core at room temperature; finally, a thermal analysis is presented. In the experimental study, the spectrophotometric outcomes confirm the solubility of the copper-dithizone complex in the micellar core, as well as the extraction of metallic ions from the aqueous environment via a cloud point at 332.2 K. The micellar solutions with salt present a lower absorbance value compared with the micellar solutions without salt. The decrease in the absorbance value is attributed to a change in the size of the hydrophobic region of the colloidal micelles. All transitory stages of the extraction process are discussed and analysed in this document.

  9. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search for isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by the regular moment. Consequently, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
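
    A minimal sketch of SURF extraction with mutual-nearest-neighbour (cross-check) matching between two ISAR frames is given below. It assumes OpenCV's contrib build (where SURF lives) and illustrative file paths, and it stands in for, rather than reproduces, the paper's bilateral registering model.

    ```python
    # SURF keypoints + cross-checked matching (needs opencv-contrib-python).
    import cv2

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # paths illustrative
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # crossCheck keeps only mutual nearest neighbours, a simple bilateral
    # filter on candidate pairs that discards one-sided matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches))
    ```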

  10. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.

  11. The critical distance in laser-induced plasmas: an operative definition

    NASA Astrophysics Data System (ADS)

    Delle Side, D.; Giuffreda, E.; Nassisi, V.

    2016-05-01

    We propose a method to estimate a precise value for the critical distance Lcr beyond which three-body recombination stops producing charge losses in an expanding laser-induced plasma. We show in particular that the total charge collected has a 'reversed sigmoid' shape as a function of the target-to-detector distance. Fitting the total charge data with a logistic-related function, we take Lcr to be the intercept of the tangent to this curve at its inflection point. Furthermore, this value scales well with theoretical predictions. From the application point of view, this could be of great practical interest, since it provides a reliable way to precisely determine the geometry of the extraction system in Laser Ion Sources.
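
    The fitting recipe can be sketched directly: fit a decreasing logistic to charge-versus-distance data and take the x-intercept of the tangent at the inflection point. The data arrays below are placeholders, and for this parameterization the intercept works out to d0 + 2/k.

    ```python
    # Fit a reversed sigmoid and locate Lcr from the inflection tangent.
    import numpy as np
    from scipy.optimize import curve_fit

    def rev_logistic(d, q0, k, d0):
        return q0 / (1.0 + np.exp(k * (d - d0)))   # decreasing sigmoid

    distance = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])  # placeholder
    charge = np.array([9.8, 9.5, 8.0, 4.1, 1.2, 0.4])         # placeholder

    (q0, k, d0), _ = curve_fit(rev_logistic, distance, charge,
                               p0=(10.0, 0.5, 18.0))
    # Tangent at the inflection (d0, q0/2) has slope -q0*k/4; it crosses
    # zero at d0 + 2/k, which serves as the critical distance estimate.
    L_cr = d0 + 2.0 / k
    print(f"Lcr ~ {L_cr:.1f}")
    ```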

  12. Improving the quality of extracting dynamics from interspike intervals via a resampling approach

    NASA Astrophysics Data System (ADS)

    Pavlova, O. N.; Pavlov, A. N.

    2018-04-01

    We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.

  13. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions into 6 classes, one for each of the 6 emotions considered. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the features to the trained neural architecture.
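
    A minimal sketch of the Gabor feature step follows: magnitudes of a small Gabor bank sampled at fiducial points. The frequencies, orientations and point format are illustrative, not the paper's bank.

    ```python
    # Gabor-bank magnitudes at fiducial points (parameters illustrative).
    import numpy as np
    from skimage.filters import gabor

    def gabor_features(image, points, freqs=(0.1, 0.2),
                       thetas=(0.0, np.pi / 4)):
        feats = []
        for f in freqs:
            for t in thetas:
                real, imag = gabor(image, frequency=f, theta=t)
                mag = np.hypot(real, imag)
                feats.append([mag[r, c] for (r, c) in points])
        return np.array(feats).T   # one row of filter magnitudes per point
    ```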

  14. Diabetic retinopathy grading by digital curvelet transform.

    PubMed

    Hajeb Mohammad Alipour, Shirin; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2012-01-01

    One of the major complications of diabetes is diabetic retinopathy. As manual analysis and diagnosis of a large number of images is time consuming, automatic detection and grading of diabetic retinopathy are desirable. In this paper, we use fundus fluorescein angiography and color fundus images simultaneously, extract 6 features employing the curvelet transform, and feed them to a support vector machine in order to determine diabetic retinopathy severity stages. These features are the area of blood vessels, the area and regularity of the foveal avascular zone and the number of micro-aneurysms therein, the total number of micro-aneurysms, and the area of exudates. In order to extract exudates and vessels, we respectively modify the curvelet coefficients of color fundus images and angiograms. The end points of the extracted vessels in a predefined region of interest based on the optic disk are connected together to segment the foveal avascular zone region. To extract micro-aneurysms from an angiogram, the extracted vessels are first subtracted from the original image; then, after removing the detected background with morphological operators and enhancing bright small pixels, micro-aneurysms are detected. 70 patients were involved in this study to classify diabetic retinopathy into 3 groups, that is, (1) no diabetic retinopathy, (2) mild/moderate nonproliferative diabetic retinopathy, and (3) severe nonproliferative/proliferative diabetic retinopathy, and our simulations show that the proposed system has a sensitivity and specificity of 100% for grading.

  15. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    PubMed

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the detection model is applied to the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
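
    The geometric machinery the method rests on is available off the shelf; a minimal sketch with scipy.spatial builds the DT and Voronoi diagram over stand-in tracking points and computes triangle edge lengths, one of the boundary descriptors, without reproducing the paper's thresholds.

    ```python
    # Delaunay/Voronoi over tracking points; edge lengths as a descriptor.
    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    pts = np.random.default_rng(1).random((100, 2))  # stand-in track points

    tri = Delaunay(pts)
    vor = Voronoi(pts)

    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    lengths = [np.linalg.norm(pts[a] - pts[b]) for a, b in edges]
    finite_cells = [r for r in vor.regions if r and -1 not in r]  # bounded
    print(len(edges), float(np.mean(lengths)), len(finite_cells))
    ```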

  16. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    PubMed Central

    Yang, Wei

    2018-01-01

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the detection model is applied to the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality. PMID:29671792

  17. Towards a “Sample-In, Answer-Out” Point-of-Care Platform for Nucleic Acid Extraction and Amplification: Using an HPV E6/E7 mRNA Model System

    PubMed Central

    Gulliksen, Anja; Keegan, Helen; Martin, Cara; O'Leary, John; Solli, Lars A.; Falang, Inger Marie; Grønn, Petter; Karlgård, Aina; Mielnik, Michal M.; Johansen, Ib-Rune; Tofteberg, Terje R.; Baier, Tobias; Gransee, Rainer; Drese, Klaus; Hansen-Hagge, Thomas; Riegger, Lutz; Koltay, Peter; Zengerle, Roland; Karlsen, Frank; Ausen, Dag; Furuberg, Liv

    2012-01-01

    The paper presents the development of a “proof-of-principle” hands-free and self-contained diagnostic platform for detection of human papillomavirus (HPV) E6/E7 mRNA in clinical specimens. The automated platform performs chip-based sample preconcentration, nucleic acid extraction, amplification, and real-time fluorescent detection with minimal user interfacing. It consists of two modular prototypes, one for sample preparation and one for amplification and detection; however, a common interface is available to facilitate later integration into one single module. Nucleic acid extracts (n = 28) from cervical cytology specimens extracted on the sample preparation chip were tested using the PreTect HPV-Proofer and achieved an overall detection rate for HPV across all dilutions of 50%–85.7%. A subset of 6 clinical samples extracted on the sample preparation chip module was chosen for complete validation on the NASBA chip module. For 4 of the samples, a 100% amplification for HPV 16 or 33 was obtained at the 1 : 10 dilution for microfluidic channels that filled correctly. The modules of a “sample-in, answer-out” diagnostic platform have been demonstrated from clinical sample input through sample preparation, amplification and final detection. PMID:22235204

  18. Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Nakamura, K.

    2016-06-01

    Recently, the development of autonomous cars is accelerating with the integration of highly advanced artificial intelligence, which increases the demand for digital maps of high accuracy. In particular, traffic markings need to be precisely digitized, since automated driving uses them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which efficiently produce large amounts of data with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, and thus suffers from human error, subjectivity and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several types of traffic marking (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled with a plus or minus group. Given that each label corresponds to a boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated method, which is highly accurate and does not depend on a human operator, can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost and yielding highly accurate results.
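
    The bidirectional labelling can be sketched in a few lines: along one scan line of intensities, steep rises are labelled plus and steep falls minus, so that plus/minus pairs bracket a painted marking. The threshold and sample values are illustrative.

    ```python
    # Label rising (+) and falling (-) intensity edges along a scan line.
    import numpy as np

    def label_marking_edges(intensity, thresh=20.0):
        grad = np.diff(intensity.astype(float))
        plus = np.where(grad > thresh)[0]    # entering a bright marking
        minus = np.where(grad < -thresh)[0]  # leaving it
        return plus, minus

    line = np.array([5, 6, 5, 40, 42, 41, 6, 5, 4], dtype=float)
    print(label_marking_edges(line))  # (+ at 2, - at 5): marking spans 3..5
    ```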

  19. Enhancement of basolateral amygdaloid neuronal dendritic arborization following Bacopa monniera extract treatment in adult rats.

    PubMed

    Vollala, Venkata Ramana; Upadhya, Subramanya; Nayak, Satheesha

    2011-01-01

    In the ancient Indian system of medicine, Ayurveda, Bacopa monniera is classified as Medhya rasayana, which includes medicinal plants that rejuvenate intellect and memory. Here, we investigated the effect of a standardized extract of Bacopa monniera on the dendritic morphology of neurons in the basolateral amygdala, a region that is concerned with learning and memory. The present study was conducted on 2½-month-old Wistar rats. The rats were divided into 2-, 4- and 6-week treatment groups. Rats in each of these groups were further divided into 20 mg/kg, 40 mg/kg and 80 mg/kg dose groups (n = 8 for each dose). After the treatment period, treated rats and age-matched control rats were subjected to spatial learning (T-maze) and passive avoidance tests. Subsequently, these rats were killed by decapitation, the brains were removed, and the amygdaloid neurons were impregnated with silver nitrate (Golgi staining). Basolateral amygdaloid neurons were traced using camera lucida, and dendritic branching points (a measure of dendritic arborization) and dendritic intersections (a measure of dendritic length) were quantified. These data were compared with the data from the age-matched control rats. The results showed an improvement in spatial learning performance and enhanced memory retention in rats treated with Bacopa monniera extract. Furthermore, a significant increase in dendritic length and the number of dendritic branching points was observed along the length of the dendrites of the basolateral amygdaloid neurons of rats treated with 40 mg/kg and 80 mg/kg of Bacopa monniera (BM) for longer periods of time (i.e., 4 and 6 weeks). We conclude that constituents present in Bacopa monniera extract have neuronal dendritic growth-stimulating properties.

  20. Enhancement of basolateral amygdaloid neuronal dendritic arborization following Bacopa monniera extract treatment in adult rats

    PubMed Central

    Vollala, Venkata Ramana; Upadhya, Subramanya; Nayak, Satheesha

    2011-01-01

    OBJECTIVE: In the ancient Indian system of medicine, Ayurveda, Bacopa monniera is classified as Medhya rasayana, which includes medicinal plants that rejuvenate intellect and memory. Here, we investigated the effect of a standardized extract of Bacopa monniera on the dendritic morphology of neurons in the basolateral amygdala, a region that is concerned with learning and memory. METHODS: The present study was conducted on 2½-month-old Wistar rats. The rats were divided into 2-, 4- and 6-week treatment groups. Rats in each of these groups were further divided into 20 mg/kg, 40 mg/kg and 80 mg/kg dose groups (n  =  8 for each dose). After the treatment period, treated rats and age-matched control rats were subjected to spatial learning (T-maze) and passive avoidance tests. Subsequently, these rats were killed by decapitation, the brains were removed, and the amygdaloid neurons were impregnated with silver nitrate (Golgi staining). Basolateral amygdaloid neurons were traced using camera lucida, and dendritic branching points (a measure of dendritic arborization) and dendritic intersections (a measure of dendritic length) were quantified. These data were compared with the data from the age-matched control rats. RESULTS: The results showed an improvement in spatial learning performance and enhanced memory retention in rats treated with Bacopa monniera extract. Furthermore, a significant increase in dendritic length and the number of dendritic branching points was observed along the length of the dendrites of the basolateral amygdaloid neurons of rats treated with 40 mg/kg and 80 mg/kg of Bacopa monniera (BM) for longer periods of time (i.e., 4 and 6 weeks). CONCLUSION: We conclude that constituents present in Bacopa monniera extract have neuronal dendritic growth-stimulating properties. PMID:21655763

  1. Images in quantum entanglement

    NASA Astrophysics Data System (ADS)

    Bowden, G. J.

    2009-08-01

    A system for classifying and quantifying entanglement in spin-1/2 pure states is presented based on simple images. From the image point of view, an entangled state can be described as a linear superposition of a separable object wavefunction ΨO plus a portion of its own inverse image ΨI. Bell states can be defined in this way: Ψ = (1/√2)(ΨO ± ΨI). Using the method of images, the three-spin-1/2 system is discussed in some detail. This system can exhibit exclusive three-particle ν123 entanglement, two-particle entanglements ν12, ν13, ν23 and/or mixtures of all four. All four image states are orthogonal both to each other and to the object wavefunction. In general, five entanglement parameters ν12, ν13, ν23, ν123 and φ123 are required to define the general entangled state. In addition, it is shown that there is considerable scope for encoding numbers, at least from the classical point of view but using quantum-mechanical principles. Methods are developed for their extraction. It is shown that concurrence can be used to extract even-partite, but not odd-partite, information. Additional relationships are also presented which can be helpful in the decoding process. However, in general, numerical methods are mandatory. A simple roulette method for decoding is presented and discussed. But it is shown that if the encoder chooses to use transcendental numbers for the angles defining the target function (α1, β1), etc., the method rapidly turns into the Devil's roulette, requiring finer and finer angular steps.

  2. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
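
    Both the cross-correlation and least-squares matching stages rest on a patch-similarity kernel; a minimal normalized cross-correlation sketch is shown below, leaving out the multi-image geometric constraints themselves.

    ```python
    # Normalized cross-correlation of two equally sized patches.
    import numpy as np

    def ncc(patch_a, patch_b):
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0
    ```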

  3. Critical dynamics in population vaccinating behavior.

    PubMed

    Pananos, A Demetri; Bury, Thomas M; Wang, Clara; Schonfeld, Justin; Mohanty, Sharada P; Nyhan, Brendan; Salathé, Marcel; Bauch, Chris T

    2017-12-26

    Vaccine refusal can lead to renewed outbreaks of previously eliminated diseases and even delay global eradication. Vaccinating decisions exemplify a complex, coupled system where vaccinating behavior and disease dynamics influence one another. Such systems often exhibit critical phenomena-special dynamics close to a tipping point leading to a new dynamical regime. For instance, critical slowing down (declining rate of recovery from small perturbations) may emerge as a tipping point is approached. Here, we collected and geocoded tweets about measles-mumps-rubella vaccine and classified their sentiment using machine-learning algorithms. We also extracted data on measles-related Google searches. We find critical slowing down in the data at the level of California and the United States in the years before and after the 2014-2015 Disneyland, California measles outbreak. Critical slowing down starts growing appreciably several years before the Disneyland outbreak as vaccine uptake declines and the population approaches the tipping point. However, due to the adaptive nature of coupled behavior-disease systems, the population responds to the outbreak by moving away from the tipping point, causing "critical speeding up" whereby resilience to perturbations increases. A mathematical model of measles transmission and vaccine sentiment predicts the same qualitative patterns in the neighborhood of a tipping point to greatly reduced vaccine uptake and large epidemics. These results support the hypothesis that population vaccinating behavior near the disease elimination threshold is a critical phenomenon. Developing new analytical tools to detect these patterns in digital social data might help us identify populations at heightened risk of widespread vaccine refusal. Copyright © 2017 the Author(s). Published by PNAS.

  4. Critical dynamics in population vaccinating behavior

    PubMed Central

    Pananos, A. Demetri; Bury, Thomas M.; Wang, Clara; Schonfeld, Justin; Mohanty, Sharada P.; Nyhan, Brendan; Bauch, Chris T.

    2017-01-01

    Vaccine refusal can lead to renewed outbreaks of previously eliminated diseases and even delay global eradication. Vaccinating decisions exemplify a complex, coupled system where vaccinating behavior and disease dynamics influence one another. Such systems often exhibit critical phenomena—special dynamics close to a tipping point leading to a new dynamical regime. For instance, critical slowing down (declining rate of recovery from small perturbations) may emerge as a tipping point is approached. Here, we collected and geocoded tweets about measles–mumps–rubella vaccine and classified their sentiment using machine-learning algorithms. We also extracted data on measles-related Google searches. We find critical slowing down in the data at the level of California and the United States in the years before and after the 2014–2015 Disneyland, California measles outbreak. Critical slowing down starts growing appreciably several years before the Disneyland outbreak as vaccine uptake declines and the population approaches the tipping point. However, due to the adaptive nature of coupled behavior–disease systems, the population responds to the outbreak by moving away from the tipping point, causing “critical speeding up” whereby resilience to perturbations increases. A mathematical model of measles transmission and vaccine sentiment predicts the same qualitative patterns in the neighborhood of a tipping point to greatly reduced vaccine uptake and large epidemics. These results support the hypothesis that population vaccinating behavior near the disease elimination threshold is a critical phenomenon. Developing new analytical tools to detect these patterns in digital social data might help us identify populations at heightened risk of widespread vaccine refusal. PMID:29229821

  5. 40 CFR 435.60 - Applicability; description of the stripper subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Stripper... with recognized conservation practices. These facilities are engaged in production, and well treatment in the oil and gas extraction industry. ...

  6. Topical herbal therapies for treating osteoarthritis

    PubMed Central

    Cameron, Melainie; Chrubasik, Sigrun

    2014-01-01

    Background Before extraction and synthetic chemistry were invented, musculoskeletal complaints were treated with preparations from medicinal plants, administered either orally or topically. In contrast to the oral medicinal plant products, topicals act in part as counterirritants or are toxic when given orally. Objectives To update the previous Cochrane review of herbal therapy for osteoarthritis from 2000 by evaluating the evidence on effectiveness for topical medicinal plant products. Search methods Databases for mainstream and complementary medicine were searched using terms to include all forms of arthritis combined with medicinal plant products. We searched electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, AMED, CINAHL, ISI Web of Science, World Health Organization Clinical Trials Registry Platform) to February 2013, unrestricted by language. We also searched the reference lists from retrieved trials. Selection criteria Randomised controlled trials of herbal interventions used topically, compared with inert (placebo) or active controls, in people with osteoarthritis were included. Data collection and analysis Two review authors independently selected trials for inclusion, assessed the risk of bias of included studies and extracted data. Main results Seven studies (seven different medicinal plant interventions; 785 participants) were included. Single studies (five studies, six interventions) and non-comparable studies (two studies, one intervention) precluded pooling of results. Moderate evidence from a single study of 174 people with hand osteoarthritis indicated that treatment with Arnica extract gel probably results in similar benefits as treatment with ibuprofen (a non-steroidal anti-inflammatory drug), with a similar number of adverse events. Mean pain in the ibuprofen group was 44.2 points on a 100 point scale; treatment with Arnica gel reduced the pain by 4 points after three weeks: mean difference (MD) −3.8 points (95% confidence interval (CI) −10.1 to 2.5), absolute reduction 4% (10% reduction to 3% increase). Hand function was 7.5 points on a 30 point scale in the ibuprofen-treated group; treatment with Arnica gel improved function by 0.4 points (MD −0.4, 95% CI −1.75 to 0.95), absolute improvement 1% (6% improvement to 3% decline). Total adverse events were higher in the Arnica gel group (13% compared to 8% in the ibuprofen group): relative risk (RR) 1.65 (95% CI 0.72 to 3.76). Moderate quality evidence from a single trial of 99 people with knee osteoarthritis indicated that compared with placebo, Capsicum extract gel probably does not improve pain or knee function, and is commonly associated with treatment-related adverse events including skin irritation and a burning sensation. At four weeks follow-up, mean pain in the placebo group was 46 points on a 100 point scale; treatment with Capsicum extract reduced pain by 1 point (MD −1, 95% CI −6.8 to 4.8), absolute reduction of 1% (7% reduction to 5% increase). Mean knee function in the placebo group was 34.8 points on a 96 point scale at four weeks; treatment with Capsicum extract improved function by a mean of 2.6 points (MD −2.6, 95% CI −9.5 to 4.2), an absolute improvement of 3% (10% improvement to 4% decline). Adverse event rates were greater in the Capsicum extract group (80% compared with 20% in the placebo group, rate ratio 4.12, 95% CI 3.30 to 5.17). The number needed to treat to result in adverse events was 2 (95% CI 1 to 2).
Moderate evidence from a single trial of 220 people with knee osteoarthritis suggested that comfrey extract gel probably improves pain without increasing adverse events. At three weeks, the mean pain in the placebo group was 83.5 points on a 100 point scale. Treatment with comfrey reduced pain by a mean of 41.5 points (MD −41.5, 95% CI −48 to −34), an absolute reduction of 42% (34% to 48% reduction). Function was not reported. Adverse events were similar: 6% (7/110) reported adverse events in the comfrey group compared with 14% (15/110) in the placebo group (RR 0.47, 95% CI 0.20 to 1.10). Although evidence from a single trial indicated that adhesive patches containing Chinese herbal mixtures FNZG and SJG may improve pain and function, the clinical applicability of these findings is uncertain because participants were only treated and followed up for seven days. We are also uncertain if other topical herbal products (Marhame-Mafasel compress, stinging nettle leaf) improve osteoarthritis symptoms due to the very low quality evidence from single trials. No serious side effects were reported. Authors’ conclusions Although the mechanism of action of the topical medicinal plant products provides a rational basis for their use in the treatment of osteoarthritis, the quality and quantity of current research studies of effectiveness are insufficient. Arnica gel probably improves symptoms as effectively as a gel containing a non-steroidal anti-inflammatory drug, but with no better (and possibly worse) adverse event profile. Comfrey extract gel probably improves pain, and Capsicum extract gel probably will not improve pain or function at the doses examined in this review. Further high quality, fully powered studies are required to confirm the trends of effectiveness identified in studies so far. PMID:23728701
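
    As a worked check of the arithmetic above, the comfrey safety comparison can be reproduced with the usual log-normal approximation for the confidence interval of a relative risk (a standard formula, not necessarily the exact computation used by the review authors):

    ```python
    import math

    def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
        """Relative risk with a 95% CI via the log-normal approximation."""
        rr = (events_a / n_a) / (events_b / n_b)
        se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
        lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
        return rr, lo, hi

    # 7/110 adverse events with comfrey vs 15/110 with placebo:
    print(relative_risk(7, 110, 15, 110))  # ~ (0.47, 0.20, 1.10), as reported
    ```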

  7. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is typically applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when generated without points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud, which can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was conducted to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distances between the CANUPO and manual skeleton clouds averaged around 0.67 meters, with a standard deviation of 1.73 meters.
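
    A minimal sketch of the registration step, assuming Open3D in place of CloudCompare and hypothetical file names: ICP is run on the skeleton clouds and the resulting transformation is applied to the full UAS cloud.

    ```python
    import numpy as np
    import open3d as o3d

    uav = o3d.io.read_point_cloud("uav_skeleton.ply")      # manually extracted features
    lidar = o3d.io.read_point_cloud("lidar_skeleton.ply")  # matching LiDAR features

    # Point-to-point ICP on the skeleton clouds (threshold in dataset units).
    result = o3d.pipelines.registration.registration_icp(
        uav, lidar, max_correspondence_distance=1.0,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Apply the transformation computed on the skeleton to the whole UAS cloud.
    full = o3d.io.read_point_cloud("uav_full.ply")
    full.transform(result.transformation)
    ```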

  8. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of applicable analysis methods when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point-cloud normal vectors determined from local planar fits. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
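
    A minimal sketch of the normal-vector step, assuming a k-nearest-neighbour plane fit (kd-tree plus PCA); parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=20):
        """Estimate per-point normals from the best-fit plane of the
        k nearest neighbours of each point (points: (n, 3) array)."""
        tree = cKDTree(points)
        normals = np.empty_like(points)
        for i, p in enumerate(points):
            _, idx = tree.query(p, k=k)
            nbrs = points[idx] - points[idx].mean(axis=0)
            # The normal is the singular vector of the smallest singular
            # value, i.e. the direction of least variance of the neighbours.
            _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
            normals[i] = vt[-1]
        return normals
    ```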

  9. The effect of seasonal variation on the performances of grid connected photovoltaic system in southern of Algeria

    NASA Astrophysics Data System (ADS)

    Zaghba, L.; Khennane, M.; Terki, N.; Borni, A.; Bouchakour, A.; Fezzani, A.; Mahamed, I. Hadj; Oudjana, S. H.

    2017-02-01

    This paper presents the modeling, simulation, and performance analysis of a grid-connected PV generation system under MATLAB/Simulink. The objective is to study the effect of seasonal variation on the performance of a grid-connected photovoltaic system in southern Algeria. The system works with a power converter that connects it to the grid and extracts maximum power from the photovoltaic panels using an MPPT algorithm based on a robust neuro-fuzzy sliding approach. The photovoltaic energy produced by the PV generator is injected entirely into the grid. Simulation results show that the system controlled by the neuro-fuzzy sliding approach adapts to changing external disturbances and is effective not only in tracking the maximum power point but also in response time and stability.
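
    For intuition only, a classic perturb-and-observe MPPT loop is sketched below; the paper's controller is a neuro-fuzzy sliding-mode scheme, which this simple hill-climber does not reproduce. `read_pv` and `set_voltage` are hypothetical hardware callbacks.

    ```python
    def perturb_and_observe(read_pv, set_voltage, v0=30.0, dv=0.5, steps=1000):
        """Hill-climbing MPPT: perturb the operating voltage and keep moving
        in the direction that increases the measured PV power."""
        v, p_prev = v0, 0.0
        for _ in range(steps):
            set_voltage(v)
            p = read_pv()       # measured PV power at the present voltage
            if p < p_prev:      # power fell: reverse the perturbation
                dv = -dv
            p_prev = p
            v += dv             # keep climbing toward the maximum power point
        return v
    ```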

  10. Formation and distribution of fragments in the spontaneous fission of 240Pu

    DOE PAGES

    Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; ...

    2017-12-18

    We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations.
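
    A schematic one-dimensional Euler-Maruyama integration of an overdamped Langevin equation, for illustration only; the actual calculation is multidimensional and uses microscopically derived inertia and dissipation tensors.

    ```python
    import numpy as np

    def langevin_trajectory(grad_v, x0, d=0.1, dt=1e-3, steps=10_000, seed=0):
        """Overdamped Langevin dynamics: dx = -V'(x) dt + sqrt(2 D dt) xi."""
        rng = np.random.default_rng(seed)
        x = np.empty(steps + 1)
        x[0] = x0
        for i in range(steps):
            noise = rng.normal(0.0, np.sqrt(2 * d * dt))
            x[i + 1] = x[i] - grad_v(x[i]) * dt + noise
        return x

    # Example: a trajectory started at an outer turning point of a toy
    # double-well potential V(x) = x^4 - x^2.
    traj = langevin_trajectory(lambda x: 4 * x**3 - 2 * x, x0=-1.2)
    ```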

  11. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-11-20

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill-equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and the Promega Maxwell®16, was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources.

  12. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities

    PubMed Central

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill-equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and the Promega Maxwell®16, was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  13. [Study on effect of aqueous extracts from aconite on "dose-time-toxicity" relationships in mice hearts].

    PubMed

    Feng, Qun; Li, Xiao-yu; Luan, Yong-fu; Sun, Sai-nan; Sun, Rong

    2015-03-01

    To study the effect of a single administration of aqueous extract from aconite on the "dose-toxicity" and "time-toxicity" relationships in mice hearts, through changes in electrocardiogram (ECG) and serum biochemical indexes. Mice were grouped according to drug dose and time point, and orally administered aqueous extract from aconite once, to observe the changes in mice ECG before and after administration, calculate the visceral indexes of heart, liver and kidney, and detect the levels of CK, LDH, BNP and CTn-I in serum. In the "time-toxicity" relationship study, at 5 min after oral administration of the aqueous extract from aconite, the heart rate of mice began rising, reached a peak at 60 min and then slowly declined; QRS, R amplitude, T duration and amplitude and QT interval declined at 5 min, reached the bottom at 60 min and then gradually recovered. The serum levels of CK, LDH, BNP and CTn-I rose at 5 min and peaked at 60 min, with no significant change in organ-to-body weight ratios at the different time points. In the "dose-toxicity" relationship study, with increasing single doses of the aqueous extract from aconite, the heart rate of mice, QRS, T duration and amplitude and QT interval declined gradually, and the serum levels of CK, LDH, BNP and CTn-I slowly rose, with a certain dose dependence and no significant change in organ-to-body weight ratios. A single oral administration of different doses of aqueous extract from aconite can cause different degrees of heart injury at different time points, with a certain dose dependence; the peak toxicity occurs at 60 min after administration.

  14. Extracting Exact Answers to Questions Based on Structural Links

    DTIC Science & Technology

    2002-01-01

    type of asking point and answer point (e.g. NePerson asking point matches NePerson and its sub-types NeMan and NeWoman; ‘how’ matches manner-modifier) ... NePerson V-S win [John Smith]/NeMan ... Some sample results are given in section 4 to illustrate how answer-points are identified based on matching binary ...

  15. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: A multivariate study

    NASA Astrophysics Data System (ADS)

    Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-01

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to determination by flame atomic absorption spectrometry (FAAS). The d-CPE procedure is based on forming complexes of the metal ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes is then treated with aqueous nitric acid solution, and the metal ions are back-extracted into the aqueous phase as the second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu2+ in a certified reference material (serum CRM) by both the d-CPE and conventional CPE procedures. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.

  16. Self-similar semi-analytical RMHD jet model: first steps towards a more comprehensive jet modelling for data fitting

    NASA Astrophysics Data System (ADS)

    Markoff, Sera; Ceccobello, Chiara; Heemskerk, Martin; Cavecchi, Yuri; Polko, Peter; Meier, David

    2017-08-01

    Jets are ubiquitous and reveal themselves at different scales and redshifts, showing an extreme diversity in energetics, shapes and emission. Indeed, jets are found to be characteristic features of black hole systems, such as X-ray binaries (XRBs) and active galactic nuclei (AGN), as well as of young stellar objects (YSOs) and gamma-ray bursts (GRBs). Observations suggest that jets are an energetically important component of the system that hosts them, because the jet power appears to be comparable to the accretion power. There is significant evidence of the impact of jets not only in the immediate proximity of the central object, but also on the surrounding environment, where they deposit the energy extracted from the accretion flow. Moreover, the inflow/outflow system produces radiation over the entire electromagnetic spectrum, from radio to X-rays. It is therefore a compelling problem to be solved and deeply understood. I present a new integration scheme to solve the radial self-similar, stationary, axisymmetric relativistic magnetohydrodynamic (RMHD) equations describing collimated, relativistic outflows, crossing smoothly all the singular points (the Alfvén point and the modified slow/fast points). For the first time, the integration can be performed all the way from the disk mid-plane to downstream of the modified fast point. I will discuss an ensemble of jet solutions showing diverse jet dynamics (jet Lorentz factor ~ 1-10) and geometric properties (e.g. shock height ~ 10^3-10^7 gravitational radii), which makes our model suitable for application to many different systems where a relativistic jet is launched.

  17. Uav Photogrammetry: Block Triangulation Comparisons

    NASA Astrophysics Data System (ADS)

    Gini, R.; Pagliari, D.; Passoni, D.; Pinto, L.; Sona, G.; Dosso, P.

    2013-08-01

    UAV systems represent a flexible technology able to collect a large amount of high-resolution information, for both metric and interpretive uses. As part of experimental tests carried out at the Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess the metric accuracy of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area within an Italian park ("Parco Adda Nord"), useful for studying flight and imagery performance on buildings, roads, and cultivated and uncultivated vegetation. The UAV, a SenseFly equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through GPS (NRTK survey), allowed referencing of the block and accuracy analyses. Approximate values for the exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan, and Pix4UAV, in assisted or automatic mode. Comparisons of the results are given in terms of differences among digital surface models, differences in orientation parameters, and accuracies, where available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and the accuracies on ground check points.

  18. Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.

    PubMed

    Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T

    2008-09-15

    Cadmium concentrations in human urine are typically at or below the 1 μg L(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed leaving the metal-containing surfactant layer intact. A 25 μL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ng L(-1) for cloud point extraction-tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.

  19. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. Here, a target-less vision-based displacement sensor (hereafter called "TVDS") is proposed that can extract displacement data without such targets, using feature points in the image of the structure instead. The TVDS extracts and tracks the feature points through image convex hull optimization, which adjusts and optimizes the threshold values so that every image frame yields the same convex hull, whose center serves as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing it with the displacement of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
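
    A minimal sketch of the feature-point idea with OpenCV (the threshold value and pipeline details are illustrative, not the authors' optimization): threshold the frame, take the convex hull of the largest contour, and use the hull centroid as the tracked point.

    ```python
    import cv2

    def hull_centroid(gray, thresh=127):
        """Return the (x, y) pixel coordinates of the centroid of the
        convex hull of the largest bright region in a grayscale frame."""
        _, bw = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        hull = cv2.convexHull(max(contours, key=cv2.contourArea))
        m = cv2.moments(hull)
        return m["m10"] / m["m00"], m["m01"] / m["m00"]
    ```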

  20. Multi-Scale Voxel Segmentation for Terrestrial Lidar Data within Marshes

    NASA Astrophysics Data System (ADS)

    Nguyen, C. T.; Starek, M. J.; Tissot, P.; Gibeaut, J. C.

    2016-12-01

    The resilience of marshes to a rising sea is dependent on their elevation response. Terrestrial laser scanning (TLS) is a detailed topographic approach for accurate, dense surface measurement with high potential for monitoring of marsh surface elevation response. The dense point cloud provides a 3D representation of the surface, which includes both terrain and non-terrain objects. Extraction of topographic information requires filtering of the data into like-groups or classes; therefore, methods must be incorporated to identify structure in the data prior to creation of an end product. A voxel representation of three-dimensional space provides quantitative visualization and analysis for pattern recognition. The objectives of this study are threefold: 1) apply a multi-scale voxel approach to effectively extract geometric features from the TLS point cloud data, 2) investigate the utility of K-means and Self Organizing Map (SOM) clustering algorithms for segmentation, and 3) utilize a variety of validity indices to measure the quality of the result. TLS data were collected at a marsh site along the central Texas Gulf Coast using a Riegl VZ 400 TLS. The site consists of both exposed and vegetated surface regions. To characterize the structure of the point cloud, octree segmentation is applied to create a tree data structure of voxels containing the points. The flexibility of voxels in size and point density makes this algorithm a promising candidate for locally extracting statistical and geometric features of the terrain, including surface normals and curvature. The characteristics of the voxel itself, such as volume and point density, are also computed and assigned to each point, as are laser pulse characteristics. The features extracted from the voxelization are then used as input for clustering of the points using the K-means and SOM clustering algorithms. The optimal number of clusters is then determined based on evaluation of cluster separability criteria. Results for different combinations of the feature space vector and differences between K-means and SOM clustering will be presented. The developed method provides a novel approach for compressing TLS scene complexity in marshes, such as for vegetation biomass studies or erosion monitoring.
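
    A minimal sketch of the voxel-feature-plus-clustering idea, assuming a simple uniform grid instead of a full octree; the voxel size and feature set are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def voxel_features(points, voxel=0.25):
        """Bin points into voxels and compute simple per-voxel features:
        point count (density) and the smallest-eigenvalue fraction of the
        local covariance (a rough planarity/curvature proxy)."""
        keys = np.floor(points / voxel).astype(int)
        feats, ids = [], []
        for key in np.unique(keys, axis=0):
            pts = points[(keys == key).all(axis=1)]
            if len(pts) < 3:
                continue  # too few points for a covariance estimate
            cov = np.cov((pts - pts.mean(axis=0)).T)
            evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
            feats.append([len(pts), evals[2] / evals.sum()])
            ids.append(key)
        return np.array(feats), ids

    feats, ids = voxel_features(np.random.rand(5000, 3) * 10)  # placeholder cloud
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
    ```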

  1. A method for the solvent extraction of low-boiling-point plant volatiles.

    PubMed

    Xu, Ning; Gruber, Margaret; Westcott, Neil; Soroka, Julie; Parkin, Isobel; Hegedus, Dwayne

    2005-01-01

    A new method has been developed for the extraction of volatiles from plant materials and tested on seedling tissue and mature leaves of Arabidopsis thaliana, pine needles and commercial mixtures of plant volatiles. Volatiles were extracted with n-pentane and then subjected to quick distillation at a moderate temperature. Under these conditions, compounds such as pigments, waxes and non-volatile compounds remained undistilled, while short-chain volatile compounds were distilled into a receiving flask using a high-efficiency condenser. Removal of the n-pentane and concentration of the volatiles in the receiving flask was carried out using a Vigreux column condenser prior to GC-MS. The method is ideal for the rapid extraction of low-boiling-point volatiles from small amounts of plant material, such as is required when conducting metabolic profiling or defining biological properties of volatile components from large numbers of mutant lines.

  2. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    PubMed

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 microg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  3. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
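
    A minimal sketch of the plane-extraction step, assuming Open3D's stock RANSAC plane fit rather than the authors' improved variant; thresholds are illustrative.

    ```python
    import open3d as o3d

    def extract_planes(cloud, n_planes=3, dist=0.02):
        """Repeatedly fit a plane by RANSAC and peel off its inliers,
        collecting the dominant discontinuity planes of the cloud."""
        planes = []
        rest = cloud
        for _ in range(n_planes):
            model, inliers = rest.segment_plane(distance_threshold=dist,
                                                ransac_n=3,
                                                num_iterations=1000)
            planes.append((model, rest.select_by_index(inliers)))
            rest = rest.select_by_index(inliers, invert=True)
        return planes, rest
    ```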

  4. Automated Techniques for Quantification of Coastline Change Rates using Landsat Imagery along Caofeidian, China

    NASA Astrophysics Data System (ADS)

    Dong, Di; Li, Ziwei; Liu, Zhaoqin; Yu, Yang

    2014-03-01

    This paper focuses on automated extraction and monitoring of coastlines by remote sensing techniques using multi-temporal Landsat imagery along Caofeidian, China. Caofeidian, one of the most economically active regions in China, has experienced dramatic change due to intensified human activities, such as land reclamation. These processes have caused morphological changes of the Caofeidian shoreline. In this study, shoreline extraction and change analysis are investigated. An algorithm based on image texture and mathematical morphology is proposed to automate coastline extraction. We tested this approach and found it capable of extracting coastlines from TM and ETM+ images with little human intervention. The detected coastline vectors are then imported into ArcGIS, and the Digital Shoreline Analysis System (DSAS) is used to calculate the change rates (the end point rate and the linear regression rate). The results show that in some parts of the research area remarkable coastline changes are observed, especially in the accretion rate. The abnormal accretion is mostly attributed to the large-scale land reclamation during 2003 and 2004 in Caofeidian. We conclude that various construction projects, especially the land reclamation project, have changed the Caofeidian shorelines greatly, far beyond natural rates.
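
    The two DSAS rates mentioned above have simple closed forms: the End Point Rate uses only the oldest and newest shorelines, while the Linear Regression Rate fits all of them. A sketch with illustrative numbers:

    ```python
    import numpy as np

    years = np.array([1990.0, 1995.0, 2000.0, 2004.0, 2010.0])  # survey dates
    dists = np.array([12.0, 15.0, 19.0, 60.0, 75.0])  # shoreline position along a transect (m)

    epr = (dists[-1] - dists[0]) / (years[-1] - years[0])  # End Point Rate, m/yr
    lrr = np.polyfit(years, dists, 1)[0]                   # Linear Regression Rate, m/yr
    ```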

  5. An MR-based Model for Cardio-Respiratory Motion Compensation of Overlays in X-Ray Fluoroscopy

    PubMed Central

    Fischer, Peter; Faranesh, Anthony; Pohl, Thomas; Maier, Andreas; Rogers, Toby; Ratnayaka, Kanishka; Lederman, Robert; Hornegger, Joachim

    2017-01-01

    In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time MR imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 mm to 2.75 mm in MR and from 3.0 mm to 1.8 mm in X-ray compared to the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos. PMID:28692969
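
    A minimal sketch of a linear direct correspondence model: a least-squares fit from surrogate signals (e.g., cardiac and respiratory phase) to motion parameters, which can then be evaluated in real time on new surrogate samples. Array shapes are illustrative.

    ```python
    import numpy as np

    def fit_motion_model(surrogates, motions):
        """surrogates: (n_frames, n_signals); motions: (n_frames, n_params).
        Returns least-squares coefficients including a bias term."""
        s = np.hstack([surrogates, np.ones((len(surrogates), 1))])
        coeffs, *_ = np.linalg.lstsq(s, motions, rcond=None)
        return coeffs

    def apply_motion_model(coeffs, surrogate_sample):
        """Predict motion parameters for one new surrogate sample."""
        s = np.append(surrogate_sample, 1.0)
        return s @ coeffs
    ```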

  6. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction

    PubMed Central

    Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject’s upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target bearing intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the differences in the coordinate values of the 30 cross points were not statistically significant (P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is therefore suitable for clinical application. Certainly, further research is necessary to confirm the clinical applications of the method. PMID:26375800

  7. Fpga based L-band pulse doppler radar design and implementation

    NASA Astrophysics Data System (ADS)

    Savci, Kubilay

    As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems radiate electromagnetic energy from an antenna; part of this energy is intercepted by an object, which reradiates a portion of it back to the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, and weather hazard detection, as well as remote sensing applications. In this work, an FPGA-based L-band Pulse Doppler radar prototype, used for target detection, localization and velocity calculation, has been built, and a general-purpose Pulse Doppler radar processor has been developed. This radar is a ground-based stationary monopulse radar that transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed, and information about target location and velocity is extracted. Discrete components are used for the transmitter and receiver chains. The hardware solution is based on a Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and Pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length, so it can be used with different radar configurations as well. A VHDL design was developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC, and A-Scope software was developed in the C# programming language to display time-domain radar signals and detection results on the PC. Data are processed both on the FPGA chip and on the PC. The FPGA uses fixed-point arithmetic because it is fast and reduces resource requirements, consuming less hardware than floating-point arithmetic; the software uses floating-point arithmetic, which ensures precision at the expense of speed. The functionality of the radar system was validated experimentally in the field with a moving car, and the submodules were validated with synthetic data simulated in MATLAB.
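
    The cell-averaging CFAR logic can be sketched as follows (in Python for clarity; on the FPGA it runs in fixed point). The training/guard sizes and scale factor are illustrative.

    ```python
    import numpy as np

    def ca_cfar(power, n_train=16, n_guard=4, scale=5.0):
        """Cell-averaging CFAR: flag a cell if its power exceeds a threshold
        scaled from the mean of the surrounding training cells, excluding
        guard cells around the cell under test."""
        detections = np.zeros_like(power, dtype=bool)
        half = n_train // 2 + n_guard
        for i in range(half, len(power) - half):
            leading = power[i - half:i - n_guard]
            trailing = power[i + n_guard + 1:i + half + 1]
            noise = np.concatenate([leading, trailing]).mean()
            detections[i] = power[i] > scale * noise
        return detections
    ```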

  8. An inverter/controller subsystem optimized for photovoltaic applications

    NASA Technical Reports Server (NTRS)

    Pickrell, R. L.; Osullivan, G.; Merrill, W. C.

    1978-01-01

    Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. Optimization of the inverter/controller design is discussed as part of an overall photovoltaic power system designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy.

  9. Automatic extraction of plots from geo-registered UAS imagery of crop fields with complex planting schemes

    NASA Astrophysics Data System (ADS)

    Hearst, Anthony A.

    Complex planting schemes are common in experimental crop fields and can make it difficult to extract plots of interest from high-resolution imagery of the fields gathered by Unmanned Aircraft Systems (UAS). This prevents UAS imagery from being applied in High-Throughput Precision Phenotyping and other areas of agricultural research. If the imagery is accurately geo-registered, then it may be possible to extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. Future work will focus on further enhancing the plot extraction accuracy through additional image processing techniques so that it becomes sufficiently accurate for all practical purposes in agricultural research and potentially other areas of research.
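
    A minimal sketch of coordinate-based plot extraction, assuming rasterio and hypothetical plot bounds: each plot is read as a raster window defined by its map coordinates in the mosaic's CRS.

    ```python
    import rasterio
    from rasterio.windows import from_bounds

    with rasterio.open("soybean_mosaic.tif") as src:  # hypothetical mosaic
        # (left, bottom, right, top) bounds of one plot, in map units
        window = from_bounds(500100.0, 4470200.0, 500103.0, 4470202.0,
                             transform=src.transform)
        plot_pixels = src.read(window=window)  # bands x rows x cols array
    ```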

  10. Normalization of relative and incomplete temporal expressions in clinical narratives.

    PubMed

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2015-09-01

    To improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives. We analyzed the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TIMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component. The annotation confirmed the hypotheses that we can simplify the RI-TIMEX normalization task using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge. Experiments with feature sets reveal some interesting findings; for example, the verbal tense feature does not inform the anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect. We formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
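
    A toy sketch of the two-classifier design with scikit-learn; the features, labels, and training examples below are hypothetical stand-ins for the paper's feature sets, shown only to illustrate splitting normalization into an anchor point decision and an anchor relation decision.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: RI-TIMEX contexts with two label sets.
    contexts = ["three days after admission", "two weeks before discharge"]
    anchor_point = ["admission", "discharge"]   # which date anchors the expression
    anchor_relation = ["after", "before"]       # relation to that anchor

    point_clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(
        contexts, anchor_point)
    relation_clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(
        contexts, anchor_relation)

    print(point_clf.predict(["five days after admission"]),
          relation_clf.predict(["five days after admission"]))
    ```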

  11. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618

  12. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    PubMed

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-11-03

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
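
    A minimal sketch of the evaluation protocol with scikit-learn: a linear-kernel SVM scored by Leave-One-Out cross-validation. The feature matrix here is a random placeholder standing in for the selected vibration features.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 7))        # placeholder: 7 selected features per recording
    y = rng.integers(0, 2, size=40)     # placeholder component-state labels

    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
    print(scores.mean())                # mean Leave-One-Out accuracy
    ```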

  13. Chaotic behavior of renal sympathetic nerve activity: effect of baroreceptor denervation and cardiac failure.

    PubMed

    DiBona, G F; Jones, S Y; Sawin, L L

    2000-09-01

    Nonlinear dynamic analysis was used to examine the chaotic behavior of renal sympathetic nerve activity in conscious rats subjected to either complete baroreceptor denervation (sinoaortic and cardiac baroreceptor denervation) or induction of congestive heart failure (CHF). The peak interval sequence of synchronized renal sympathetic nerve discharge was extracted and used for analysis. In control rats, this yielded a system whose correlation dimension converged to a low value over the embedding dimension range of 10-15 and whose greatest Lyapunov exponent was positive. Complete baroreceptor denervation was associated with a decrease in the correlation dimension of the system (before 2.65 +/- 0.27, after 1.64 +/- 0.17; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.201 +/- 0.008 bits/data point before, 0.177 +/- 0.004 bits/data point after, P < 0.02). CHF, a state characterized by impaired sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, was associated with a similar decrease in the correlation dimension (control 3.41 +/- 0.23, CHF 2.62 +/- 0.26; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.205 +/- 0.048 bits/data point control, 0.136 +/- 0.033 bits/data point CHF, P < 0.02). These results indicate that removal of sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, occurring either physiologically or pathophysiologically, is associated with a decrease in the correlation dimensions of the system and a reduction in chaotic behavior.
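
    A minimal sketch of a Grassberger-Procaccia correlation-sum estimate, the standard route to correlation dimensions like those reported here; the embedding parameters and radius grid are illustrative.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def correlation_dimension(x, m=10, tau=1, radii=None):
        """Estimate the correlation dimension of a scalar series x at
        embedding dimension m: slope of log C(r) versus log r, where C(r)
        is the fraction of embedded point pairs closer than r."""
        n = len(x) - (m - 1) * tau
        emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
        d = pdist(emb, metric="chebyshev")
        radii = radii if radii is not None else np.logspace(-2, 0, 12) * d.max()
        c = np.array([np.mean(d < r) for r in radii])
        mask = c > 0
        slope = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)[0]
        return slope
    ```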

  14. Estimation of Local Orientations in Fibrous Structures With Applications to the Purkinje System

    PubMed Central

    Plank, Gernot; Trayanova, Natalia A.; Vidal, René

    2011-01-01

    The extraction of the cardiac Purkinje system (PS) from intensity images is a critical step toward the development of realistic structural models of the heart. Such models are important for uncovering the mechanisms of cardiac disease and improving its treatment and prevention. Unfortunately, the manual extraction of the PS is a challenging and error-prone task due to the presence of image noise and numerous fiber junctions. To deal with these challenges, we propose a framework that estimates local fiber orientations with high accuracy and reconstructs the fibers via tracking. Our key contribution is the development of a descriptor for estimating the orientation distribution function (ODF), a spherical function encoding the local geometry of the fibers at a point of interest. The fiber/branch orientations are identified as the modes of the ODFs via spherical clustering and guide the extraction of the fiber centerlines. Experiments on synthetic data evaluate the sensitivity of our approach to image noise, width of the fiber, and choice of the mode detection strategy, and show its superior performance compared to those of the existing descriptors. Experiments on the free-running PS in an MR image also demonstrate the accuracy of our method in reconstructing such sparse fibrous structures. PMID:21335301

  15. Smart container UWB sensor system for situational awareness of intrusion alarms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Carlos E.; Haugen, Peter C.; Zumstein, James M.

    An in-container monitoring sensor system is based on a UWB radar intrusion detector positioned in a container and having a range gate set to the farthest wall of the container from the detector. Multipath reflections within the container make every point on or in the container appear to be at the range gate, allowing intrusion detection anywhere in the container. The system also includes other sensors to provide false alarm discrimination, and may include other sensors to monitor other parameters, e.g. radiation. The sensor system also includes a control subsystem for controlling system operation. Communications and information extraction capability may also be included. A method of detecting intrusion into a container uses UWB radar, and may also include false alarm discrimination. A secure container has a UWB-based monitoring system.

  16. Strong Coupling Corrections in Quantum Thermodynamics

    NASA Astrophysics Data System (ADS)

    Perarnau-Llobet, M.; Wilming, H.; Riera, A.; Gallego, R.; Eisert, J.

    2018-03-01

    Quantum systems strongly coupled to many-body systems equilibrate to the reduced state of a global thermal state, deviating from the local thermal state of the system that arises in the weak-coupling limit. Taking this insight as a starting point, we study the thermodynamics of systems strongly coupled to thermal baths. First, we provide strong-coupling corrections to the second law applicable to general systems in three of its readings: as a statement on maximal extractable work, as a statement on heat dissipation, and as a bound on the Carnot efficiency. These corrections become relevant for small quantum systems and vanish to first order in the interaction strength. We then move to the question of power of heat engines, obtaining a bound on the power enhancement due to strong coupling. Our results are exemplified on the paradigmatic non-Markovian quantum Brownian motion.
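
    The central distinction can be stated compactly. Writing the total Hamiltonian as H = H_S + H_B + V (system, bath, interaction; this decomposition is assumed notation, not taken from the paper), the weak-coupling equilibrium state is the local Gibbs state, whereas at strong coupling the system equilibrates to the reduced state of the global thermal state:

    ```latex
    \rho_S^{\mathrm{weak}} = \frac{e^{-\beta H_S}}{\operatorname{Tr} e^{-\beta H_S}},
    \qquad
    \rho_S^{\mathrm{strong}} = \operatorname{Tr}_B\!\left[
        \frac{e^{-\beta (H_S + H_B + V)}}{\operatorname{Tr}\, e^{-\beta (H_S + H_B + V)}}
    \right].
    ```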

  17. 3D Power Line Extraction from Multiple Aerial Images.

    PubMed

    Oh, Jaehong; Lee, Changno

    2017-09-29

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters.
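
    A minimal sketch of the object-to-image idea: candidate 3D grid points are projected into every image with known camera geometry and kept only if they fall on power-line pixels in all views. The 3x4 projection matrices and the binary line masks are assumed inputs; this is not the authors' implementation.

    ```python
    import numpy as np

    def project(P, X):
        """Project 3D points X (N, 3) with a 3x4 camera matrix P into pixels."""
        Xh = np.hstack([X, np.ones((len(X), 1))])
        uvw = Xh @ P.T
        return uvw[:, :2] / uvw[:, 2:3]

    def keep_power_line_points(grid_points, cameras, line_masks):
        """Retain object-space grid points whose projections land on
        power-line pixels (True cells in line_masks) in every image."""
        keep = np.ones(len(grid_points), dtype=bool)
        for P, mask in zip(cameras, line_masks):
            uv = np.round(project(P, grid_points)).astype(int)
            h, w = mask.shape
            inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
                     (uv[:, 1] >= 0) & (uv[:, 1] < h)
            hit = np.zeros(len(grid_points), dtype=bool)
            hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
            keep &= hit
        return grid_points[keep]
    ```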

  18. 3D Power Line Extraction from Multiple Aerial Images

    PubMed Central

    Lee, Changno

    2017-01-01

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters. PMID:28961204

  19. Effect of two doses of ginkgo biloba extract (EGb 761) on the dual-coding test in elderly subjects.

    PubMed

    Allain, H; Raoul, P; Lieury, A; LeCoz, F; Gandon, J M; d'Arbigny, P

    1993-01-01

    The subjects of this double-blind study were 18 elderly men and women (mean age, 69.3 years) with slight age-related memory impairment. In a crossover-study design, each subject received placebo or an extract of Ginkgo biloba (EGb 761) (320 mg or 600 mg) 1 hour before performing a dual-coding test that measures the speed of information processing; the test consists of several coding series of drawings and words presented at decreasing times of 1920, 960, 480, 240, and 120 ms. The dual-coding phenomenon (a break point between coding verbal material and images) was demonstrated in all the tests. After placebo, the break point was observed at 960 ms and dual coding beginning at 1920 ms. After each dose of the ginkgo extract, the break point (at 480 ms) and dual coding (at 960 ms) were significantly shifted toward a shorter presentation time, indicating an improvement in the speed of information processing.

  20. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples

    NASA Astrophysics Data System (ADS)

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-01

    A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on Micro-cloud Point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration range of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L with enrichment factors of 29.26, 85.47 and 28.36, respectively for Malachite green, Crystal violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L.

  1. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples.

    PubMed

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-05

    A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on Micro-cloud Point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration range of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L with enrichment factors of 29.26, 85.47 and 28.36, respectively for Malachite green, Crystal violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium

    PubMed Central

    Rusinek, Cory A.; Bange, Adam; Papautsky, Ian; Heineman, William R.

    2016-01-01

    Cloud point extraction (CPE) is a well-established technique for the pre-concentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-Vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd2+) by anodic stripping voltammetry (ASV) as a representative example. Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd2+ to form an extractable ion pair. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22–25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd2+ of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV analysis without CPE was also investigated; its detection limit (4.0 ppb) was 20 times higher, confirming the preconcentration benefit of CPE. The suitability of this procedure for the analysis of tap and river water samples was also demonstrated. This simple, versatile, environmentally friendly and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods. PMID:25996561

  3. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, obtaining accurate results with the least possible human intervention in the process. Automated curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane was carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
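
    A compact sketch of the rasterization-plus-morphology stage, assuming curbs show up as small height steps between points falling in the same cell (the cell size and step thresholds are illustrative, not the paper's values):

    ```python
    import numpy as np
    from scipy import ndimage

    def curb_candidates(points, cell=0.1, step_range=(0.05, 0.30)):
        """Rasterize a 3D point cloud (N, 3) onto the XY plane and flag
        cells whose within-cell height step lies in a typical curb range."""
        xy = points[:, :2]
        ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
        shape = tuple(ij.max(axis=0) + 1)
        zmax = np.full(shape, -np.inf)
        zmin = np.full(shape, np.inf)
        np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), points[:, 2])
        np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
        occupied = np.isfinite(zmax) & np.isfinite(zmin)
        step = np.where(occupied, zmax - zmin, 0.0)
        # Threshold the height-step image to the curb range ...
        curbs = (step > step_range[0]) & (step < step_range[1])
        # ... then clean it with morphological opening and closing.
        curbs = ndimage.binary_opening(curbs)
        curbs = ndimage.binary_closing(curbs)
        return curbs
    ```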

  4. Multioriented and curved text lines extraction from Indian documents.

    PubMed

    Pal, U; Roy, Partha Pratim

    2004-08-01

    There are printed artistic documents in which the text lines of a single page may not be parallel to each other: the text lines may have different orientations, or may be curved in shape. For optical character recognition (OCR) of such documents, these lines must be extracted properly. In this paper, we propose a novel scheme, based mainly on the concept of a water reservoir analogy, to extract individual text lines from printed Indian documents containing multioriented and/or curved text lines. A reservoir is a metaphor for the cavity region of a character where water could be stored. In the proposed scheme, connected components are first labeled and identified as either isolated or touching. Next, each touching component is classified as either straight type (S-type) or curve type (C-type), depending on the reservoir base-area and envelope points of the component. Based on the type (S-type or C-type) of a component, two candidate points are computed from each touching component. Finally, candidate regions (neighborhoods of the candidate points) of each component are detected, and after analyzing these candidate regions, components are grouped to obtain individual text lines.

  5. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster for the purpose of generating a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
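
    The core of such a pipeline can be sketched in a few lines: subtract the DEM from the DSM to get a normalized surface model, threshold it, and group elevated cells into candidate objects. The thresholds are illustrative, and the point-statistics refinement described above is omitted here:

    ```python
    import numpy as np
    from scipy import ndimage

    def building_objects(dsm, dem, min_height=2.5, min_cells=50):
        """Label building candidate objects from ALS-derived rasters:
        nDSM = DSM - DEM, height threshold, connected-component grouping."""
        ndsm = dsm - dem                      # height above ground
        elevated = ndsm > min_height          # above-ground cells
        labels, n = ndimage.label(elevated)   # object generation
        sizes = ndimage.sum(elevated, labels, index=np.arange(1, n + 1))
        small = np.isin(labels, np.flatnonzero(sizes < min_cells) + 1)
        labels[small] = 0                     # drop tiny objects (noise)
        return labels
    ```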

  6. 40 CFR 439.21 - Special definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Extraction Products § 439.21 Special definitions. For the purpose of this subpart: (a) Extraction means process operations that derive pharmaceutically active ingredients from natural sources such as plant roots and leaves, animal glands, and...

  7. 40 CFR 439.21 - Special definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Extraction Products § 439.21 Special definitions. For the purpose of this subpart: (a) Extraction means process operations that derive pharmaceutically active ingredients from natural sources such as plant roots and leaves, animal glands, and...

  8. Highway extraction from high resolution aerial photography using a geometric active contour model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model. This novel model seeks to minimize an objective function that transforms a problem of propagation of regular curves into an optimization problem. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can be used to deal with topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process. An error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint into the original GAC model, thereby, giving this proposed method the ability of integrating both boundary and region-based information during the curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model. It effectively saves processing time and computations. Second, a seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. A seed point is usually placed at an end node of highway segments close to the boundary of the image or at a position where possible blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic. In the process, vehicles on the highway extracted from mosaic were detected with an 83% success rate.
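
    In level set form, the geodesic active contour evolution augmented with a region term is commonly written as follows (this is the standard form of such models; the dissertation's exact region force is not reproduced here):

    ```latex
    \frac{\partial \phi}{\partial t}
      = g(I)\,\lvert\nabla\phi\rvert
        \left(\operatorname{div}\frac{\nabla\phi}{\lvert\nabla\phi\rvert} + \nu\right)
      + \nabla g \cdot \nabla\phi
      + \lambda\, R(I)\,\lvert\nabla\phi\rvert ,
    ```

    where g(I) is an edge-stopping function, ν a constant balloon force, and R(I) the region-based constraint that attracts the zero level set toward homogeneous road regions, suppressing leakage at weak edges.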

  9. Optimization of cloud point extraction and solid phase extraction methods for speciation of arsenic in natural water using multivariate technique.

    PubMed

    Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira

    2009-09-28

    The simple and rapid pre-concentration techniques viz. cloud point extraction (CPE) and solid phase extraction (SPE) were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The obtained results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.

  10. Assessment of critical-fluid extractions in the process industries

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The potential for critical-fluid extraction as a separation process for improving the productive use of energy in the process industries is assessed. Critical-fluid extraction involves the use of fluids, normally gaseous at ambient conditions, as extraction solvents at temperatures and pressures around the critical point. Equilibrium and kinetic properties in this regime are very favorable for solvent applications, and generally allow major reductions in the energy requirements for separating and purifying the chemical components of a mixture.

  11. Appending High-Resolution Elevation Data to GPS Speed Traces for Vehicle Energy Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, E.; Burton, E.; Duran, A.

    Accurate and reliable global positioning system (GPS)-based vehicle use data are highly valuable for many transportation, analysis, and automotive considerations. Model-based design, real-world fuel economy analysis, and the growing field of autonomous and connected technologies (including predictive powertrain control and self-driving cars) all have a vested interest in high-fidelity estimation of powertrain loads and vehicle usage profiles. Unfortunately, road grade can be a difficult property to extract from GPS data with consistency. In this report, we present a methodology for appending high-resolution elevation data to GPS speed traces via a static digital elevation model. Anomalous data points in the digital elevation model are addressed during a filtration/smoothing routine, resulting in an elevation profile that can be used to calculate road grade. This process is evaluated against a large, commercially available height/slope dataset from the Navteq/Nokia/HERE Advanced Driver Assistance Systems product. Results show good agreement with the Advanced Driver Assistance Systems data in the ability to estimate road grade between any two consecutive points in the contiguous United States.
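
    A minimal sketch of the append-and-smooth step, with a median filter standing in for the report's anomaly filtration/smoothing routine (the filter choice and kernel size are assumptions):

    ```python
    import numpy as np
    from scipy.signal import medfilt

    def append_grade(dist_m, elev_m, kernel=9):
        """Smooth a raw elevation profile sampled along a GPS trace and
        compute the road grade between consecutive points."""
        elev = medfilt(np.asarray(elev_m, float), kernel_size=kernel)
        dz = np.diff(elev)                        # rise
        ds = np.diff(np.asarray(dist_m, float))   # run
        grade_pct = 100.0 * np.where(ds > 0, dz / ds, 0.0)
        return elev, grade_pct
    ```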

  12. A dimension-wise analysis method for the structural-acoustic system with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong

    2017-04-01

    The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on a Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at the two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, better accuracy is achieved without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
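
    A sketch of the dimension-wise idea under simple assumptions (a response function f callable on full input vectors, other coordinates held at interval midpoints; the Legendre order and sampling density are illustrative):

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def dimensionwise_bounds(f, lower, upper, order=4, samples=9):
        """Fit a Legendre polynomial to the sectional curve of f along each
        input dimension, locate its minimal/maximal points, assemble the
        minimal/maximal input vectors, and evaluate f there for the bounds."""
        lower = np.asarray(lower, float)
        upper = np.asarray(upper, float)
        mid = 0.5 * (lower + upper)
        x_min, x_max = mid.copy(), mid.copy()
        for k in range(len(mid)):
            t = np.linspace(lower[k], upper[k], samples)
            y = [f(np.where(np.arange(len(mid)) == k, ti, mid)) for ti in t]
            coef = legendre.legfit(t, y, order)         # sectional curve fit
            dense = np.linspace(lower[k], upper[k], 201)
            vals = legendre.legval(dense, coef)
            x_min[k] = dense[np.argmin(vals)]           # minimal point
            x_max[k] = dense[np.argmax(vals)]           # maximal point
        return f(x_min), f(x_max)                       # response bounds
    ```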

  13. Profile parameters of wheelset detection for high speed freight train

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Ma, Li; Gao, Xiaorong; Wang, Li

    2012-04-01

    Freight trains in China transport goods on railway freight lines throughout the country and do not depart from or return to an engine shed for long periods, so the quality of their wheelsets cannot be monitored effectively. This paper presents a system that uses a laser and a high-speed camera, applying non-contact light-section technology to obtain precise wheelset profile parameters. A clamping-track method is employed to avoid complex railway ballast modification work. An improved image-tracking algorithm for extracting the central line of the profile curve is described in detail: to obtain a one-pixel-wide, continuous profile centerline, local gray-maximum points are used as control points to steer the tracking direction. Results from practical experiments show that the system is suited to a detection environment of high speed and high vibration and can effectively measure the wheelset geometric parameters with high accuracy. The system fills a gap in wheelset detection for freight trains on main lines and is instructive for monitoring wheelset quality.
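
    A generic light-section sketch of the gray-maximum centerline idea (not the paper's tracking algorithm): take the brightest pixel of each image column as a control point and refine it to sub-pixel precision with a parabolic fit.

    ```python
    import numpy as np

    def stripe_centerline(img):
        """Extract a one-pixel-wide, continuous centerline of a bright
        laser stripe: per-column gray maxima plus parabolic refinement."""
        img = np.asarray(img, float)
        rows = np.argmax(img, axis=0)               # coarse gray maxima
        cols = np.arange(img.shape[1])
        r = np.clip(rows, 1, img.shape[0] - 2)
        top, mid, bot = img[r - 1, cols], img[r, cols], img[r + 1, cols]
        denom = top - 2.0 * mid + bot
        offset = np.where(denom != 0, 0.5 * (top - bot) / denom, 0.0)
        return rows + offset                        # sub-pixel row per column
    ```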

  14. Genotoxicity assessments of alluvial soil irrigated with wastewater from a pesticide manufacturing industry.

    PubMed

    Anjum, Reshma; Krakat, Niclas

    2015-10-01

    In this study, organochlorine pesticides (OCP) and heavy metals were analyzed in wastewater-irrigated and groundwater-irrigated soils (control samples) by gas chromatography (GC) and atomic absorption spectrophotometry (AAS), respectively. GC analysis revealed high concentrations of pesticides in soil irrigated with wastewater (WWS), far above the maximum permissible residue limits, indicating that alluvial soils have a high binding capacity for OCP. AAS analyses revealed higher concentrations of heavy metals in WWS than in groundwater-irrigated soil (GWS). In addition, the DNA repair (SOS)-defective Escherichia coli K-12 mutant assay and the bacteriophage lambda system were employed to estimate the genotoxicity of the soils; soil samples were extracted with hexane, acetonitrile, methanol, chloroform, and acetone. Both bioassays revealed that hexane extracts of WWS were the most genotoxic. A maximum survival of 15.2% and a decline of colony-forming units (CFUs) were observed in polA mutants of DNA repair-defective E. coli K-12 strains when hexane was used as the solvent, whereas the damage to polA(-) mutants triggered by acetonitrile, methanol, chloroform, and acetone extracts was 80.0, 69.8, 65.0, and 60.7%, respectively. These results were also confirmed by the bacteriophage λ test system, as hexane extracts of WWS exhibited a maximum decline of plaque-forming units for lexA mutants of E. coli K-12, pointing to an elevated genotoxic potential. The lowest survival was observed for lexA (12%) treated with hexane extracts, while survival was 25, 49.2, 55, and 78% with acetonitrile, methanol, chloroform, and acetone, respectively, after 6 h of treatment. Thus, our results suggest that agricultural soils irrigated with wastewater from pesticide industries have a notably high genotoxic potential.

  15. Event extraction of bacteria biotopes: a knowledge-intensive NLP-based approach

    PubMed Central

    2012-01-01

    Background: Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. Methods: We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. Results: We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. Conclusions: We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data. PMID:22759462

  16. Event extraction of bacteria biotopes: a knowledge-intensive NLP-based approach.

    PubMed

    Ratkovic, Zorana; Golik, Wiktoria; Warnier, Pierre

    2012-06-26

    Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data.

  17. Construction of a cardiac conduction system subject to extracellular stimulation.

    PubMed

    Clements, Clyde; Vigmond, Edward

    2005-01-01

    Proper electrical excitation of the heart is dependent on the specialized conduction system that coordinates the electrical activity from the atria to the ventricles. This paper describes the construction of a conduction system as a branching network of Purkinje fibers on the endocardial surface. Endocardial surfaces were extracted from an FEM model of the ventricles and transformed to 2D. A Purkinje network was drawn on top and the inverse transform performed. The underlying mathematics utilized one-dimensional cubic Hermite finite elements. Compared to linear elements, the cubic Hermite solution was found to have a much smaller RMS error. Furthermore, this method has the advantage of enforcing current conservation at bifurcation and unification points, and allows for discrete coupling resistances.

  18. Dual-cloud point extraction coupled to high performance liquid chromatography for simultaneous determination of trace sulfonamide antimicrobials in urine and water samples.

    PubMed

    Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying

    2017-04-15

    Dual-cloud point extraction (dCPE) was successfully developed for simultaneous extraction of trace sulfonamides (SAs) including sulfamerazine (SMZ), sulfadoxin (SDX), and sulfathiazole (STZ) in urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, and back-extraction centrifugation rate and time. High performance liquid chromatography (HPLC) was applied for the SAs analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9 min, and excellent analytical performances were attained. Good linear relationships (R2 ≥ 0.9990) between peak area and concentration were obtained from 0.02 to 10 μg/mL for SMZ and STZ, and from 0.01 to 10 μg/mL for SDX. Detection limits of 3.0-6.2 ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined with urine, lake and tap water spiked at 0.2, 0.5 and 1 μg/mL, respectively, with relative standard deviations (RSDs, n=6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative tool to existing methods for analysing trace residues of SAs in urine and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Study on the extraction method of tidal flat area in northern Jiangsu Province based on remote sensing waterlines

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanyuan; Gao, Zhiqiang; Liu, Xiangyang; Xu, Ning; Liu, Chaoshun; Gao, Wei

    2016-09-01

    Reclamation has caused significant dynamic change in the coastal zone. The tidal flat is an unstable reserve land resource, and its study is of considerable importance. In order to extract tidal flat area information efficiently, this paper takes Rudong County in Jiangsu Province as the research area and uses HJ-1A/1B images as the data source. On the basis of previous research experience and a literature review, object-oriented classification was chosen as a semi-automatic method to generate waterlines. The waterlines were then analyzed with the DSAS software to obtain tide points, and the outer boundary points were extracted automatically with Python to determine the extent of the tidal flats of Rudong County in 2014; the extracted area was 55,182 hm2. A confusion matrix was used to verify the accuracy, and the resulting kappa coefficient is 0.945. The method remedies deficiencies of previous studies, and the free availability of the tools used makes it readily generalizable.

  20. A green and efficient procedure for the preconcentration and determination of cadmium, nickel and zinc from freshwater, hemodialysis solutions and tuna fish samples by cloud point extraction and flame atomic absorption spectrometry.

    PubMed

    Galbeiro, Rafaela; Garcia, Samara; Gaubeur, Ivanise

    2014-04-01

    Cloud point extraction (CPE) was used to simultaneously preconcentrate trace-level cadmium, nickel and zinc for determination by flame atomic absorption spectrometry (FAAS). 1-(2-Pyridilazo)-2-naphthol (PAN) was used as a complexing agent, and the metal complexes were extracted from the aqueous phase by the surfactant Triton X-114 ((1,1,3,3-tetramethylbutyl)phenyl-polyethylene glycol). Under optimized complexation and extraction conditions, the limits of detection were 0.37μgL(-1) (Cd), 2.6μgL(-1) (Ni) and 2.3μgL(-1) (Zn). This extraction was quantitative with a preconcentration factor of 30 and enrichment factor estimated to be 42, 40 and 43, respectively. The method was applied to different complex samples, and the accuracy was evaluated by analyzing a water standard reference material (NIST SRM 1643e), yielding results in agreement with the certified values. Copyright © 2013 Elsevier GmbH. All rights reserved.

  1. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    PubMed

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

    In this paper, a simple and cost-effective method was developed for the extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters such as solution pH, surfactant and salt concentrations, incubation time and temperature were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to the determination of carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
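
    For records like this one, the detection and quantification limits are conventionally derived from the calibration fit. A small sketch using the common 3.3σ/S and 10σ/S criteria (the paper does not state which criterion it used, so this is illustrative):

    ```python
    import numpy as np

    def calibration_limits(conc, signal):
        """Fit a linear calibration curve and estimate LOD/LOQ from the
        residual standard deviation (sigma) and the slope (S)."""
        conc = np.asarray(conc, float)
        y = np.asarray(signal, float)
        slope, intercept = np.polyfit(conc, y, 1)
        resid = y - (slope * conc + intercept)
        sigma = resid.std(ddof=2)     # two fitted parameters
        return 3.3 * sigma / slope, 10.0 * sigma / slope   # LOD, LOQ
    ```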

  2. Mixed micelle cloud point-magnetic dispersive μ-solid phase extraction of doxazosin and alfuzosin

    NASA Astrophysics Data System (ADS)

    Gao, Nannan; Wu, Hao; Chang, Yafen; Guo, Xiaozhen; Zhang, Lizhen; Du, Liming; Fu, Yunlong

    2015-01-01

    Mixed micelle cloud point extraction (MM-CPE) combined with magnetic dispersive μ-solid phase extraction (MD-μ-SPE) has been developed as a new approach for the extraction of doxazosin (DOX) and alfuzosin (ALF) prior to fluorescence analysis. A mixed micelle of the anionic surfactant sodium dodecyl sulfate and the non-ionic polyoxyethylene(7.5)nonylphenylether was used as the extraction solvent in MM-CPE, and diatomite-bonded Fe3O4 magnetic nanoparticles were used as the adsorbent in MD-μ-SPE. The method was based on MM-CPE of DOX and ALF into the surfactant-rich phase. The magnetic material was used to retrieve the surfactant-rich phase, which was easily separated from the aqueous phase under a magnetic field. At optimum conditions, linear relationships for DOX and ALF were obtained in the range of 5-300 ng mL-1, and the limits of detection were 0.21 and 0.16 ng mL-1, respectively. The proposed method was successfully applied to the determination of the drugs in pharmaceutical preparations, urine samples, and plasma samples.

  3. Airborne LIDAR point cloud tower inclination judgment

    NASA Astrophysics Data System (ADS)

    liang, Chen; zhengjun, Liu; jianguo, Qian

    2016-11-01

    Inclined transmission line towers pose a great threat to the safe operation of a power line, so judging tower inclination effectively, quickly and accurately plays a key role in the safety and security of the power supply. In recent years, with the development of unmanned aerial vehicles, UAVs equipped with a laser scanner, GPS and inertial navigation have become an increasingly common high-precision 3D remote sensing system in the electricity sector. The point cloud acquired by airborne laser scanning visually presents the three-dimensional spatial information of power line corridors, including line facilities and equipment, terrain and trees. Research on LiDAR point clouds in this field has not yet produced an established algorithm for judging tower inclination. In this paper, tower bases are extracted from existing power line corridor data and the shape characteristics of the towers are analyzed; a vertical stratification method combined with a convex hull algorithm is then applied, using two different methods for the two cases of dense and sparse tower point clouds, to judge tower inclination. The results show high reliability.
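
    A minimal sketch of the vertical-stratification step (the convex-hull handling for sparse clouds described above is omitted; the layer count is illustrative):

    ```python
    import numpy as np

    def tower_inclination_deg(points, n_layers=20):
        """Slice a tower point cloud into horizontal layers, take each
        layer's centroid, fit an axis through the centroids, and return
        the axis tilt from the vertical in degrees."""
        pts = np.asarray(points, float)
        z = pts[:, 2]
        edges = np.linspace(z.min(), z.max(), n_layers + 1)
        centroids = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            layer = pts[(z >= lo) & (z < hi)]
            if len(layer):
                centroids.append(layer.mean(axis=0))
        c = np.asarray(centroids)
        # Principal direction of the centroid chain approximates the axis.
        axis = np.linalg.svd(c - c.mean(axis=0))[2][0]
        axis /= np.linalg.norm(axis)
        return np.degrees(np.arccos(np.clip(abs(axis[2]), 0.0, 1.0)))
    ```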

  4. Pc-Based Floating Point Imaging Workstation

    NASA Astrophysics Data System (ADS)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective, the challenge of meeting these demands forces a different approach to the problem of imaging. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.

  5. Comparison of four MPPT techniques for PV systems

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Aillerie, M.

    2016-07-01

    The working behavior of a PV module/array is non-linear and highly dependent on operating conditions. For any given condition, there is only one operating point at which the available output power is maximum, and this point varies with time, irradiance and temperature. To ensure optimum operation, MPPT control is used to extract the maximum power. This paper presents a comparative study of four widely adopted MPPT algorithms: Perturb and Observe, Incremental Conductance, and methods based on measuring the variation of the open-circuit voltage or of the short-circuit current. In particular, this study compares the behavior of each technique in the presence of solar irradiation variations and temperature fluctuations. The MPPT techniques are compared using the Matlab/Simulink tool.
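
    Of the four, Perturb and Observe is the simplest to express in code. A minimal sketch, where read_pv and set_voltage are assumed interfaces to the PV system (returning a (voltage, current) pair and applying a voltage reference):

    ```python
    def perturb_and_observe(read_pv, set_voltage, v_start, step=0.5, iters=200):
        """Perturb the operating voltage, observe the power change, and
        keep stepping in whichever direction increases output power."""
        set_voltage(v_start)
        v, i = read_pv()
        p_prev, v_ref, direction = v * i, v_start, 1.0
        for _ in range(iters):
            v_ref += direction * step
            set_voltage(v_ref)
            v, i = read_pv()
            p = v * i
            if p < p_prev:           # power dropped: reverse perturbation
                direction = -direction
            p_prev = p
        return v_ref                 # voltage near the maximum power point
    ```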

  6. Self-position estimation using terrain shadows for precise planetary landing

    NASA Astrophysics Data System (ADS)

    Kuga, Tomoki; Kojima, Hirohisa

    2018-07-01

    In recent years, the investigation of moons and planets has attracted increasing attention in several countries. Furthermore, recently developed landing systems are now expected to reach more scientifically interesting areas close to hazardous terrain, requiring precise landing capabilities within a 100 m range of the target point. To achieve this, terrain-relative navigation, which estimates the position of a lander relative to the target point on the ground surface, is actively being studied as an effective method for achieving highly accurate landings. This paper proposes a self-position estimation method using shadows on the terrain, based on edge extraction with image processing algorithms. The effectiveness of the proposed method is validated through numerical simulations using images generated from a digital elevation model of simulated terrains.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.

    We used lattice QCD calculations with background magnetic fields to determine the magnetic moments of the octet baryons. Computations are performed at the physical value of the strange quark mass, and two values of the light quark mass, one corresponding to the SU(3) flavor-symmetric point, where the pion mass is mπ ~ 800 MeV, and the other corresponding to a pion mass mπ ~ 450 MeV. The moments are found to exhibit only mild pion-mass dependence when expressed in terms of appropriately chosen magneton units---the natural baryon magneton. This suggests that simple extrapolations can be used to determine magnetic moments at the physical point, and extrapolated results are found to agree with experiment within uncertainties. A curious pattern is revealed among the anomalous baryon magnetic moments which is linked to the constituent quark model; however, careful scrutiny exposes additional features. Relations expected to hold in the large-Nc limit of QCD are studied; and, in one case, the quark model prediction is significantly closer to the extracted values than the large-Nc prediction. The magnetically coupled Λ-Σ0 system is treated in detail at the SU(3)F point, with the lattice QCD results comparing favorably with predictions based on SU(3)F symmetry. Our analysis enables the first extraction of the isovector transition magnetic polarizability. The possibility that large magnetic fields stabilize strange matter is explored, but such a scenario is found to be unlikely.

  8. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
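
    A sketch of the brute-force matching stage. The render_skyline helper, which predicts the skyline from the LiDAR points for trial attitude corrections, is an assumed interface; skylines are represented here as one row value per image column:

    ```python
    import numpy as np
    from itertools import product

    def search_attitude(render_skyline, image_skyline, spans, step):
        """Scan yaw/pitch/roll corrections over the given spans and keep
        the triple whose predicted skyline best matches the image skyline."""
        grids = [np.arange(-s, s + 1e-9, step) for s in spans]
        best, best_err = None, np.inf
        for dyaw, dpitch, droll in product(*grids):
            pred = render_skyline(dyaw, dpitch, droll)
            err = np.abs(pred - image_skyline).sum()   # column-wise gap
            if err < best_err:
                best, best_err = (dyaw, dpitch, droll), err
        return best, best_err
    ```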

  9. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
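
    The two numerical approximations named above are easy to sketch. The constants here (seed choice, term and iteration counts) are illustrative, not taken from the paper:

    ```python
    def inv_sqrt(x, iters=4):
        """Newton-Raphson inverse square root: y <- y * (1.5 - 0.5*x*y*y).
        Soft-core processors often iterate like this instead of calling a
        hardware square root; the seed below is a crude illustration."""
        y = 1.0 / x if x > 1.0 else 1.0
        for _ in range(iters):
            y = y * (1.5 - 0.5 * x * y * y)
        return y

    def sin_taylor(x, terms=5):
        """Truncated Taylor series for sine: x - x^3/3! + x^5/5! - ..."""
        term, total = x, x
        for n in range(1, terms):
            term *= -x * x / ((2 * n) * (2 * n + 1))
            total += term
        return total
    ```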

  10. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.

  11. An atlas of H-alpha-emitting regions in M33: A systematic search for SS433 star candidates

    NASA Technical Reports Server (NTRS)

    Calzetti, Daniela; Kinney, Anne L.; Ford, Holland; Doggett, Jesse; Long, Knox S.

    1995-01-01

    We report finding charts and accurate positions for 432 compact H-alpha emitting regions in the Local Group galaxy M 33 (NGC 598), in an effort to isolate candidates for an SS433-like stellar system. The objects were extracted from narrow band images, centered in the rest-frame H-alpha (lambda 6563 A) and in the red continuum at 6100 A. The atlas is complete down to V approximately equal to 20 and includes 279 compact HII regions and 153 line emitting point-like sources. The point-like sources undoubtedly include a variety of objects: very small HII regions, early type stars with intense stellar winds, and Wolf-Rayet stars, but should also contain objects with the characteristics of SS433. This extensive survey of compact H-alpha regions in M 33 is a first step towards the identification of peculiar stellar systems like SS433 in external galaxies.

  12. Comparison of FRF measurements and mode shapes determined using optically image based, laser, and accelerometer measurements

    NASA Astrophysics Data System (ADS)

    Warren, Christopher; Niezrecki, Christopher; Avitabile, Peter; Pingle, Pawan

    2011-08-01

    Today, accelerometers and laser Doppler vibrometers are widely accepted as valid measurement tools for structural dynamic measurements. However, limitations of these transducers prevent the accurate measurement of some phenomena. For example, accelerometers typically measure motion at a limited number of discrete points and can mass load a structure. Scanning laser vibrometers have a very wide frequency range and can measure many points without mass-loading, but are sensitive to large displacements and can have lengthy acquisition times due to sequential measurements. Image-based stereo-photogrammetry techniques provide additional measurement capabilities that complement the current array of measurement systems by providing an alternative that favors the high-displacement and low-frequency vibrations typically difficult to measure with accelerometers and laser vibrometers. Within this paper, digital image correlation, three-dimensional (3D) point-tracking, 3D laser vibrometry, and accelerometer measurements are all used to measure the dynamics of a structure to compare each of the techniques. Each approach has its benefits and drawbacks, so comparative measurements are made using these approaches to show some of the strengths and weaknesses of each technique. Additionally, the displacements determined using 3D point-tracking are used to calculate frequency response functions, from which mode shapes are extracted. The image-based frequency response functions (FRFs) are compared to those obtained by collocated accelerometers. Extracted mode shapes are then compared to those of a previously validated finite element model (FEM) of the test structure and are shown to have excellent agreement between the FEM and the conventional measurement approaches when compared using the Modal Assurance Criterion (MAC) and Pseudo-Orthogonality Check (POC).
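
    The Modal Assurance Criterion used for the final comparison is the standard normalized correlation between two mode shape vectors φ_i and φ_j:

    ```latex
    \mathrm{MAC}(i,j) =
      \frac{\bigl|\boldsymbol{\phi}_i^{H}\boldsymbol{\phi}_j\bigr|^{2}}
           {\bigl(\boldsymbol{\phi}_i^{H}\boldsymbol{\phi}_i\bigr)
            \bigl(\boldsymbol{\phi}_j^{H}\boldsymbol{\phi}_j\bigr)} ,
    ```

    with values near 1 indicating consistent shapes and values near 0 indicating uncorrelated ones.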

  13. Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

    NASA Astrophysics Data System (ADS)

    Marques, Luís.; Roca Cladera, Josep; Tenedório, José António

    2017-10-01

    The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two main factors lie at the origin of this progress. First, image matching algorithms have been optimised, and the software supporting these techniques has been constantly developed. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists to constitute digital archives of urban elements, being especially useful to enrich maps and databases or to reconstruct and analyse objects/areas through time, building and recreating scenarios and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a fully collaborative system for envisioning processes, simulations and results. This paper is organized around two main topics. The first deals with the technical modelling of data obtained from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality on mobile platforms, allowing users to understand the city's origins and their relation with the present urban morphology, envisioning solutions, processes and simulations, and making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.

  14. Dispersive liquid-liquid microextraction based on solidification of floating organic droplet followed by high-performance liquid chromatography with ultraviolet detection and liquid chromatography-tandem mass spectrometry for the determination of triclosan and 2,4-dichlorophenol in water samples.

    PubMed

    Zheng, Cao; Zhao, Jing; Bao, Peng; Gao, Jin; He, Jin

    2011-06-24

    A novel, simple and efficient dispersive liquid-liquid microextraction based on solidification of floating organic droplet (DLLME-SFO) technique coupled with high-performance liquid chromatography with ultraviolet detection (HPLC-UV) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed for the determination of triclosan and its degradation product 2,4-dichlorophenol in real water samples. The extraction solvent used in this work is of low density, low volatility and low toxicity, with a melting point near room temperature. The extractant droplets can be collected easily by solidifying them at a lower temperature. Parameters that affect the extraction efficiency, including the type and volume of extraction solvent and dispersive solvent, salt effect, pH and extraction time, were investigated and optimized in a 5 mL sample system by HPLC-UV. Under the optimum conditions (extraction solvent: 12 μL of 1-dodecanol; dispersive solvent: 300 μL of acetonitrile; sample pH: 6.0; extraction time: 1 min), the limits of detection (LODs) of the pretreatment method combined with LC-MS/MS were in the range of 0.002-0.02 μg L(-1), which are lower than or comparable with other reported approaches applied to the determination of the same compounds. Wide linearities, good precisions and satisfactory relative recoveries were also obtained. The proposed technique was successfully applied to determine triclosan and 2,4-dichlorophenol in real water samples. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems

    PubMed Central

    Hong, Seunghwan; Park, Ilsuk; Lee, Jisang; Lim, Kwangyong; Choi, Yoonjo; Sohn, Hong-Gyoo

    2017-01-01

    This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On our MMS devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The geometric relationships between the three sensors were solved by the proposed calibration, considering the GNSS/INS as one unit sensor. Our solution uses the point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally obtained 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding with the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated based on the least squares approach and were estimated with precisions of 0.1 degrees and 10 mm, respectively. PMID:28264457

  16. Shape-based human detection for threat assessment

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.

    2004-07-01

    Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intruder is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments: any moving object, including an animal, can falsely trigger the system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects across different viewing angles and distances.
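
    One common way to realize such a tangent-space representation is as a turning function, cumulative tangent angle versus normalized arc length. The minimal Python sketch below is an assumed illustration of the representation only; the power cepstrum matching step is omitted.

    ```python
    # Minimal sketch (assumed detail, not the authors' exact code): represent a
    # closed, simplified contour in tangent space as turning angle vs. arc length.
    import numpy as np

    def turning_function(contour: np.ndarray):
        """contour: (N, 2) array of ordered contour vertices."""
        edges = np.diff(np.vstack([contour, contour[:1]]), axis=0)
        seg_len = np.linalg.norm(edges, axis=1)
        headings = np.arctan2(edges[:, 1], edges[:, 0])
        # Turning angle at each vertex, wrapped to (-pi, pi].
        turns = np.diff(headings, append=headings[:1])
        turns = (turns + np.pi) % (2 * np.pi) - np.pi
        s = np.cumsum(seg_len) / seg_len.sum()   # normalized arc length
        theta = np.cumsum(turns)                 # cumulative tangent angle
        return s, theta
    ```

    Two contours can then be compared by a distance between their turning functions, which is invariant to translation and scale and handles rotation as a constant offset.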

  17. Elastic dipoles of point defects from atomistic simulations

    NASA Astrophysics Data System (ADS)

    Varvenne, Céline; Clouet, Emmanuel

    2017-12-01

    The interaction of point defects with an external stress field or with other structural defects is usually well described within continuum elasticity by the elastic dipole approximation. Extraction of the elastic dipoles from atomistic simulations is therefore a fundamental step to connect an atomistic description of the defect with continuum models. This can be done either by a fitting of the point-defect displacement field, by a summation of the Kanzaki forces, or by a linking equation to the residual stress. We perform here a detailed comparison of these different available methods to extract elastic dipoles, and show that they all lead to the same values when the supercell of the atomistic simulations is large enough and when the anharmonic region around the point defect is correctly handled. But, for small simulation cells compatible with ab initio calculations, only the definition through the residual stress appears tractable. The approach is illustrated by considering various point defects (vacancy, self-interstitial, and hydrogen solute atom) in zirconium, using both empirical potentials and ab initio calculations.
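
    For reference, the residual-stress definition highlighted by the authors takes the following generic form; sign conventions vary between papers, so the one below is an assumption.

    ```latex
    % Elastic dipole from the homogeneous residual stress \bar{\sigma}_{ij} of a
    % defective supercell of volume V relaxed at fixed periodicity vectors:
    P_{ij} = V\,\bar{\sigma}_{ij}
    % Its interaction energy with an applied strain field \varepsilon_{ij}:
    E^{\mathrm{int}} = -P_{ij}\,\varepsilon_{ij}
    ```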

  18. Classification of jet fuel properties by near-infrared spectroscopy using fuzzy rule-building expert systems and support vector machines.

    PubMed

    Xu, Zhanfeng; Bunker, Christopher E; Harrington, Peter de B

    2010-11-01

    Monitoring changes in the physical properties of jet fuel is important because fuel used in high-performance aircraft must meet rigorous specifications. Near-infrared (NIR) spectroscopy is a fast method to characterize fuels. Because of the complexity of NIR spectral data, chemometric techniques are used to extract relevant information from the spectra to accurately classify physical properties of complex fuel samples. In this work, discrimination of fuel types and classification of flash point, freezing point, boiling point (10%, v/v), boiling point (50%, v/v), and boiling point (90%, v/v) of jet fuels (JP-5, JP-8, Jet A, and Jet A1) were investigated. Each physical property was divided into three classes, low, medium, and high ranges, using two evaluations with different class boundary definitions. The class boundaries function as the threshold at which to raise an alarm when the fuel properties change. Optimal partial least squares discriminant analysis (oPLS-DA), a fuzzy rule-building expert system (FuRES), and support vector machines (SVM) were used to build calibration models between the NIR spectra and the property classes of the jet fuels, and the three methods were compared with respect to prediction accuracy. The calibration models were validated by bootstrap Latin partition (BLP), which gives a measure of precision. For one of the boundary definitions, FuRES obtained prediction accuracies of 97 ± 2% for the flash point, 94 ± 2% for the freezing point, 99 ± 1% for the boiling point (10%, v/v), 98 ± 2% for the boiling point (50%, v/v), and 96 ± 1% for the boiling point (90%, v/v). Both FuRES and SVM obtained statistically better prediction accuracy than oPLS-DA. The results indicate that, combined with chemometric classifiers, NIR spectroscopy could be a fast method to monitor changes in the physical properties of jet fuel.
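
    FuRES and oPLS-DA have no standard open-source implementation, but the SVM branch of such a comparison can be sketched with scikit-learn. The random arrays below are stand-ins for real NIR spectra and property classes, and plain cross-validation substitutes for the bootstrap Latin partition used in the paper.

    ```python
    # Illustrative only: classify spectra into low/medium/high property ranges
    # with an RBF SVM; data and hyperparameters are placeholders.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 700))    # stand-in: 120 fuels x 700 wavelengths
    y = rng.integers(0, 3, size=120)   # stand-in classes: 0=low, 1=medium, 2=high

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```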

  19. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: a multivariate study.

    PubMed

    Arain, Salma Aslam; Kazi, Tasneem G; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-10

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu(2+)) in serum samples of different viral hepatitis patients prior to determination by flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the metal ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase in a second cloud point extraction stage and finally determined by FAAS using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu(2+) using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L(-1) and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu(2+) in a certified reference material (CRM) of serum by both d-CPE and the conventional CPE procedure. The proposed method was successfully applied to the determination of Cu(2+) in serum samples of different viral hepatitis patients and healthy controls. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Quinoline Alkaloids Isolated from Choisya Aztec-Pearl and Their Contribution to the Overall Antinociceptive Activity of This Plant.

    PubMed

    de Carvalho, Patricia Ribeiro; Ropero, Denise Ricoy; Pinheiro, Mariana Martins; Fernandes, Patricia Dias; Boylan, Fabio

    2016-01-01

    Choisya 'Aztec-Pearl', a hybrid of Choisya ternata and Choisya dumosa var. arizonica, had the antinociceptive activity of the ethanol extract (EECA) of its leaves evaluated. Two quinoline alkaloids, anhydroevoxine (A) and choisyine (C), isolated from these leaves were also tested. The results pointed to a very high antinociceptive activity in the hot plate model for EECA (at doses of 10, 30 and 100 mg/kg) as well as for A and C (at doses of 1, 3 and 10 mg/kg). At the higher doses (30 and 100 mg/kg of extract; 3 and 10 mg/kg of compounds), the magnitude of the activity was two-fold higher than that observed for morphine-treated animals. The mechanism of action was also investigated: for EECA as well as A and C, the opiate system plays an important role. The results also showed that the nitric oxide (NO) system plays a pivotal role in the case of EECA and A, while for C the cholinergic system seems to have some involvement. The acute toxicity of EECA was evaluated, with results showing no important toxic effects.

  1. Integrated Magneto-Chemical Sensor For On-Site Food Allergen Detection.

    PubMed

    Lin, Hsing-Ying; Huang, Chen-Han; Park, Jongmin; Pathania, Divya; Castro, Cesar M; Fasano, Alessio; Weissleder, Ralph; Lee, Hakho

    2017-10-24

    Adverse food reactions, including food allergies, food sensitivities, and autoimmune reactions (e.g., celiac disease), affect 5-15% of the population and remain a considerable public health problem requiring stringent food avoidance and epinephrine availability for emergency events. Avoiding problematic foods is practically difficult, given current reliance on prepared foods and out-of-home meals. In response, we developed a portable, point-of-use detection technology, termed integrated exogenous antigen testing (iEAT). The system consists of a disposable antigen extraction device coupled with an electronic keychain reader for rapid sensing and communication. We optimized the prototype iEAT system to detect five major food antigens in peanuts, hazelnuts, wheat, milk, and eggs. Antigen extraction and detection with iEAT requires <10 min and achieves high detection sensitivity (e.g., 0.1 mg/kg for gluten, lower than the regulatory limit of 20 mg/kg). When testing under restaurant conditions, we were able to detect hidden food antigens such as gluten within "gluten-free" food items. The small size and rapid, simple testing of the iEAT system should help not only consumers but also other key stakeholders such as clinicians, food industries, and regulators to enhance food safety.

  2. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    PubMed

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image to 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussians (DoG) and the Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not by itself yield a significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough at indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses the non-binary pixel intensity differences available from descriptor extraction. With the proposed indexing approach, matching binary features is no longer slower but in fact slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved thanks to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude compared with state-of-the-art methods, while a comparable registration rate and localization accuracy are maintained.
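
    A toy version of the indexing idea is sketched below. The node test here only balances leaf sizes; the supervised test selection from 3D-point labels and the probabilistic priority search described in the paper are omitted, and all details are assumptions for illustration.

    ```python
    # Toy randomized tree over binary descriptors: each node tests one bit,
    # preferring bits that split the descriptors evenly (uniform leaf sizes).
    import numpy as np

    def build_tree(desc, idx, leaf_size, rng, n_cand=16):
        """desc: (N, D) 0/1 array of binary descriptors; idx: indices at node."""
        if len(idx) <= leaf_size:
            return ("leaf", idx)
        cand = rng.choice(desc.shape[1], size=n_cand, replace=False)
        # Pick the candidate bit whose split is closest to 50/50.
        frac = desc[idx][:, cand].mean(axis=0)
        bit = cand[np.argmin(np.abs(frac - 0.5))]
        left, right = idx[desc[idx, bit] == 0], idx[desc[idx, bit] == 1]
        if len(left) == 0 or len(right) == 0:
            return ("leaf", idx)
        return ("node", bit, build_tree(desc, left, leaf_size, rng),
                             build_tree(desc, right, leaf_size, rng))

    def query(tree, d):
        while tree[0] == "node":
            tree = tree[2] if d[tree[1]] == 0 else tree[3]
        return tree[1]  # candidate indices for exact Hamming re-ranking

    rng = np.random.default_rng(0)
    desc = rng.integers(0, 2, size=(5000, 256), dtype=np.uint8)  # e.g., unpacked bits
    tree = build_tree(desc, np.arange(5000), leaf_size=32, rng=rng)
    print(len(query(tree, desc[0])))
    ```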

  3. Bioaerosol DNA Extraction Technique from Air Filters Collected from Marine and Freshwater Locations

    NASA Astrophysics Data System (ADS)

    Beckwith, M.; Crandall, S. G.; Barnes, A.; Paytan, A.

    2015-12-01

    Bioaerosols are composed of microorganisms suspended in air, including bacteria, fungi, viruses, and protists. Microbes introduced into the atmosphere can drift, primarily by wind, into natural environments different from their point of origin. Although bioaerosols can impact atmospheric dynamics as well as the ecology and biogeochemistry of terrestrial systems, very little is known about the composition of bioaerosols collected from marine and freshwater environments. The first step in determining the composition of airborne microbes is to successfully extract environmental DNA from air filters. We asked 1) can DNA be extracted from quartz (SiO2) air filters? and 2) how can we optimize the DNA yield for downstream metagenomic sequencing? Aerosol filters were collected and archived on a weekly basis from aquatic sites (USA, Bermuda, Israel) over the course of 10 years. We successfully extracted DNA from a subsample of ~20 filters. We modified a DNA extraction protocol (Qiagen) by adding a bead-beating step to mechanically shear cell walls in order to optimize the DNA product, and quantified the yield using a spectrophotometer (Nanodrop 1000). Results indicate that DNA can indeed be extracted from quartz filters. The additional bead-beating step helped increase the yield: up to twice as much DNA product was obtained as when this step was omitted. Moreover, bioaerosol DNA content varies across time. For instance, the DNA extracted from filters from Lake Tahoe, USA collected near the end of June decreased from 9.9 ng/μL in 2007 to 3.8 ng/μL in 2008. Further next-generation sequencing analysis of the extracted DNA will be performed to determine the composition of these microbes. We will also model the meteorological and chemical factors that are good predictors of microbial composition for our samples over time and space.

  4. Optimization of Pressurized Liquid Extraction of Three Major Acetophenones from Cynanchum bungei Using a Box-Behnken Design

    PubMed Central

    Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui

    2012-01-01

    In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal extraction conditions were obtained using a Box-Behnken design consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
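
    The 17-point design follows directly from the Box-Behnken construction for three factors: 12 edge-midpoint runs plus center replicates. The Python sketch below generates the coded design matrix; the choice of five center points is an assumption consistent with the stated total of 17 runs.

    ```python
    # Generate a coded Box-Behnken design: for each pair of factors, all four
    # (+/-1, +/-1) combinations with the remaining factor at 0, plus center runs.
    from itertools import combinations
    import numpy as np

    def box_behnken(n_factors: int = 3, n_center: int = 5) -> np.ndarray:
        runs = []
        for i, j in combinations(range(n_factors), 2):
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0] * n_factors
                    row[i], row[j] = a, b
                    runs.append(row)
        runs.extend([[0] * n_factors] * n_center)
        return np.array(runs)

    design = box_behnken()
    print(design.shape)  # (17, 3): coded levels for e.g. temperature, pressure, time
    ```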

  5. Detection of enteric viruses in shellfish

    USDA-ARS?s Scientific Manuscript database

    Norovirus and hepatitis A virus contamination are significant threats to the safety of shellfish and other foods. Methods for the extraction and assay of these viruses from shellfish are complex, time consuming, and technically challenging. Here, we itemize some of the salient points in extracting...

  6. Radar - 449MHz - Forks, WA (FKS) - Raw Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.
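
    The combination of radial velocities into a wind vector can be sketched as a linear least-squares problem. The beam geometry below (one vertical beam plus four beams tilted 15 degrees off-vertical) is an assumed example, not necessarily the configuration of this particular profiler.

    ```python
    # Doppler beam swinging sketch: solve for (u, v, w) from radial velocities
    # measured along several beam directions at one range gate.
    import numpy as np

    def unit_vector(zenith_deg: float, azimuth_deg: float) -> np.ndarray:
        z, a = np.radians(zenith_deg), np.radians(azimuth_deg)
        # Components toward (east, north, up); azimuth clockwise from north.
        return np.array([np.sin(z) * np.sin(a), np.sin(z) * np.cos(a), np.cos(z)])

    beams = np.array([unit_vector(0, 0), unit_vector(15, 0), unit_vector(15, 90),
                      unit_vector(15, 180), unit_vector(15, 270)])
    true_wind = np.array([8.0, -3.0, 0.1])   # (u, v, w) in m/s, for the demo
    v_radial = beams @ true_wind             # modeled radial velocities
    wind, *_ = np.linalg.lstsq(beams, v_radial, rcond=None)
    print(wind)                              # recovers (u, v, w)
    ```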

  7. Radar - 449MHz - North Bend, OR (OTH) - Raw Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.

  8. Radar - 449MHz - North Bend, OR (OTH) - Reviewed Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.

  9. Radar - 449MHz - Forks, WA (FKS) - Reviewed Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.

  10. Radar - 449MHz - Astoria, OR (AST) - Reviewed Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.

  11. Radar - 449MHz - Astoria, OR (AST) - Raw Data

    DOE Data Explorer

    Gottas, Daniel

    2018-06-25

    **Winds.** A radar wind profiler measures the Doppler shift of electromagnetic energy scattered back from atmospheric turbulence and hydrometeors along 3-5 vertical and off-vertical point beam directions. Back-scattered signal strength and radial-component velocities are remotely sensed along all beam directions and are combined to derive the horizontal wind field over the radar. These data typically are sampled and averaged hourly and usually have 6-m and/or 100-m vertical resolutions up to 4 km for the 915 MHz and 8 km for the 449 MHz systems. **Temperature.** To measure atmospheric temperature, a radio acoustic sounding system (RASS) is used in conjunction with the wind profiler. These data typically are sampled and averaged for five minutes each hour and have a 60-m vertical resolution up to 1.5 km for the 915 MHz and 60 m up to 3.5 km for the 449 MHz. **Moments and Spectra.** The raw spectra and moments data are available for all dwells along each beam and are stored in daily files. For each day, there are files labeled "header" and "data." These files are generated by the radar data acquisition system (LAP-XM) and are encoded in a proprietary binary format. Values of spectral density at each Doppler velocity (FFT point), as well as the radial velocity, signal-to-noise ratio, and spectral width for the selected signal peak, are included in these files. The attached zip files, *449mhz-spectra-data-extraction.zip* and *449mhz-moment-data-extraction.zip*, include executables to unpack the spectra (GetSpectra32.exe) and moments (GetMomSp32.exe), respectively. Documentation on usage and output file formats is also included in the zip files.

  12. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
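
    The second moment matrix underlying such detectors can be sketched in a few lines of Python. This shows only the isotropic structure tensor and a Harris-style cornerness map; the affine adaptation loop is omitted, and all parameter values are assumptions.

    ```python
    # Structure-tensor / second-moment-matrix sketch: Gaussian derivatives,
    # windowed products, and the det - k*trace^2 corner response per pixel.
    import numpy as np
    from scipy import ndimage

    def second_moment_response(img: np.ndarray, sigma_d=1.0, sigma_i=2.0, k=0.04):
        """img: 2D float array; returns a cornerness map of the same shape."""
        ix = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))  # d/dx
        iy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))  # d/dy
        # Integrate products of derivatives over a Gaussian window.
        mxx = ndimage.gaussian_filter(ix * ix, sigma_i)
        mxy = ndimage.gaussian_filter(ix * iy, sigma_i)
        myy = ndimage.gaussian_filter(iy * iy, sigma_i)
        det = mxx * myy - mxy ** 2
        trace = mxx + myy
        return det - k * trace ** 2
    ```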

  13. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.

  14. From data to information and knowledge for geospatial applications

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.; Yoon, T.

    2006-12-01

    An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contain information in an explicit form that an application can use directly. The processing and analysis of data constitutes a real bottleneck; therefore, automating the processes of gaining useful information and knowledge from raw data is of paramount interest. This presentation is concerned with the transition from data to information and knowledge. By data we refer to the sensor output, and we note that data very rarely provide direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) has no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as it pertains to a wide range of geoscience applications. After introducing the paradigm, we describe the major steps to be undertaken in extracting information and knowledge from sensory input data. Features play an important role in this process, so we focus on extracting features and organizing them perceptually into higher-order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is to establish a common reference frame. We elaborate on the concept of sensor-invariant features that allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation in reconstructing topographic surfaces. Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can be readily used by an analyst for applications such as change detection.

  15. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    PubMed

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make better predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and molecular dynamics simulations of functionally important folds could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
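
    Statistical potentials of this kind generally follow the inverse-Boltzmann form below; the notation is assumed for illustration rather than taken from the paper.

    ```latex
    % Contact energy for residue types i and j from observed vs. reference
    % contact frequencies in the chosen structure set:
    e_{ij} = -k_{B}T \,\ln\!\left(\frac{f_{ij}^{\mathrm{obs}}}{f_{ij}^{\mathrm{ref}}}\right)
    ```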

  16. Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium.

    PubMed

    Rusinek, Cory A; Bange, Adam; Papautsky, Ian; Heineman, William R

    2015-06-16

    Cloud point extraction (CPE) is a well-established technique for the preconcentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd(2+)) by anodic stripping voltammetry (ASV). Rather than using the chelating agents commonly employed in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd(2+) and form an extractable ion pair. This offers good selectivity for Cd(2+), as no interferences were observed from other heavy metal ions. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22-25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd(2+) of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode, a 20-fold improvement over the detection limit of ASV without CPE (4.0 ppb). The suitability of this procedure for the analysis of tap and river water samples was demonstrated. This simple, versatile, environmentally friendly, and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods.

  17. Current trends in sample preparation for cosmetic analysis.

    PubMed

    Zhong, Zhixiong; Li, Gongke

    2017-01-01

    The widespread applications of cosmetics in modern life make their analysis particularly important from a safety point of view. There is a wide variety of restricted ingredients and prohibited substances that primarily influence the safety of cosmetics. Sample preparation for cosmetic analysis is a crucial step as the complex matrices may seriously interfere with the determination of target analytes. In this review, some new developments (2010-2016) in sample preparation techniques for cosmetic analysis, including liquid-phase microextraction, solid-phase microextraction, matrix solid-phase dispersion, pressurized liquid extraction, cloud point extraction, ultrasound-assisted extraction, and microwave digestion, are presented. Furthermore, the research and progress in sample preparation techniques and their applications in the separation and purification of allowed ingredients and prohibited substances are reviewed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. [Determination of heavy metals for RoHS compliance by ICP-OES spectrometry coupled with microwave extraction system].

    PubMed

    Hua, Li; Wu, Yi-Ping; An, Bing; Lai, Xiao-Wei

    2008-11-01

    The environmental harm of heavy metals contained in electronic and electrical equipment (EEE) is of high concern. Aiming to handle the great challenge of RoHS compliance, the determination of trace or ultratrace chromium (Cr), cadmium (Cd), mercury (Hg) and lead (Pb) by inductively coupled plasma optical emission spectrometry (ICP-OES) was performed in the present paper, with microwave extraction technology used to prepare the sample solutions. In addition, the precision, recovery, repeatability and interference issues of this method were also discussed. The results showed that using the microwave extraction system to prepare samples is quicker, lossless and contamination-free in comparison with conventional extraction methods such as dry ashing, wet-oven extraction etc. The recoveries of these four heavy metals over different working times and wavelengths fell in the good range of 85% to 115%, showing that there was only tiny loss or contamination during microwave extraction, sample introduction and ICP detection. Repeatability experiments proved that the ICP plasma had good stability during the working time and that the matrix effect was small. Interference is a troublesome problem for atomic absorption spectrometry (AAS); however, the techniques of standard additions or the inter-element correction (IEC) method can effectively eliminate the interferences of Ni, As, Fe etc. with the Cd determination. By employing multiple wavelengths and two-point correction methods, the issues of background curve sloping shift and spectral overlap were successfully overcome. Besides, for the determination of trace heavy metal elements, the relative standard deviation (RSD) was less than 3% and the detection limits were less than 1 μg L(-1) (3σ, n = 5) for samples, standard solutions, and standard additions, which proved that ICP-OES has good precision and high reliability. This provides reliable technical support for electronic and electrical (EE) industries to comply with the RoHS directive.

  19. Quantitation of repaglinide and metabolites in mouse whole-body thin tissue sections using droplet-based liquid microjunction surface sampling-high-performance liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Chen, Weiqi; Wang, Lifei; Van Berkel, Gary J; Kertesz, Vilmos; Gan, Jinping

    2016-03-25

    Herein, quantitation aspects of a fully automated autosampler/HPLC-MS/MS system applied to unattended droplet-based surface sampling of repaglinide-dosed thin tissue sections, with subsequent HPLC separation and mass spectrometric analysis of the parent drug and various drug metabolites, were studied. Major organs (brain, lung, liver, kidney and muscle) from whole-body thin tissue sections and corresponding organ homogenates prepared from repaglinide-dosed mice were sampled by surface sampling and by bulk extraction, respectively, and analyzed by HPLC-MS/MS. A semi-quantitative agreement between the data obtained by surface sampling and by organ homogenate extraction was observed. Drug concentrations obtained by the two methods followed the same patterns across post-dose time points (0.25, 0.5, 1 and 2 h). Drug amounts determined in the specific tissues were typically higher when analyzing extracts from the organ homogenates. In addition, relative comparison of the levels of individual metabolites between the two analytical methods also revealed good semi-quantitative agreement. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Multiplexed Sequence Encoding: A Framework for DNA Communication.

    PubMed

    Zakeri, Bijan; Carr, Peter A; Lu, Timothy K

    2016-01-01

    Synthetic DNA has great propensity for efficiently and stably storing non-biological information. With DNA writing and reading technologies rapidly advancing, new applications for synthetic DNA are emerging in data storage and communication. Traditionally, DNA communication has focused on the encoding and transfer of complete sets of information. Here, we explore the use of DNA for the communication of short messages that are fragmented across multiple distinct DNA molecules. We identified three pivotal points in a communication (data encoding, data transfer, and data extraction) and developed novel tools to enable communication via molecules of DNA. To address data encoding, we designed DNA-based individualized keyboards (iKeys) to convert plaintext into DNA, while reducing the occurrence of DNA homopolymers to improve synthesis and sequencing processes. To address data transfer, we implemented a secret-sharing system, Multiplexed Sequence Encoding (MuSE), that conceals messages between multiple distinct DNA molecules, requiring a combination key to reveal them. To address data extraction, we achieved the first instance of chromatogram patterning through multiplexed sequencing, thereby enabling a new method for data extraction. We envision these approaches will enable more widespread communication of information via DNA.
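
    A toy sketch in the spirit of MuSE is given below: a message is split into XOR secret shares, none of which reveals anything alone, and each share is written in a 2-bit-per-base DNA alphabet. The scheme details are assumptions for illustration, not the authors' encoding.

    ```python
    # XOR secret sharing over DNA-encoded "strands": all shares are required
    # (the combination key) to recover the message.
    import secrets
    from functools import reduce

    BASES = "ACGT"

    def to_dna(data: bytes) -> str:
        return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

    def from_dna(seq: str) -> bytes:
        v = [BASES.index(c) for c in seq]
        return bytes((v[i] << 6) | (v[i+1] << 4) | (v[i+2] << 2) | v[i+3]
                     for i in range(0, len(v), 4))

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_shares(msg: bytes, n: int) -> list[str]:
        rand = [secrets.token_bytes(len(msg)) for _ in range(n - 1)]
        last = reduce(xor, rand, msg)   # msg XOR all random shares
        return [to_dna(s) for s in rand + [last]]

    def reveal(shares: list[str]) -> bytes:
        return reduce(xor, (from_dna(s) for s in shares))

    strands = make_shares(b"MEET AT NOON", 3)
    print(reveal(strands))              # b'MEET AT NOON' only with all shares
    ```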
