Fluency heuristic: a model of how the mind exploits a by-product of information retrieval.
Hertwig, Ralph; Herzog, Stefan M; Schooler, Lael J; Reimer, Torsten
2008-09-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world. (c) 2008 APA, all rights reserved.
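As a rough illustration of the decision rule the fluency heuristic describes, consider the following Python sketch; the retrieval times, threshold, and function names are hypothetical, chosen only to make the rule concrete.

    def fluency_choice(time_a, time_b, threshold_ms=100):
        """Pick the more fluently retrieved object, i.e. the one recognized
        faster; return None when the difference is too small to discriminate."""
        if abs(time_a - time_b) < threshold_ms:
            return None  # retrieval fluencies are not discriminable
        return "a" if time_a < time_b else "b"

    # Example: object A retrieved in 320 ms, object B in 540 ms -> infer that
    # A has the larger criterion value (e.g., city population).
    print(fluency_choice(320, 540))  # -> 'a'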
A semi-automatic method for extracting thin line structures in images as rooted tree network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brazzini, Jacopo; Dillard, Scott; Soille, Pierre
2010-01-01
This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts and consisting in minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
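As a rough sketch of the minimum cost path ingredient of such a method, the following Python fragment traces a cheapest path through a toy image with scikit-image; the cost definition here is a simple isotropic one, whereas the paper designs an anisotropic metric from the gradient structure tensor.

    import numpy as np
    from skimage.graph import route_through_array

    # Toy image: a dark curvilinear structure (cheap to traverse) on a bright
    # background; real inputs would be satellite or medical images.
    img = np.ones((64, 64))
    rows = np.arange(64)
    img[rows, (20 + 10 * np.sin(rows / 8)).astype(int)] = 0.05

    cost = img + 1e-3  # darker pixels -> lower traversal cost
    path, total = route_through_array(cost, (0, 20), (63, 25),
                                      fully_connected=True, geometric=True)
    print(len(path), round(total, 2))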
Automatic laser welding and milling with in situ inline coherent imaging.
Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M
2014-11-01
Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.
Automatic SAR/optical cross-matching for GCP monograph generation
NASA Astrophysics Data System (ADS)
Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa
2016-10-01
Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for an automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images, and consequently the automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps, in order to progressively refine the precision. The first step is based on the Mutual Information (MI) maximization between optical and SAR chips, while the last one uses the Normalized Cross-Correlation as similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm w.r.t. the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
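A minimal sketch of the two similarity metrics used in the cross-matching, in Python with numpy; the chips and the search loop around them are assumed, and a real implementation would operate on resampled SAR/optical chips.

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized chips."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    def mutual_information(a, b, bins=32):
        """Histogram-based mutual information of two equally sized chips."""
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = h / h.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    # Coarse step: slide the SAR chip over the optical search window and keep
    # the offset maximizing MI; fine step: refine that offset with NCC.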
Robust automatic line scratch detection in films.
Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick
2014-03-01
Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters is ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
Oliveira, Hugo M; Segundo, Marcela A; Lima, José L F C; Miró, Manuel; Cerdà, Victor
2010-05-01
In the present work, an on-line automatic renewable molecularly imprinted solid-phase extraction (MISPE) protocol for sample preparation prior to liquid chromatographic analysis is proposed for the first time. The automatic microscale procedure was based on the bead injection (BI) concept under the lab-on-valve (LOV) format, using a multisyringe burette as propulsion unit for handling solutions and suspensions. High precision in handling the suspensions containing irregularly shaped molecularly imprinted polymer (MIP) particles was attained, enabling the use of commercial MIP as renewable sorbent. The features of the proposed BI-LOV manifold also allowed a strict control of the different steps within the extraction protocol, which are essential for promoting selective interactions in the cavities of the MIP. By using this on-line method, it was possible to extract and quantify riboflavin from different foodstuff samples in the range between 0.450 and 5.00 mg L(-1) after processing 1,000 microL of sample (infant milk, pig liver extract, and energy drink) without any prior treatment. For milk samples, LOD and LOQ values were 0.05 and 0.17 mg L(-1), respectively. The method was successfully applied to the analysis of two certified reference materials (NIST 1846 and BCR 487) with high precision (RSD < 5.5%). Considering the downscaling and simplification of the sample preparation protocol and the simultaneous performance of extraction and chromatographic assays, a cost-effective and enhanced-throughput (six determinations per hour) methodology for the determination of riboflavin in foodstuff samples is deployed here.
Automated UAV-based video exploitation using service oriented architecture framework
NASA Astrophysics Data System (ADS)
Se, Stephen; Nadeau, Christian; Wood, Scott
2011-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
ERIC Educational Resources Information Center
Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa
2009-01-01
In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…
Research progress of on-line automatic monitoring of chemical oxygen demand (COD) of water
NASA Astrophysics Data System (ADS)
Cai, Youfa; Fu, Xing; Gao, Xiaolu; Li, Lianyin
2018-02-01
With the increasingly strict control of pollutant emission in China, on-line automatic monitoring of water quality is particularly urgent. The chemical oxygen demand (COD) is a comprehensive index measuring the contamination caused by organic matter, and it is thus taken as an important index of energy saving and emission reduction in China's "Twelfth Five-Year" program. So far, COD on-line automatic monitoring instruments have played an important role in the field of sewage monitoring. This paper reviews the existing methods for achieving on-line automatic monitoring of COD and, on that basis, points out the future trend of COD on-line automatic monitoring instruments.
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral bands, or the multispectral and panchromatic bands, are not all acquired at exactly the same time. This effect is due to offsets of some millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for the automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects: since the objects are mapped at different positions only in different spectral bands, the change of spectral properties also has to be taken into account. In the case where the main distance in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach based on weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected, how many are missed - and accuracy - how accurate the derived speed and size of the objects are. Finally, the results are discussed and an outlook on possible improvements towards operational processing is given.
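The core of such a detector can be sketched in a few lines of Python: difference two co-registered, radiometrically adjusted bands acquired a fraction of a second apart and extract compact blobs; thresholds and sizes below are illustrative only.

    import numpy as np
    from scipy import ndimage

    def detect_movers(band_t0, band_t1, rel_thresh=0.15, min_pix=4):
        """Moving objects show up as paired bright/dark blobs in the
        difference of two bands acquired dt seconds apart."""
        diff = band_t1.astype(float) - band_t0.astype(float)
        mask = np.abs(diff) > rel_thresh * np.abs(diff).max()
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        centers = ndimage.center_of_mass(mask, labels, index=range(1, n + 1))
        blobs = [c for c, s in zip(centers, sizes) if s >= min_pix]
        return diff, blobs  # object speed ~ blob displacement / dt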
An algorithm for power line detection and warning based on a millimeter-wave radar video.
Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting
2011-12-01
Power-line-strike accident is a major safety threat for low-flying aircrafts such as helicopters, thus an automatic warning system to power lines is highly desirable. In this paper we propose an algorithm for detecting power lines from radar videos from an active millimeter-wave sensor. Hough Transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. The noise points could fall on the same line which results in signal peaks after Hough Transform similar to the actual cable lines. To differentiate the cable lines from the noise lines, we train a Support Vector Machine to perform the classification. We exploit the Bragg pattern, which is due to the diffraction of electromagnetic wave on the periodic surface of power lines. We propose a set of features to represent the Bragg pattern for the classifier. We also propose a slice-processing algorithm which supports parallel processing, and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which temporal correlation of the cable pattern across frames is used to make the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm. © 2011 IEEE
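A skeleton of the detection pipeline might look as follows in Python with OpenCV and scikit-learn; the Bragg-pattern feature extraction is only indicated, since its exact form is specific to the paper.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    edges = np.zeros((256, 256), np.uint8)
    cv2.line(edges, (10, 200), (250, 60), 255, 1)   # synthetic "cable" edge map

    # Step 1: Hough Transform proposes candidate lines (rho, theta).
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)

    # Step 2: an SVM separates cables from noise lines using features that
    # describe the Bragg pattern sampled along each candidate (not shown).
    clf = SVC(kernel="rbf")
    # clf.fit(bragg_features_train, labels_train)   # trained offline
    print(0 if lines is None else len(lines))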
Stock, Ann-Kathrin; Steenbergen, Laura; Colzato, Lorenza; Beste, Christian
2016-12-01
Cognitive control is adaptive in the sense that it inhibits automatic processes to optimize goal-directed behavior, but high levels of control may also have detrimental effects in case they suppress beneficial automatisms. Until now, the system neurophysiological mechanisms and functional neuroanatomy underlying these adverse effects of cognitive control have remained elusive. This question was examined by analyzing the automatic exploitation of a beneficial implicit predictive feature under conditions of high versus low cognitive control demands, combining event-related potentials (ERPs) and source localization. It was found that cognitive control prohibits the beneficial automatic exploitation of additional implicit information when task demands are high. Bottom-up perceptual and attentional selection processes (P1 and N1 ERPs) are not modulated by this, but the automatic exploitation of beneficial predictive information in case of low cognitive control demands was associated with larger response-locked P3 amplitudes and stronger activation of the right inferior frontal gyrus (rIFG, BA47). This suggests that the rIFG plays a key role in the detection of relevant task cues, the exploitation of alternative task sets, and the automatic (bottom-up) implementation and reprogramming of action plans. Moreover, N450 amplitudes were larger under high cognitive control demands, which was associated with activity differences in the right medial frontal gyrus (BA9). This most likely reflects a stronger exploitation of explicit task sets which hinders the exploration of the implicit beneficial information in case of high cognitive control demands. Hum Brain Mapp 37:4511-4522, 2016. © 2016 Wiley Periodicals, Inc.
Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval
ERIC Educational Resources Information Center
Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten
2008-01-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…
Automated kidney detection for 3D ultrasound using scan line searching
NASA Astrophysics Data System (ADS)
Noll, Matthias; Nadolny, Anne; Wesarg, Stefan
2016-04-01
Ultrasound (U/S) is a fast and non-expensive imaging modality that is used for the examination of various anatomical structures, e.g. the kidneys. One important task for automatic organ tracking or computer-aided diagnosis is the identification of the organ region. During this process the exact information about the transducer location and orientation is usually unavailable. This renders the implementation of such automatic methods exceedingly challenging. In this work we introduce a new automatic method for the detection of the kidney in 3D U/S images. This novel technique analyses the U/S image data along virtual scan lines, searching for the characteristic texture changes that occur when entering and leaving the symmetric tissue regions of the renal cortex. A subsequent feature accumulation along a second scan direction produces a 2D heat map of renal cortex candidates, from which the kidney location is extracted in two steps. First, the strongest candidate as well as its counterpart are extracted by heat map intensity ranking and renal cortex size analysis. This process exploits the heat map gap caused by the renal pelvis region. Substituting the renal pelvis detection with this combined cortex tissue feature increases the detection robustness. In contrast to model based methods that generate characteristic pattern matches, our method is simpler and therefore faster. An evaluation performed on 61 3D U/S data sets showed that in the 55 cases with no or only minor shadowing, the kidney location could be correctly identified.
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
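The epipolar test at the heart of the node matching can be sketched as follows (numpy, hypothetical variable names): given the fundamental matrix F, a right-image node is accepted as the match of a left-image node when it is the one closest to the corresponding epipolar line.

    import numpy as np

    def epipolar_distance(F, x_left, x_right):
        """Distance of right-image point x_right from the epipolar line of
        left-image point x_left (both given as (x, y) pixel coordinates)."""
        l = F @ np.array([x_left[0], x_left[1], 1.0])
        return abs(l @ np.array([x_right[0], x_right[1], 1.0])) / np.hypot(l[0], l[1])

    def match_node(F, node_left, nodes_right):
        """Index of the right-image node closest to the epipolar line."""
        return int(np.argmin([epipolar_distance(F, node_left, n)
                              for n in nodes_right]))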
Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis
NASA Astrophysics Data System (ADS)
Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.
2013-11-01
Nowadays, during international negotiations on separating disputed areas, manual adjustment is the only method applied to the match between the delimitation line and the real terrain, which not only consumes considerable time and labor, but also cannot ensure high precision. Concerning that, this paper mainly explores the automatic match between them and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is acquired through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Model Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from several aspects, including delimitation laws and two-sided benefits. The conclusion is that the automatic match method is feasible and effective.
Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Marghany, Maged
2016-10-01
In this work, a genetic algorithm is exploited for the automatic detection of oil spills of small and large size. The procedure is carried out using arrays of RADARSAT-2 SAR ScanSAR Narrow single beam data obtained in the Gulf of Mexico. The study shows that the genetic algorithm automatically segmented the dark spot patches related to small and large oil spill pixels. This conclusion is confirmed by the receiver operating characteristic (ROC) curve and by documented ground data. The ROC curve indicates that the existence of oil slick footprints can be identified with an area of 90% between the ROC curve and the no-discrimination line, which is greater than that of the other surrounding environmental features. Small oil spills represented 30% of the discriminated oil spill pixels in the ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size, and the ScanSAR Narrow single beam mode serves as an excellent sensor for oil spill pattern detection and surveying in the Gulf of Mexico.
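The ROC figure of merit quoted above corresponds to the usual area-under-curve computation; a minimal sketch with scikit-learn on made-up pixel labels and scores:

    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1, 1, 0, 1, 0]                      # hypothetical labels
    y_score = [0.1, 0.3, 0.8, 0.7, 0.9, 0.65, 0.6, 0.2]    # classifier scores
    print(roc_auc_score(y_true, y_score))                  # -> 0.9375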
Challenges in automatic sorting of construction and demolition waste by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Hollstein, Frank; Cacho, Íñigo; Arnaiz, Sixto; Wohllebe, Markus
2016-05-01
EU-28 countries currently generate 460 Mt/year of construction and demolition waste (C&DW), and the generation rate is expected to reach around 570 Mt/year between 2025 and 2030. There is great potential for recycling C&DW materials since they are massively produced and contain valuable resources. But new C&DW is more complex than the existing stream, and there is a need for shifting from traditional recycling approaches to novel recycling solutions. One basic step towards this objective is an improvement in (automatic) sorting technology. Hyperspectral Imaging is a promising candidate to support the process. However, the industrial uptake of Hyperspectral Imaging in the C&DW recycling branch is currently limited due to high investment costs, the still insufficient robustness of optical sensor hardware in harsh ambient conditions and, because of the need for sensor fusion, the lack of well-engineered software methods to perform the (on-line) sorting tasks. Frame rates of over 300 Hz are needed for a successful sorting result. Currently the biggest challenges with regard to C&DW detection concern the need for overlapping VIS, NIR and SWIR hyperspectral images in time and space, in particular for the selective recognition of contaminated particles. In the present study a new approach for hyperspectral imagers is presented, exploiting SWIR hyperspectral information in real time (at 300 Hz). The contribution describes both laboratory results on the optical detection of the most important C&DW material composites and a development path for an industrial implementation in automatic sorting and separation lines. The main focus is placed on closing the two recycling circuits "grey to grey" and "red to red" because of their outstanding potential for sustainability in the conservation of construction resources.
Design of Provider-Provisioned Website Protection Scheme against Malware Distribution
NASA Astrophysics Data System (ADS)
Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka
Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
Automatic systems and the low-level wind hazard
NASA Technical Reports Server (NTRS)
Schaeffer, Dwight R.
1987-01-01
Automatic flight control systems provide means for significantly enhancing survivability in severe wind hazards. The technology required to produce the necessary control algorithms is available and has been made technically feasible by the advent of digital flight control systems and accurate, low-noise sensors, especially strap-down inertial sensors. The application of this technology and these means has not generally been enabled except for automatic landing systems, and even then the potential has not been fully exploited. Fully exploiting the potential of automatic systems for enhancing safety in wind hazards requires providing incentives, creating demand, inspiring competition, providing education, and eliminating prejudicial disincentives to overcome the economic penalties associated with the extensive and risky development and certification of these systems. If these changes come about at all, it will likely be through changes in the regulations provided by the certifying agencies.
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
Bayesian least squares deconvolution
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
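As a minimal sketch of the underlying linear model (assuming white noise and a plain Gaussian prior in place of the paper's full GP treatment): the observed spectrum is v = M z + n, with M the line mask and z the common profile, and the regularized LSD solution is the usual penalized least squares estimate.

    import numpy as np

    def lsd_profile(M, v, noise_var, prior_cov=None):
        """Posterior-mean LSD profile for v = M @ z + n with white noise of
        variance noise_var; prior_cov adds a Gaussian prior on z (a crude
        stand-in for the Gaussian-process prior of the paper)."""
        A = M.T @ M / noise_var
        b = M.T @ v / noise_var
        if prior_cov is not None:
            A = A + np.linalg.inv(prior_cov)
        return np.linalg.solve(A, b)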
ERIC Educational Resources Information Center
Kurtz, Peter; And Others
This report is concerned with the implementation of two interrelated computer systems: an automatic document analysis and classification package, and an on-line interactive information retrieval system which utilizes the information gathered during the automatic classification phase. Well-known techniques developed by Salton and Dennis have been…
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products getting into the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A particular feature of continuous sampling control of multi-element product completeness in the assembly process is that the inspection is destructive, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness during assembly on automatic rotor lines requires sampling plans that ensure a minimum size of control samples. Comparison of the limit values of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that lower limit values of the average output defect level can be achieved using the ACSP-1. Also, the average sample size when using the ACSP-1 plan is smaller than when using the CSP-1 plan. Thus, the application of statistical methods to the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous sampling control, will allow automating the sampling control procedures and ensuring the required level of quality of the assembled products while minimizing the sample size.
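For reference, the classical CSP-1 plan mentioned above works as follows: inspect every unit until i consecutive conforming units are found, then inspect only a random fraction f, returning to 100% inspection whenever a defect is found. A Monte-Carlo sketch in Python (parameters are illustrative):

    import random

    def csp1_fraction_inspected(p, i=50, f=0.1, n=100_000, seed=1):
        """Average fraction of units inspected under CSP-1 for a process
        with defect probability p."""
        rng = random.Random(seed)
        screening, run, inspected = True, 0, 0
        for _ in range(n):
            defective = rng.random() < p
            if screening:
                inspected += 1
                run = 0 if defective else run + 1
                screening = run < i
            elif rng.random() < f:
                inspected += 1
                if defective:
                    screening, run = True, 0
        return inspected / n

    print(csp1_fraction_inspected(p=0.01))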
NASA Astrophysics Data System (ADS)
Wasserthal, Christian; Engel, Karin; Rink, Karsten; Brechmann, André
We propose an automatic procedure for the correct segmentation of grey and white matter in MR data sets of the human brain. Our method exploits general anatomical knowledge for the initial segmentation and for the subsequent refinement of the estimation of the cortical grey matter. Our results are comparable to manual segmentations.
Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.
2009-01-01
In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.
Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara
2014-12-01
Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with script languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering an accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines. © 2014 The Authors Genes to Cells © 2014 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
Fully automatic bone age estimation from left hand MR images.
Stern, Darko; Ebner, Thomas; Bischof, Horst; Grassegger, Sabine; Ehammer, Thomas; Urschler, Martin
2014-01-01
There has recently been an increased demand for bone age estimation (BAE) of living individuals and human remains in legal medicine applications. A severe drawback of established BAE techniques based on X-ray images is radiation exposure, since many countries prohibit scanning involving ionizing radiation without diagnostic reasons. We propose a completely automated method for BAE based on volumetric hand MRI images. On our database of 56 male Caucasian subjects between 13 and 19 years, we are able to estimate the subjects' age with a mean difference of 0.85 ± 0.58 years compared to the chronological age, which is in line with radiologists' results using established radiographic methods. We see this work as a promising first step towards a novel MRI-based bone age estimation system, with the key benefits of lacking exposure to ionizing radiation and higher accuracy due to exploitation of volumetric data.
Development of monitoring and control system for a mine main fan based on frequency converter
NASA Astrophysics Data System (ADS)
Zhang, Y. C.; Zhang, R. W.; Kong, X. Z.; Y Gong, J.; Chen, Q. G.
2013-12-01
In the process of mine exploitation, the required air flow rate often changes. The traditional fan control procedure is complex, and it is hard to meet the worksite air requirements. The present system is based on a Principal Computer (PC) monitoring system and a high performance PLC control system. In this system, a frequency converter is used to adjust the fan speed, so that the air flow at the worksite can be regulated steplessly. The functions of the monitoring and control system comprise on-line monitoring and centralized control. The system can monitor the parameters of the fan in real time, control the operation of the frequency converter, and control the fan and its accessory equipment. At the same time, the automation level of the system is high: the field equipment can be monitored and controlled automatically. The system is thus an important safeguard for mine production.
Using Multithreading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Bailey, David H. (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system which offers sufficient capabilities to tackle this problem. We implement the adaptation phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
VizieR Online Data Catalog: Hubble Legacy Archive ACS grism data (Kuemmel+, 2011)
NASA Astrophysics Data System (ADS)
Kuemmel, M.; Rosati, P.; Fosbury, R.; Haase, J.; Hook, R. N.; Kuntschner, H.; Lombardi, M.; Micol, A.; Nilsson, K. K.; Stoehr, F.; Walsh, J. R.
2011-09-01
A public release of slitless spectra, obtained with ACS/WFC and the G800L grism, is presented. Spectra were automatically extracted in a uniform way from 153 archival fields (or "associations") distributed across the two Galactic caps, covering all observations to 2008. The ACS G800L grism provides a wavelength range of 0.55-1.00um, with a dispersion of 40Å/pixel and a resolution of ~80Å for point-like sources. The ACS G800L images and matched direct images were reduced with an automatic pipeline that handles all steps from archive retrieval, alignment and astrometric calibration, direct image combination, catalogue generation, spectral extraction and collection of metadata. The large number of extracted spectra (73,581) demanded automatic methods for quality control and an automated classification algorithm was trained on the visual inspection of several thousand spectra. The final sample of quality controlled spectra includes 47919 datasets (65% of the total number of extracted spectra) for 32149 unique objects, with a median iAB-band magnitude of 23.7, reaching 26.5 AB for the faintest objects. Each released dataset contains science-ready 1D and 2D spectra, as well as multi-band image cutouts of corresponding sources and a useful preview page summarising the direct and slitless data, astrometric and photometric parameters. This release is part of the continuing effort to enhance the content of the Hubble Legacy Archive (HLA) with highly processed data products which significantly facilitate the scientific exploitation of the Hubble data. In order to characterize the slitless spectra, emission-line flux and equivalent width sensitivity of the ACS data were compared with public ground-based spectra in the GOODS-South field. An example list of emission line galaxies with two or more identified lines is also included, covering the redshift range 0.2-4.6. Almost all redshift determinations outside of the GOODS fields are new. The scope of science projects possible with the ACS slitless release data is large, from studies of Galactic stars to searches for high redshift galaxies. (3 data files).
The Hubble Legacy Archive ACS grism data
NASA Astrophysics Data System (ADS)
Kümmel, M.; Rosati, P.; Fosbury, R.; Haase, J.; Hook, R. N.; Kuntschner, H.; Lombardi, M.; Micol, A.; Nilsson, K. K.; Stoehr, F.; Walsh, J. R.
2011-06-01
A public release of slitless spectra, obtained with ACS/WFC and the G800L grism, is presented. Spectra were automatically extracted in a uniform way from 153 archival fields (or "associations") distributed across the two Galactic caps, covering all observations to 2008. The ACS G800L grism provides a wavelength range of 0.55-1.00 μm, with a dispersion of 40 Å/pixel and a resolution of ~80 Å for point-like sources. The ACS G800L images and matched direct images were reduced with an automatic pipeline that handles all steps from archive retrieval, alignment and astrometric calibration, direct image combination, catalogue generation, spectral extraction and collection of metadata. The large number of extracted spectra (73,581) demanded automatic methods for quality control and an automated classification algorithm was trained on the visual inspection of several thousand spectra. The final sample of quality controlled spectra includes 47 919 datasets (65% of the total number of extracted spectra) for 32 149 unique objects, with a median iAB-band magnitude of 23.7, reaching 26.5 AB for the faintest objects. Each released dataset contains science-ready 1D and 2D spectra, as well as multi-band image cutouts of corresponding sources and a useful preview page summarising the direct and slitless data, astrometric and photometric parameters. This release is part of the continuing effort to enhance the content of the Hubble Legacy Archive (HLA) with highly processed data products which significantly facilitate the scientific exploitation of the Hubble data. In order to characterize the slitless spectra, emission-line flux and equivalent width sensitivity of the ACS data were compared with public ground-based spectra in the GOODS-South field. An example list of emission line galaxies with two or more identified lines is also included, covering the redshift range 0.2 - 4.6. Almost all redshift determinations outside of the GOODS fields are new. The scope of science projects possible with the ACS slitless release data is large, from studies of Galactic stars to searches for high redshift galaxies.
Finding geospatial pattern of unstructured data by clustering routes
NASA Astrophysics Data System (ADS)
Boustani, M.; Mattmann, C. A.; Ramirez, P.; Burke, W.
2016-12-01
Today the majority of generated data has a geospatial context, either in attribute form as a latitude or longitude, or as a location name, or cross-referenceable using other means such as an external gazetteer or location service. Our research is interested in exploiting geospatial location and context in unstructured data such as that found on the web in HTML pages, images, videos, documents, and other areas, and in structured information repositories found on intranets, in scientific environments, and elsewhere. We are working together on the DARPA MEMEX project to exploit open source software tools such as the Lucene Geo Gazetteer, Apache Tika, Apache Lucene, and Apache OpenNLP, to automatically extract, and make meaning out of, geospatial information. In particular, we are interested in unstructured descriptors, e.g., a phone number or a named entity, and in the ability to automatically learn geospatial paths related to these descriptors. For example, a particular phone number may represent an entity that travels on a monthly basis, according to patterns that are sometimes easily identifiable and sometimes more difficult to track. We will present a set of automatic techniques to extract descriptors and then to geospatially infer their paths across unstructured data.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
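The mode-counting step can be sketched compactly: the number of (locally) active dynamical modes is the number of singular values of the sensitivity matrix that stay above an error tolerance. A numpy sketch with an illustrative tolerance:

    import numpy as np

    def active_modes(sensitivity, rel_tol=1e-3):
        """Count singular values above rel_tol relative to the largest one."""
        sv = np.linalg.svd(sensitivity, compute_uv=False)
        return int((sv > rel_tol * sv[0]).sum())

    # Toy sensitivity matrix with two dominant directions:
    S = np.random.default_rng(0).normal(size=(10, 10)) @ np.diag(
        [1, 1, 1e-6, 1e-6, 1e-6, 0, 0, 0, 0, 0])
    print(active_modes(S))  # -> 2: only two dominant modes survive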
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system which offers sufficient capabilities to tackle this problem. We implement the adaptation phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (≈3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
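A minimal sketch of such a sampling scheme, assuming a hypothetical three-line list: since the Voigt profile is the convolution of a Gaussian and a Lorentzian, a Voigt-distributed deviate is simply a Gaussian deviate plus a Cauchy deviate, so depositing strength/n per sampled packet reproduces the line shape while exactly preserving each line's integrated opacity.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.linspace(-50.0, 50.0, 2001)      # spectral grid (arbitrary units)
    opacity = np.zeros_like(grid)
    sigma, gamma = 0.5, 0.2                    # Gaussian / Lorentzian widths

    lines = [(-10.0, 50.0), (3.0, 0.2), (12.0, 1.0)]   # (position, strength)
    for pos, strength in lines:
        n = max(10, int(200 * strength))       # more samples for stronger lines
        x = pos + rng.normal(0.0, sigma, n) + gamma * rng.standard_cauchy(n)
        idx = np.searchsorted(grid, x).clip(0, grid.size - 1)
        np.add.at(opacity, idx, strength / n)  # equal-weight opacity packets

    print(round(opacity.sum(), 2))             # ~ sum of line strengths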
A Network of Automatic Control Web-Based Laboratories
ERIC Educational Resources Information Center
Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian
2011-01-01
This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…
Some Problems of Exploitation of Jet Turbine Aircraft Engines of Lot Polish Air Lines,
1977-04-26
Foreign Technology Division, Wright-Patterson AFB, Ohio. Some Problems of Exploitation of Jet Turbine Aircraft Engines of LOT Polish Air Lines. By: Andrzej Slodownik, M. Eng. Source: Technika Lotnicza i Astronautyczna. Report no. FTD-ID(RS)I-01475-77.
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
System for automatically switching transformer coupled lines
NASA Technical Reports Server (NTRS)
Dwinell, W. S. (Inventor)
1979-01-01
A system is presented for automatically controlling transformer coupled alternating current electric lines. The secondary winding of each transformer is provided with a center tap. A switching circuit is connected to the center taps of a pair of secondary windings and includes a switch controller. An impedance is connected between the center taps of the opposite pair of secondary windings. The switching circuit has continuity when the AC lines are continuous and discontinuity with any disconnect of the AC lines. Normally open switching means are provided in at least one AC line. The switch controller automatically opens the switching means when the AC lines become separated.
Reading between the Lines: Accessing Information via "Youtube's" Automatic Captioning
ERIC Educational Resources Information Center
Smith, Chad; Allman, Tamby; Crocker, Samantha
2017-01-01
This study and discussion center upon the use of "YouTube's" automatic captioning feature with college-age adult readers. The study required 75 participants with college experience to view brief middle school science videos with automatic captioning on "YouTube" and answer comprehension questions based on material presented…
NASA Astrophysics Data System (ADS)
Moshavegh, Ramin; Hansen, Kristoffer Lindskov; Møller Sørensen, Hasse; Hemmsen, Martin Christian; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-04-01
This paper presents a novel automatic method for the detection of B-lines (comet-tail artifacts) in lung ultrasound scans. B-lines are the artifacts most commonly used for analyzing pulmonary edema. They appear as laser-like vertical beams, which arise from the pleural line and spread down without fading to the edge of the screen. An increase in their number is associated with the presence of edema. All the scans used in this study were acquired using a BK3000 ultrasound scanner (BK Ultrasound, Denmark) driving a 192-element 5.5 MHz wide linear transducer (10L2W, BK Ultrasound). The dynamic received focus technique was employed to generate the sequences. Six subjects, three patients after major surgery and three normal subjects, were scanned once, and six ultrasound sequences each containing 50 frames were acquired. The proposed algorithm was applied to all 300 in-vivo lung ultrasound images. The pleural line is first segmented on each image, and then the B-line artifacts spreading down from the pleural line are detected and overlaid on the image. The resulting 300 images showed that the mean lateral distance between B-lines detected on images acquired from patients decreased by 20% compared with that of normal subjects. Therefore, the method can be used as the basis of a method of automatically and qualitatively characterizing the distribution of B-lines.
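The lateral localization of B-lines lends itself to a simple column-profile sketch (Python; parameter values are illustrative, not those of the paper): below the segmented pleural line, B-lines appear as bright vertical streaks, so peaks of the column-wise mean intensity mark their candidate positions.

    import numpy as np
    from scipy.signal import find_peaks

    def candidate_b_lines(img, pleural_row, rel_height=0.6, min_sep=10):
        """Column indices of candidate B-lines in a 2D grayscale lung scan."""
        profile = img[pleural_row:, :].mean(axis=0)
        peaks, _ = find_peaks(profile,
                              height=rel_height * profile.max(),
                              distance=min_sep)
        return peaks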
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
Cerveri, Pietro; Manzotti, Alfonso; Confalonieri, Norberto; Baroni, Guido
2014-12-01
Personalized resection guides (PRG) have been recently proposed in the domain of knee replacement, demonstrating clinical outcomes similar or even superior to both manual and navigated interventions. Among the mandatory pre-surgical steps for PRG prototyping, the measurement of clinical landmarks (CL) on the bony surfaces is recognized as a key issue due to the lack of standardized methodologies, operator-dependent variability and time expenditure. In this paper, we focus on the reliability and repeatability of the anterior-posterior axis of the distal femur, also known as the Whiteside line (WL), proposing automatic surface processing and modeling methods aimed at overcoming some of the major concerns related to the manual identification of such a CL on 2D images and 3D models. We show that the measurement of the WL, exploiting the principle of mean-shifting surface curvature, is highly repeatable and coherent with clinical knowledge. Copyright © 2014 Elsevier Ltd. All rights reserved.
Horstkotte, Burkhard; Alonso, Juan Carlos; Miró, Manuel; Cerdà, Víctor
2010-01-15
An integrated analyzer based on the multisyringe flow injection analysis approach is proposed for the automated determination of dissolved oxygen in seawater. The entire Winkler method, including precipitation of manganese(II) hydroxide, fixation of dissolved oxygen, dissolution of the oxidized manganese hydroxide precipitate, and generation of iodine and tri-iodide ion, is effected in-line within the flow network. Spectrophotometric quantification of iodine and tri-iodide at the isosbestic wavelength of 466 nm renders enhanced method reliability. The calibration function is linear up to 19 mg L(-1) dissolved oxygen and an injection frequency of 17 per hour is achieved. The multisyringe system features highly satisfactory signal stability, with repeatabilities of 2.2% RSD, which makes it suitable for continuous determination of dissolved oxygen in seawater. Compared to the manual starch-end-point titrimetric Winkler method and earlier reported automated systems, concentrations and consumption of reagents and sample are reduced up to hundredfold. The versatility of the multisyringe assembly was exploited in the implementation of an ancillary automatic batch-wise Winkler titrator using a single syringe of the module for accurate titration of the released iodine/tri-iodide with thiosulfate.
Application of industrial robots in automatic disassembly line of waste LCD displays
NASA Astrophysics Data System (ADS)
Wang, Sujuan
2017-11-01
In the automatic disassembly line for waste LCD displays, LCD displays are disassembled into plastic shells, metal shields, circuit boards, and LCD panels. Two industrial robots are used to cut the metal shields and remove the circuit boards in this line. This paper describes in detail the functions of these two industrial robots and the solutions to the critical issues of model selection, the interfaces with PLCs, and the workflows.
Automating usability of ATLAS Distributed Computing resources
NASA Astrophysics Data System (ADS)
Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration
2014-06-01
The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
Automated liver segmentation using a normalized probabilistic atlas
NASA Astrophysics Data System (ADS)
Linguraru, Marius George; Li, Zhixi; Shah, Furhawn; Chin, See; Summers, Ronald M.
2009-02-01
Probabilistic atlases of anatomical organs, especially the brain and the heart, have become popular in medical image analysis. We propose the construction of probabilistic atlases which retain structural variability by using a size-preserving modified affine registration. The organ positions are modeled in the physical space by normalizing the physical organ locations to an anatomical landmark. In this paper, a liver probabilistic atlas is constructed and exploited to automatically segment liver volumes from abdominal CT data. The atlas is aligned with the patient data through a succession of affine and non-linear registrations. The overlap and correlation with manual segmentations are 0.91 (0.93 DICE coefficient) and 0.99 respectively. Little work has taken place on the integration of volumetric measures of liver abnormality to clinical evaluations, which rely on linear estimates of liver height. Our application measures the liver height at the mid-hepatic line (0.94 correlation with manual measurements) and indicates that its combination with volumetric estimates could assist the development of a noninvasive tool to assess hepatomegaly.
Automatic joint alignment measurements in pre- and post-operative long leg standing radiographs.
Goossen, A; Weber, G M; Dries, S P M
2012-01-01
For diagnosis or treatment assessment of knee joint osteoarthritis it is required to measure bone morphometry from radiographic images. We propose a method for automatic measurement of joint alignment from pre-operative as well as post-operative radiographs. In a two step approach we first detect and segment any implants or other artificial objects within the image. We exploit physical characteristics and avoid prior shape information to cope with the vast amount of implant types. Subsequently, we exploit the implant delineations to adapt the initialization and adaptation phase of a dedicated bone segmentation scheme using deformable template models. Implant and bone contours are fused to derive the final joint segmentation and thus the alignment measurements. We evaluated our method on clinical long leg radiographs and compared both the initialization rate, corresponding to the number of images successfully processed by the proposed algorithm, and the accuracy of the alignment measurement. Ground truth has been generated by an experienced orthopedic surgeon. For comparison a second reader reevaluated the measurements. Experiments on two sets of 70 and 120 digital radiographs show that 92% of the joints could be processed automatically and the derived measurements of the automatic method are comparable to a human reader for pre-operative as well as post-operative images with a typical error of 0.7° and correlations of r = 0.82 to r = 0.99 with the ground truth. The proposed method allows deriving objective measures of joint alignment from clinical radiographs. Its accuracy and precision are on par with a human reader for all evaluated measurements.
Robust automatic measurement of 3D scanned models for the human body fat estimation.
Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo
2015-03-01
In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures
NASA Astrophysics Data System (ADS)
Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.
2017-10-01
An auto-installing tool on a USB drive can allow for a quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making a highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
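The computational advantage of circulant structure can be illustrated with a small Python sketch: for a circulant matrix, matrix-vector products and linear solves reduce to FFTs. This is a simplified one-dimensional analogue of the multilevel construction, with invented data; it shows the mechanism, not the paper's method.

```python
import numpy as np

# Sketch: approximate a stationary (RBF) kernel matrix on a 1-D grid by a
# circulant matrix, so that (C + lam*I)^{-1} y is solved with FFTs in
# O(n log n). Simplified 1-D analogue of the multilevel circulant idea.

n, lam, gamma = 512, 1e-2, 50.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
y = np.sin(2 * np.pi * 3 * x) + 0.1 * np.random.randn(n)

# First column of the circulant approximation: kernel of wrapped distances.
d = np.minimum(x, 1.0 - x)                 # circular distance to x[0]
c = np.exp(-gamma * d**2)                  # RBF kernel values

# Eigenvalues of a circulant matrix are the FFT of its first column, so
# solving (C + lam*I) alpha = y is elementwise division in Fourier space.
eig = np.fft.fft(c)
alpha = np.real(np.fft.ifft(np.fft.fft(y) / (eig + lam)))

# The fitted values C @ alpha come from one more FFT-based product.
fit = np.real(np.fft.ifft(eig * np.fft.fft(alpha)))
print("residual norm:", np.linalg.norm(fit + lam * alpha - y))  # ~0
```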
Research directions in large scale systems and decentralized control
NASA Technical Reports Server (NTRS)
Tenney, R. R.
1980-01-01
Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.
Using Machine Learning to Increase Research Efficiency: A New Approach in Environmental Sciences
USDA-ARS?s Scientific Manuscript database
Data collection has evolved from tedious in-person fieldwork to automatic data gathering from multiple sensors remotely. Scientists in environmental sciences have not fully exploited this data deluge, including legacy and new data, because the traditional scientific method is focused on small, high qu...
Diagnosis diagrams for passing signals on an automatic block signaling railway section
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.
2018-01-01
This work presents a diagnosis method for railway traffic security installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway block equipped with an automatic block signaling installation. These charts are based on the electric schemes used in operation and are subsequently used to develop a diagnosis software package. The software package thus developed contributes substantially to a reduction in failure detection and remedy times for these types of installation faults. The use of the software package eliminates wrong decisions in the fault detection process, decisions that may result in longer remedy times and, sometimes, in railway traffic events.
Exploiting the metabolism of PYC expressing HEK293 cells in fed-batch cultures.
Vallée, Cédric; Durocher, Yves; Henry, Olivier
2014-01-01
The expression of recombinant yeast pyruvate carboxylase (PYC) in animal cell lines was shown in previous studies to reduce significantly the formation of waste metabolites, although it has translated into mixed results in terms of improved cellular growth and productivity. In this work, we demonstrate that the unique phenotype of PYC expressing cells can be exploited through the application of a dynamic fed-batch strategy and lead to significant process enhancements. Metabolically engineered HEK293 cells stably producing human recombinant IFNα2b and expressing the PYC enzyme were cultured in batch and fed-batch modes. Compared to parental cells, the maximum cell density in batch was increased 1.5-fold and the culture duration was extended by 2.5 days, but the product yield was only marginally increased. Further improvements were achieved by developing and implementing a dynamic fed-batch strategy using a concentrated feed solution. The feeding was based on an automatic control loop to maintain a constant glucose concentration. This strategy led to a further 2-fold increase in maximum cell density (up to 10.7×10(6) cells/ml) and a final product titer of 160 mg/l, representing nearly a 3-fold yield increase compared to the batch process with the parental cell clone. Copyright © 2013 Elsevier B.V. All rights reserved.
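A minimal Python sketch of the control idea, a feedback loop dosing concentrated feed to hold glucose at a setpoint, is given below. All constants (uptake kinetics, feed concentration, gain) are invented for illustration.

```python
# Toy simulation of a glucose-setpoint fed-batch loop: dose concentrated
# feed in proportion to the setpoint error. All parameters are invented.

setpoint = 2.0          # g/L target glucose
feed_conc = 200.0       # g/L concentrated feed
volume = 1.0            # L working volume
glucose = 4.0           # g/L initial glucose
kp = 0.05               # proportional gain (L feed per g/L error per step)

for hour in range(48):
    uptake = 0.08 * glucose / (0.5 + glucose)   # Monod-like uptake, g/L/h
    glucose -= uptake
    error = setpoint - glucose
    feed_vol = max(0.0, kp * error)             # only add feed, never remove
    glucose = (glucose * volume + feed_conc * feed_vol) / (volume + feed_vol)
    volume += feed_vol
    if hour % 12 == 0:
        print(f"t={hour:2d} h  glucose={glucose:.2f} g/L  volume={volume:.3f} L")
```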
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
NASA Astrophysics Data System (ADS)
Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut
2005-04-01
Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
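The three-stage decision cascade can be sketched in Python with scikit-learn, assuming some candidate-contour features; the feature names, ranges, and training data below are hypothetical placeholders, not the system's actual classifier.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Sketch of a three-stage cascade (range check -> decision tree -> SVM)
# choosing among candidate preparation lines from different contour
# detectors. Features, ranges, and labels are invented for illustration.

def range_check(f):
    # f = (contour_length_mm, mean_gradient, closedness in [0, 1])
    return (5.0 < f[0] < 80.0) and (f[1] > 0.1) and (f[2] > 0.8)

rng = np.random.default_rng(0)
X = rng.random((200, 3)) * [100.0, 1.0, 1.0]   # training candidates
y = rng.integers(0, 2, 200)                    # dummy expert labels

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
svm = SVC(probability=True).fit(X, y)

def classify_candidates(candidates):
    """Return the candidate accepted by all three stages, if any."""
    survivors = [f for f in candidates if range_check(f)]
    survivors = [f for f in survivors if tree.predict([f])[0] == 1]
    if not survivors:
        return None
    # Final stage: the SVM's most confident surviving candidate wins.
    return max(survivors, key=lambda f: svm.predict_proba([f])[0, 1])

print(classify_candidates([(40.0, 0.5, 0.9), (120.0, 0.2, 0.95)]))
```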
Time-variant analysis of rotorcraft systems dynamics - An exploitation of vector processors
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.; Shareef, N. H.
1993-01-01
In this paper a generalized algorithmic procedure is presented for handling constraints in mechanical transmissions. The latter are treated as multibody systems of interconnected rigid/flexible bodies. The constraint Jacobian matrices are generated automatically and suitably updated in time, depending on the geometrical and kinematical constraint conditions describing the interconnection between shafts or gears. The types of constraints are classified based on the interconnection of the bodies by assuming that one or more points of contact exist between them. The effects due to elastic deformation of the flexible bodies are included by allowing each body element to undergo small deformations. The procedure is based on recursively formulated Kane's dynamical equations of motion and the finite element method, including the concept of geometrical stiffening effects. The method is implemented on an IBM-3090-600j vector processor with pipelining capabilities. A significant increase in the speed of execution is achieved by vectorizing the developed code in computationally intensive areas. An example consisting of two meshing disks rotating at high angular velocity is presented. Applications are intended for the study of the dynamic behavior of helicopter transmissions.
Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J
2016-08-01
Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
A User Study on Tactile Graphic Generation Methods
ERIC Educational Resources Information Center
Krufka, S. E.; Barner, K. E.
2006-01-01
Methods to automatically convert graphics into tactile representations have been recently investigated, creating either raised-line or relief images. In particular, we briefly review one raised-line method where important features are emphasized. This paper focuses primarily on the effects of such emphasis and on comparing both raised-line and…
Automatic Co-Registration of Multi-Temporal Landsat-8/OLI and Sentinel-2A/MSI Images
NASA Technical Reports Server (NTRS)
Skakun, S.; Roger, J.-C.; Vermote, E.; Justice, C.; Masek, J.
2017-01-01
Many applications in climate change and environmental and agricultural monitoring rely heavily on the exploitation of multi-temporal satellite imagery. Combined use of freely available Landsat-8 and Sentinel-2 images can offer high temporal frequency of about 1 image every 3-5 days globally.
Automatic sample Dewar for MX beam-line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charignon, T.; Tanchon, J.; Trollier, T.
2014-01-29
It is very common for crystals of large biological macromolecules to show considerable variation in the quality of their diffraction. In order to increase the number of samples that are tested for diffraction quality before any full data collection at the ESRF*, an automatic sample Dewar has been implemented. The design and performance of the Dewar are reported in this paper. The automatic sample Dewar has a capacity of 240 samples, with automatic loading/unloading ports. The storage Dewar is capable of working with robots and can be integrated in a fully automatic MX** beam-line. The samples are positioned in front of the loading/unloading ports with an automatic rotating plate. A view port has been implemented for data-matrix camera reading on each sample loaded in the Dewar. Finally, the Dewar is insulated with polyurethane foam that keeps the liquid nitrogen consumption below 1.6 L/h, and the static insulation also makes vacuum equipment and maintenance unnecessary. This Dewar will be useful for increasing the number of samples tested in synchrotrons.
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
On-line retrieval system design is discussed in the two papers which make up Part Five of this report on the SMART (Salton's Magical Automatic Retriever of Texts) project. The first paper, "A Prototype On-Line Document Retrieval System" by D. Williamson and R. Williamson, outlines a design for a SMART on-line document retrieval system…
Military applications of automatic speech recognition and future requirements
NASA Technical Reports Server (NTRS)
Beek, Bruno; Cupples, Edward J.
1977-01-01
An updated summary of the state-of-the-art of automatic speech recognition and its relevance to military applications is provided. A number of potential systems for military applications are under development. These include: (1) digital narrowband communication systems; (2) automatic speech verification; (3) on-line cartographic processing unit; (4) word recognition for militarized tactical data system; and (5) voice recognition and synthesis for aircraft cockpit.
New computer system simplifies programming of mathematical equations
NASA Technical Reports Server (NTRS)
Reinfelds, J.; Seitz, R. N.; Wood, L. H.
1966-01-01
Automatic Mathematical Translator /AMSTRAN/ permits scientists or engineers to enter mathematical equations in their natural mathematical format and to obtain an immediate graphical display of the solution. This automatic-programming, on-line, multiterminal computer system allows experienced programmers to solve nonroutine problems.
Filmless Radiographic System for Field Use.
1988-02-12
electronic circuits. The receptor assembly contains the x-ray grid, cassette holder or EPID panel, and the sensor panel for the automatic exposure control... Grid lines were still resolved. The appearance of fine details was altered but most of the diagnostic value was retained. The choice of the screens must... A 6:1, 60 lines-per-inch grid is mounted to the frame on the x-ray entrance side. II.F.4. Ion Chamber: The ion chamber is used by the automatic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuz'micheva, K. I.; Merzlyakov, A. S.; Fokin, G. G.
2013-05-15
The reasons for circuit-breaker failures during repeated disconnection of 500-750 kV overhead lines with shunt reactors in a cycle of unsuccessful three-phase automatic reconnection (TARC) are analyzed. Recommendations are made for increasing the operating reliability of power transmission lines with shunt reactors when reconnection is unsuccessful.
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
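As background to the optimization, the work balance that DLO evaluates for any candidate layout can be checked by hand for a classic pattern. The Python sketch below applies the virtual-work (projection) method to diagonal yield lines on a simply supported square slab, recovering the textbook upper bound w_u = 24m/L²; it is a single-pattern check, not the DLO procedure itself.

```python
# Work-balance check for one candidate yield-line pattern (not DLO itself):
# simply supported square slab, side L, uniform load w, plastic moment m
# per unit length, yield lines along both diagonals. By the projection
# method, each rigid panel dissipates m * theta * (projected line length);
# external work is w times the deflected volume.

L, m = 4.0, 10.0                 # span (m) and moment capacity (kNm/m)
delta = 1.0                      # virtual deflection at the slab centre

# Each of the 4 triangular panels rotates about its supported edge.
theta = delta / (L / 2.0)        # panel rotation
proj_len = L                     # diagonals project onto the full edge
internal = 4 * m * theta * proj_len          # total dissipation = 8*m*delta

# Deflected shape is a pyramid: volume = L^2 * delta / 3.
external_per_w = L**2 * delta / 3.0

w_u = internal / external_per_w
print(w_u, 24 * m / L**2)        # both 15.0 kN/m^2
```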
Yang, Deming; Xu, Zhenming
2011-09-15
Crushing and separating technology is widely used in the recycling of waste printed circuit boards (PCBs). An automatic line for recycling waste PCBs without negative environmental impact was applied at industrial scale. The cyclic grinding and classification system for crushed waste PCB particles is the most important part of the automatic production line, and it determines the efficiency of the whole line. In this paper, a model for computing the process of the system was established, and a matrix analysis method was adopted. The results showed that good agreement can be achieved between the simulation model and the actual production line, and that the system is robust against disturbances. This model can provide a basis for the automatic process control of a waste PCB production line. With this model, many engineering problems can be reduced, such as insufficient dissociation of metals and nonmetals, over-pulverization of particles, incomplete comminution, material plugging, and equipment overheating. Copyright © 2011 Elsevier B.V. All rights reserved.
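A generic matrix model of a closed grinding-classification circuit, in the spirit of the matrix analysis mentioned above, can be written in a few lines of Python. The breakage and classification matrices below are invented; in the paper they would be fitted to the actual PCB line.

```python
import numpy as np

# Sketch of a matrix model for a closed grinding/classification circuit.
# Size classes run coarse -> fine; B moves mass to finer classes on each
# mill pass (columns sum to 1), C sends coarse material back to the mill.

B = np.array([[0.3, 0.0, 0.0],     # breakage matrix (lower triangular)
              [0.5, 0.6, 0.0],
              [0.2, 0.4, 1.0]])
C = np.diag([0.9, 0.3, 0.0])       # classifier: fraction recycled per class
f = np.array([1.0, 0.0, 0.0])      # fresh feed, all in the coarsest class

# Steady state: mill discharge p satisfies p = B (f + C p).
p = np.linalg.solve(np.eye(3) - B @ C, B @ f)
product = (np.eye(3) - C) @ p      # what leaves the circuit
print("product size distribution:", product / product.sum())
print("circulating load:", (C @ p).sum() / f.sum())
```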
Automatically Detecting Likely Edits in Clinical Notes Created Using Automatic Speech Recognition
Lybarger, Kevin; Ostendorf, Mari; Yetisgen, Meliha
2017-01-01
The use of automatic speech recognition (ASR) to create clinical notes has the potential to reduce costs associated with note creation for electronic medical records, but at current system accuracy levels, post-editing by practitioners is needed to ensure note quality. Aiming to reduce the time required to edit ASR transcripts, this paper investigates novel methods for automatic detection of edit regions within the transcripts, including both putative ASR errors but also regions that are targets for cleanup or rephrasing. We create detection models using logistic regression and conditional random field models, exploring a variety of text-based features that consider the structure of clinical notes and exploit the medical context. Different medical text resources are used to improve feature extraction. Experimental results on a large corpus of practitioner-edited clinical notes show that 67% of sentence-level edits and 45% of word-level edits can be detected with a false detection rate of 15%. PMID:29854187
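A minimal word-level edit detector along these lines can be assembled with scikit-learn, as sketched below. The features and the two-sentence training set are invented placeholders; the paper's models use much richer, medically informed features.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Minimal word-level edit detector: logistic regression over simple
# per-token features. Features and training data are toy placeholders.

def token_features(tokens, i):
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_digit": w.isdigit(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

train = [
    ("patient denies chest pane", [0, 0, 0, 1]),   # ASR error on last token
    ("no acute distress noted", [0, 0, 0, 0]),
]
X, y = [], []
for sent, labels in train:
    toks = sent.split()
    X += [token_features(toks, i) for i in range(len(toks))]
    y += labels

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

toks = "patient denies chest pane".split()
probs = clf.predict_proba(vec.transform(
    [token_features(toks, i) for i in range(len(toks))]))[:, 1]
print(dict(zip(toks, probs.round(2))))   # per-token edit probabilities
```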
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. On a 100-point scale, the average scores for automatically extracted and manually refined CATs are 82.0 (±15.8) and 88.9 (±5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
Automatic Classification of Question & Answer Discourse Segments from Teacher's Speech in Classrooms
ERIC Educational Resources Information Center
Blanchard, Nathaniel; D'Mello, Sidney; Olney, Andrew M.; Nystrand, Martin
2015-01-01
Question-answer (Q&A) is fundamental for dialogic instruction, an important pedagogical technique based on the free exchange of ideas and open-ended discussion. Automatically detecting Q&A is key to providing teachers with feedback on appropriate use of dialogic instructional strategies. In line with this, this paper studies the…
Ibraheem; Hasan, Naimul; Hussein, Arkan Ahmed
2014-01-01
This paper presents the design of a decentralized automatic generation controller for an interconnected power system using PID, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The designed controllers are tested on identical two-area interconnected power systems consisting of thermal power plants. The area interconnections between the two areas are considered as (i) AC tie-line only and (ii) asynchronous tie-line. The dynamic response analysis is carried out for a 1% load perturbation. The performance of the intelligent controllers based on GA and PSO has been compared with that of the conventional PID controller. The investigation of the system dynamic responses reveals that PSO gives a better dynamic response than the PID and GA controllers for both types of area interconnection.
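To make the PSO-based tuning concrete, the following Python sketch searches PID gains that minimize the integrated absolute frequency deviation of a toy single-area model under a load step. All plant constants are invented, and the paper's system is two-area; this only illustrates the optimization loop.

```python
import numpy as np

# Toy PSO-based AGC tuning: single-area frequency model (inertia M,
# damping D) under a load step dP, PID acting on the frequency deviation,
# PSO searching (Kp, Ki, Kd) to minimize the integral of |delta_f|.

def cost(gains, M=10.0, D=0.8, dP=0.2, dt=0.01, T=20.0):
    kp, ki, kd = gains
    f = integ = prev = 0.0
    J = 0.0
    for _ in range(int(T / dt)):
        integ += f * dt
        deriv = (f - prev) / dt
        prev = f
        u = -(kp * f + ki * integ + kd * deriv)      # governor action
        f += dt * (u - dP - D * f) / M               # swing equation (Euler)
        J += abs(f) * dt
    return J

rng = np.random.default_rng(1)
n, dim = 20, 3
x = rng.random((n, dim)) * 5.0                       # particle positions
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[pcost.argmin()]

for _ in range(30):                                  # PSO main loop
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 5.0)
    c = np.array([cost(p) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    g = pbest[pcost.argmin()]

print("best (Kp, Ki, Kd):", g.round(2), "cost:", pcost.min().round(4))
```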
BROWSER: An Automatic Indexing On-Line Text Retrieval System. Annual Progress Report.
ERIC Educational Resources Information Center
Williams, J. H., Jr.
The development and testing of the Browsing On-line With Selective Retrieval (BROWSER) text retrieval system allowing a natural language query statement and providing on-line browsing capabilities through an IBM 2260 display terminal is described. The prototype system contains data bases of 25,000 German language patent abstracts, 9,000 English…
Going mobile with a multiaccess service for the management of diabetic patients.
Lanzola, Giordano; Capozzi, Davide; D'Annunzio, Giuseppe; Ferrari, Pietro; Bellazzi, Riccardo; Larizza, Cristiana
2007-09-01
Diabetes mellitus is one of the chronic diseases exploiting the largest number of telemedicine systems. Our research group has been involved since 1996 in two projects funded by the European Union proposing innovative architectures and services according to the best current medical practices and advances in the information technology area. We propose an enhanced architecture for telemedicine giving rise to a multitier application. The lower tier is represented by a mobile phone hosting the patient unit able to acquire data and provide first-level advice to the patient. The patient unit also facilitates interaction with the health care center, representing the higher tier, by automatically uploading data and receiving back any therapeutic plan supplied by the physician. On the patient's side the mobile phone exploits Bluetooth technology and therefore acts as a hub for a wireless network, possibly including several devices in addition to the glucometer. A new system architecture based on mobile technology is being used to implement several prototypes for assessing its functionality. A subsequent effort will be undertaken to exploit the new system within a pilot study for the follow-up of patients cared at a major hospital located in northern Italy. We expect that the new architecture will enhance the interaction between patient and caring physician, simplifying and improving metabolic control. In addition to sending glycemic data to the caring center, we also plan to automatically download the therapeutic protocols provided by the physician to the insulin pump and collect data from multiple sensors.
Automatic programming via iterated local search for dynamic job shop scheduling.
Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen
2015-01-01
Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
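The overall search skeleton of an iterated-local-search approach can be sketched as below: hill-climb with one-point edits, perturb the incumbent, repeat. The rule representation and the fitness function are toys standing in for the paper's job-shop simulation.

```python
import random

# Generic iterated-local-search skeleton: start from the best rule found
# so far, perturb it, then hill-climb with small edits. The flat rule
# representation and the fitness function are toy stand-ins for a real
# "simulate the job shop with this priority rule" evaluation.

TERMS = ["PT", "DD", "RT", "WT"]          # processing time, due date, ...
OPS = ["+", "-", "*"]

def random_rule(n=5):
    rule = [random.choice(TERMS)]
    for _ in range(n - 1):
        rule += [random.choice(OPS), random.choice(TERMS)]
    return rule

def fitness(rule):
    # Toy objective: pretend due-date-aware, compact rules score better.
    return rule.count("DD") * 2 - len(rule) * 0.1 + random.random() * 0.01

def local_search(rule, iters=200):
    best, best_f = rule, fitness(rule)
    for _ in range(iters):
        cand = best[:]
        i = random.randrange(len(cand))
        cand[i] = random.choice(OPS if i % 2 else TERMS)   # one-point edit
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f

def perturb(rule):
    cand = rule[:]
    for i in random.sample(range(len(cand)), k=max(1, len(cand) // 3)):
        cand[i] = random.choice(OPS if i % 2 else TERMS)
    return cand

best, best_f = local_search(random_rule())
for _ in range(20):                        # iterated local search loop
    cand, f = local_search(perturb(best))
    if f > best_f:
        best, best_f = cand, f
print(" ".join(best), round(best_f, 2))
```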
USSR Report: Machine Tools and Metalworking Equipment.
1986-01-23
between satellite stop and the camshaft of the programmer unit. The line has 23 positions, including 12 automatic ones. Specification of line Number... technological processes, automated research, etc.) are as follows: a monochannel based on a shared trunk line, ring, star and tree (polychannel... line or ring networks based on decentralized control of data exchange between subscribers are very robust. A tree-form network has a star structure
NASA Astrophysics Data System (ADS)
de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan
2017-10-01
The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.
Hybrid Automatic Building Interpretation System
NASA Astrophysics Data System (ADS)
Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.
2011-09-01
HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most of the commercially available systems, HABIS is able to work to a high degree automatically. The hybrid method uses different sources, intending to exploit the advantages of each particular source. 3D point clouds usually provide good height and surface data, whereas spatially high-resolution aerial images provide important information for edges and detail information for roof objects like dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS works with a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. After that, it continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is done. In addition, some roof objects like dormers and chimneys are also extracted based on aerial images and added to the models. In this paper the methods used are described and some results are presented.
Automatic indexing in a drug information portal.
Sakji, Saoussen; Letord, Catherine; Dahamna, Badisse; Kergourlay, Ivan; Pereira, Suzanne; Joubert, Michel; Darmoni, Stéfan
2009-01-01
The objective of this work is to create a bilingual (French/English) Drug Information Portal (DIP) in a multi-terminological context and to enhance its exploitation through automatic ATC indexing, allowing more pertinent information about the substances, organs or systems on which drugs act, and about their therapeutic and chemical characteristics. The development of the DIP was based on the CISMeF portal, which catalogues and indexes the most important and quality-controlled sources of institutional health information in French. DIP has created specific functionalities and uses specific drug terminologies such as the ATC classification, which is used to automatically index the DIP resources. DIP is the result of a collaboration between the CISMeF team and the VIDAL company, specialized in drug information. DIP is conceived to facilitate user information retrieval. The ATC automatic indexing provided relevant results in 76% of cases. In a multi-terminological context and in the framework of the drug field, indexing drugs with the appropriate codes and/or terms proved to be very important for appropriate information storage and retrieval. The main challenge in the coming year is to increase the accuracy of the approach.
Genetic Evolution of Shape-Altering Programs for Supersonic Aerodynamics
NASA Technical Reports Server (NTRS)
Kennelly, Robert A., Jr.; Bencze, Daniel P. (Technical Monitor)
2002-01-01
Two constrained shape optimization problems relevant to aerodynamics are solved by genetic programming, in which a population of computer programs evolves automatically under pressure of fitness-driven reproduction and genetic crossover. Known optimal solutions are recovered using a small, naive set of elementary operations. Effectiveness is improved through use of automatically defined functions, especially when one of them is capable of a variable number of iterations, even though the test problems lack obvious exploitable regularities. An attempt at evolving new elementary operations was only partially successful.
DNA Assembly Line for Nano-Construction
Oleg Gang
2017-12-09
Building on the idea of using DNA to link up nanoparticles, scientists at Brookhaven National Lab have designed a molecular assembly line for high-precision nano-construction. Nanofabrication is essential for exploiting the unique properties of nanoparticles.
Development of on line automatic separation device for apple and sleeve
NASA Astrophysics Data System (ADS)
Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang
2018-04-01
An automatic separation device for fruit sleeves is designed based on an STM32F407 single-chip microcomputer as the control core. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector, and other structures. The software system is based on the Visual C++ development environment and uses image processing and machine vision to localize and recognize the fruit sleeve, and it drives the manipulator to grasp the foam net sleeve, transfer it, and place it at the designated position. Tests show that the automatic separation device has a quick response speed and a high separation success rate, can realize the separation of the apple from the plastic foam sleeve, and lays the foundation for further study and realization of its application on enterprise production lines.
Parallel line analysis: multifunctional software for the biomedical sciences
NASA Technical Reports Server (NTRS)
Swank, P. R.; Lewis, M. L.; Damron, K. L.; Morrison, D. R.
1990-01-01
An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time.
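The core parallel-line computation, fitting a common slope on log-dose and reading the potency ratio from the horizontal shift, can be reproduced in a few lines (shown here in Python rather than the program's FORTRAN). The toy data are invented; the described program adds covariance analysis, outlier screening, and confidence limits.

```python
import numpy as np

# Parallel-line assay core: fit both preparations with a common slope on
# log-dose, then read the relative potency from the horizontal shift
# between the two parallel lines. Toy data for illustration.

log_dose = np.log(np.array([1.0, 2.0, 4.0, 8.0]))
resp_std = np.array([10.1, 14.9, 20.2, 24.8])          # standard prep
resp_test = np.array([12.0, 17.1, 22.0, 27.1])         # test prep

# Design matrix: one common slope, separate intercepts per preparation.
n = len(log_dose)
X = np.block([
    [log_dose[:, None], np.ones((n, 1)), np.zeros((n, 1))],
    [log_dose[:, None], np.zeros((n, 1)), np.ones((n, 1))],
])
y = np.concatenate([resp_std, resp_test])
slope, a_std, a_test = np.linalg.lstsq(X, y, rcond=None)[0]

# Equal responses occur at log-doses differing by (a_test - a_std)/slope.
relative_potency = np.exp((a_test - a_std) / slope)
print(f"common slope {slope:.2f}, relative potency {relative_potency:.2f}")
```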
ERIC Educational Resources Information Center
Rose, Carolyn; Wang, Yi-Chia; Cui, Yue; Arguello, Jaime; Stegmann, Karsten; Weinberger, Armin; Fischer, Frank
2008-01-01
In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners' interactions is a…
Optical Fiber On-Line Detection System for Non-Touch Monitoring Roller Shape
NASA Astrophysics Data System (ADS)
Guo, Y.; Wang, Y. T.
2006-10-01
Based on the principle of the reflective displacement fiber-optic sensor, a high-accuracy, non-contact, on-line optical fiber measurement system for roller shape is presented. The principle and composition of the detection system and the operation process are also described in detail. By using a novel probe of three optical fibers with equal transverse spacing, the effects of fluctuations in the light source, reflectivity changes of the target surface, and intensity losses in the fiber lines are automatically compensated. Meanwhile, an optical fiber sensor model for correcting static error, based on a BP artificial neural network (ANN), is set up. Interpolation and value filtering are also used to process the signals, effectively reducing the influence of random noise and of the vibration of the roller bearing, and thus remarkably enhancing the accuracy and resolution. Experiments prove that the accuracy of the system meets the demands of the practical production process, providing a new method for high-speed, accurate, and automatic on-line detection of mill roller shape.
Automatic design of magazine covers
NASA Astrophysics Data System (ADS)
Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.
2012-03-01
In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.
Liu, Ying; Lita, Lucian Vlad; Niculescu, Radu Stefan; Mitra, Prasenjit; Giles, C Lee
2008-11-06
Owing to new advances in computer hardware, large text databases have become more prevalent than ever. Automatically mining information from these databases proves to be a challenge due to slow pattern/string matching techniques. In this paper we present a new, fast multi-string pattern matching method based on the well-known Aho-Corasick algorithm. Advantages of our algorithm include: the ability to exploit the natural structure of text, the ability to perform significant character shifting, avoidance of backtracking jumps that are not useful, efficiency in terms of matching time, and avoidance of the typical "sub-string" false positive errors. Our algorithm is applicable to many fields with free text, such as the health care domain and the scientific document field. In this paper, we apply the BSS algorithm to health care data and mine hundreds of thousands of medical concepts from a large Electronic Medical Record (EMR) corpus simultaneously and efficiently. Experimental results show the superiority of our algorithm when compared with top-of-the-line multi-string matching algorithms.
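For reference, a compact baseline Aho-Corasick automaton is sketched below in Python. The BSS method builds on this with character shifting and substring-false-positive checks, none of which are reproduced here.

```python
from collections import deque

# Compact baseline Aho-Corasick automaton for multi-string matching.

def build(patterns):
    goto, fail, out = [{}], [0], [set()]         # trie as parallel lists
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())                  # depth-1 states fail to root
    while q:                                     # BFS to fill failure links
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]               # inherit suffix matches
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits

print(search("acute myocardial infarction", ["myocardial", "card", "infarct"]))
```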
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness.
Ribeiro, David S M; Prior, João A V; Taveira, Christian J M; Mendes, José M A F S; Santos, João L M
2011-06-15
In this work, and for the first time, an automatic and fast miniaturized screening flow system was developed for the toxicological control of glibenclamide in beverages, with application in forensic laboratory investigations and also in the chemical control of commercially available pharmaceutical formulations. The automatic system exploited the multipumping flow system (MPFS) concept and allowed the implementation of a new glibenclamide determination method based on the fluorometric monitoring of the drug in acidic medium (λ(ex)=301 nm; λ(em)=404 nm) in the presence of an anionic surfactant (SDS), which provides an organized micellar medium that enhances the fluorometric measurements. The developed approach assured good recoveries in the analysis of five spiked alcoholic beverages. Additionally, good agreement was verified when comparing the results obtained in the determination of glibenclamide in five commercial pharmaceutical formulations by the proposed method and by the pharmacopoeia reference procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
An adipose segmentation and quantification scheme for the intra abdominal region on minipigs
NASA Astrophysics Data System (ADS)
Engholm, Rasmus; Dubinskiy, Aleksandr; Larsen, Rasmus; Hanson, Lars G.; Christoffersen, Berit Østergaard
2006-03-01
This article describes a method for automatic segmentation of the abdomen into three anatomical regions: subcutaneous, retroperitoneal and visceral. For the last two regions the amount of adipose tissue (fat) is quantified. According to recent medical research, the distinction between retroperitoneal and visceral fat is important for studying metabolic syndrome, which is closely related to diabetes. However previous work has neglected to address this point, treating the two types of fat together. We use T1-weighted three-dimensional magnetic resonance data of the abdomen of obese minipigs. The pigs were manually dissected right after the scan, to produce the "ground truth" segmentation. We perform automatic segmentation on a representative slice, which on humans has been shown to correlate with the amount of adipose tissue in the abdomen. The process of automatic fat estimation consists of three steps. First, the subcutaneous fat is removed with a modified active contour approach. The energy formulation of the active contour exploits the homogeneous nature of the subcutaneous fat and the smoothness of the boundary. Subsequently the retroperitoneal fat located around the abdominal cavity is separated from the visceral fat. For this, we formulate a cost function on a contour, based on intensities, edges, distance to center and smoothness, so as to exploit the properties of the retroperitoneal fat. We then globally optimize this function using dynamic programming. Finally, the fat content of the retroperitoneal and visceral regions is quantified based on a fuzzy c-means clustering of the intensities within the segmented regions. The segmentation proved satisfactory by visual inspection, and closely correlated with the manual dissection data. The correlation was 0.89 for the retroperitoneal fat, and 0.74 for the visceral fat.
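The dynamic-programming step, choosing one radius per angle so that local cost plus a smoothness penalty is minimized, can be sketched as follows; the cost map is random here, whereas the paper combines intensity, edge, distance-to-center, and smoothness terms.

```python
import numpy as np

# Dynamic-programming contour search in polar coordinates: choose one
# radius per angle so that summed local cost plus a smoothness penalty
# between neighbouring angles is minimal. Random cost map for illustration.

rng = np.random.default_rng(0)
n_angles, n_radii, smooth = 180, 60, 2.0
cost = rng.random((n_angles, n_radii))

# acc[a, r] = cheapest cost of a contour reaching radius r at angle a.
acc = cost.copy()
back = np.zeros((n_angles, n_radii), dtype=int)
radii = np.arange(n_radii)
for a in range(1, n_angles):
    # trans[r, r_prev]: cost so far plus the radius-jump penalty
    trans = acc[a - 1][None, :] + smooth * np.abs(radii[:, None] - radii[None, :])
    back[a] = trans.argmin(axis=1)
    acc[a] += trans.min(axis=1)

# Backtrack the optimal radius sequence from the cheapest final state.
r = int(acc[-1].argmin())
contour = [r]
for a in range(n_angles - 1, 0, -1):
    r = int(back[a, r])
    contour.append(r)
contour.reverse()
print("first ten radii:", contour[:10])
```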
Automatic Fastening Large Structures: a New Approach
NASA Technical Reports Server (NTRS)
Lumley, D. F.
1985-01-01
The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid-1985. The automatic riveting system incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch in diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts-program re-synchronization and part edge-margin control; and an automatic rivet selection/handling system.
9 Is Always on Top: Assessing the Automaticity of Synaesthetic Number-Forms
ERIC Educational Resources Information Center
Jarick, Michelle; Dixon, Michael J.; Smilek, Daniel
2011-01-01
For number-form synaesthetes, digits occupy idiosyncratic spatial locations. Atypical to the mental number line that extends horizontally, the synaesthete (L) experiences the numbers 1-10 vertically. We used a spatial cueing task to demonstrate that L's attention could be automatically directed to locations within her number-space--being faster to…
Van De Gucht, Tim; Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen; Saeys, Wouter
2017-10-08
Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage of missed lame cows and percentage of false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per percentage point fewer missed lame cows, €1.65 per percentage point fewer false alerts, and €12.70 for lame-leg indication. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate.
Silva, Alessandro Jose Nunes da; Almeida, Ildeberto Muniz de; Vilela, Rodolfo Andrade de Gouveia; Mendes, Renata Wey Berti; Hurtado, Sandra Lorena Beltran
2018-05-10
The Brazilian electricity sector has recorded high work-related mortality rates that have been associated with outsourcing used to cut costs. In order to decrease power outage times for consumers, the industry adopted the automatic circuit recloser as a technical solution. The device has hazardous implications for maintenance workers. The aim of this study was to analyze the origins and consequences of work accidents in power systems with automatic circuit reclosers, using the Accident Analysis and Prevention (AAP) model. The AAP model was used to investigate two work accidents, aiming to explore the events' organizational origins. Case 1: while changing a de-energized secondary line, a worker received a shock from the energized primary cable (13.8 kV); the system reclosed three times, causing severe injury to the worker (amputation of a lower limb). Case 2: a fatal work accident occurred during installation of a new crosshead on a partially insulated energized line; the tip of a metal cross-arm section strap touched the energized secondary line and electrocuted the maintenance operator, and the circuit-breaker component of the automatic circuit recloser failed. The analyses revealed how business management logic can participate in the root causes of work accidents through failures in maintenance management, outsourced workforce management, and especially safety management in systems with reclosers. Decisions to adopt automation to guarantee power distribution should not overlook the risks to workers on overhead power lines or fail to acknowledge the importance of ensuring safe conditions.
Superville, Pierre-Jean; Pižeta, Ivanka; Omanović, Dario; Billon, Gabriel
2013-08-15
Based on automatic on-line measurements on the Deûle River that showed daily variation of a peak around -0.56 V (vs. Ag|AgCl 3 M), identification of reduced sulphur species (RSS) in oxic waters was performed by applying cathodic stripping voltammetry (CSV) with the hanging mercury drop electrode (HMDE). Pseudopolarographic studies, accompanied by increasing concentrations of copper, revealed the presence of elemental sulphur S(0), thioacetamide (TA), and reduced glutathione (GSH) as the main sulphur compounds in the Deûle River. In order to resolve these three species, a simple procedure was developed and integrated in an automatic on-line monitoring system. During one week of monitoring with hourly measurements, GSH and S(0) exhibited daily cycles, whereas no consequential pattern was observed for TA. Copyright © 2013 Elsevier B.V. All rights reserved.
Hartmann, Matthias
2017-02-01
The spatial representation of ordinal sequences (numbers, time, tones) seems to be a fundamental cognitive property. While an automatic association between horizontal space and pitch height (left-low pitch, right-high pitch) is constantly reported in musicians, the evidence for such an association in non-musicians is mixed. In this study, 20 non-musicians performed a line bisection task while listening to irrelevant high- and low-pitched tones and white noise (control condition). While pitch height had no influence on the final bisection point, participants' movement trajectories showed systematic biases: When approaching the line and touching the line for the first time (initial bisection point), the mouse cursor was directed more rightward for high-pitched tones compared to low-pitched tones and noise. These results show that non-musicians also have a subtle but nevertheless automatic association between pitch height and the horizontal space. This suggests that spatial-musical associations do not necessarily depend on constant sensorimotor experiences (as it is the case for musicians) but rather reflect the seemingly inescapable tendency to represent ordinal information on a horizontal line.
Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui
2015-01-01
Introduction: The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Methods: Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. Results: We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. Conclusions: We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. PMID:25670753
Going Mobile with a Multiaccess Service for the Management of Diabetic Patients
Lanzola, Giordano; Capozzi, Davide; D'Annunzio, Giuseppe; Ferrari, Pietro; Bellazzi, Riccardo; Larizza, Cristiana
2007-01-01
Background: Diabetes mellitus is one of the chronic diseases for which the largest number of telemedicine systems has been developed. Our research group has been involved since 1996 in two projects funded by the European Union, proposing innovative architectures and services according to the best current medical practices and advances in the information technology area. Method: We propose an enhanced architecture for telemedicine giving rise to a multitier application. The lower tier is represented by a mobile phone hosting the patient unit, able to acquire data and provide first-level advice to the patient. The patient unit also facilitates interaction with the health care center, representing the higher tier, by automatically uploading data and receiving back any therapeutic plan supplied by the physician. On the patient's side the mobile phone exploits Bluetooth technology and therefore acts as a hub for a wireless network, possibly including several devices in addition to the glucometer. Results: A new system architecture based on mobile technology is being used to implement several prototypes for assessing its functionality. A subsequent effort will be undertaken to exploit the new system within a pilot study for the follow-up of patients cared for at a major hospital located in northern Italy. Conclusion: We expect that the new architecture will enhance the interaction between patient and caring physician, simplifying and improving metabolic control. In addition to sending glycemic data to the caring center, we also plan to automatically download the therapeutic protocols provided by the physician to the insulin pump and collect data from multiple sensors. PMID:19885142
Self-propelled automatic chassis of Lunokhod-1: History of creation in episodes
NASA Astrophysics Data System (ADS)
Malenkov, Mikhail
2016-03-01
This report reviews the most important episodes in the history of designing the self-propelled automatic chassis of the first mobile extraterrestrial vehicle in the world, Lunokhod-1. The review considers the issues in designing moon rovers, their essential features, and the particular construction properties of their systems, mechanisms, units, and assemblies. It presents the results of operating the chassis of Lunokhod-1 and Lunokhod-2. Analysis of the approaches and engineering solutions utilized reveals their value as well as the consequences of certain defects.
Harvesting geographic features from heterogeneous raster maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi
2010-11-01
Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products, with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
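As an illustration of the map-property exploitation described above, the following minimal sketch separates elongated road lines from small text blobs in a binarized map layer using morphological opening and connected-component statistics. The kernel size and area thresholds are invented for illustration; this is a toy under stated assumptions, not the thesis's actual pipeline.

```python
import cv2
import numpy as np

def split_roads_and_text(binary_map: np.ndarray, min_text_area=5, max_text_area=300):
    """Separate elongated road lines from small text blobs in a binarized map.

    binary_map: uint8 image with foreground pixels == 255.
    Returns (roads, text) as two binary images.
    """
    # Morphological opening with a large structuring element keeps long,
    # thick linear structures (roads) and removes small characters.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    roads = cv2.morphologyEx(binary_map, cv2.MORPH_OPEN, kernel)

    # Whatever the opening removed is the candidate text layer; filter it
    # by connected-component area to drop noise specks and large fragments.
    residue = cv2.subtract(binary_map, roads)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(residue, connectivity=8)
    text = np.zeros_like(binary_map)
    for i in range(1, n):
        area = stats[i, cv2.CC_STAT_AREA]
        if min_text_area <= area <= max_text_area:
            text[labels == i] = 255
    return roads, text
```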
A Generalized Fraction: An Entity Smaller than One on the Mental Number Line
ERIC Educational Resources Information Center
Kallai, Arava Y.; Tzelgov, Joseph
2009-01-01
The representation of fractions in long-term memory (LTM) was investigated by examining the automatic processing of such numbers in a physical comparison task, and their intentional processing in a numerical comparison task. The size congruity effect (SiCE) served as a marker of automatic processing and consequently as an indicator of the access…
Improving integrity of on-line grammage measurement with traceable basic calibration.
Kangasrääsiö, Juha
2010-07-01
The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables, such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in the first-time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. A standardised algorithm, based on the experience from the performed calibrations, is also proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can in principle be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
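The abstract does not give its calibration equation, but beta transmission gauges are commonly modeled with exponential attenuation. Under that assumption, a basic calibration against traceable reference standards could look like the sketch below; the exponential model and the least-squares fit through the origin are assumptions, not the paper's published algorithm.

```python
import numpy as np

def fit_attenuation(grammage_refs, intensity_ratios):
    """Fit the mass attenuation coefficient mu from reference standards.

    Assumes the usual exponential model for beta transmission gauges:
        I / I0 = exp(-mu * w),  with w the grammage in g/m^2.
    grammage_refs: known grammages of the plastic reference standards.
    intensity_ratios: measured I/I0 for each standard.
    """
    g = np.asarray(grammage_refs, dtype=float)
    r = np.asarray(intensity_ratios, dtype=float)
    # Linear least squares on -ln(I/I0) = mu * w (a line through the origin).
    return float(np.sum(-np.log(r) * g) / np.sum(g * g))

def grammage(intensity_ratio, mu):
    """Invert the calibration: w = -ln(I/I0) / mu."""
    return -np.log(intensity_ratio) / mu
```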
The Surveillance and On-demand Sentinel-1 SBAS Services on the Geohazards Exploitation Platforms
NASA Astrophysics Data System (ADS)
Casu, F.; de Luca, C.; Zinno, I.; Manunta, M.; Lanari, R.
2017-12-01
The Geohazards Exploitation Platform (GEP) is an ESA R&D activity of the EO ground segment to demonstrate the benefit of new technologies for large scale processing of EO data. GEP aims at providing on-demand processing services for specific user needs, as well as systematic processing services to address the need of the geohazards community for common information layers and, finally, to integrate newly developed processors for scientists and other expert users. In this context, a crucial role is played by the recently launched Sentinel-1 (S1) constellation that, with its global acquisition policy, has flooded the scientific community with a huge amount of data acquired over a large part of the Earth on a regular basis (down to 6 days with both Sentinel-1A and 1B passes). The Sentinel-1 data, as part of the European Copernicus program, are openly and freely accessible, thus fostering their use for the development of automated and systematic tools for Earth surface monitoring. In particular, due to their specific SAR Interferometry (InSAR) design, Sentinel-1 satellites can be exploited to build up operational services for the easy and rapid generation of advanced InSAR products useful for risk management and natural hazard monitoring. In this work we present the activities carried out for the development, integration, and deployment of two SBAS Sentinel-1 services of CNR-IREA within the GEP framework, namely the Surveillance and On-demand services. The Surveillance service consists of the systematic and automatic processing of Sentinel-1 data over selected Areas of Interest (AoI) to generate updated surface displacement time series via the SBAS-InSAR algorithm. We built up a system that is automatically triggered by every new Sentinel-1 acquisition over the AoI, once it is available on the S1 catalogue. Then, the system processes the new acquisitions only, thus saving storage space and computing time. The processing, which relies on the Parallel version of the SBAS (P-SBAS) chain, allows us to effectively perform massive, systematic and automatic analysis of S1 SAR data. It is worth noting that the SBAS Sentinel-1 services on GEP represent the core of the EPOSAR service, which will deliver S1 displacement time series of the Earth's surface for the European Plate Observing System (EPOS) Research Infrastructure community.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
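A minimal sketch of the coarse-to-fine search idea behind marginal space learning, under stated assumptions: the three `score_*` callables stand in for the learned discriminative classifiers of the successive marginal spaces, and the candidate grids are supplied by the caller; none of this reproduces the authors' actual detectors.

```python
import heapq

def marginal_space_search(volume, pos_grid, ori_grid, scale_grid,
                          score_pos, score_pos_ori, score_full, k=100):
    """Prune the 9-D similarity-transform search space stage by stage:
    position first, then position + orientation, then the full transform
    including anisotropic scale. Only the top-k candidates survive each stage.
    """
    # Stage 1: position only.
    top_pos = heapq.nlargest(k, pos_grid, key=lambda p: score_pos(volume, p))
    # Stage 2: position + orientation.
    cand = [(p, o) for p in top_pos for o in ori_grid]
    top_po = heapq.nlargest(k, cand, key=lambda c: score_pos_ori(volume, *c))
    # Stage 3: full similarity transform (position + orientation + scale).
    cand = [(p, o, s) for (p, o) in top_po for s in scale_grid]
    return max(cand, key=lambda c: score_full(volume, *c))
```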
Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui
2015-05-01
The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
East Europe Report, Economic and Industrial Affairs, No. 2431
1983-08-03
of automatic control systems in this branch of transportation, as well as in reservations and ticket sales. An agreement on research is ... cooperative, the labor is organized in common and the remuneration is done in relation to the quantity and quality of the labor performed, the output ... use in the state supply; f) The rational and efficient exploitation of the irrigation facilities, the drainage facilities and those for control of
Text-image alignment for historical handwritten documents
NASA Astrophysics Data System (ADS)
Zinger, S.; Nerbonne, J.; Schomaker, L.
2009-01-01
We describe our work on text-image alignment in context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right to left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
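The cost-function idea, penalizing deviations between expected word-boundary positions (derived from relative word lengths) and candidate gaps in the line image, can be sketched as a small dynamic program. This is one plausible reading under stated assumptions, not the authors' exact cost expressions.

```python
import numpy as np

def align_words_to_gaps(word_lengths, gap_positions, line_width):
    """Assign transcription word boundaries to candidate gaps in a line image.

    Expected boundary positions come from the words' relative lengths (the
    proportion each word occupies in its line); the alignment minimizes the
    summed squared deviation between expected and actual gap positions under
    a monotonicity constraint, via dynamic programming.
    """
    w = np.asarray(word_lengths, dtype=float)
    expected = np.cumsum(w)[:-1] / w.sum() * line_width  # one boundary per word gap
    gaps = np.asarray(gap_positions, dtype=float)
    n, m = len(expected), len(gaps)
    if n > m:                      # fewer candidate gaps than boundaries
        return float("inf"), []
    INF = float("inf")
    cost = np.full((n + 1, m + 1), INF)
    cost[0, :] = 0.0
    choice = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(i, m + 1):
            # Either boundary i uses gap j, or gap j is skipped.
            use = cost[i - 1, j - 1] + (expected[i - 1] - gaps[j - 1]) ** 2
            skip = cost[i, j - 1]
            cost[i, j] = min(use, skip)
            choice[i, j] = 1 if use <= skip else 0
    # Backtrack to recover which gap each boundary was matched to.
    match, i, j = [], n, m
    while i > 0:
        if choice[i, j] == 1:
            match.append(j - 1)
            i -= 1
        j -= 1
    return float(cost[n, m]), match[::-1]
```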
Generalized procrustean image deformation for subtraction of mammograms
NASA Astrophysics Data System (ADS)
Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.
1999-05-01
This project is a preliminary evaluation of two simple, fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] × [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This ensures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axis line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.
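A minimal sketch of the first transformation as described: each tissue pixel receives fractional coordinates between the tissue boundaries along its row and column. Boundary detection itself is assumed already done and supplied as a mask.

```python
import numpy as np

def fractional_coordinates(mask: np.ndarray):
    """Map each tissue pixel to [0,1] x [0,1] coordinates, where a coordinate
    is the pixel's fractional distance between tissue boundaries along its row
    or column. Registering two mammograms in these coordinates forces their
    skin lines into correspondence. `mask` is a boolean breast-tissue mask."""
    h, w = mask.shape
    u = np.full(mask.shape, np.nan)   # horizontal fraction
    v = np.full(mask.shape, np.nan)   # vertical fraction
    for y in range(h):
        xs = np.flatnonzero(mask[y])
        if xs.size > 1:
            u[y, xs] = (xs - xs[0]) / (xs[-1] - xs[0])
    for x in range(w):
        ys = np.flatnonzero(mask[:, x])
        if ys.size > 1:
            v[ys, x] = (ys - ys[0]) / (ys[-1] - ys[0])
    return u, v
```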
Development report: Automatic System Test and Calibration (ASTAC) equipment
NASA Technical Reports Server (NTRS)
Thoren, R. J.
1981-01-01
A microcomputer-based automatic test system was developed for the daily performance monitoring of the wind energy system time domain (WEST) analyzer. The test system consists of a microprocessor-based controller and hybrid interface unit, which are used for inputting prescribed test signals into all WEST subsystems and for monitoring WEST responses to these signals. Performance is compared to theoretically correct performance levels calculated off-line on a large general purpose digital computer. Results are displayed on a cathode ray tube or are available from a line printer. Excessive drift and/or lack of repeatability of the high speed analog sections within WEST is easily detected and the malfunctioning hardware identified using this system.
Automated pharmaceutical tablet coating layer evaluation of optical coherence tomography images
NASA Astrophysics Data System (ADS)
Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G.; Buchsbaum, Andreas
2015-03-01
Film coating of pharmaceutical tablets is often applied to influence the drug release behaviour. The coating characteristics such as thickness and uniformity are critical quality parameters, which need to be precisely controlled. Optical coherence tomography (OCT) shows high potential not only for off-line quality control of film-coated tablets but also for in-line monitoring of coating processes. However, an in-line quality control tool must be able to perform coating thickness measurements automatically and in real time. This study proposes an automatic thickness evaluation algorithm for bi-convex tablets, which provides about 1000 thickness measurements within 1 s. Besides the segmentation of the coating layer, optical distortions due to refraction of the beam by the air/coating interface are corrected. Moreover, during in-line monitoring the tablets might be in oblique orientation, which needs to be considered in the algorithm design. Experiments were conducted where the tablet was rotated to specified angles. Manual and automatic thickness measurements were compared for varying coating thicknesses, angles of rotation, and beam displacements (i.e. lateral displacement between successive depth scans). The automatic thickness determination algorithm provides highly accurate results up to an angle of rotation of 30°. The computation time was reduced to 0.53 s for 700 thickness measurements by introducing feasibility constraints in the algorithm.
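A simplified geometric version of the refraction correction might look as follows. The group-refractive-index scaling and the Snell's-law tilt correction are textbook optics; the paper's exact correction is not reproduced here.

```python
import math

def coating_thickness(optical_depth, n_coating, tilt_deg=0.0):
    """Convert an OCT optical depth (air-equivalent path between the
    air/coating and coating/core interfaces) into physical coating thickness.

    Simplified geometric model (an assumption, not the paper's algorithm):
    the optical depth scales with the group refractive index of the coating,
    and for a tilted tablet the beam refracts at the air/coating interface,
    so its path through the coating is longer than the normal thickness.
    """
    theta_i = math.radians(tilt_deg)                    # incidence angle vs. surface normal
    theta_r = math.asin(math.sin(theta_i) / n_coating)  # Snell's law
    path_in_coating = optical_depth / n_coating         # geometric path along the ray
    return path_in_coating * math.cos(theta_r)          # project onto the surface normal
```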
Song, Dandan; Li, Ning; Liao, Lejian
2015-01-01
Due to the generation of enormous amounts of data at lower cost and in shorter time, whole-exome sequencing technologies provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since upwards of thousands of genomic variants can be sequenced in each exome, it is challenging to filter pathogenic variants in protein coding regions while keeping the number of missed true variants low. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders is designed by exploiting a combination of variant filtering steps to analyze family-based exome sequencing data. Recent studies on the Freeman-Sheldon disease are revisited and show that the proposed method outperforms other existing candidate gene identification methods.
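A toy sketch of such a filtering cascade; the field names, thresholds, and inheritance tests below are illustrative assumptions rather than the paper's actual criteria.

```python
def filter_candidates(variants, max_pop_freq=0.01):
    """Sketch of a family-based filtering cascade for Mendelian disorders.

    Each variant is a dict with a population frequency, a functional effect,
    and per-family-member genotypes (0 = ref, 1 = het, 2 = hom-alt).
    """
    damaging = {"missense", "nonsense", "frameshift", "splice_site"}
    out = []
    for v in variants:
        if v["pop_freq"] > max_pop_freq:     # common variants are unlikely causal
            continue
        if v["effect"] not in damaging:      # keep protein-altering changes only
            continue
        g = v["genotypes"]                   # e.g. {"proband": 2, "father": 1, "mother": 1}
        recessive = g["proband"] == 2 and g["father"] == 1 and g["mother"] == 1
        de_novo = g["proband"] == 1 and g["father"] == 0 and g["mother"] == 0
        if recessive or de_novo:             # must fit an inheritance model
            out.append(v)
    return out
```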
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features, such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. ARES (Automatic Routine for line Equivalent widths in stellar Spectra) webpage: http://www.astro.up.pt/~sousasag/ares/ . Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
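The radial velocity correction via a simple cross-correlation function could be sketched as below; this is a minimal illustration assuming the template and the observation share a wavelength grid, not ARES's actual implementation.

```python
import numpy as np

def radial_velocity_shift(wave, flux, template_flux, v_max=200.0, dv=0.5):
    """Estimate the radial-velocity shift of a spectrum by cross-correlating
    it with a rest-frame template sampled on the same wavelength grid."""
    c = 299792.458  # speed of light, km/s
    velocities = np.arange(-v_max, v_max + dv, dv)
    f = flux - flux.mean()
    ccf = []
    for v in velocities:
        # Doppler-shift the template and resample it onto the observed grid.
        shifted = np.interp(wave, wave * (1.0 + v / c), template_flux)
        s = shifted - shifted.mean()
        ccf.append(np.dot(f, s) / (np.linalg.norm(f) * np.linalg.norm(s)))
    return velocities[int(np.argmax(ccf))]
```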
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. On the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. In the CFD calculation, the RANS equations and the standard turbulence model are used for the numerical simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.
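For reference, a minimal particle swarm optimizer of the kind incorporated into such a platform; here `objective` would wrap the CFD resistance evaluation, and all hyperparameters are generic defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box-constrained variables,
    e.g. (fore-body length, maximum body radius, after-body minimum radius)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```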
Track circuit diagnosis for railway lines equipped with an automatic block signalling system
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Muscai, C.; Răduca, E.; Piroi, F.
2018-01-01
This work presents a diagnosis method for detecting track circuit failures on a railway traffic line equipped with an Automatic Block Signalling installation. The diagnosis method uses the installation's electrical schematics, based on which a series of diagnosis charts have been created. Further, the diagnosis charts were used to develop a software package, CDCBla, which substantially contributes to reducing the diagnosis time and human error during failure remedies. The proposed method can also be used as a training package for the maintenance staff. Since the diagnosis method does not need signal or measurement inputs, using it requires no additional IT knowledge, and it can be deployed on a mobile computing device (tablet, smart phone).
NASA Technical Reports Server (NTRS)
Begni, G.; BOISSIN; Desachy, M. J.; PERBOS
1984-01-01
The geometric accuracy of LANDSAT TM raw data of Toulouse (France), raw data of Mississippi, and preprocessed data of Mississippi was examined using a CDC computer. Analog images were restituted on the VIZIR SEP device. The methods used for line-to-line and band-to-band registration are based on automatic correlation techniques and are widely used in automated image-to-image registration at CNES. Causes of intraband and interband misregistration are identified and statistics are given for both line-to-line and band-to-band misregistration.
Automatic Blocking Of QR and LU Factorizations for Locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Q; Kennedy, K; You, H
2004-03-26
QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To efficiently perform these computations on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, more benefit can be gained by automatically generating blocked versions of the computations, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS and native BLAS specially tuned for the underlying machine architectures.
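To make the blocking idea concrete, here is a right-looking blocked LU sketch. Pivoting is omitted for brevity, although the paper's subject includes partial pivoting; the point is that most of the flops land in one large, cache-friendly matrix-matrix update per panel.

```python
import numpy as np

def lu_unblocked(A):
    """In-place unblocked LU (no pivoting) on a small square panel."""
    n = A.shape[0]
    for k in range(n - 1):
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A

def lu_blocked(A, nb=64):
    """Right-looking blocked LU factorization of a float64 array, in place.

    Blocking confines most of the work to cache-friendly rank-nb updates of
    the trailing matrix, which is the locality benefit the paper targets.
    """
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        lu_unblocked(A[k:e, k:e])                       # factor diagonal block
        L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
        U11 = np.triu(A[k:e, k:e])
        if e < n:
            # Triangular solves for the off-diagonal panels ...
            A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
            A[e:, k:e] = np.linalg.solve(U11.T, A[e:, k:e].T).T
            # ... then one large matrix-matrix update of the trailing block.
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A
```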
An Electronic System for Ultra-low Power Hearing Implants
2013-02-15
analyzers [1], [2], useful in several hearing systems. 4) We have designed and built a lithium-ion battery-recharging circuit that exploits a novel analog ... control strategy with a tanh-like transconductance amplifier to automatically cause the charging of a lithium-ion battery to transition from
Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach
NASA Astrophysics Data System (ADS)
Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele
2012-09-01
The increased amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, this involves a growing workload for the operators at analysis centers. In addition, even if the operators go through extensive training to learn manual oil spill detection, they can provide different and subjective responses. Hence, the upgrade and improvement of algorithms for automatic detection that can help in screening the images and prioritizing the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) aimed at studying the possibility of using neural network architectures to set up fully automatic processing chains using COSMO-SkyMed imagery has been carried out, and the results are presented in this paper. The automatic identification of an oil spill is seen as a three-step process based on segmentation, feature extraction and classification. We observed that a PCNN (Pulse Coupled Neural Network) was capable of providing satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task a Multi-Layer Perceptron (MLP) Neural Network was employed.
Walenski, Matthew; Swinney, David
2009-01-01
The central question underlying this study revolves around how children process co-reference relationships—such as those evidenced by pronouns (him) and reflexives (himself)—and how a slowed rate of speech input may critically affect this process. Previous studies of child language processing have demonstrated that typical language developing (TLD) children as young as 4 years of age process co-reference relations in a manner similar to adults on-line. In contrast, off-line measures of pronoun comprehension suggest a developmental delay for pronouns (relative to reflexives). The present study examines dependency relations in TLD children (ages 5–13) and investigates how a slowed rate of speech input affects the unconscious (on-line) and conscious (off-line) parsing of these constructions. For the on-line investigations (using a cross-modal picture priming paradigm), results indicate that at a normal rate of speech TLD children demonstrate adult-like syntactic reflexes. At a slowed rate of speech the typical language developing children displayed a breakdown in automatic syntactic parsing (again, similar to the pattern seen in unimpaired adults). As demonstrated in the literature, our off-line investigations (sentence/picture matching task) revealed that these children performed much better on reflexives than on pronouns at a regular speech rate. However, at the slow speech rate, performance on pronouns was substantially improved, whereas performance on reflexives was not different than at the regular speech rate. We interpret these results in light of a distinction between fast automatic processes (relied upon for on-line processing in real time) and conscious reflective processes (relied upon for off-line processing), such that slowed speech input disrupts the former, yet improves the latter. PMID:19343495
NASA Astrophysics Data System (ADS)
Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang
2018-05-01
Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete work-flow for automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high resolution spotlight images over the city of Oulu, Finland and a test site in Berlin, Germany.
Sannino, Giovanna; De Falco, Ivanoe; De Pietro, Giuseppe
2014-06-01
Real-time Obstructive Sleep Apnea (OSA) episode detection and monitoring are important for society in terms of an improvement in the health of the general population and of a reduction in mortality and healthcare costs. Currently, to diagnose OSA patients undergo PolySomnoGraphy (PSG), a complicated and invasive test to be performed in a specialized center involving many sensors and wires. Accordingly, each patient is required to stay in the same position throughout the duration of one night, thus restricting their movements. This paper proposes an easy, cheap, and portable approach for the monitoring of patients with OSA, which collects single-channel ElectroCardioGram (ECG) data only. It is easy to perform from the patient's point of view because only one wearable sensor is required, so the patient is not restricted to keeping the same position all night long, and the detection and monitoring can be carried out in any place through the use of a mobile device. Our approach is based on the automatic extraction, from a database containing information about the monitored patient, of explicit knowledge in the form of a set of IF…THEN rules containing typical parameters derived from Heart Rate Variability (HRV) analysis. The extraction is carried out off-line by means of a Differential Evolution algorithm. This set of rules can then be exploited in the real-time mobile monitoring system developed at our Laboratory: the ECG data is gathered by a wearable sensor and sent to a mobile device, where it is processed in real time. Subsequently, HRV-related parameters are computed from this data, and, if their values activate some of the rules describing the occurrence of OSA, an alarm is automatically produced. This approach has been tested on a well-known literature database of OSA patients. The numerical results show its effectiveness in terms of accuracy, sensitivity, and specificity, and the achieved sets of rules evidence the user-friendliness of the approach. Furthermore, the method is compared against other well known classifiers, and its discrimination ability is shown to be higher. Copyright © 2014 Elsevier Inc. All rights reserved.
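The extracted knowledge is a set of IF…THEN rules over HRV parameters; applying such a rule set in the on-line monitor could look like the toy sketch below. The feature names and thresholds are invented for illustration; in the paper the rules are extracted off-line by Differential Evolution.

```python
def detect_osa(hrv, rules):
    """Apply an evolved rule set to one window of HRV parameters.

    A rule is a conjunction of interval conditions on named HRV features;
    an alarm is raised if any rule fires.
    """
    def fires(rule):
        return all(lo <= hrv[name] <= hi for name, (lo, hi) in rule.items())
    return any(fires(rule) for rule in rules)

# Illustrative (hypothetical) rules: each maps a feature to an interval.
example_rules = [
    {"mean_rr_ms": (800.0, 1200.0), "sdnn_ms": (0.0, 35.0)},
    {"lf_hf_ratio": (2.5, 10.0)},
]
# detect_osa({"mean_rr_ms": 950.0, "sdnn_ms": 22.0, "lf_hf_ratio": 1.4},
#            example_rules)  -> True (the first rule fires)
```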
ADMAP (automatic data manipulation program)
NASA Technical Reports Server (NTRS)
Mann, F. I.
1971-01-01
Instructions are presented on the use of ADMAP (Automatic Data Manipulation Program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single parameter and double parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
NASA Astrophysics Data System (ADS)
Ohnuma, Hidetoshi; Kawahira, Hiroichi
1998-09-01
An automatic alternative phase shift mask (PSM) pattern layout tool has been newly developed. The tool is dedicated to embedded-DRAM-in-logic devices, shrinking gate line width while improving line width controllability in lithography processes with design rules below 0.18 micrometers using KrF excimer laser exposure. The tool can create a Levenson-type PSM that is used coupled with a binary mask in a double exposure method for positive photoresist. Using graphs, the tool automatically creates alternative PSM patterns, and it does not give rise to any phase conflicts. By applying it to actual embedded DRAM in logic cells, we obtained 0.16 micrometer gate resist patterns in both random logic and DRAM areas. The patterns were fabricated using two masks with the double exposure method. Gate line width has been well controlled within a practical exposure-focus window.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans, and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
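One common way to build a composite kernel, a weighted combination of per-feature-group kernels, is sketched below; the paper's exact fusion rule and kernel choices may differ.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-wise feature sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(groups_X, groups_Y, gammas, weights):
    """Fuse one kernel per feature group into a composite kernel by a convex
    combination. Each group could hold, e.g., the learned features of one
    B-scan subset; the fused matrix then feeds a kernel classifier."""
    K = sum(w * rbf_kernel(Xg, Yg, g)
            for Xg, Yg, g, w in zip(groups_X, groups_Y, gammas, weights))
    return K / sum(weights)
```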
Source Lines Counter (SLiC) Version 4.0
NASA Technical Reports Server (NTRS)
Monson, Erik W.; Smith, Kevin A.; Newport, Brian J.; Gostelow, Roli D.; Hihn, Jairus M.; Kandt, Ronald K.
2011-01-01
Source Lines Counter (SLiC) is a software utility designed to measure software source code size using logical source statements and other common measures for 22 of the programming languages commonly used at NASA and in the aerospace industry. Such metrics can be used in a wide variety of applications, from parametric cost estimation to software defect analysis. SLiC has a variety of unique features such as automatic code search, automatic file detection, hierarchical directory totals, and spreadsheet-compatible output. SLiC was written for extensibility; new programming language support can be added with minimal effort in a short amount of time. SLiC runs on a variety of platforms including UNIX, Windows, and Mac OSX. Its straightforward command-line interface allows for customization and incorporation into the software build process for tracking development metrics.
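A toy illustration of the measurement itself, counting non-blank, non-comment lines of a C-like file; SLiC's real logical-statement rules are language-specific and far richer.

```python
def count_logical_lines(path, comment_prefix="//"):
    """Crude source-line count: skip blank lines and whole-line comments."""
    count = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                count += 1
    return count
```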
ELSA: An integrated, semi-automated nebular abundance package
NASA Astrophysics Data System (ADS)
Johnson, Matthew D.; Levitt, Jesse S.; Henry, Richard B. C.; Kwitter, Karen B.
We present ELSA, a new modular software package, written in C, to analyze and manage spectroscopic data from emission-line objects. In addition to calculating plasma diagnostics and abundances from nebular emission lines, the software provides a number of convenient features, including the ability to ingest logs produced by IRAF's splot task, to semi-automatically merge spectra in different wavelength ranges, and to automatically generate various data tables in machine-readable or LaTeX format. ELSA features a highly sophisticated interstellar reddening correction scheme that takes into account temperature and density effects as well as He II contamination of the hydrogen Balmer lines. Abundance calculations are performed using a 5-level atom approximation with recent atomic data, based on R. Henry's ABUN program. Downloads and detailed documentation for all aspects of ELSA are available at the following URL:
Development of ATC for High Speed and High Density Commuter Line
NASA Astrophysics Data System (ADS)
Okutani, Tamio; Nakamura, Nobuyuki; Araki, Hisato; Irie, Shouji; Osa, Hiroki; Sano, Minoru; Ikeda, Keigo; Ozawa, Hiroyuki
A new ATC (Automatic Train Control) system has been developed with solutions to realize short train headways through assured braking utilizing digital data transmission via the rails, which carries the digital data for the ATP (Automatic Train Protection) function, and to achieve EMC for both AC and DC sections. The DC section uses an unprecedented DC traction power supply system with IGBT PWM converters at all DC substations. Within the AC section, train traction force is controlled by PWM converter/inverters. The carrier frequencies of the digital data signals and the chopping frequency of the on-board PWM traction power converters are chosen via spectral analysis of noise, covering even degraded-mode cases of the equipment. The developed system was installed on the Tsukuba Express Line, a new commuter line in the Tokyo metropolitan area, in service since August 2005.
Modeling of information on the impact of mining exploitation on bridge objects in BIM
NASA Astrophysics Data System (ADS)
Bętkowski, Piotr
2018-04-01
The article discusses the advantages of BIM (Building Information Modeling) technology in the management of bridge infrastructure in mining areas. The article shows the problems with information flow in the case of bridge objects located in mining areas and the advantages of proper information management, e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities, reduction of the cost of damage removal and preventive actions, improvement of the climate for mining exploitation, and improvement of the relationship between the manager of the bridge and the mine. The traditional model of managing bridge objects in mining areas has many disadvantages, which are discussed in this article. These disadvantages include, among others: duplication of information about the object, lack of coordination of investments due to lack of information flow between the bridge manager and the mine, and limited possibilities for assessing damage propagation effects on the technical condition and the structure's resistance to mining influences.
Report of the President’s Task Force on Aircraft Crew Complement
1981-07-02
ALPA - Air Line Pilots Association; APA - Allied Pilots Association; ASRS - Aviation Safety Reporting System; ATARS - Automatic Traffic Advisory and ... capability significantly. The complementary Automatic Traffic Advisory and Resolution Service (ATARS) will provide collision avoidance advisories and ... resolution. The main purpose of DABS/ATARS is to detect traffic and to provide aircraft escape-maneuver advisories in adjoining ATC sectors. G/A pilots
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for high level product generation. Various parts of the chain were implemented also for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
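The super-fine positioning step, matching each approximate GCP against a reference raster chip, can be sketched as a normalized cross-correlation search; this illustrates the idea, not SPACE-SI's implementation.

```python
import numpy as np

def refine_gcp(image, chip, x0, y0, search=8):
    """Slide a reference raster chip over a small window around the
    approximate GCP location (x0, y0) and return the position maximizing
    the normalized cross-correlation. Assumes chip.std() > 0."""
    ch, cw = chip.shape
    c = (chip - chip.mean()) / chip.std()
    best, best_xy = -np.inf, (x0, y0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0:
                continue
            win = image[y:y + ch, x:x + cw]
            if win.shape != chip.shape or win.std() == 0:
                continue
            ncc = np.mean(c * (win - win.mean()) / win.std())
            if ncc > best:
                best, best_xy = ncc, (x, y)
    return best_xy, best
```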
Nonlinear Krylov and moving nodes in the method of lines
NASA Astrophysics Data System (ADS)
Miller, Keith
2005-11-01
We report on some successes and problem areas in the Method of Lines from our work with moving node finite element methods. First, we report on our "nonlinear Krylov accelerator" for the modified Newton's method on the nonlinear equations of our stiff ODE solver. Since 1990 it has been robust, simple, cheap, and automatic on all our moving node computations. We publicize further trials with it here because it should be of great general usefulness to all those solving evolutionary equations. Second, we discuss the need for reliable automatic choice of spatially variable time steps. Third, we discuss the need for robust and efficient iterative solvers for the difficult linearized equations (Jx=b) of our stiff ODE solver. Here, the 1997 thesis of Zulu Xaba has made significant progress.
Automatic Energy Schemes for High Performance Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundriyal, Vaibhav
Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnects, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
Development, Demonstration, and Control of a Testbed for Multiterminal HVDC System
Li, Yalong; Shi, Xiaojie M.; Liu, Bo; ...
2016-10-21
This paper presents the development of a scaled four-terminal high-voltage direct current (HVDC) testbed, including the hardware structure, communication architecture, and different control schemes. The developed testbed is capable of emulating typical operation scenarios including system start-up, power variation, line contingency, and converter station failure. Some unique scenarios are also developed and demonstrated, such as online control mode transition and station re-commissioning. In particular, a dc line current control is proposed, through the regulation of a converter station at one terminal. By controlling a dc line current to zero, the transmission line can be opened using relatively low-cost HVDC disconnects with low current-interrupting capability, instead of the more expensive dc circuit breaker. Utilizing the dc line current control, an automatic line current limiting scheme is developed. As a result, when a dc line is overloaded, the line current control is automatically activated to regulate the current within the allowable maximum value.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
JEFX 10 demonstration of Cooperative Hunter Killer UAS and upstream data fusion
NASA Astrophysics Data System (ADS)
Funk, Brian K.; Castelli, Jonathan C.; Watkins, Adam S.; McCubbin, Christopher B.; Marshall, Steven J.; Barton, Jeffrey D.; Newman, Andrew J.; Peterson, Cammy K.; DeSena, Jonathan T.; Dutrow, Daniel A.; Rodriguez, Pedro A.
2011-05-01
The Johns Hopkins University Applied Physics Laboratory deployed and demonstrated a prototype Cooperative Hunter Killer (CHK) Unmanned Aerial System (UAS) capability and a prototype Upstream Data Fusion (UDF) capability as participants in the Joint Expeditionary Force Experiment 2010 in April 2010. The CHK capability was deployed at the Nevada Test and Training Range to prosecute a convoy protection operational thread. It used mission-level autonomy (MLA) software applied to a networked swarm of three Raven hunter UAS and a Procerus Miracle surrogate killer UAS, all equipped with full motion video (FMV). The MLA software provides the capability for the hunter-killer swarm to autonomously search an area or road network, divide the search area, deconflict flight paths, and maintain line of sight communications with mobile ground stations. It also provides an interface for an operator to designate a threat and initiate automatic engagement of the target by the killer UAS. The UDF prototype was deployed at the Maritime Operations Center at Commander Second Fleet, Naval Station Norfolk to provide intelligence analysts and the ISR commander with a common fused track picture from the available FMV sources. It consisted of a video exploitation component that automatically detected moving objects, a multiple hypothesis tracker that fused all of the detection data to produce a common track picture, and a display and user interface component that visualized the common track picture along with appropriate geospatial information such as maps and terrain as well as target coordinates and the source video.
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which can then be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions; thus automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of the generalization of discrete spatial features and is more scientific.
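The transformation itself is simple enough to sketch directly from the description: map each feature to (polar angle, polar radius) about a chosen origin and order by angle to obtain the spectrum line r = f(α).

```python
import numpy as np

def polarization_transform(points, origin):
    """Unfold a 2-D point cluster into its spectrum line r = f(alpha):
    each feature becomes a node at (polar angle in degrees, polar radius)
    about the chosen origin, ordered by angle. Comparing the spectra of the
    source and generalized datasets then assesses how well a generalization
    preserved the original distribution."""
    p = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    alpha = np.degrees(np.arctan2(p[:, 1], p[:, 0])) % 360.0
    r = np.hypot(p[:, 0], p[:, 1])
    order = np.argsort(alpha)
    return alpha[order], r[order]
```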
The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project
NASA Astrophysics Data System (ADS)
Edwards, Mark
2008-04-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator in detecting poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, capability up to JDL (Joint Directors of Laboratories) Level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. The paper presents Lessons Learned from the project. The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge in China Lake CA, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. The paper concludes with recommendations for future work in operator interface design.
USSR Report, Kommunist, No. 13, September 1986.
1987-01-07
all-union) program for specialization of NPO and industrial enterprises and their scientific research institutes and design bureaus could play a major ... machine tools with numerical programming (ChPU), processing centers, automatic machines and groups of automatic machines controlled by computers, and ... automatic lines, computer-controlled groups of equipment, comprehensively automated shops and sections) is the most important feature of high technical
NASA Astrophysics Data System (ADS)
Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor
In Applbaum et al. (2010) it was described how the "SEP-Search" program works automatically, determining on the basis of on-line one-minute NM data the beginning of a great SEP event. "SEP-Search" next uses one-minute data in order to check whether or not the observed increase reflects the beginning of a real great SEP event. If yes, the program "SEP-Research/Spectrum" automatically starts to work on-line. We consider two variants: 1) a quiet period (no change in cut-off rigidity), 2) a disturbed period (characterized by possible changes in cut-off rigidity). We describe the method of determining the spectrum of SEP in the 1st variant (for this we need data for at least two components with different coupling functions). For the 2nd variant we need data for at least three components with different coupling functions. We show that for these purposes one can use data of the total intensity and some different multiplicities, but that it is better to use data from two or three NMs with different cut-off rigidities. We describe in detail the algorithms of the program "SEP-Research/Spectrum." We show how this program worked on examples of some historical great SEP events. The work of the NM on Mt. Hermon is supported by an Israeli (Tel Aviv University and ISA) - Italian (UNIRoma-Tre and IFSI-CNR) collaboration.
Mitigating energy loss on distribution lines through the allocation of reactors
NASA Astrophysics Data System (ADS)
Miranda, T. M.; Romero, F.; Meffe, A.; Castilho Neto, J.; Abe, L. F. T.; Corradi, F. E.
2018-03-01
This paper presents a methodology for automatic reactor allocation on medium-voltage distribution lines to reduce energy loss. In Brazil, some feeders are distinguished by their long lengths and very low load, which results in a high influence of the line's capacitance on the circuit's performance, requiring compensation through the installation of reactors. The automatic allocation is accomplished using an optimization meta-heuristic called the Global Neighbourhood Algorithm. Given a set of reactor models and a circuit, it outputs an optimal solution in terms of reduction of energy loss. The algorithm also verifies that the voltage limits determined by the user are not violated, in addition to checking power quality. The methodology was implemented in a software tool, which can also show the allocation graphically. A simulation with four real feeders is presented in the paper. The method reduced energy loss significantly, by 50.56% in the worst case and up to 93.10% in the best case.
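The abstract does not spell out the Global Neighbourhood Algorithm, so the sketch below substitutes a generic random-neighborhood search over reactor placements to make the allocation loop concrete; the candidate buses, reactor sizes, toy objective, and voltage check are placeholder assumptions standing in for a power-flow evaluation.

```python
import random

# Illustrative stand-ins: a real tool would evaluate losses with a
# power-flow solver over the feeder model.
rng = random.Random(1)
NODES = list(range(20))                        # candidate installation buses
MODELS = [0.5, 1.0, 2.0]                       # reactor sizes, Mvar (assumed)
DEMAND = {n: rng.choice([0.0, 0.5, 1.0, 2.0]) for n in NODES}  # toy need

def energy_loss(placement):
    """Placeholder objective: squared mismatch between the reactive
    compensation each node 'needs' and what is installed there."""
    return sum((DEMAND[n] - placement.get(n, 0.0)) ** 2 for n in NODES)

def voltage_ok(placement):
    """Placeholder check for user-defined voltage limits."""
    return len(placement) <= 10

def neighborhood_search(iters=5000):
    best, best_loss = {}, energy_loss({})
    for _ in range(iters):
        cand = dict(best)
        node = rng.choice(NODES)
        if node in cand and rng.random() < 0.3:
            del cand[node]                     # remove a reactor
        else:
            cand[node] = rng.choice(MODELS)    # add or swap a model
        if voltage_ok(cand):
            loss = energy_loss(cand)
            if loss < best_loss:
                best, best_loss = cand, loss   # greedy acceptance
    return best, best_loss

placement, loss = neighborhood_search()
print(len(placement), round(loss, 3))
```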
NASA Technical Reports Server (NTRS)
Meyer, G.; Cicolani, L.
1981-01-01
A practical method for the design of automatic flight control systems for aircraft with complex characteristics and operational requirements, such as the powered lift STOL and V/STOL configurations, is presented. The method is effective for a large class of dynamic systems requiring multi-axis control which have highly coupled nonlinearities, redundant controls, and complex multidimensional operational envelopes. It exploits the concept of inverse dynamic systems, and an algorithm for the construction of inverse is given. A hierarchic structure for the total control logic with inverses is presented. The method is illustrated with an application to the Augmentor Wing Jet STOL Research Aircraft equipped with a digital flight control system. Results of flight evaluation of the control concept on this aircraft are presented.
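The core of the inverse-dynamics idea can be stated in a few lines for a control-affine system: solve the plant equation for the control that produces a commanded rate. This is a textbook sketch under assumed toy dynamics, not the paper's construction algorithm; the pseudo-inverse is one simple way to resolve redundant controls.

```python
import numpy as np

def inverse_control(f, g, x, v):
    """Dynamic inversion for a control-affine system xdot = f(x) + g(x) u.

    Given a commanded rate v from an outer loop, the inverse returns the
    control that realizes it: u = g(x)^-1 (v - f(x)). A least-squares
    pseudo-inverse handles redundant effectors.
    """
    return np.linalg.pinv(g(x)) @ (v - f(x))

# Toy 2-state system with two (redundant) effectors acting on one axis.
f = lambda x: np.array([x[1], -0.5 * x[1] ** 2])       # nonlinear dynamics
g = lambda x: np.array([[0.0, 0.0], [1.0, 0.5]])        # control effectiveness
x = np.array([0.0, 2.0])
v = np.array([2.0, 1.0])                                # commanded xdot
u = inverse_control(f, g, x, v)
print(np.allclose(f(x) + g(x) @ u, v, atol=1e-6))       # True
```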
NASA Astrophysics Data System (ADS)
Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao
We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).
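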
Toward Routine Automatic Pathway Discovery from On-line Scientific Text Abstracts.
Ng; Wong
1999-01-01
We are entering a new era of research where the latest scientific discoveries are often first reported online and are readily accessible by scientists worldwide. This rapid electronic dissemination of research breakthroughs has greatly accelerated the current pace of genomics and proteomics research. The race to the discovery of a gene or a drug has become increasingly dependent on how quickly a scientist can scan through the voluminous amount of information available online to construct the relevant picture (such as protein-protein interaction pathways) as it takes shape amongst the rapidly expanding pool of globally accessible biological data (e.g. GENBANK) and scientific literature (e.g. MEDLINE). We describe a prototype system for automatic pathway discovery from on-line text abstracts, combining technologies that (1) retrieve research abstracts from online sources, (2) extract relevant information from the free texts, and (3) present the extracted information graphically and intuitively. Our work demonstrates that this framework allows us to routinely scan online scientific literature for automatic discovery of knowledge, giving modern scientists the necessary competitive edge in managing the information explosion in this electronic age.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
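As a concrete illustration of the first stage, the sketch below clusters multispectral pixels with k-means and flags the cluster whose center is nearest an assumed road spectral signature; the nearest-center rule is a crude stand-in for the fuzzy membership functions described above, and the synthetic image, band count, and signature are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def road_cluster_mask(img, road_signature, n_clusters=4, seed=0):
    """Cluster multispectral pixels, then flag the cluster whose center is
    closest to an assumed road spectral signature.

    img: (H, W, B) array of B spectral bands; road_signature: (B,) array.
    """
    h, w, b = img.shape
    pixels = img.reshape(-1, b).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
    # Nearest cluster center to the road signature stands in for the
    # fuzzy membership-function classification used in the paper.
    dists = np.linalg.norm(km.cluster_centers_ - road_signature, axis=1)
    road_label = int(np.argmin(dists))
    return (km.labels_ == road_label).reshape(h, w)

# Synthetic 4-band image with a bright 'road' stripe down the middle.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64, 4))
img[:, 30:34, :] = rng.normal(0.7, 0.05, (64, 4, 4))
mask = road_cluster_mask(img, road_signature=np.full(4, 0.7))
print(mask[:, 30:34].mean(), mask[:, :30].mean())   # ~1.0 vs ~0.0
```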
An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts
Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S.; Mangin, Jean-Francois; Seong, Joon-Kyung
2015-01-01
We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively. PMID:26225419
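The voting scheme can be pictured in miniature: each atlas measures the distance from a query tract to its bundle models and votes for the nearest one, and the majority label wins. The bundle names, distances, and plain-majority rule below are illustrative assumptions, not the paper's exact metric or weighting.

```python
import numpy as np

def label_tract_by_voting(tract_dists, atlas_labels):
    """Multi-atlas vote for one WM tract (illustrative, not the paper's code).

    tract_dists: list over atlases of (n_bundles,) distances from the tract
    to each bundle model in that atlas; atlas_labels: bundle names.
    Each atlas votes for its nearest bundle; the majority label wins.
    """
    votes = {}
    for dists in tract_dists:
        bundle = atlas_labels[int(np.argmin(dists))]
        votes[bundle] = votes.get(bundle, 0) + 1
    return max(votes, key=votes.get)

labels = ["arcuate", "cst", "ifo"]
# Three atlases; two of them find 'cst' closest.
dists = [np.array([0.9, 0.2, 0.7]),
         np.array([0.4, 0.1, 0.8]),
         np.array([0.3, 0.6, 0.5])]
print(label_tract_by_voting(dists, labels))   # 'cst'
```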
Selected Publications in Image Understanding and Computer Vision from 1974 to 1983
1985-04-18
12, 1980, 407-425. G.4. Three-Dimensional Analysis 654. T. Kanade, A theory of origami world, AI 13, 1980, 279-311. 655. R. M. Haralick, Using...the origami world, in [61, 454-456. 462. K. Sugihara, Automatic construction of junction dictionaries and their exploitation for the analysis of range
Hom, Kristin A; Woods, Stephanie J
2013-02-01
Commercial sexual exploitation of women and girls through forced prostitution and sex-trafficking is a human rights and public health issue, with survivors facing complex mental health problems from trauma and violence. An international and domestic problem, sex-trafficking recruits its victims at an average age of between 11 and 14 years old. Given its secrecy and brutality, such exploitation remains difficult to study, which results in a lack of knowledge related to trauma and how best to develop specific services that effectively engage and meet the unique needs of survivors. This qualitative research, using thematic analysis, explored the stories of trauma and its aftermath for commercially sexually exploited women as told by front-line service providers. Three themes emerged regarding the experience of sex-trafficking and its outcomes (Pimp Enculturation, Aftermath, and Healing the Wound), along with seven subthemes. These have important implications for all service and healthcare providers.
Hu, Zhi-yu; Zhang, Lei; Ma, Wei-guang; Yan, Xiao-juan; Li, Zhi-xin; Zhang, Yong-zhi; Wang, Le; Dong, Lei; Yin, Wang-bao; Jia, Suo-tang
2012-03-01
Self-designed software for identifying LIBS spectral lines is introduced. Integrated with LabVIEW, the software can smooth spectra and pick peaks, employing second-difference and threshold methods. Characteristic spectra of several elements are matched against the NIST database, realizing automatic spectral line identification and qualitative analysis of the sample's basic composition. This software can analyze spectra handily and rapidly and will be a useful tool for LIBS.
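A minimal version of the smoothing-plus-second-difference peak picker might look as follows; the window length, threshold rule, and synthetic spectrum are assumptions for illustration (the actual software is LabVIEW-based), and a real system would then match the detected wavelengths against the NIST line database.

```python
import numpy as np

def pick_peaks(wavelengths, intensity, smooth=5, z_thresh=6.0):
    """Second-difference peak picking on a smoothed spectrum (illustrative).

    A moving-average smooth suppresses noise; sample i is a peak candidate
    when the discrete second difference is strongly negative (sharp
    curvature) and the sample is a local maximum.
    """
    kernel = np.ones(smooth) / smooth
    y = np.convolve(intensity, kernel, mode="same")       # smoothing
    d2 = np.diff(y, 2)                                    # second difference
    noise = np.median(np.abs(y - np.median(y)))           # robust scale
    idx = [i + 1 for i in range(len(d2))
           if d2[i] < -z_thresh * noise                   # sharp curvature
           and y[i + 1] > y[i] and y[i + 1] >= y[i + 2]]  # local maximum
    return [(wavelengths[i], y[i]) for i in idx]

# Synthetic spectrum: two Gaussian lines on a noisy baseline.
wl = np.linspace(200.0, 800.0, 3000)
spec = (np.exp(-0.5 * ((wl - 393.4) / 0.5) ** 2) * 50
        + np.exp(-0.5 * ((wl - 656.3) / 0.5) ** 2) * 30
        + np.random.default_rng(0).normal(0, 0.5, wl.size))
for line_wl, amp in pick_peaks(wl, spec):
    print(f"peak near {line_wl:.1f} nm, amplitude {amp:.1f}")
```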
Automatic target alignment of the Helios laser system
NASA Astrophysics Data System (ADS)
Liberman, I.; Viswanathan, V. K.; Klein, M.; Seery, B. D.
1980-05-01
An automatic target-alignment technique for the Helios laser facility is reported and verified experimentally. The desired alignment condition is completely described by an autocollimation test. A computer program examines the autocollimated return pattern from the surrogate target and correctly describes any changes required in mirror orientation to yield optimum target alignment with either aberrated or misaligned beams. Automated on-line target alignment is thus shown to be feasible.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval is the task of returning a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm to leverage the multimodal contents of social images for landmark retrieval, which integrates feature refinement and a landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a semantic consistency measure between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach over existing methods.
1996-01-01
INTENSIFICATION (AI2) ATD AERIAL SCOUT SENSORS INTEGRATION (ASSI) BISTATIC RADAR FOR WEAPONS LOCATION (BRWL) ATD CLOSE IN MAN PORTABLE MINE DETECTOR (CIMMD...MS IV PE & LINE #: 1X428010.D107 HI Operations/Support DESCRIPTION: The AN/TTC-39A Circuit Switch is a 744 line mobile, automatic...SYNOPSIS: AN/TTC-39 IS A MOBILE, AUTOMATIC, MODULAR ELECTRONIC CIRCUIT SWITCH UNDER PROCESSOR CONTROL WITH INTEGRAL COMSEC AND MULTIPLEX EQUIPMENT. AN/TTC
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user-friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
A Novel Method for Automation of 3D Hydro Break Line Generation from LiDAR Data Using MATLAB
NASA Astrophysics Data System (ADS)
Toscano, G. J.; Gopalam, U.; Devarajan, V.
2013-08-01
Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, and DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams, etc.) with hydro break lines. Manual hydro break line generation is time-consuming and expensive. Accuracy and processing time depend on the number of vertices marked for delineation of break lines. Automation with minimal human intervention is desired for this operation. This paper proposes using a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data, since the spectral reflectance of water bodies is very small compared with that of land and vegetation in the near-infrared wavelength range. Detection of water bodies using LiDAR intensity data was likewise verified by checking against LiDAR elevation data. False detections were removed using morphological operations, and 3D break lines were generated. Finally, a comparison of automatically generated break lines with their semi-automated/manual counterparts was performed to assess the accuracy of the proposed method, and the results are discussed.
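The elevation/intensity cross-check can be sketched with two histogram thresholds: water tends to form a narrow low-elevation mode and to return very low near-infrared intensity, so each raster is split at its histogram valley and the masks are intersected. The Otsu-style threshold and synthetic rasters below are illustrative assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

def water_mask(elevation, intensity, n_bins=256):
    """Histogram-based water detection from LiDAR rasters (illustrative)."""
    def otsu(x):
        # Threshold at the maximum between-class variance of the histogram.
        hist, edges = np.histogram(x[np.isfinite(x)], bins=n_bins)
        p = hist / hist.sum()
        w = np.cumsum(p)                      # class-0 weight
        mu = np.cumsum(p * edges[:-1])        # class-0 weighted mean
        mu_t = mu[-1]
        between = (mu_t * w - mu) ** 2 / (w * (1 - w) + 1e-12)
        return edges[np.argmax(between)]

    low_elev = elevation < otsu(elevation)    # narrow low-elevation mode
    low_int = intensity < otsu(intensity)     # weak near-IR return
    return low_elev & low_int                 # each channel verifies the other

# Synthetic scene: a flat, dark 'lake' inside brighter, higher terrain.
rng = np.random.default_rng(1)
elev = rng.normal(120.0, 3.0, (100, 100))
inten = rng.normal(0.6, 0.1, (100, 100))
elev[40:70, 20:60] = rng.normal(100.0, 0.05, (30, 40))
inten[40:70, 20:60] = rng.normal(0.05, 0.02, (30, 40))
mask = water_mask(elev, inten)
print(mask[50, 30], mask[10, 10])             # True False
```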
Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin
2008-09-01
We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including the difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and usually are not enough to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Notice that our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.
Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David
2017-12-19
The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through reads analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance and comparing the results of several samples. MICRA is fast, producing few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.
Automatic 3D power line reconstruction of multi-angular imaging power line inspection system
NASA Astrophysics Data System (ADS)
Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei
2007-06-01
We develop a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between the high-voltage power line and surrounding objects, and to issue an alert if the warning threshold is exceeded. Our multi-angular imaging power line inspection system generates a DSM of the power line corridor, which comprises the ground surface and ground objects, for example trees and houses. To reveal the dangerous regions, where ground objects are too close to the power line, 3D power line information must be extracted at the same time. In order to improve the automation level of extraction and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. It is achieved by using the epipolar constraint and prior knowledge of the pole towers' height. After that, the proper 3D power line information can be obtained by space intersection using the identified homologous projections. The flight experiment result shows that the proposed method can successfully reconstruct the 3D power line, and the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
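Once homologous projections of a power-line point have been found via the epipolar constraint, the space-intersection step is standard two-view triangulation. The sketch below uses the linear (DLT) formulation with toy cameras; the matrices and the example point are assumptions for illustration, not the system's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates of
    the same power-line point in the two images (the homologous
    projections). The solution is the null vector of the stacked system.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                         # homogeneous -> Euclidean

# Two toy cameras 10 m apart along X, both looking down the Z axis.
K = np.array([[1000.0, 0, 500], [0, 1000.0, 500], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])
X_true = np.array([3.0, -2.0, 40.0])            # a point on the power line

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
```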
Exploiting the systematic review protocol for classification of medical abstracts.
Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter
2011-01-01
To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%. The per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
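To make the per-question idea concrete, the sketch below trains one complement naive Bayes classifier per screening question with scikit-learn and combines them with a recall-oriented OR rule (an abstract is kept if any question classifier includes it). The toy abstracts, questions, labels, and combination rule are assumptions for illustration, not the review's data or protocol, which tested four combination schemes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB

abstracts = [
    "home care services for elderly patients improved outcomes",
    "randomized trial of dissemination strategies in primary care",
    "molecular dynamics of protein folding in silico",
    "survey of hospital discharge planning for older adults",
]
# One label vector per screening question (toy stand-in for the protocol).
labels_per_question = {
    "about_elderly":       [1, 0, 0, 1],
    "about_dissemination": [0, 1, 0, 1],
}

vec = TfidfVectorizer().fit(abstracts)
X = vec.transform(abstracts)
clfs = {q: ComplementNB().fit(X, y) for q, y in labels_per_question.items()}

def include(abstract):
    """Recall-oriented combination: keep if any question classifier fires."""
    x = vec.transform([abstract])
    return any(clf.predict(x)[0] == 1 for clf in clfs.values())

print(include("community dissemination of care services for the elderly"))
# Expected: True for this toy setup.
```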
User-guided segmentation for volumetric retinal optical coherence tomography images
Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.
2014-01-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
3D Central Line Extraction of Fossil Oyster Shells
NASA Astrophysics Data System (ADS)
Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.
2016-06-01
Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, a high-resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as input to automatically extract fossil length information via central lines. In general, central lines are widely used in the geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of the longest central line to the shell boundary and smoothing by adjustment of a cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest-path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination in both paleontological and biological applications. Our investigation evaluates the ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed sufficient for the selected paleontological application, namely shell age determination.
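Steps i)-iii) can be compressed into a small script: a Voronoi diagram of the boundary points, a graph over the Voronoi vertices that fall inside the outline, and a longest-shortest-path search by two Dijkstra sweeps (exact on trees, a good approximation here). The elliptical outline stands in for a shell boundary; everything below is an illustrative sketch, not the authors' implementation (which also extends and spline-smooths the line).

```python
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi
from matplotlib.path import Path

def central_line(boundary):
    """Approximate central line of a closed 2D outline (illustrative)."""
    vor = Voronoi(boundary)
    inside = Path(boundary).contains_points(vor.vertices)
    g = nx.Graph()
    for a, b in vor.ridge_vertices:
        # Keep only finite ridges whose endpoints lie inside the outline.
        if a >= 0 and b >= 0 and inside[a] and inside[b]:
            w = float(np.linalg.norm(vor.vertices[a] - vor.vertices[b]))
            g.add_edge(a, b, weight=w)
    comp = max(nx.connected_components(g), key=len)
    sub = g.subgraph(comp)
    # Two Dijkstra sweeps: farthest node from an arbitrary start, then the
    # farthest node from that one, as in tree-diameter search.
    src = next(iter(comp))
    far1 = max(nx.single_source_dijkstra_path_length(sub, src).items(),
               key=lambda kv: kv[1])[0]
    lengths, paths = nx.single_source_dijkstra(sub, far1)
    far2 = max(lengths.items(), key=lambda kv: kv[1])[0]
    return vor.vertices[paths[far2]], lengths[far2]

# Elongated elliptical 'shell' outline; central line length ~ major axis.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
outline = np.c_[10 * np.cos(t), 2 * np.sin(t)]
pts, length = central_line(outline)
print(len(pts), round(length, 2))   # roughly 20 units long
```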
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution for supporting UltraCam images at their best in order to deliver DSM and ortho imagery.
NASA Astrophysics Data System (ADS)
Rahman, Fuad; Tarnikova, Yuliya; Hartono, Rachmat; Alam, Hassan
2006-01-01
This paper presents a novel automatic web publishing solution, PageView (R). PageView (R) is a complete working solution for document processing and management. The principal aim of this tool is to allow workgroups to share, access and publish documents on-line on a regular basis. Consider, for example, a person working on some documents. The user will, in some fashion, organize his work either in his own local directory or on a shared network drive. Now extend that concept to a workgroup. Within a workgroup, some users are working together on some documents, and they are saving them in a directory structure somewhere on a document repository. The next stage of this reasoning is that a workgroup is working on some documents and wants to publish them routinely on-line. It may happen that the members are using different editing tools, different software, and different graphics tools. The resultant documents may be in PDF, Microsoft Office (R), HTML, or WordPerfect format, just to name a few. In general, this process requires the documents to be converted to HTML, after which a web designer needs to work on that collection to make it available on-line. PageView (R) takes care of this whole process automatically, making the document workflow clean and easy to follow. The PageView (R) Server publishes documents, complete with the directory structure, for online use. The documents are automatically converted to HTML and PDF so that users can view the content without downloading the original files or having to download browser plug-ins. Once published, other users can access the documents as if they were accessing them from their local folders. The paper describes the complete working system and discusses possible applications within document management research.
NASA Astrophysics Data System (ADS)
Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian
2001-08-01
The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians have the choice between several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately, the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can be easily quantified using echoendoscopy image sequences. That is why the idea of combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current works concerning the numerical exploitation of videoendoscopy and echoendoscopy, the following question is discussed: how can the use of the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system follows. Further discussions and perspectives conclude this first study.
The Lick-Gaertner automatic measuring system
NASA Technical Reports Server (NTRS)
Vasilevskis, S.; Popov, W. A.
1971-01-01
The Lick-Gaertner automatic equipment has been designed mainly for the measurement of stellar proper motions with reference to galaxies, and consists of two main components: the survey machine and the automatic measuring engine. The survey machine is used for initial inspection and selection of objects for subsequent measurement. Two plates, up to 17 x 17 inches each, are surveyed simultaneously by means of projection on a screen. The approximate positions of selected objects are measured by two optical screws: helical lines cut through an aluminum coating on glass cylinders. These approximate coordinates, with a precision of the order of 0.03 mm, are transmitted to a card punch by encoders connected to the cylinders.
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages
Hoffmann, Thomas J.; Laird, Nan M.
2009-01-01
The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291
An Automatic Networking and Routing Algorithm for Mesh Network in PLC System
NASA Astrophysics Data System (ADS)
Liu, Xiaosheng; Liu, Hao; Liu, Jiasheng; Xu, Dianguo
2017-05-01
Power line communication (PLC) is considered to be one of the best communication technologies for the smart grid. However, the topology of the low-voltage distribution network is complex, and the power line channel is characterized by time-varying behavior and attenuation, which degrade the reliability of power line communication. In this paper, an automatic networking and routing algorithm is introduced which can adapt to a "blind state" topology. The results of simulation and testing show that the scheme is feasible, the routing overhead is small, and the load-balancing performance is good, enabling quick and effective establishment and maintenance of the network. The scheme is of great significance for improving the reliability of PLC.
On improving IED object detection by exploiting scene geometry using stereo processing
NASA Astrophysics Data System (ADS)
van de Wouw, Dennis W. J. M.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
Detecting changes in the environment with respect to an earlier data acquisition is important for several applications, such as finding Improvised Explosive Devices (IEDs). We explore and evaluate the benefit of depth sensing in the context of automatic change detection, where an existing monocular system is extended with a second camera in a fixed stereo setup. We then propose an alternative frame registration that exploits scene geometry, in particular the ground plane. Furthermore, change characterization is applied to localized depth maps to distinguish between 3D physical changes and shadows, which solves one of the main challenges of a monocular system. The proposed system is evaluated on real-world acquisitions, containing geo-tagged test objects of 18 × 18 × 9 cm up to a distance of 60 meters. The proposed extensions lead to a significant reduction of the false-alarm rate by a factor of 3, while simultaneously improving the detection score by 5%.
Human Activity Recognition in AAL Environments Using Random Projections.
Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin
2016-01-01
Automatic human activity recognition systems aim to capture the state of the user and its environment by exploiting heterogeneous sensors attached to the subject's body and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose the method for activity recognition and subject identification based on random projections from high-dimensional feature space to low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of projected data. Two HAR domain tasks are considered: activity identification and subject identification. The experimental results using the proposed method with Human Activity Dataset (HAD) data are presented.
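The projection-plus-Jaccard pipeline fits in a short script: project feature windows to a low dimension, build per-class histograms as discrete density estimates, and classify a new window by the smallest Jaccard distance, computed as one minus the ratio of the bin-wise minima to the bin-wise maxima. The toy data, single projection dimension, and binning are illustrative assumptions, not the paper's HAD setup.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

def jaccard_distance(p, q):
    """Jaccard distance between two discretized probability densities."""
    return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

def class_histogram(z, bins):
    h, _ = np.histogram(z, bins=bins)
    return h / max(h.sum(), 1)

# Toy HAR-like data: two 'activities' whose 50-D feature windows differ
# in spread (calm vs. vigorous movement).
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (200, 50))      # activity 0: calm signal
X1 = rng.normal(0.0, 3.0, (200, 50))      # activity 1: vigorous signal

proj = GaussianRandomProjection(n_components=1, random_state=0).fit(X0)
bins = np.linspace(-80.0, 80.0, 81)
models = {0: class_histogram(proj.transform(X0).ravel(), bins),
          1: class_histogram(proj.transform(X1).ravel(), bins)}

# Classify a new window by the nearest class density under Jaccard distance.
x_new = rng.normal(0.0, 3.0, (30, 50))    # unlabeled 'vigorous' window
h_new = class_histogram(proj.transform(x_new).ravel(), bins)
print(min(models, key=lambda c: jaccard_distance(models[c], h_new)))  # 1
```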
Automatic Dictionary Expansion Using Non-parallel Corpora
NASA Astrophysics Data System (ADS)
Rapp, Reinhard; Zock, Michael
Automatically generating bilingual dictionaries from parallel, manually translated texts is a well established technique that works well in practice. However, parallel texts are a scarce resource. Therefore, it is desirable also to be able to generate dictionaries from pairs of comparable monolingual corpora. For most languages, such corpora are much easier to acquire, and often in considerably larger quantities. In this paper we present the implementation of an algorithm which exploits such corpora with good success. Based on the assumption that the co-occurrence patterns between different languages are related, it expands a small base lexicon. For improved performance, it also realizes a novel interlingua approach. That is, if corpora of more than two languages are available, the translations from one language to another can be determined not only directly, but also indirectly via a pivot language.
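The co-occurrence assumption can be demonstrated with a toy pair of "comparable corpora": represent each word by its co-occurrence vector over the seed lexicon (whose entries align the two spaces) and translate by nearest cosine neighbor. The vocabularies and counts below are invented for illustration; the actual algorithm additionally iterates to expand the lexicon and adds the interlingua step.

```python
import numpy as np

seed_pairs = ["water", "house", "red"]   # aligned seed-lexicon dimensions

src_vecs = {   # co-occurrence of English words with the seed words
    "river": np.array([0.9, 0.1, 0.0]),
    "roof":  np.array([0.1, 0.8, 0.1]),
}
tgt_vecs = {   # co-occurrence of German words with the seed translations
    "Fluss": np.array([0.85, 0.15, 0.05]),
    "Dach":  np.array([0.05, 0.9, 0.2]),
    "Wein":  np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def translate(word):
    """Pick the target word with the most similar co-occurrence pattern."""
    v = src_vecs[word]
    return max(tgt_vecs, key=lambda w: cosine(v, tgt_vecs[w]))

for w in src_vecs:
    print(w, "->", translate(w))   # river -> Fluss, roof -> Dach
```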
On the Automaticity of the Evaluative Priming Effect in the Valent/Non-Valent Categorization Task
Spruyt, Adriaan; Tibboel, Helen
2015-01-01
It has previously been argued (a) that automatic evaluative stimulus processing is critically dependent upon feature-specific attention allocation and (b) that evaluative priming effects can arise in the absence of dimensional overlap between the prime set and the response set. In line with both claims, research conducted at our lab revealed that the evaluative priming effect replicates in the valent/non-valent categorization task. This research was criticized, however, because non-automatic, strategic processes may have contributed to the emergence of this effect. We now report the results of a replication study in which the operation of non-automatic, strategic processes was controlled for. A clear-cut evaluative priming effect emerged, thus supporting initial claims concerning feature-specific attention allocation and dimensional overlap. PMID:25803444
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.
2010-07-31
Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner on the ringdown data. Thus, mode estimation can be performed reliably and promptly. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis.
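Classic batch Prony analysis is compact enough to sketch: fit a linear-prediction model to the ringdown samples, take the roots of its characteristic polynomial as discrete-time poles, and convert them to damping and frequency. The sketch below is the textbook batch form on a synthetic single-mode ringdown, not the paper's recursive formulation; the model order is an assumption (in practice it would exceed twice the expected number of modes).

```python
import numpy as np

def prony_modes(y, dt, order=2):
    """Classic Prony analysis of a ringdown signal (illustrative sketch).

    Fits y[k] = a1*y[k-1] + ... + a_p*y[k-p] by least squares, then maps
    the roots of the characteristic polynomial to continuous-time poles.
    """
    n = len(y)
    A = np.column_stack([y[order - i - 1:n - i - 1] for i in range(order)])
    a, *_ = np.linalg.lstsq(A, y[order:], rcond=None)
    roots = np.roots(np.r_[1.0, -a])          # discrete-time poles
    s = np.log(roots.astype(complex)) / dt    # continuous-time poles
    return s.real, s.imag / (2 * np.pi)       # damping [1/s], frequency [Hz]

# Synthetic ringdown: a 0.3 Hz electromechanical mode, lightly damped.
dt = 0.05
t = np.arange(0, 20, dt)
y = np.exp(-0.12 * t) * np.cos(2 * np.pi * 0.3 * t)
damping, freq = prony_modes(y, dt)
for d, f in zip(damping, freq):
    if f > 0:
        print(f"mode: {f:.3f} Hz, damping {d:.3f} 1/s")   # ~0.300 Hz, -0.120
```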
NASA Astrophysics Data System (ADS)
Tuohy, Eimear; Clerc, Sebastien; Politi, Eirini; Mangin, Antoine; Datcu, Mihai; Vignudelli, Stefano; Illuzzi, Diomede; Craciunescu, Vasile; Aspetsberger, Michael
2017-04-01
The Coastal Thematic Exploitation Platform (C-TEP) is an ongoing European Space Agency (ESA) funded project to develop a web service dedicated to the observation of the coastal environment and to support coastal management and monitoring. For over 20 years ESA satellites have provided a wealth of environmental data. The availability of an ever-increasing volume of environmental data from satellite remote sensing provides a unique opportunity for exploratory science and the development of coastal applications. However, the diversity and complexity of the EO data available, and the need for efficient data access, information extraction, data management and high-spec processing tools, pose major challenges to achieving its full potential in terms of Big Data exploitation. C-TEP will provide a new means to handle the technical challenges of the observation of coastal areas and contribute to improved understanding and decision-making with respect to coastal resources and environments. C-TEP will unlock coastal knowledge and innovation as a collaborative, virtual work environment providing access to a comprehensive database of coastal Earth Observation (EO) data, in-situ data, model data and the tools and processors necessary to fully exploit these vast and heterogeneous datasets. The cloud processing capabilities provided allow users to perform heavy processing tasks through a user-friendly Graphical User Interface (GUI). A connection to the PEPS (Plateforme pour l'Exploitation des Produits Sentinel) archive will provide data from Sentinel missions 1, 2 and 3. Automatic comparison tools will be provided to exploit the in-situ datasets in synergy with EO data. In addition, users may develop, test and share their own advanced algorithms for the extraction of coastal information. Algorithm validation will be facilitated by the capability to compute statistics over long time series. Finally, C-TEP subscription services will allow users to perform automatic monitoring of some key indicators (water quality, water level, vegetation stress) from Near Real Time data. To demonstrate the benefits of C-TEP, three pilot cases have been implemented, each addressing specific and highly topical coastal research needs. These applications include change detection in land and seabed cover, water quality monitoring and reporting, and a coastal altimetry processor. The pilot cases demonstrate the wide scope of C-TEP and how it may contribute to European projects and international coastal networks. In conclusion, C-TEP aims to provide new services and tools which will revolutionise accessibility to EO datasets, supporting multi-disciplinary research collaboration and the provision of long-term data series and innovative services for the monitoring of coastal regions.
Automated exploitation of sky polarization imagery.
Sadjadi, Firooz A; Chun, Cornell S L
2018-03-10
We propose an automated method for detecting neutral points in the sunlit sky. Until now, detecting these singularities has been done manually. Results are presented that document the application of this method on a limited number of polarimetric images of the sky captured with a camera and rotating polarizer. The results are significant because a method for automatically detecting the neutral points may aid in the determination of the solar position when the sun is obscured and may have applications in meteorology and pollution detection and characterization.
Video enhancement workbench: an operational real-time video image processing system
NASA Astrophysics Data System (ADS)
Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.
1993-01-01
Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
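A bare-bones unsharp mask illustrates the idea: subtract a Gaussian blur to isolate local detail, then add a scaled copy back; clipping the detail term is one simple way to "regulate" the contrast boost so strongly contrasted regions are not overdriven. The parameters and synthetic frame are illustrative assumptions, not the workbench's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=3.0, amount=1.5, limit=0.25):
    """Contrast-regulated unsharp masking (illustrative sketch)."""
    img = img.astype(float)
    blur = gaussian_filter(img, sigma)            # local background
    detail = np.clip(img - blur, -limit, limit)   # regulate the boost
    return np.clip(img + amount * detail, 0.0, 1.0)

# Synthetic frame: a dim object on a noisy background.
rng = np.random.default_rng(0)
frame = rng.normal(0.4, 0.02, (128, 128))
frame[60:68, 60:68] += 0.05                       # low-contrast object
out = unsharp_mask(frame)
gain = out[60:68, 60:68].mean() - out.mean()
base = frame[60:68, 60:68].mean() - frame.mean()
print(gain > base)                                # contrast increased: True
```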
Real-Time Reconnaissance-A Systems Look At Advanced Technology
NASA Astrophysics Data System (ADS)
Lapp, Henry
1981-12-01
An important role for reconnaissance is the location and identification of targets in real time. Current technology has been compartmented into sensors, automatic target recognizers, data links, ground exploitation and, finally, dissemination. In the days of bring-home film recce, this segmentation of functions was appropriate. With the current emphasis on real-time decision making from the outputs of high-resolution sensors, this thinking has to be re-analyzed. A total systems approach to data management must be employed, using the constraints imposed by technology as well as the atmosphere, survivable flight profiles, and the human workload. This paper analyzes the target acquisition through exploitation tasks and discusses the current advanced development technologies that are applicable. A philosophy of processing data to get information as early as possible in the data handling chain is examined in the context of ground exploitation and dissemination needs. Examples of how the various real-time sensors (screeners and processors), jam-resistant data links and near-real-time ground data handling systems fit into this scenario are discussed. Specific DoD programs are used to illustrate the credibility of this integrated approach.
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano
Few automated data acquisition and processing systems operate on mainframes; some run on UNIX-based workstations and others on personal computers equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and their real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station-analysis), for signal detection, phase grouping and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the analog telemetry seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network. For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of the S-waves. The on-line application to the latter data set shows that the automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.
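The first stage of such automatic processing is usually a detector of the STA/LTA type, sketched below on a synthetic trace; this generic single-station detector is an illustrative stand-in, since the abstract describes MSA as a more elaborate multi-station procedure.

```python
import numpy as np

def sta_lta_trigger(x, fs, sta_win=1.0, lta_win=30.0, on=3.5):
    """Classic STA/LTA event detector (illustrative).

    Flags samples where the short-term average of signal energy exceeds
    the long-term average by a threshold ratio, a common first stage for
    automatic P-phase detection.
    """
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    e = x.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[ns:] - csum[:-ns]) / ns            # short-term average
    lta = (csum[nl:] - csum[:-nl]) / nl            # long-term average
    n = len(lta)                                   # aligned sample count
    ratio = sta[nl - ns:nl - ns + n] / (lta[:n] + 1e-12)
    return np.flatnonzero(ratio > on) + nl         # absolute sample indices

# Synthetic trace: background noise with an 'event' arriving at t = 60 s.
fs = 100.0
rng = np.random.default_rng(2)
x = rng.normal(0, 1.0, int(120 * fs))
onset = int(60 * fs)
x[onset:onset + 800] += 8.0 * rng.normal(0, 1.0, 800)
trig = sta_lta_trigger(x, fs)
print(trig[0] / fs if trig.size else "no trigger")  # ~60 s
```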
Text-line extraction in handwritten Chinese documents based on an energy minimization framework.
Koo, Hyung Il; Cho, Nam Ik
2012-03-01
Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.
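The paper's actual cost function is not reproduced in the abstract; the toy sketch below only illustrates its two ingredients, a curvilinearity term per line and an interaction term between adjacent lines, both normalized by an estimated line spacing. The grouping of components into candidate lines and the optimizer are assumed to exist elsewhere:

```python
import numpy as np

def line_cost(lines, spacing):
    """Toy text-line cost: `lines` is a list of (N, 2) arrays of component
    centroids, ordered top to bottom; `spacing` is the estimated line
    spacing used for normalization."""
    cost = 0.0
    for pts in lines:
        # curvilinearity: residual of a straight-line least-squares fit
        A = np.c_[pts[:, 0], np.ones(len(pts))]
        resid = pts[:, 1] - A @ np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]
        cost += np.mean(np.abs(resid)) / spacing
    for a, b in zip(lines, lines[1:]):
        # interaction: penalize vertical overlap between adjacent lines
        cost += max(0.0, a[:, 1].max() - b[:, 1].min()) / spacing
    return cost
```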
ERIC Educational Resources Information Center
Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana
2007-01-01
This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…
Shining a New Light on Silicon PV Manufacturing - Continuum Magazine
Code of Federal Regulations, 2012 CFR
2012-07-01
...-line engines fails to meet emission standards? 1045.320 Section 1045.320 Protection of Environment... production-line engines fails to meet emission standards? (a) If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2010 CFR
2010-07-01
...-line engines fails to meet emission standards? 1045.320 Section 1045.320 Protection of Environment... production-line engines fails to meet emission standards? (a) If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2012 CFR
2012-07-01
...-line engines fails to meet emission standards? 1048.320 Section 1048.320 Protection of Environment...-line engines fails to meet emission standards? If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-line engines fails to meet emission standards? 1054.320 Section 1054.320 Protection of Environment... production-line engines fails to meet emission standards? (a) If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-line engines fails to meet emission standards? 1045.320 Section 1045.320 Protection of Environment... production-line engines fails to meet emission standards? (a) If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2010 CFR
2010-07-01
...-line engines fails to meet emission standards? 1048.320 Section 1048.320 Protection of Environment...-line engines fails to meet emission standards? If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-line engines fails to meet emission standards? 1048.320 Section 1048.320 Protection of Environment...-line engines fails to meet emission standards? If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
Code of Federal Regulations, 2012 CFR
2012-07-01
...-line engines fails to meet emission standards? 1054.320 Section 1054.320 Protection of Environment... production-line engines fails to meet emission standards? (a) If you have a production-line engine with final... conformity is automatically suspended for that failing engine. You must take the following actions before...
NASA Astrophysics Data System (ADS)
Kornilin, Dmitriy V.; Kudryavtsev, Ilya A.; McMillan, Alison J.; Osanlou, Ardeshir; Ratcliffe, Ian
2017-06-01
Modern hydraulic systems should be monitored on a regular basis. One of the most effective ways to address this task is to use in-line automatic particle counters (APCs) built into the system. Measuring the particle concentration in the hydraulic liquid with an APC is crucial because an increasing number of particles may indicate functional problems. Existing automatic particle counters have significant limitations: they cannot precisely measure the relatively low particle concentrations of aerospace systems, or they are unable to measure the higher concentrations of industrial ones. Both issues can be addressed by implementing a CMOS image sensor instead of the single photodiode used in most APCs. The CMOS image sensor helps to overcome errors in volume measurement caused by the unequal particle speeds inside the tube. The correction is based on determining the particle position and a parabolic velocity distribution profile. The proposed algorithms are also suitable for reducing errors related to particle coincidences in the measurement volume. Simulation results show that the accuracy increased by up to 90 per cent and the resolution improved tenfold compared to the single-photodiode sensor.
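A minimal sketch of the velocity-correction idea, assuming fully developed laminar (Poiseuille) flow in the sensing tube; the function names and the inverse-swept-volume weighting are illustrative choices, not the authors' implementation:

```python
import numpy as np

def local_velocity(r, R, v_mean):
    """Poiseuille profile assumed in the tube: v(r) = 2*v_mean*(1 - (r/R)^2)."""
    return 2.0 * v_mean * (1.0 - (np.asarray(r) / R) ** 2)

def concentration(radii, R, v_mean, area, t_exp):
    """Weight each detected particle by the volume that actually swept past
    the imager at its radial position during the exposure time t_exp."""
    swept = local_velocity(radii, R, v_mean) * area * t_exp
    return np.sum(1.0 / swept)  # particles per unit volume
```

Particles near the tube wall move slowly, so naively assuming the mean velocity over-counts them; the per-particle weighting above corrects for that.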
Unification of automatic target tracking and automatic target recognition
NASA Astrophysics Data System (ADS)
Schachter, Bruce J.
2014-06-01
The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.
Automatic microscopy for mitotic cell location.
NASA Technical Reports Server (NTRS)
Herron, J.; Ranshaw, R.; Castle, J.; Wald, N.
1972-01-01
Advances are reported in the development of an automatic microscope with which to locate hematologic or other cells in mitosis for subsequent chromosome analysis. The system under development is designed to perform the functions of: slide scanning to locate metaphase cells; conversion of images of selected cells into binary form; and on-line computer analysis of the digitized image for significant cytogenetic data. Cell detection criteria are evaluated using a test sample of 100 mitotic cells and 100 artifacts.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for the updating of civil aviation and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. Corresponding key technologies, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, import of typical CAD models, and so on, are discussed in detail. Finally, based on these technologies, we develop a prototype system, and the results show that our method achieves good results.
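The abstract names a modified Otsu algorithm without detailing the modification; for reference, the standard Otsu threshold it presumably starts from can be written compactly as:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the grey level that maximizes the between-class
    variance of the 8-bit histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    w0 = np.cumsum(p)                  # class-0 probability up to each level
    m = np.cumsum(p * levels)          # cumulative mean
    mg = m[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mg * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return int(np.nanargmax(sigma_b))  # threshold with maximal separation
```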
Automatic Clustering of Rolling Element Bearings Defects with Artificial Neural Network
NASA Astrophysics Data System (ADS)
Antonini, M.; Faglia, R.; Pedersoli, M.; Tiboni, M.
2006-06-01
The paper presents the optimization of a methodology for automatic clustering based on Artificial Neural Networks to detect the presence of defects in rolling bearings. The research activity was developed in co-operation with an Italian company which is expert in the production of water pumps for automotive use (Industrie Saleri Italo). The final goal of the work is to develop a system for the automatic control of the pumps at the end of the production line. From this viewpoint, we are gradually considering the main elements of the water pump which can cause malfunctioning. The first elements we have considered are the rolling bearings, very critical components for the system. The experimental activity is based on vibration measurements of deliberately damaged rolling bearings; the vibration signals are processed in a second phase; the third and final phase is automatic clustering. Different signal processing techniques are compared to optimize the methodology.
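As a hedged illustration of the last two phases (signal elaboration, then clustering), the sketch below uses common bearing-vibration features and k-means on synthetic data; the feature set and cluster count are assumptions, and the paper itself clusters with artificial neural networks rather than k-means:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.cluster import KMeans

def vibration_features(frames):
    """RMS, kurtosis and crest factor per vibration frame, a common
    feature set for bearing-defect signals."""
    frames = np.asarray(frames)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    crest = np.max(np.abs(frames), axis=1) / rms
    return np.c_[rms, kurtosis(frames, axis=1), crest]

rng = np.random.default_rng(0)
frames = rng.normal(size=(120, 2048))          # stand-in vibration records
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    vibration_features(frames))                # healthy vs. defect clusters
```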
Furukawa, Makoto; Takagai, Yoshitaka
2016-10-04
Online solid-phase extraction (SPE) coupled with inductively coupled plasma mass spectrometry (ICPMS) is a useful tool for automatic sequential analysis. However, it cannot simultaneously quantify the analytical targets and their recovery percentages (R%) in one-shot samples. We propose a system that simultaneously acquires both data in a single sample injection. The main flow line of the online solid-phase extraction is divided into main and split flows. The split flow line (i.e., bypass line), which circumvents the SPE column, was placed on the main flow line. Under program-controlled switching of the automatic valve, the ICPMS sequentially measures the targets in a sample before and after column preconcentration and determines the target concentrations and the R% on the SPE column. This paper describes the system development and two demonstrations exhibiting the analytical significance, i.e., ultratrace amounts of radioactive strontium (90Sr) using a commercial Sr-trap resin and multielement adsorbability on the SPE column. This system is applicable to other flow analyses and detectors in online solid-phase extraction.
High-Temperature-Superconductor Films In Microwave Circuits
NASA Technical Reports Server (NTRS)
Bhasin, K. B.; Warner, J. D.; Romanofsky, R. R.; Heinen, V. O.; Chorey, C. M.
1993-01-01
Report discusses recent developments in continuing research on fabrication and characterization of thin films of high-temperature superconducting material and incorporation of such films into microwave circuits. Research motivated by prospect of exploiting superconductivity to reduce electrical losses and thereby enhancing performance of such critical microwave components as ring resonators, filters, transmission lines, phase shifters, and feed lines in phased-array antennas.
NASA Astrophysics Data System (ADS)
Fujiwara, Yukihiro; Yoshii, Masakazu; Arai, Yasuhito; Adachi, Shuichi
An advanced safety vehicle (ASV) assists the driver's maneuvers to avoid traffic accidents. A variety of research on automatic driving systems is necessary as an element of ASV. Among them, we focus on a visual feedback approach in which the automatic driving system is realized by recognizing the road trajectory using image information. The purpose of this paper is to examine the validity of this approach by experiments using a radio-controlled car. First, a practical image processing algorithm to recognize white lines on the road is proposed. Second, a model of the radio-controlled car is built by system identification experiments. Third, an automatic steering control system is designed based on H∞ control theory. Finally, the effectiveness of the designed control system is examined via traveling experiments.
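A minimal sketch of the first step, white-line recognition, using brightness thresholding plus a probabilistic Hough transform in OpenCV; all parameter values are illustrative assumptions rather than the paper's algorithm:

```python
import cv2
import numpy as np

def detect_white_lines(bgr):
    """Threshold bright pixels, then fit candidate lane borders with a
    probabilistic Hough transform (parameters are illustrative)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(mask, 50, 150)
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=30, maxLineGap=10)
```

The detected line endpoints would then feed the steering controller as the measured road trajectory.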
Roadway data representation and application development : final report, December 2009.
DOT National Transportation Integrated Search
2009-08-06
The Straight-line Diagrammer, a web-based application to produce Straight-line Diagrams (SLDs) automatically, was developed in this project to replace the old application (AutoSLD), which had an outdated structure and limited capabilities.
NASA Astrophysics Data System (ADS)
Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan
2014-08-01
With the expanding size and increasing complexity of large astronomical observatories on remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables or equipment. In this contribution we demonstrate, on the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting and the interchanging of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase, less downtime and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.
Automatic Feature Extraction System.
1982-12-01
exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Classifying EEG for Brain-Computer Interface: Learning Optimal Filters for Dynamical System Features
Song, Le; Epps, Julien
2007-01-01
Classification of multichannel EEG recordings during motor imagination has been exploited successfully for brain-computer interfaces (BCI). In this paper, we consider EEG signals as the outputs of a networked dynamical system (the cortex), and exploit synchronization features from the dynamical system for classification. Herein, we also propose a new framework for learning optimal filters automatically from the data, by employing a Fisher ratio criterion. Experimental evaluations comparing the proposed dynamical system features with the CSP and the AR features reveal their competitive performance during classification. Results also show the benefits of employing the spatial and the temporal filters optimized using the proposed learning approach. PMID:18364986
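The Fisher ratio criterion can be evaluated per scalar feature as below; the optimization of the filter weights that the paper proposes is not shown:

```python
import numpy as np

def fisher_ratio(f1, f2):
    """Fisher ratio of a scalar feature for two classes: between-class
    scatter over within-class scatter. Candidate filters are scored by
    the ratio their output features achieve."""
    m1, m2 = np.mean(f1), np.mean(f2)
    v1, v2 = np.var(f1), np.var(f2)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)
```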
Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre
2018-04-01
Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of the scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation than Euclidean normalisation.
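A sketch of the core idea, assuming output peak levels are tracked on-line: each row of the separation matrix is rescaled by a power of two, which is exact in floating point and cheap in hardware, and which the scaling indeterminacy of BSS permits:

```python
import numpy as np

def agc_scale(B, y_peak, target=1.0):
    """Rescale each row of the separation matrix B by a power of two so
    the corresponding output peak sits near `target`. BSS only recovers
    sources up to a scale factor, so this changes no separation result,
    and power-of-two factors only adjust the floating-point exponent."""
    shifts = np.floor(np.log2(target / np.maximum(y_peak, 1e-12)))
    return B * (2.0 ** shifts)[:, None]
```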
Periodic, On-Demand, and User-Specified Information Reconciliation
NASA Technical Reports Server (NTRS)
Kolano, Paul
2007-01-01
Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.
Automatic patient dose registry and clinical audit on line for mammography.
Ten, J I; Vano, E; Sánchez, R; Fernandez-Soto, J M
2015-07-01
The use of automatic registry systems for patient dose in digital mammography allows clinical audit and patient dose analysis of the whole sample of individual mammography exposures while fulfilling the requirements of the European Directives and other international recommendations. Further parameters associated with radiation exposure (tube voltage, X-ray tube output and HVL values for different kVp and target/filter combinations, breast compression, etc.) should be periodically verified and used to evaluate patient doses. This study presents an experience in routine clinical practice for mammography using automatic systems. © The Author 2015. Published by Oxford University Press. All rights reserved.
40 CFR 280.44 - Methods of release detection for piping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... accordance with the following: (a) Automatic line leak detectors. Methods which alert the operator to the... pounds per square inch line pressure within 1 hour. An annual test of the operation of the leak detector...
40 CFR 280.44 - Methods of release detection for piping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with the following: (a) Automatic line leak detectors. Methods which alert the operator to the... pounds per square inch line pressure within 1 hour. An annual test of the operation of the leak detector...
40 CFR 280.44 - Methods of release detection for piping.
Code of Federal Regulations, 2013 CFR
2013-07-01
... accordance with the following: (a) Automatic line leak detectors. Methods which alert the operator to the... pounds per square inch line pressure within 1 hour. An annual test of the operation of the leak detector...
40 CFR 280.44 - Methods of release detection for piping.
Code of Federal Regulations, 2014 CFR
2014-07-01
... accordance with the following: (a) Automatic line leak detectors. Methods which alert the operator to the... pounds per square inch line pressure within 1 hour. An annual test of the operation of the leak detector...
40 CFR 280.44 - Methods of release detection for piping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accordance with the following: (a) Automatic line leak detectors. Methods which alert the operator to the... pounds per square inch line pressure within 1 hour. An annual test of the operation of the leak detector...
CrackScope: automatic pavement cracking inspection system.
DOT National Transportation Integrated Search
2008-08-01
The CrackScope system is an automated pavement crack rating system consisting of a digital line scan camera, laser-line illuminator, and proprietary crack detection and classification software. CrackScope is able to perform real-time pavement ins...
AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM
NASA Technical Reports Server (NTRS)
Schroer, B. J.
1994-01-01
The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language, GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have resident the GPSS/PC System Version 2.0 from Minuteman Software. The AMPS/PC program was developed in 1988.
NASA Astrophysics Data System (ADS)
Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.
2012-03-01
Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.
Takamuku, Shinya; Gomi, Hiroaki
2015-01-01
How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the ‘inverse’ computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. PMID:26156766
Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images
NASA Astrophysics Data System (ADS)
Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana
2015-03-01
Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
Automated three-dimensional quantification of myocardial perfusion and brain SPECT.
Slomka, P J; Radau, P; Hurwitz, G A; Dey, D
2001-01-01
To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
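A minimal sketch of the voxel-by-voxel comparison step, assuming the patient image has already been registered to the normal template; the z-score threshold is an illustrative choice:

```python
import numpy as np

def abnormality_map(patient, normal_mean, normal_std, z_thr=3.0):
    """Voxel-by-voxel comparison against a registered normal template:
    voxels more than `z_thr` standard deviations from the normal mean
    are flagged as statistically abnormal (registration not shown)."""
    z = (patient - normal_mean) / np.maximum(normal_std, 1e-6)
    return z, np.abs(z) > z_thr
```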
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
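A hedged sketch of the Auto-Switch trigger as described, switching once the best fitness has stagnated; the window length and tolerance are assumptions:

```python
def should_switch_to_direct(best_fitness_history, window=50, eps=1e-6):
    """Auto-Switch-HybrID style trigger: switch from indirect to direct
    encoding once the best fitness has improved by less than `eps` over
    the last `window` generations (threshold values are illustrative)."""
    h = best_fitness_history
    return len(h) > window and (h[-1] - h[-1 - window]) < eps
```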
Popular song and lyrics synchronization and its application to music information retrieval
NASA Astrophysics Data System (ADS)
Chen, Kai; Gao, Sheng; Zhu, Yongwei; Sun, Qibin
2006-01-01
An automatic synchronization system for a popular song and its lyrics is presented in this paper. The system includes two main components: a) automatically detecting vocal/non-vocal segments in the audio signal and b) automatically aligning the acoustic signal of the song with its lyrics using speech recognition techniques, positioning the boundaries of the lyrics in its acoustic realization at multiple levels simultaneously (e.g., the word/syllable level and the phrase level). GMM models and a set of HMM-based acoustic model units are carefully designed and trained for the detection and alignment. To eliminate the severe mismatch due to the diversity of musical signals and the sparse training data available, an unsupervised adaptation technique, maximum likelihood linear regression (MLLR), is exploited to tailor the models to the real environment, which improves the robustness of the synchronization system. To further reduce the effect of missed non-vocal music on alignment, a novel grammar net is built to direct the alignment. To our knowledge, this is the first automatic synchronization system based only on low-level acoustic features such as MFCC. We evaluate the system on a Chinese song dataset collected from 3 popular singers. We obtain 76.1% for the boundary accuracy at the syllable level (BAS) and 81.5% for the boundary accuracy at the phrase level (BAP) using fully automatic vocal/non-vocal detection and alignment. The synchronization system has many applications, such as multi-modality (audio and textual) content-based popular song browsing and retrieval. Through this study, we would like to open up the discussion of some challenging problems in developing a robust synchronization system for a large-scale database.
Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen
2017-01-01
Simple Summary: Most prototypes of systems to automatically detect lameness in dairy cattle are still not available on the market. Estimating their potential adoption rate could support developers in defining development goals towards commercially viable and well-adopted systems. We simulated the potential market shares of such prototypes to assess the effect of altering the system cost and detection performance on the potential adoption rate. We found that system cost and lameness detection performance indeed substantially influence the potential adoption rate. In order for farmers to prefer automatic detection over current visual detection, the usefulness that farmers attach to a system with specific characteristics should be higher than that of visual detection. As such, we concluded that low system costs and high detection performances are required before automatic lameness detection systems become applicable in practice.
Abstract: Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage missed lame cows and percentage false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % fewer missed lame cows, €1.65 per % fewer false alerts, and €12.7 for lame leg indication. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate. PMID:28991188
Fedr, Radek; Pernicová, Zuzana; Slabáková, Eva; Straková, Nicol; Bouchal, Jan; Grepl, Michal; Kozubík, Alois; Souček, Karel
2013-05-01
The clonogenic assay is a well-established in vitro method for testing the survival and proliferative capability of cells. It can be used to determine the cytotoxic effects of various treatments including chemotherapeutics and ionizing radiation. However, this approach can also characterize cells with different phenotypes and biological properties, such as stem cells or cancer stem cells. In this study, we implemented a faster and more precise method for assessing the cloning efficiency of cancer stem-like cells that were characterized and separated using a high-speed cell sorter. Cell plating onto a microplate using an automatic cell deposition unit was performed in a single-cell or dilution rank mode by the fluorescence-activated cell sorting method. We tested the new automatic cell-cloning assay (ACCA) on selected cancer cell lines and compared it with the manual approach. The obtained results were also compared with the results of the limiting dilution assay for different cell lines. We applied the ACCA to analyze the cloning capacity of different subpopulations of prostate and colon cancer cells based on the expression of the characteristic markers of stem (CD44 and CD133) and cancer stem cells (TROP-2, CD49f, and CD44). Our results revealed that the novel ACCA is a straightforward approach for determining the clonogenic capacity of cancer stem-like cells identified in both cell lines and patient samples. Copyright © 2013 International Society for Advancement of Cytometry.
Six-Inch Shock Tube Characterization
2016-12-09
Figure 2 summarizes the peak levels for shots using 92A Mylar® as a membrane with a linear trend line overlaid on the data, which produced the...peak levels for shots using 500A Mylar® as a membrane with a 6th-order polynomial trend line overlaid on the data, which produced the highest R² value
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple-platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
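A rough sketch of the feature extraction and translation estimate under stated assumptions: PyWavelets for the decomposition, a simple quantile threshold standing in for the coefficient maxima, and the rotation search omitted:

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

def wavelet_feature_image(img, wavelet="db2", level=3, keep=0.05):
    """Keep only the strongest `keep` fraction of detail coefficients at
    the coarsest level, a stand-in for the 'maxima of wavelet
    coefficients' features used for correlation-based registration."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    cH, cV, cD = coeffs[1]                      # coarsest detail sub-bands
    feat = np.abs(cH) + np.abs(cV) + np.abs(cD)
    thr = np.quantile(feat, 1.0 - keep)
    return np.where(feat >= thr, feat, 0.0)

def best_shift(fa, fb):
    """Translation estimate: peak of the cross-correlation of the two
    feature images (computed via a flipped convolution)."""
    xc = fftconvolve(fa, fb[::-1, ::-1], mode="full")
    iy, ix = np.unravel_index(np.argmax(xc), xc.shape)
    return iy - fb.shape[0] + 1, ix - fb.shape[1] + 1
```

Because the correlation runs on the coarse, sparse feature images rather than the full-resolution data, the search is correspondingly cheaper, which matches the speed advantage the abstract reports.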
Extraction of sandy bedforms features through geodesic morphometry
NASA Astrophysics Data System (ADS)
Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry
2016-09-01
State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide range of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.
Automatic recloser circuit breaker integrated with GSM technology for power system notification
NASA Astrophysics Data System (ADS)
Lada, M. Y.; Khiar, M. S. A.; Ghani, S. A.; Nawawi, M. R. M.; Rahim, N. H.; Sinar, L. O. M.
2015-05-01
Lightning is one type of transient fault that usually causes the circuit breaker in the distribution board to trip due to overload current detection. The instant tripping condition in the circuit breaker clears the fault in the system. Unfortunately, most circuit breaker systems are manually operated: the power line is effectively re-energized only after the fault-clearing process is finished. Auto-reclose circuits are used on transmission lines to carry out the duty of supplying quality electrical power to customers. In this project, an automatic reclose circuit breaker for low-voltage usage is designed. The Auto Reclose Circuit Breaker (ARCB) trips if the current sensor detects a high current which exceeds the rated current of the miniature circuit breaker (MCB) used. The fault condition is then cleared automatically and the power line is returned to its normal condition. The Global System for Mobile Communication (GSM) system sends an SMS to the person in charge if a trip occurs. If the overcurrent occurs three times, the system fully trips (open circuit) and at the same time sends an SMS to the person in charge. In this project, 1 A is set as the rated current, and any current exceeding 1 A will cause the system to trip or interrupt. This system also provides additional notification for the user, such as an emergency light and a warning system.
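A compact sketch of the described trip/reclose/notify logic as a polling loop; the callback names and the reclose delay are assumptions, with only the 1 A rating and the three-trip lockout taken from the text:

```python
import time

RATED_A, MAX_TRIPS, RECLOSE_DELAY_S = 1.0, 3, 2.0  # delay is an assumed value

def recloser_loop(read_current, set_breaker, send_sms):
    """Auto-reclose sketch: trip on overcurrent, reclose after a delay,
    lock out and notify via GSM after three trips. The three callbacks
    abstract the current sensor, relay driver, and GSM modem."""
    trips = 0
    while True:
        if read_current() > RATED_A:
            trips += 1
            set_breaker(False)                 # open (trip) to clear the fault
            if trips >= MAX_TRIPS:
                send_sms("Breaker locked out after 3 trips")
                break                          # stay open until manual reset
            time.sleep(RECLOSE_DELAY_S)
            set_breaker(True)                  # re-energize the line
            send_sms(f"Trip {trips} cleared, line reclosed")
```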
Vision-based in-line fabric defect detection using yarn-specific shape features
NASA Astrophysics Data System (ADS)
Schneider, Dorian; Aach, Til
2012-01-01
We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state-of-the-art detection algorithms apply texture analysis methods to operate on low-resolved (∼200 ppi) image data, we describe here a process flow to segment single yarns in high-resolved (∼1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real-time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.
7. FOURTH FLOOR, DETAIL OF HOTEL SOAP LINE TO WEST: FERGUSON & HAAS AUTOMATIC WRAPPING MACHINE INSTALLED BY 1929 - Colgate & Company Jersey City Plant, Building No. B-15, 90-96 Greene Street, Jersey City, Hudson County, NJ
Schuurmann, R C L; Kuster, L; Slump, C H; Vahl, A; van den Heuvel, D A F; Ouriel, K; de Vries, J-P P M
2016-02-01
Supra- and infrarenal aortic neck angulation have been associated with complications after endovascular aortic aneurysm repair. However, a uniform angulation measurement method is lacking, and the concept of angulation suggests a triangular oversimplification of the aortic anatomy. (Semi-)automated calculation of curvature along the center luminal line describes the actual trajectory of the aorta. This study proposes a methodology for calculating aortic (neck) curvature and suggests an additional method based on available tools in current workstations: curvature by digital calipers (CDC). Proprietary custom software was developed for automatic calculation of the severity and location of the largest supra- and infrarenal curvature over the center luminal line. Twenty-four patients with severe supra- or infrarenal angulations (≥45°) and 11 patients with small to moderate angulations (<45°) were included. Both CDC and angulation were measured by two independent observers on the pre- and postoperative computed tomographic angiography scans. The relationships between actual curvature and CDC and angulation were visualized and tested with Pearson's correlation coefficient. The CDC was also fully automatically calculated with the proprietary custom software. The difference between manual and automatic determination of CDC was tested with a paired Student t test. A two-tailed p-value < .05 was considered significant. The correlation between actual curvature and manual CDC is strong (.586-.962) and even stronger for automatic CDC (.865-.961). The correlation between actual curvature and angulation is much lower (.410-.737). Flow direction angulation values overestimate CDC measurements by 60%, with larger variance. No significant difference was found between automatically calculated CDC values and manually measured CDC values. Curvature calculation of the aortic neck improves determination of the true aortic trajectory. Automatic calculation of the actual curvature is preferable, but measurement or calculation of the curvature by digital calipers is a valid alternative if the actual curvature is not at hand. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
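For reference, the curvature of a sampled center luminal line follows the standard formula κ = |r′ × r″| / |r′|³; a finite-difference sketch (not the authors' proprietary software) is:

```python
import numpy as np

def curvature(points, spacing=1.0):
    """Pointwise curvature kappa = |r' x r''| / |r'|**3 of a center
    luminal line sampled as an (N, 3) array, with derivatives taken by
    finite differences. The largest value and its index give the
    severity and location of the maximal curvature."""
    r = np.asarray(points, dtype=float)
    d1 = np.gradient(r, spacing, axis=0)
    d2 = np.gradient(d1, spacing, axis=0)
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3
```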
Automation of motor dexterity assessment.
Heyer, Patrick; Castrejon, Luis R; Orihuela-Espina, Felipe; Sucar, Luis Enrique
2017-07-01
Motor dexterity assessment is regularly performed in rehabilitation wards to establish patient status, and automation of this routine task is sought. A system for automating the assessment of motor dexterity, based on the Fugl-Meyer scale and with loose restrictions on sensing technologies, is presented. The system consists of two main elements: 1) a data representation that abstracts the low-level information obtained from a variety of sensors into a highly separable, low-dimensionality encoding employing t-distributed Stochastic Neighbor Embedding, and 2) central to this communication, a multi-label classifier that boosts classification rates by exploiting the fact that the classes corresponding to the individual exercises are naturally organized as a network. Depending on the targeted therapeutic movement, class labels (i.e., exercise scores) are highly correlated: patients who perform well in one exercise tend to perform well in related exercises. Critically, no node can be used as a proxy for the others: an exercise does not encode the information of other exercises. Over data from a cohort of 20 patients, the novel classifier outperforms classical Naive Bayes, random forest and variants of support vector machines (ANOVA: p < 0.001). The novel multi-label classification strategy fulfills an automatic system for motor dexterity assessment, with implications for lessening therapists' workloads, reducing healthcare costs and providing support for home-based virtual rehabilitation and telerehabilitation alternatives.
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this global level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Observing control and data reduction at the UKIRT
NASA Astrophysics Data System (ADS)
Bridger, Alan; Economou, Frossie; Wright, Gillian S.; Currie, Malcolm J.
1998-07-01
For the past seven years observing with the major instruments at the United Kingdom IR Telescope (UKIRT) has been semi-automated, using ASCII files to configure the instruments and then sequence a series of exposures and telescope movements to acquire the data. For one instrument, automatic data reduction completes the cycle. The emergence of recent software technologies has suggested an evolution of this successful system to provide a friendlier and more powerful interface to observing at UKIRT. The Observatory Reduction and Acquisition Control (ORAC) project is now underway to construct this system. A key aim of ORAC is to allow a more complete description of the observing program, including the target sources and the recipe that will be used to provide on-line data reduction. Remote observation preparation and submission will also be supported. In parallel, the observatory control system will be upgraded to use these descriptions for more automatic observing, while retaining the 'classical' interactive observing mode. The final component of the project is an improved automatic data reduction system, allowing on-line reduction of data at the telescope while retaining the flexibility to cope with changing observing techniques and instruments. The user will also automatically be provided with the scripts used for the real-time reduction to help provide post-observing data reduction support. The overall project goal is to improve the scientific productivity of the telescope, but it should also reduce the overall ongoing support requirements, and has the eventual goal of supporting the use of queue-scheduled observing.
Automatic Overset Grid Generation with Heuristic Feedback Control
NASA Technical Reports Server (NTRS)
Robinson, Peter I.
2001-01-01
An advancing-front grid generation system for structured overset grids is presented which automatically modifies overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until the global truncation error target is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for overset control lines and surface grids. It is developed as an extension of modern control theory, production rule systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system: knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant-time execution is guaranteed.
Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA
Djebali, Sarah; Delaplace, Franck; Crollius, Hugues Roest
2006-01-01
Background: Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism.
Results: We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts.
Conclusion: We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement. PMID:16925841
NASA Astrophysics Data System (ADS)
Shi, Fei; Liu, Yu-Yan; Sun, Guang-Lan; Li, Pei-Yu; Lei, Yu-Ming; Wang, Jian
2015-10-01
The emission lines of galaxies originate from massive young stars or supermassive black holes. As a result, the spectral classification of emission-line galaxies into star-forming galaxies, active galactic nucleus (AGN) hosts, or compositions of both relates closely to galaxy formation and evolution. To find an efficient and automatic spectral classification method, especially for large surveys and huge databases, a support vector machine (SVM) supervised learning algorithm is applied to a sample of emission-line galaxies from the Sloan Digital Sky Survey (SDSS) data release 9 (DR9) provided by the Max Planck Institute and the Johns Hopkins University (MPA/JHU). A two-step approach is adopted. (i) The SVM must be trained with a subset of objects that are known to be AGN hosts, composites or star-forming galaxies, treating the strong emission-line flux measurements as input feature vectors in an n-dimensional space, where n is the number of strong emission-line flux ratios. (ii) After training on a sample of emission-line galaxies, the remaining galaxies are automatically classified. In the classification process, we use a 10-fold cross-validation technique. We show that the classification diagrams based on [N II]/Hα versus another emission-line ratio, such as [O III]/Hβ, [Ne III]/[O II], ([O III]λ4959+[O III]λ5007)/[O III]λ4363, [O II]/Hβ, [Ar III]/[O III], [S II]/Hα, and [O I]/Hα, plus colour, allow us to separate unambiguously AGN hosts, composites and star-forming galaxies. Among them, the diagram of [N II]/Hα versus [O III]/Hβ achieved an accuracy of 99 per cent in separating the three classes of objects. The other diagrams above give an accuracy of ∼91 per cent.
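A minimal sketch of steps (i) and (ii) with scikit-learn, assuming the emission-line flux ratios are already assembled into a feature matrix; the kernel and hyperparameters are illustrative, and only the 10-fold cross-validation comes from the text:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_galaxies(X, y):
    """X: one row per galaxy of strong emission-line flux ratios (plus
    colour); y: 0 = star-forming, 1 = composite, 2 = AGN host, known
    for the training subset."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    return clf.fit(X, y), scores.mean()          # fitted model, CV accuracy
```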
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.
2010-02-28
Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and promptly to the ringdown data. Thus, mode estimation can be performed reliably and in a timely manner. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
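For orientation, a toy batch version of Prony analysis on a synthetic ringdown is sketched below; the paper's contribution is a recursive variant plus a ringdown detector, neither reproduced here, so this only shows how modes fall out of a linear-prediction fit.

import numpy as np

def prony_modes(x, order, dt):
    # Linear prediction: x[n] = a_1 x[n-1] + ... + a_p x[n-p]
    N = len(x)
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    z = np.roots(np.concatenate(([1.0], -a)))   # discrete-time poles
    s = np.log(z) / dt                          # continuous-time poles
    return s.real, s.imag / (2 * np.pi)         # damping [1/s], frequency [Hz]

dt = 0.05                                       # 20 samples/s, PMU-like rate
t = np.arange(0, 10, dt)
ringdown = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.6 * t)
damping, freq = prony_modes(ringdown, order=2, dt=dt)
print(freq)                                     # ~[+0.6, -0.6] Hz mode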
Automatic comic page image understanding based on edge segment analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai
2013-12-01
Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
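As a rough illustration of the first two stages, the sketch below uses off-the-shelf OpenCV primitives. Note that the paper uses its own edge-point chaining and a top-down in-segment line detector; the probabilistic Hough transform here is only a stand-in, and the input filename is hypothetical.

import cv2
import numpy as np

page = cv2.imread("comic_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(page, 50, 150)

segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=100, maxLineGap=5)

# Keep long, nearly axis-aligned segments as storyboard border candidates.
borders = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        if abs(x1 - x2) < 3 or abs(y1 - y2) < 3:
            borders.append((x1, y1, x2, y2))
print(len(borders), "candidate border lines")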
Real-time automatic fiducial marker tracking in low contrast cine-MV images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang
2013-01-15
Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match object shapes that change significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load because their methods all require exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the 6 researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-line engines fails to meet emission standards? 1042.320 Section 1042.320 Protection of Environment... if one of my production-line engines fails to meet emission standards? (a) If you have a production....315(a)), the certificate of conformity is automatically suspended for that failing engine. You must...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-line engines fails to meet emission standards? 1042.320 Section 1042.320 Protection of Environment... if one of my production-line engines fails to meet emission standards? (a) If you have a production....315(a)), the certificate of conformity is automatically suspended for that failing engine. You must...
Code of Federal Regulations, 2010 CFR
2010-07-01
...-line engines fails to meet emission standards? 1042.320 Section 1042.320 Protection of Environment... if one of my production-line engines fails to meet emission standards? (a) If you have a production....315(a)), the certificate of conformity is automatically suspended for that failing engine. You must...
Extending compile-time reverse mode and exploiting partial separability in ADIFOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
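The structure being exploited admits a compact statement (notation ours, not taken from the paper):

\[
  f(x) = \sum_{i=1}^{m} f_i(U_i x), \qquad
  \nabla f(x) = \sum_{i=1}^{m} U_i^{\top} \nabla f_i(U_i x),
\]

where each selector matrix $U_i \in \{0,1\}^{n_i \times n}$ picks out the $n_i \ll n$ variables entering element function $f_i$, so the dense gradient is assembled from cheap, low-dimensional reverse-mode sweeps instead of one expensive pass over the full function.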
An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.
Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar
2016-11-01
Epilepsy is one of the most common neurological disorders and is characterized by the spontaneous and unforeseeable occurrence of seizures. An automatic prediction of seizures can protect patients from accidents and save their lives. In this article, we propose a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients. Cloud-based services are used to collect and analyze the EEG data from the patient's mobile phone. Features are extracted from the EEG signal using the fast Walsh-Hadamard transform (FWHT). Higher Order Spectral Analysis (HOSA) is applied to the FWHT coefficients in order to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently exploit the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model is tested on the Amazon EC2 cloud and compared in terms of execution time and accuracy. The findings show that with the selected HOS-based features, we were able to achieve a classification accuracy of 94.6%.
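A minimal sketch of the transform-then-cluster pipeline follows; the EEG windows are random placeholders and a simple slice of coefficients stands in for the HOSA-based feature selection.

import numpy as np
from sklearn.cluster import KMeans

def fwht(x):
    # Iterative fast Walsh-Hadamard transform; len(x) must be a power of 2.
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 256))            # placeholder EEG windows
coeffs = np.apply_along_axis(fwht, 1, windows)
features = coeffs[:, :16]                        # stand-in for HOSA selection

labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
print(np.bincount(labels))                       # normal/preictal/ictal split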
Automated quantification of the synchrogram by recurrence plot analysis.
Nguyen, Chinh Duc; Wilson, Stephen James; Crozier, Stuart
2012-04-01
Recently, the concept of phase synchronization of two weakly coupled oscillators has raised great research interest and has been applied to characterize synchronization phenomena in physiological data. Phase synchronization of cardiorespiratory coupling is often studied by synchrogram analysis, a graphical tool investigating the relationship between the instantaneous phases of two signals. Although several techniques have been proposed to automatically quantify the synchrogram, most of them require a preselection of a phase-locking ratio by trial and error. One technique does not require this information; however, it is based on the power spectrum of the phase distribution in the synchrogram, which is vulnerable to noise. This study aims to introduce a new technique to automatically quantify the synchrogram by studying its dynamic structure. Our technique exploits recurrence plot analysis, which is a well-established tool for characterizing recurring patterns and nonstationarities in experimental data. We applied our technique to detect synchronization in simulated and measured infant cardiorespiratory data. Our results suggest that the proposed technique is able to systematically detect synchronization in noisy and chaotic data without preselecting the phase-locking ratio. By embedding phase information of the synchrogram into phase space, the phase-locking ratio is automatically unveiled as the number of attractors.
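The core object is easy to write down. A small sketch of a recurrence matrix over synchrogram phases; the tolerance and the phase signal here are placeholders.

import numpy as np

def recurrence_matrix(phases, eps=0.1):
    # Mark pairs of time points whose phases recur within tolerance eps,
    # using circular distance so that 0 and 2*pi are identified.
    d = np.abs(phases[:, None] - phases[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    return (d < eps).astype(int)

t = np.linspace(0, 60, 600)
phases = (2 * np.pi * 0.3 * t) % (2 * np.pi)     # placeholder phase readout
R = recurrence_matrix(phases)
print(R.shape, R.mean())                         # size and recurrence rate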
Measuring Thicknesses of Coatings on Metals
NASA Technical Reports Server (NTRS)
Cotty, Glenn M., Jr.
1986-01-01
Digital light sensor and eddy-current sensor measure thickness without contact. Surface of coating reflects laser beam to optical sensor. Position of reflected spot on sensor used by microcomputer to calculate coating thickness. Eddy-current sensor maintains constant distance between optical sensor and metal substrate. When capabilities of available components are fully exploited, instrument measures coatings from 0.001 to 6 in. (0.0025 to 15 cm) thick with accuracy of 1 part in 4,000. Instrument readily incorporated in automatic production and inspection systems. Used to inspect thermal-insulation layers, paint, and protective coatings. Also used to control application of coatings to preset thicknesses.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Therefore, current architectures are perfectly tailored to urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
Exploiting semantics for sensor re-calibration in event detection systems
NASA Astrophysics Data System (ADS)
Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini
2008-01-01
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it is still a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which cannot be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas by using an appliance as an example (coffee pot level detection based on video data) to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on an SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar
2018-05-04
The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access to the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploits, botnets, and Trojans, and root exploits are among the most dangerous. A root exploit compromises the operating system kernel in order to gain root privileges, which are then used by attackers to bypass the security mechanisms, to gain complete control of the operating system, to install other possible types of malware on the devices, and finally, to steal victims' private keys linked to the blockchain. For the purpose of maximizing the security of blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes to use the bio-inspired method of particle swarm optimization (PSO), which automatically selects the exclusive features that contain the novel android debug bridge (ADB). This study also adopts boosting (adaboost, realadaboost, logitboost, and multiboost) to enhance the machine learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system command, (2) directory path, and (3) code-based. The evaluation gathered from this study suggests a marked accuracy value of 93% with Logitboost in the simulation. Logitboost also helped to predict all the root exploit samples in our developed system, the root exploit detection system (RODS).
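A minimal sketch of the boosting stage follows, with scikit-learn's AdaBoost as a stand-in for the LogitBoost variant reported in the study; the binary feature vectors and labels are random placeholders.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(500, 40))   # placeholder PSO-selected features
y = rng.integers(0, 2, size=500)         # 1 = root exploit, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))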
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification are based on so-called thesaurus-based (or dictionary-based) approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing amount of curated abstracts (97,000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose for a given abstract the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms in GO (+300%), our thesaurus-based system's effectiveness has remained rather constant, ranging from 0.28 to 0.31 for Recall at 20 (R20). In contrast, thanks to its knowledge base growth, our machine learning system has steadily improved, from 0.38 in 2006 to 0.56 for R20 in 2012. Integrated in semi-automatic workflows or in fully automatic pipelines, such systems are more and more efficient in providing assistance to biologists. DATABASE URL: http://eagl.unige.ch/GOCat/
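A minimal sketch of the machine-learning route: k-NN over TF-IDF vectors of curated abstracts proposing a GO term for an unseen abstract. The tiny corpus and labels are invented; the real knowledge base holds tens of thousands of curated abstracts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

curated_abstracts = ["kinase phosphorylates substrate ...",
                     "transcription factor binds promoter ...",
                     "membrane transporter effluxes drug ..."]
go_terms = ["GO:0016301", "GO:0003700", "GO:0022857"]  # one label per text

vec = TfidfVectorizer()
X = vec.fit_transform(curated_abstracts)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, go_terms)

new = vec.transform(["novel kinase activity on substrate ..."])
print(knn.predict(new))   # proposed functional profile (nearest term)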
NASA Astrophysics Data System (ADS)
Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.
2017-10-01
An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm³ and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a semi-automatic workflow facilitating the introduction of an MR-only workflow.
Casiraghi, Elena; Cossa, Mara; Huber, Veronica; Rivoltini, Licia; Tozzi, Matteo; Villa, Antonello; Vergani, Barbara
2017-11-02
In clinical practice, automatic image analysis methods that quickly quantize histological results by objective and replicable means are becoming more and more necessary and widespread. Although several commercial software products are available for this task, they offer little flexibility and are provided as black boxes without modifiable source code. To overcome the aforementioned problems, we employed the commonly used MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images, stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained with different markers and overlaps them with differing colors for visual comparison of their localization. Application of MIAQuant in clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness and flexibility with respect to various problems; we highlight that the flexibility of MIAQuant makes it an important tool to be exploited for basic research where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches involved in creating a completely automatic real-time panoramic capture system, with a particular focus on the camera settings. One of the main challenges in building such a system is that there is not one common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, there still existed some disparity between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
Definition and automatic anatomy recognition of lymph node zones in the pelvis on CT images
NASA Astrophysics Data System (ADS)
Liu, Yu; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Guo, Shuxu; Attor, Rosemary; Reinicke, Danica; Torigian, Drew A.
2016-03-01
Currently, unlike IASLC-defined thoracic lymph node zones, no explicitly provided definitions for lymph nodes in other body regions are available. Yet, definitions are critical for standardizing the recognition, delineation, quantification, and reporting of lymphadenopathy in other body regions. Continuing from our previous work in the thorax, this paper proposes a standardized definition of the grouping of pelvic lymph nodes into 10 zones. We subsequently employ our earlier Automatic Anatomy Recognition (AAR) framework, designed for body-wide organ modeling, recognition, and delineation, to implement these zonal definitions, where the zones are treated as anatomic objects. First, all 10 zones and the key anatomic organs used as anchors are manually delineated under expert supervision for constructing fuzzy anatomy models of the assembly of organs together with the zones. Then, an optimal hierarchical arrangement of these objects is constructed for the purpose of achieving the best zonal recognition. For actual localization of the objects, two strategies are used: optimal thresholded search for organs, and a one-shot method for the zones, where the known relationship of the zones to key organs is exploited. Based on 50 computed tomography (CT) image data sets for the pelvic body region and an equal division into training and test subsets, automatic zonal localization within 1-3 voxels is achieved.
A simulator evaluation of an automatic terminal approach system
NASA Technical Reports Server (NTRS)
Hinton, D. A.
1983-01-01
The automatic terminal approach system (ATAS) is a concept for improving the pilot/machine interface with cockpit automation. The ATAS can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane avionics, control the airplane's autopilot, and display status information to the pilot. A piloted simulation study was conducted to determine the feasibility of an ATAS, determine pilot acceptance, and examine pilot/ATAS interaction. Seven instrument-rated pilots each flew four instrument approaches with a baseline heading-select autopilot mode. The ATAS runs resulted in lower flight technical error, lower pilot workload, and fewer blunders than with the baseline autopilot. The ATAS status display enabled the pilots to maintain situational awareness during the automatic approaches. The system was well accepted by the pilots.
Assessing the performance of a covert automatic target recognition algorithm
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2005-05-01
Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework and the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
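The key relation can be written as follows (notation ours; the paper's closed-form approximation for Rician densities plugs into $C$):

\[
  P_e \le e^{-n\,C(p_0,p_1)}, \qquad
  C(p_0,p_1) = -\min_{0 \le \lambda \le 1}
     \log \int p_0(x)^{\lambda}\, p_1(x)^{1-\lambda}\, dx,
\]

so once $C$ is available in closed form for two Rician densities, one can read off how many measurements $n$ are needed before the error probability of the Bayesian binary test falls below a desired threshold, with no Monte Carlo trials.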
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button
2010-01-01
Background There is a huge demand on bioinformaticians to provide their biologists with user-friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979
Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan
2015-01-01
The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-29
... underground storage tank (UST) facilities; failure to perform annual tests of automatic line leak detectors... detectors for piping on the UST systems. CHEVRON also agrees under the Consent Decree to install dispenser...
Real-time control of focused ultrasound heating based on rapid MR thermometry.
Vimeux, F C; De Zwart, J A; Palussiére, J; Fawaz, R; Delalande, C; Canioni, P; Grenier, N; Moonen, C T
1999-03-01
Real-time control of the heating procedure is essential for hyperthermia applications of focused ultrasound (FUS). The objective of this study is to demonstrate the feasibility of MRI-controlled FUS. An automatic control system was developed using a dedicated interface between the MR system control computer and the FUS wave generator. Two algorithms were used to regulate FUS power to maintain the focal point temperature at a desired level. Automatic control of the FUS power level was demonstrated ex vivo at three target temperature levels (increases of 5 degrees C, 10 degrees C, and 30 degrees C above room temperature) during 30-minute hyperthermic periods. Preliminary in vivo results on rat leg muscle confirm that the necrosis estimate, calculated on-line during FUS sonication, allows prediction of tissue damage. Conclusions: The feasibility of fully automatic FUS control based on MRI thermometry has been demonstrated.
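The feedback idea reduces to a small loop: each new MR thermometry sample updates the power command. Below is a toy sketch with a simple proportional law and an invented first-order tissue response; the study's two regulation algorithms are not reproduced here.

def regulate_power(power, temp, target, gain=0.5, p_max=100.0):
    # Proportional update of the acoustic power command (arbitrary units).
    power += gain * (target - temp)        # raise power while too cold
    return min(max(power, 0.0), p_max)     # respect actuator limits

power, temp = 10.0, 22.0                   # initial power, room temperature
for _ in range(200):                       # one iteration per MR thermometry image
    temp += 0.05 * power - 0.1 * (temp - 22.0)   # toy tissue heating response
    power = regulate_power(power, temp, target=52.0)
print(round(temp, 1))                      # settles near the +30 C target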
Decomposition of Multi-player Games
NASA Astrophysics Data System (ADS)
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
Exploiting the Automatic Dependent Surveillance-Broadcast System via False Target Injection
2012-03-01
Thesis by Domenic Magazu III, Captain, USAF (AFIT/GCO/ENG/12-07), Department of Electrical and Computer Engineering, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University. ... of GNU Radio, a Universal Software Radio Peripheral (USRP), and software developed by the author. The ability to generate, transmit, and insert ...
Integrated Fusion, Performance Prediction, and Sensor Management for Automatic Target Exploitation
2007-05-30
... with large region of attraction about the true minimum. The physical optics models provide features for high-confidence identification of stationary ... the detection test are used to estimate 3D object scattering; multiple images can be noncoherently combined to reconstruct a more complete object ...
Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans.
Reda, Fitsum A; Noble, Jack H; Rivas, Alejandro; McRackan, Theodore R; Labadie, Robert F; Dawant, Benoit M
2011-10-01
Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for automatically computing safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. The authors previously proposed an automatic technique to achieve the segmentation task in adult patients that relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, the authors learned that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model was used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults with algorithm parameters that were optimized for pediatric anatomy. A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.
Smith, Peter K; Fox, Adam T; Davies, Patrick; Hamidi-Manesh, Laila
2006-01-01
Over 110 million Americans have accessed the internet for healthcare information. This information is often used in medical consultations and is possibly contributing to increasing health care burdens. Although well-informed patients are an advantage in disease management, the quality and reliability of information online is variable. Adolescents today have grown up with the Internet as a primary knowledge source, but still lack the skills to effectively filter credible information. Voluntary standards on health information have been attempted by the Health on the Net Foundation; however, their success has been limited, mainly because most people searching for health information online use a search engine rather than a specific site. This makes regulation almost impossible. Cyberchondriacs, a term used to describe anyone who seeks health-related information on the Internet, are not only at risk of acquiring unreliable information online, and therefore potentially unnecessary anxiety, but could also be financially exploited, for example, by e-health organisations and pharmaceutical companies. Moreover, the vulnerable e-health seeker, such as the inexperienced adolescent, is able to buy any quantity of nearly any medication online. Reasons for patients seeking information online are varied, but include not having enough time in consultations. To try to address these issues, organisations such as NHS Direct have been set up, but their success is difficult to measure due to a lack of data. However, the potential exploitation of a vulnerable population and the motivations behind their search for information on the internet merit further study.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-line vehicles or engines fails to meet emission standards? 1051.320 Section 1051.320 Protection of... of my production-line vehicles or engines fails to meet emission standards? (a) If you have a... standards (see § 1051.315(a)), the certificate of conformity is automatically suspended for that failing...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-line vehicles or engines fails to meet emission standards? 1051.320 Section 1051.320 Protection of... of my production-line vehicles or engines fails to meet emission standards? (a) If you have a... standards (see § 1051.315(a)), the certificate of conformity is automatically suspended for that failing...
Tracking multiple surgical instruments in a near-infrared optical system.
Cai, Ken; Yang, Rongqian; Lin, Qinyong; Wang, Zhigang
2016-12-01
Surgical navigation systems can assist doctors in performing more precise and more efficient surgical procedures and in avoiding various accidents. The near-infrared optical system (NOS) is an important component of surgical navigation systems. However, several surgical instruments are used during surgery, and effectively tracking all of them is challenging. A stereo matching algorithm using two intersecting lines and surgical instrument codes is proposed in this paper. In our NOS, the markers on the surgical instruments can be captured by two near-infrared cameras. After automatically searching for and extracting their subpixel coordinates in the left and right images, the coordinates of the real and pseudo markers are determined by the two intersecting lines. Finally, the pseudo markers are removed to achieve accurate stereo matching by summing the codes for the distances between a specific marker and the other two markers on the surgical instrument. Experimental results show that the markers on the different surgical instruments can be automatically and accurately recognized. The NOS can accurately track multiple surgical instruments.
A dedicated on-line detecting system for auto air dryers
NASA Astrophysics Data System (ADS)
Shi, Chao-yu; Luo, Zai
2013-10-01
According to the relevant automobile industry standards and the requirements of manufacturers, this dedicated on-line detection system is designed to address the low automation and limited detection precision of domestic auto air dryer testing. Fast automatic detection is achieved by combining computer control, mechatronics, and pneumatic technologies. The system can test the performance of the pressure regulating valve and the sealability of an auto air dryer; on-line analytical processing of the test data is available, and data saving and querying are supported. Experimental analysis indicates that efficient and accurate detection of auto air dryer performance is realized, with test errors of less than 3%. Moreover, we carry out a Type A evaluation of the uncertainty in the test data based on Bayesian theory, and the results show that the test uncertainties of all performance parameters are less than 0.5 kPa, which fully meets the requirements of the industrial operating site.
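For reference, the standard (non-Bayesian) Type A evaluation behind such uncertainty figures is, for $n$ repeated readings $x_1,\dots,x_n$,

\[
  \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
  u_A = \sqrt{\frac{1}{n(n-1)} \sum_{i=1}^{n} (x_i - \bar{x})^2},
\]

i.e. the experimental standard deviation of the mean; the Bayesian variant used in the abstract adjusts this estimate, which matters mainly for small $n$.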
Acquisition-Management Program
NASA Technical Reports Server (NTRS)
Avery, Don E.; Vann, A. Vernon; Jones, Richard H.; Rew, William E.
1987-01-01
NASA Acquisition Management Subsystem (AMS) program is integrated NASA-wide standard automated-procurement-system program developed in 1985. Designed to provide each NASA installation with procurement data base and on-line terminals for managing, tracking, reporting, and controlling contractual actions and associated procurement data. Subsystem provides control, status, and reporting for various procurement areas. Purpose of standardization is to decrease costs of procurement and operation of automatic data processing; increase procurement productivity; and furnish accurate, on-line management information and improved customer support. Written in ADABAS NATURAL.
A large number of stepping motor network construction by PLC
NASA Astrophysics Data System (ADS)
Mei, Lin; Zhang, Kai; Hongqiang, Guo
2017-11-01
In a flexible automatic line the equipment is complex and the control modes are varied, so orderly control of, and information exchange among, a large number of stepping and servo motors becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. An Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and keep the exchanged data stable.
NMR reaction monitoring in flow synthesis
Gomez, M Victoria; de la Hoz, Antonio
2017-01-01
Recent advances in the use of flow chemistry with in-line and on-line analysis by NMR are presented. The use of macro- and microreactors, coupled with standard and custom made NMR probes involving microcoils, incorporated into high resolution and benchtop NMR instruments is reviewed. Some recent selected applications have been collected, including synthetic applications, the determination of the kinetic and thermodynamic parameters and reaction optimization, even in single experiments and on the μL scale. Finally, software that allows automatic reaction monitoring and optimization is discussed. PMID:28326137
Segmenting overlapping nano-objects in atomic force microscopy image
NASA Astrophysics Data System (ADS)
Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko
2018-01-01
Recently, techniques for nanoparticles have been rapidly developed for various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
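A minimal sketch of the third step follows, clustering placeholder high-curvature contour points into overlap regions with scikit-learn's DBSCAN; coordinates, eps, and min_samples are invented for illustration.

import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder (x, y) coordinates of high-curvature points on contours.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal([10, 10], 0.5, size=(8, 2)),   # overlap region
                 rng.normal([40, 35], 0.5, size=(6, 2)),   # overlap region
                 rng.uniform(0, 50, size=(10, 2))])        # isolated corners

labels = DBSCAN(eps=2.0, min_samples=4).fit_predict(pts)
for k in set(labels) - {-1}:                               # -1 marks noise
    print("overlap region", k, "has", int(np.sum(labels == k)), "corner points")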
Development of an Automatic Dispensing System for Traditional Chinese Herbs.
Lin, Chi-Ying; Hsieh, Ping-Jung
2017-01-01
The gathering of ingredients for decoctions of traditional Chinese herbs still relies on manual dispensation, due to the irregular shape of many items and inconsistencies in weights. In this study, we developed an automatic dispensing system for Chinese herbal decoctions with the aim of reducing manpower costs and the risk of mistakes. We employed machine vision in conjunction with a robot manipulator to facilitate the grasping of ingredients. The name and formulation of the decoction are input via a human-computer interface, and the dispensing of multiple medicine packets is performed automatically. An off-line least-squares curve fitting method was used to calculate the amount of material grasped by the claws and thereby improve system efficiency as well as the accuracy of individual dosages. Experiments on the dispensing of actual ingredients demonstrate the feasibility of the proposed system.
MARZ: Manual and automatic redshifting software
NASA Astrophysics Data System (ADS)
Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.
2016-04-01
The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based JavaScript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling automatic results, manual template comparison, or marking spectral features.
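A toy version of the underlying cross-correlation matching is sketched below, using a single Gaussian emission line as the template; real AUTOZ/MARZ matching runs against full stellar and galaxy templates with proper weighting, so everything here is illustrative only.

import numpy as np

loglam = np.linspace(3.55, 3.95, 2000)           # log10(wavelength/Angstrom)
template = np.exp(-0.5 * ((loglam - 3.68) / 0.002) ** 2)   # one emission line

z_true = 0.15
dstep = loglam[1] - loglam[0]
shift_bins = int(np.log10(1 + z_true) / dstep)
spectrum = np.roll(template, shift_bins)         # "observed" redshifted spectrum

# Cross-correlate and convert the best lag back to a redshift estimate.
xcorr = np.correlate(spectrum - spectrum.mean(),
                     template - template.mean(), mode="full")
best = xcorr.argmax() - (len(template) - 1)
z_est = 10 ** (best * dstep) - 1
print(round(z_est, 3))                           # ~0.15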
Automated eye blink detection and correction method for clinical MR eye imaging.
Wezel, Joep; Garpebring, Anders; Webb, Andrew G; van Osch, Matthias J P; Beenakker, Jan-Willem M
2017-07-01
To implement an on-line monitoring system to detect eye blinks during ocular MRI using field probes, and to reacquire corrupted k-space lines by means of an automatic feedback system integrated with the MR scanner. Six healthy subjects were scanned on a 7 Tesla MRI whole-body system using a custom-built receive coil. Subjects were asked to blink multiple times during the MR scan. The local magnetic field changes were detected with an external fluorine-based field probe which was positioned close to the eye. When an eye blink produced a field shift greater than a threshold level, this was communicated in real time to the MR system, which immediately reacquired the motion-corrupted k-space lines. The uncorrected images, using the original motion-corrupted data, showed severe artifacts, whereas the corrected images, using the reacquired data, provided an image quality similar to images acquired without blinks. Field probes can successfully detect eye blinks during MRI scans. By automatically reacquiring the eye blink-corrupted data, high quality MR images of the eye can be acquired. Magn Reson Med 78:165-171, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
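A minimal sketch of the Auto-Switch rule follows; the stagnation window, tolerance, and the toy fitness curve standing in for the evolutionary run are all invented parameters, not the paper's settings.

def stagnated(history, patience=50, tol=1e-6):
    # True when best fitness improved by less than tol over the window.
    return (len(history) > patience
            and history[-1] - history[-patience - 1] < tol)

best_history, encoding = [0.0], "indirect"
for gen in range(500):
    # Toy fitness curve: indirect-encoding gains shrink geometrically,
    # mimicking a plateau once problem regularity has been exploited.
    gain = 0.9 ** gen if encoding == "indirect" else 0.001
    best_history.append(best_history[-1] + gain)
    if encoding == "indirect" and stagnated(best_history):
        print("switching to direct encoding at generation", gen)
        encoding = "direct"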
Safeguarding End-User Military Software
2014-12-04
... product lines using compositional symbolic execution [17]. Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse ... feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically ...
Megherbi, Hakima; Elbro, Carsten; Oakhill, Jane; Segui, Juan; New, Boris
2018-02-01
How long does it take for word reading to become automatic? Does the appearance and development of automaticity differ as a function of orthographic depth (e.g., French vs. English)? These questions were addressed in a longitudinal study of English and French beginning readers. The study focused on automaticity as obligatory processing as measured in the Stroop test. Measures of decoding ability and the Stroop effect were taken at three time points during first grade (and during second grade in the United Kingdom) in 84 children. The study is the first to adjust the classic Stroop effect for inhibition (of distracting colors). The adjusted Stroop effect was zero in the absence of reading ability, and it was found to develop in tandem with decoding ability. After a further control for decoding, no effects of age or orthography were found on the adjusted Stroop measure. The results are in line with theories of the development of whole word recognition that emphasize the importance of the acquisition of the basic orthographic code. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sleeman, Reinoud; van Eck, Torild
1999-06-01
The onset of a seismic signal is determined through joint AR modeling of the noise and the seismic signal, and the application of the Akaike Information Criterion (AIC) using the onset time as parameter. This so-called AR-AIC phase picker has been tested successfully and implemented on the Z-component of the broadband station HGN to provide automatic P-phase picks for a rapid warning system. The AR-AIC picker is shown to provide accurate and robust automatic picks on a large experimental database. Out of 1109 P-phase onsets with signal-to-noise ratio (SNR) above 1 from local, regional and teleseismic earthquakes, our implementation detects 71% and gives a mean difference with manual picks of 0.1 s. An optimal version of the well-established picker of Baer and Kradolfer [Baer, M., Kradolfer, U., An automatic phase picker for local and teleseismic events, Bull. Seism. Soc. Am. 77 (1987) 1437-1445] detects less than 41% and gives a mean difference with manual picks of 0.3 s using the same dataset.
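The picker's criterion can be summarised as follows (notation ours, additive constants omitted):

\[
  \mathrm{AIC}(k) = k \log \hat{\sigma}_1^2(k) + (N - k) \log \hat{\sigma}_2^2(k),
  \qquad \hat{k}_{\mathrm{onset}} = \arg\min_k \mathrm{AIC}(k),
\]

where, for an $N$-sample window, $\hat{\sigma}_1^2$ and $\hat{\sigma}_2^2$ are the variances of the AR prediction errors of the noise model before sample $k$ and of the signal model after it, so the AIC minimum marks the most likely transition from noise to signal.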
PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui
2018-01-01
To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
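As a brief illustration of the line parameterization used here, the following sketch builds Plücker coordinates from two points and evaluates a point-to-line distance; the orthonormal representation and the re-projection error terms of PL-VIO are beyond this snippet.

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (d, m) of the 3D line through p1 and p2:
    direction d = p2 - p1, moment m = p1 x p2 (satisfies d . m = 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return p2 - p1, np.cross(p1, p2)

def point_line_distance(q, d, m):
    """Distance from point q to the line (d, m): |q x d - m| / |d|."""
    q = np.asarray(q, float)
    return np.linalg.norm(np.cross(q, d) - m) / np.linalg.norm(d)
```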
Tomography of a simply magnetized toroidal plasma
NASA Astrophysics Data System (ADS)
Ruggero, BARNI; Stefano, CALDIROLA; Luca, FATTORINI; Claudia, RICCARDI
2018-02-01
Optical emission spectroscopy is a passive diagnostic technique, which does not perturb the plasma state. In particular, in a hydrogen plasma, Balmer-alpha (Hα) emission can be easily measured in the visible range along a line of sight from outside the plasma vessel. Other emission lines in the visible spectral range from hydrogen atoms and molecules can be exploited too, in order to gather complementary pieces of information on the plasma state. Tomography allows us to capture bi-dimensional structures. We propose to adopt emission spectroscopy tomography for studying the transverse profiles of magnetized plasmas when Abel inversion is not exploitable. An experimental campaign was carried out at the Thorello device, a simple magnetized torus. The characteristics of the profile extraction method that we implemented for this purpose are discussed, together with a few results concerning the plasma profiles in a simply magnetized torus configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
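A minimal sketch of the partial-separability idea follows (in Python rather than Fortran): the gradient of a sum of low-dimensional element functions is assembled by scattering small element gradients, which is what makes reverse-mode differentiation cheap for such functions. The example function is illustrative, not the MINPACK-2 torsion problem.

```python
import numpy as np

def gradient_partially_separable(n, elements):
    """Assemble the gradient of f(x) = sum_i f_i(x[S_i]) from small
    element gradients. `elements` is a list of (indices S_i, grad_fi)
    pairs, where grad_fi maps the local subvector to its gradient."""
    def grad(x):
        g = np.zeros(n)
        for idx, grad_fi in elements:
            g[idx] += grad_fi(x[idx])   # scatter the element gradient
        return g
    return grad

# Example: f(x) = sum_i (x[i+1] - x[i]**2)**2 — each element touches 2 vars.
n = 5
elements = [(np.array([i, i + 1]),
             lambda xl: np.array([-4 * xl[0] * (xl[1] - xl[0]**2),
                                  2 * (xl[1] - xl[0]**2)]))
            for i in range(n - 1)]
g = gradient_partially_separable(n, elements)(np.ones(n))  # zero at x = 1
```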
Cell Motility and Jamming across the EMT
NASA Astrophysics Data System (ADS)
Grosser, Steffen; Oswald, Linda; Lippoldt, Jürgen; Heine, Paul; Kaes, Josef A.
We use single-cell tracking and cell shape analysis to highlight the different roles that cell jamming plays in the behaviour of epithelial vs. mesenchymal mammary breast cell lines (MCF-10A, MDA-MB-231) in 2D adherent culture. An automatic segmentation allows for the evaluation of cell shapes, which we compare to predictions made by the self-propelled vertex (SPV) model. On top of that, we employ co-cultures to study the emerging demixing behaviour of these cell lines, demonstrating that the mesenchymal MDA-MB-231 cell line forms unjammed islands within the jammed collective.
Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk
2015-01-01
Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block for developing such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
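For reference, a compact sketch of the fuzzy ART learning step (complement coding, category choice, vigilance test, weight update) is given below; the parameter values are illustrative and the feature extraction from the organ area is omitted.

```python
import numpy as np

def fuzzy_art(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering sketch (Carpenter-Grossberg-Rosen).
    Inputs must be scaled to [0, 1]; they are complement-coded here."""
    cats, labels = [], []              # one weight vector per category
    for a in inputs:
        I = np.concatenate([a, 1.0 - a])        # complement coding
        order = sorted(range(len(cats)),        # category choice, best first
                       key=lambda j: -np.minimum(I, cats[j]).sum()
                                      / (alpha + cats[j].sum()))
        for j in order:
            match = np.minimum(I, cats[j]).sum() / I.sum()
            if match >= rho:                    # vigilance test passed
                cats[j] = beta * np.minimum(I, cats[j]) + (1 - beta) * cats[j]
                labels.append(j)
                break
        else:                                   # no category resonates
            cats.append(I.copy())
            labels.append(len(cats) - 1)
    return cats, labels
```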
Project Photofly: New 3D Modeling Online Web Service (Case Studies and Assessments)
NASA Astrophysics Data System (ADS)
Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.
2011-09-01
In summer 2010, Autodesk released Project Photofly, a still ongoing project freely downloadable from the AutodeskLab web site until August 1, 2011. Project Photofly, based on computer vision and photogrammetric principles and exploiting the power of cloud computing, is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The automatic approach is mainly analyzed.
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used at connections to machines of high-pressure hose lines of 1-inch inside diameter or larger, and between high-pressure hose lines of 1-inch inside diameter or larger, where a connection failure would... shall be equipped with automatic pressure-relief valves, pressure gages, and drain valves. (b) Repairs...
Specific PET Imaging Probes for Early Detection of Prostate Cancer Metastases
2010-05-01
penetrating cell membranes. In one of our studies using such a peptide to deliver a therapeutic moiety to various prostate cancer cell lines, we ... to exploit this group of peptides for the early detection of prostate tumor metastases. Promisingly, in our preliminary studies, the peptide lab... cancer. Based on one of our studies using a polyarginine (NH2GR11) to deliver a therapeutic moiety to various prostate cancer cell lines, we hypothesize
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting the threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of the obstacles around power lines automatically; the designed power line inspection system is effective in complex backgrounds, with no missed detections under different conditions. PMID:28203269
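The binocular depth computation at the core of such a system reduces, for a rectified calibrated pair, to triangulation from disparity. A sketch with assumed (not the paper's) camera parameters:

```python
# Depth from a calibrated, rectified stereo pair: with focal length f
# (pixels), baseline B (metres) and disparity d (pixels), Z = f * B / d.
f_px, baseline_m = 1200.0, 0.35   # illustrative values

def depth_from_disparity(d_px):
    """Metric depth of a matched point; infinite for zero disparity."""
    return f_px * baseline_m / d_px if d_px > 0 else float("inf")

print(depth_from_disparity(24.0))  # -> 17.5 m to the obstacle
```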
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents a central task is that of determining the line structure of the text, e.g., the number of text lines, the location of their start and end points, line width, etc. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
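As a reminder of the underlying mechanics, a bare-bones (rho, theta) accumulator is sketched below; peak extraction, skew estimation, and the combination with connected-component analysis are omitted.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Vote edge pixels into a (rho, theta) accumulator; accumulator
    peaks correspond to candidate global text lines."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for y, x in edge_points:
        # rho = x cos(theta) + y sin(theta), offset so indices are >= 0
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas
```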
NASA Astrophysics Data System (ADS)
Takaya, Masaaki; Honda, Hiroyasu; Narita, Yoshihiro; Yamamoto, Fumihiko; Arakawa, Koji
2006-04-01
We report on a newly developed in-service measurement technique that can be used from a central office to find and identify any filter in front of an ONU on an optical fiber access network. Using this system, in-service tests can be performed because the test lights are modulated at a high frequency. Moreover, by using the equipment we developed, this confirmation operation can be performed continuously and automatically with existing automatic fiber testing systems. The developed technique is effective for constructing a fiber line testing system with an optical time domain reflectometer.
Container-code recognition system based on computer vision and deep neural networks
NASA Astrophysics Data System (ADS)
Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao
2018-04-01
Automatic container-code recognition system becomes a crucial requirement for ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, detection module and recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result through combination to avoid the drawbacks of the two methods. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result which passes the verification. When the recognition module generates false recognition, the result will be corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system is able to achieve 93% of overall recognition accuracy.
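The verification step is not specified in the abstract; one standard rule for shipping-container codes is the ISO 6346 check digit, sketched below as a plausible, assumed instance of such a check rather than the authors' procedure.

```python
def iso6346_check_digit(code10):
    """Check digit for the first 10 characters of a container code
    (ISO 6346): letters map to 10..38 skipping multiples of 11, each
    position is weighted by 2**position, and the result is the
    weighted sum mod 11 mod 10."""
    letter_vals, v = {}, 10
    for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        if v % 11 == 0:
            v += 1                       # skip 11, 22, 33
        letter_vals[c] = v
        v += 1
    total = sum((letter_vals[c] if c.isalpha() else int(c)) * (2 ** i)
                for i, c in enumerate(code10))
    return total % 11 % 10

assert iso6346_check_digit("CSQU305438") == 3   # classic ISO 6346 example
```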
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
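A discrete stand-in for the minimal-path step is Dijkstra's algorithm on a cost image; the sketch below omits the anisotropic directional prior, which would modulate the isotropic per-pixel step cost used here.

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2D cost image between two pixel
    coordinates (y, x); returns the list of pixels on the path."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue                      # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (dist[ny, nx], (ny, nx)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```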
Study on Remote Monitoring System of Crossing and Spanning Tangent Tower
NASA Astrophysics Data System (ADS)
Chen, Da-bing; Zhang, Nai-long; Zhang, Meng-ge; Wang, Ze-hua; Zhang, Yan
2017-05-01
In order to grasp the vibration state of overhead transmission lines and ensure the operational security of the transmission line, a remote monitoring system for crossing and spanning tangent towers was studied. Using this system, the displacement, velocity and acceleration of the tower, together with local weather data, are collected automatically and displayed on a computer at the remote monitoring centre through a wireless network, realizing real-time collection and transmission of vibration signals. The application results show that the system is excellent in reliability and accuracy. The system can be used for remote monitoring of transmission towers of UHV power transmission lines and in large spanning areas.
Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems.
Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal
2011-01-01
This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006. PMID:22163958
Irrelevance in Problem Solving
NASA Technical Reports Server (NTRS)
Levy, Alon Y.
1992-01-01
The notion of irrelevance underlies many different works in AI, such as detecting redundant facts, creating abstraction hierarchies and reformulation and modeling physical devices. However, in order to design problem solvers that exploit the notion of irrelevance, either by automatically detecting irrelevance or by being given knowledge about irrelevance, a formal treatment of the notion is required. In this paper we present a general framework for analyzing irrelevance. We discuss several properties of irrelevance and show how they vary in a space of definitions outlined by the framework. We show how irrelevance claims can be used to justify the creation of abstractions thereby suggesting a new view on the work on abstraction.
An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.
Gali, Hariprasad
2017-10-01
The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides on a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity depending on the complexity of the peptide.
NASA Astrophysics Data System (ADS)
Behrens, Jörg; Hanke, Moritz; Jahns, Thomas
2014-05-01
In this talk we present a way to facilitate efficient use of MPI communication for developers of climate models. Exploiting the performance potential of today's highly parallel supercomputers with real-world simulations is a complex task. This is partly caused by the low-level nature of the MPI communication library, which is the dominant communication tool at least for inter-node communication. In order to manage the complexity of the task, climate simulations with non-trivial communication patterns often use an internal abstraction layer above MPI without exploiting the benefits of communication aggregation or MPI datatypes. The solution we propose for this complexity and performance problem is the communication library YAXT. This library is built on top of MPI and takes high-level descriptions of arbitrary domain decompositions and automatically derives an efficient collective data exchange. Several exchanges can be aggregated in order to reduce latency costs. Examples are given which demonstrate the simplicity and the performance gains for selected climate applications.
Fine grained recognition of masonry walls for built heritage assessment
NASA Astrophysics Data System (ADS)
Oses, N.; Dornaika, F.; Moujahid, A.
2015-01-01
This paper presents the groundwork carried out to achieve automatic fine-grained recognition of stone masonry. This is a necessary first step in the development of the analysis tool. The built heritage that will be assessed consists of stone masonry constructions, and many of the features analysed can be characterized according to the geometry and arrangement of the stones. Much of the assessment is carried out through visual inspection. Thus, we apply image processing to digital images of the elements under inspection. The main contribution of the paper is the performance evaluation of the automatic categorization of masonry walls from a set of extracted straight line segments. The element chosen to perform this evaluation is the stone arrangement of masonry walls. The validity of the proposed framework is assessed on real images of masonry walls using machine learning paradigms. These include classifiers as well as automatic feature selection.
Automatic process control in anaerobic digestion technology: A critical review.
Nguyen, Duc; Gadhamshetty, Venkataramana; Nitayavardhana, Saoharit; Khanal, Samir Kumar
2015-10-01
Anaerobic digestion (AD) is a mature technology that relies upon the synergistic effort of a diverse group of microbial communities to metabolize diverse organic substrates. However, AD is highly sensitive to process disturbances, and thus it is advantageous to use online monitoring and process control techniques to operate the AD process efficiently. A range of electrochemical, chromatographic and spectroscopic devices can be deployed for on-line monitoring and control of the AD process. While the complexity of the control strategy ranges from simple feedback control to advanced control systems, there is some debate about implementing advanced instrumentation or advanced control strategies. Centralized AD plants could be the answer for applying progressive automatic control in the field. This article provides a critical overview of the available automatic control technologies that can be implemented in AD processes at different scales. Copyright © 2015 Elsevier Ltd. All rights reserved.
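At the simple end of the control spectrum surveyed here sits classical feedback control. A textbook PID loop is sketched below with illustrative gains and setpoint; no values are recommended for a real digester.

```python
class PID:
    """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def update(self, measurement, dt):
        err = self.setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. trim the organic loading rate to hold a target VFA concentration
# (gains, setpoint and units are hypothetical)
controller = PID(kp=0.8, ki=0.05, kd=0.0, setpoint=2.0)
```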
NASA Astrophysics Data System (ADS)
Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.
2009-12-01
This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
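A minimal two-class GMM detector in the spirit of these experiments can be sketched with scikit-learn; the feature representation (e.g., MFCC frames) and the model sizes are assumptions, not the study's configuration.

```python
from sklearn.mixture import GaussianMixture

def train_gmm_detector(X_healthy, X_apnoea, n_components=8):
    """Fit one Gaussian mixture per class on acoustic feature frames
    and classify an utterance by its log-likelihood ratio."""
    g_h = GaussianMixture(n_components, covariance_type="diag").fit(X_healthy)
    g_a = GaussianMixture(n_components, covariance_type="diag").fit(X_apnoea)

    def classify(X_utt):
        # score() returns the mean per-frame log-likelihood
        llr = g_a.score(X_utt) - g_h.score(X_utt)
        return "apnoea" if llr > 0 else "healthy"

    return classify
```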
Early Detection of Severe Apnoea through Voice Analysis and Automatic Speaker Recognition Techniques
NASA Astrophysics Data System (ADS)
Fernández, Ruben; Blanco, Jose Luis; Díaz, David; Hernández, Luis A.; López, Eduardo; Alcázar, José
This study is part of an on-going collaborative effort between the medical and the signal processing communities to promote research on applying voice analysis and Automatic Speaker Recognition techniques (ASR) for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based diagnosis could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we present and discuss the possibilities of using generative Gaussian Mixture Models (GMMs), generally used in ASR systems, to model distinctive apnoea voice characteristics (i.e. abnormal nasalization). Finally, we present experimental findings regarding the discriminative power of speaker recognition techniques applied to severe apnoea detection. We have achieved an 81.25 % correct classification rate, which is very promising and underpins the interest in this line of inquiry.
Takamuku, Shinya; Gomi, Hiroaki
2015-07-22
How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the 'inverse' computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
X-ray Emission Line Anisotropy Effects on the Isoelectronic Temperature Measurement Method
NASA Astrophysics Data System (ADS)
Liedahl, Duane; Barrios, Maria; Brown, Greg; Foord, Mark; Gray, William; Hansen, Stephanie; Heeter, Robert; Jarrott, Leonard; Mauche, Christopher; Moody, John; Schneider, Marilyn; Widmann, Klaus
2016-10-01
Measurements of the ratio of analogous emission lines from isoelectronic ions of two elements form the basis of the isoelectronic method of inferring electron temperatures in laser-produced plasmas, with the expectation that atomic modeling errors cancel to first order. Helium-like ions are a common choice in many experiments. Obtaining sufficiently bright signals often requires sample sizes with non-trivial line optical depths. For lines with small destruction probabilities per scatter, such as the 1s2p-1s2 He-like resonance line, repeated scattering can cause a marked angular dependence in the escaping radiation. Isoelectronic lines from near-Z equimolar dopants have similar optical depths and similar angular variations, which leads to a near angular-invariance for their line ratios. Using Monte Carlo simulations, we show that possible ambiguities associated with anisotropy in deriving electron temperatures from X-ray line ratios are minimized by exploiting this isoelectronic invariance.
Group Dynamics in Automatic Imitation
Gleibs, Ilka H; Wilson, Neil; Reddy, Geetha; Catmur, Caroline
2016-01-01
Imitation–matching the configural body movements of another individual–plays a crucial part in social interaction. We investigated whether automatic imitation is not only influenced by who we imitate (ingroup vs. outgroup member) but also by the nature of an expected interaction situation (competitive vs. cooperative). In line with assumptions from Social Identity Theory, we predicted that both social group membership and the expected situation impact on the level of automatic imitation. We adopted a 2 (group membership target: ingroup, outgroup) x 2 (situation: cooperative, competitive) design. The dependent variable was the degree to which participants imitated the target in a reaction time automatic imitation task. 99 female students from two British Universities participated. We found a significant two-way interaction on the imitation effect. When interacting in expectation of cooperation, imitation was stronger for an ingroup target compared to an outgroup target. However, this was not the case in the competitive condition where imitation did not differ between ingroup and outgroup target. This demonstrates that the goal structure of an expected interaction will determine the extent to which intergroup relations influence imitation, supporting a social identity approach. PMID:27657926
Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application
NASA Astrophysics Data System (ADS)
Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.
2018-03-01
The difficulties that disabled people face in moving make them unable to live independently. People with disabilities need a supporting device to move from place to place. We therefore propose a solution that can help people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a means to study automatic mobilization. A fuzzy logic algorithm was used to determine the motion direction based on the initial position, with ultrasonic sensor readings for avoiding obstacles, infrared sensor readings as a black-line reader so that the wheeled robot moves smoothly, and a smartphone as the mobile controller. As a result, smartphones with the Android operating system can control the robot using Bluetooth; here, Bluetooth technology can control the robot from a maximum distance of 15 meters. The proposed algorithm worked stably for automatic motion determination based on the initial position and was also able to modernize wheelchair movement from one room to another automatically.
Quintana, José Benito; Miró, Manuel; Estela, José Manuel; Cerdà, Víctor
2006-04-15
In this paper, the third generation of flow injection analysis, also named the lab-on-valve (LOV) approach, is proposed for the first time as a front end to high-performance liquid chromatography (HPLC) for on-line solid-phase extraction (SPE) sample processing by exploiting the bead injection (BI) concept. The proposed microanalytical system based on discontinuous programmable flow features automated packing (and withdrawal after single use) of a small amount of sorbent (<5 mg) into the microconduits of the flow network and quantitative elution of sorbed species into a narrow band (150 microL of 95% MeOH). The hyphenation of multisyringe flow injection analysis (MSFIA) with BI-LOV prior to HPLC analysis is utilized for on-line postextraction treatment to ensure chemical compatibility between the eluate medium and the initial HPLC gradient conditions. This circumvents the band-broadening effect commonly observed in conventional on-line SPE-based sample processors due to the low eluting strength of the mobile phase. The potential of the novel MSFI-BI-LOV hyphenation for on-line handling of complex environmental and biological samples prior to reversed-phase chromatographic separations was assessed for the expeditious determination of five acidic pharmaceutical residues (viz., ketoprofen, naproxen, bezafibrate, diclofenac, and ibuprofen) and one metabolite (viz., salicylic acid) in surface water, urban wastewater, and urine. To this end, the copolymeric divinylbenzene-co-n-vinylpyrrolidone beads (Oasis HLB) were utilized as renewable sorptive entities in the micromachined unit. The automated analytical method features relative recovery percentages of >88%, limits of detection within the range 0.02-0.67 ng mL(-1), and coefficients of variation <11% for the column renewable mode and gives rise to a drastic reduction in operation costs (approximately 25-fold) as compared to on-line column switching systems.
Distributed Decision Making Environment.
1982-12-01
Findeisen, W., F. N. Bailey, M. Brdys, K. Malinowski, P. Tatjewski and A. Wozniak, Control and Coordination in Hierarchical Systems, New York, NY: Wiley, 1977. [99] W. Findeisen et al., "On-line hierarchical control for steady-state systems," IEEE Trans. Automat. Contr., vol. AC-23, no. 2, pp. 189-209
NASA Astrophysics Data System (ADS)
Metzler, Jürgen; Kroschel, Kristian; Willersinn, Dieter
2017-03-01
Monitoring of the heart rhythm is the cornerstone of the diagnosis of cardiac arrhythmias. It is done by means of electrocardiography, which relies on electrodes attached to the skin of the patient. We present a new system approach based on the so-called vibrocardiogram that allows an automatic non-contact registration of the heart rhythm. Because of the contactless principle, the technique offers potential application advantages in medical fields like emergency medicine (burn patients) or premature baby care, where adhesive electrodes are not easily applicable. A laser-based, mobile, contactless vibrometer for on-site diagnostics that works on the principle of laser Doppler vibrometry allows the acquisition of vital functions in the form of a vibrocardiogram. Preliminary clinical studies at the Klinikum Karlsruhe have shown that the region around the carotid artery and the chest region are appropriate for this purpose. However, the challenge is to find a suitable measurement point in these parts of the body, which differs from person to person due to, e.g., physiological properties of the skin. Therefore, we propose a new Microsoft Kinect-based approach. When a suitable measurement area on the appropriate parts of the body is detected by processing the Kinect data, the vibrometer is automatically aligned on an initial location within this area. Then, vibrocardiograms at different locations within this area are successively acquired until a sufficient measuring quality is achieved. This optimal location is found by exploiting the autocorrelation function.
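One simple way to score measurement quality at a candidate point, in line with the autocorrelation criterion mentioned above, is to look for a strong periodic peak in the autocorrelation of the vibrocardiogram. A sketch, with the physiological search band as an assumption:

```python
import numpy as np

def heart_rate_from_vcg(vcg, fs):
    """Estimate the dominant periodicity of a vibrocardiogram from its
    autocorrelation; the peak height doubles as a crude quality score."""
    x = vcg - np.mean(vcg)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                                      # normalise lag 0 to 1
    lo, hi = int(fs * 60 / 180), int(fs * 60 / 40)   # 40-180 bpm window
    lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * fs / lag, ac[lag]                  # (bpm, quality)
```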
Solving the "Hidden Line" Problem
NASA Technical Reports Server (NTRS)
1984-01-01
David Hedgley Jr., a mathematician at Dryden Flight Research Center, has developed an accurate computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. The Hidden Line Computer Code program automatically removes superfluous lines and permits the computer to display an object from specific viewpoints, just as the human eye would see it. Users include the Rowland Institute for Science in Cambridge, MA, several departments of Lockheed Georgia Co., and the Nebraska Public Power District (NPPD).
38 CFR 36.4352 - Authority to close loans on the automatic basis.
Code of Federal Regulations, 2011 CFR
2011-07-01
...(s) of warehouse lines of credit must be submitted to VA and the applicant must agree that VA may... supporting credit data have been developed on its behalf by a duly authorized agent. (11) Probation. Lenders.... (2) Processing annual lender data. The VA regional office having jurisdiction for the lender's...
Deep Learning-Based Data Forgery Detection in Automatic Generation Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fengli; Li, Qinghua
Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into false generation correction, which will harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on Neural Networks and the Fourier Transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on a real ACE dataset show that our methods have high detection accuracy.
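A much-simplified stand-in for the Fourier-based branch of such a detector: compare the magnitude spectrum of an ACE window against an average spectrum from normal operation and threshold the distance. The window length and threshold are assumptions, not values from the paper.

```python
import numpy as np

def spectral_anomaly_score(ace_window, baseline_windows):
    """Distance between a window's Fourier magnitude spectrum and the
    mean spectrum of normal operation, relative to the baseline norm."""
    spec = np.abs(np.fft.rfft(ace_window))
    ref = np.mean([np.abs(np.fft.rfft(w)) for w in baseline_windows], axis=0)
    return np.linalg.norm(spec - ref) / np.linalg.norm(ref)

# flag the window if the score exceeds a threshold tuned on normal data
```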
Hustoft, Hanne Kolsrud; Brandtzaeg, Ole Kristian; Rogeberg, Magnus; Misaghian, Dorna; Torsetnes, Silje Bøen; Greibrokk, Tyge; Reubsaet, Léon; Wilson, Steven Ray; Lundanes, Elsa
2013-12-16
Reliable, sensitive and automatable analytical methodology is of great value in, e.g., cancer diagnostics. In this context, an on-line system for enzymatic cleavage of proteins and subsequent peptide separation by liquid chromatography (LC) with mass spectrometric detection has been developed using "sub-chip" columns (10-20 μm inner diameter, ID). The system could detect attomole amounts of the isolated cancer biomarker progastrin-releasing peptide (ProGRP) in a more automatable fashion compared to previous methods. The workflow combines protein digestion using a 20 μm ID immobilized trypsin reactor with a polymeric layer of 2-hydroxyethyl methacrylate-vinyl azlactone (HEMA-VDM), desalting on a polystyrene-divinylbenzene (PS-DVB) monolithic trap column, and subsequent separation of the resulting peptides on a 10 μm ID PS-DVB porous layer open tubular (PLOT) column. The high resolution of the PLOT columns was maintained in the on-line system, resulting in narrow chromatographic peaks of 3-5 seconds. The trypsin reactors provided repeatable performance and were compatible with long-term storage.
Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text
NASA Astrophysics Data System (ADS)
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
2015-12-01
We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Words, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We will describe our experience and implementation of our system and share lessons learned from our development. We will also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
Experimental Optimization Methods for Multi-Element Airfoils
NASA Technical Reports Server (NTRS)
Landman, Drew; Britcher, Colin P.
1996-01-01
A modern three-element airfoil model with a remotely activated flap was used to investigate the optimum flap testing position using an automated optimization algorithm in wind tunnel tests. Detailed results for lift coefficient versus flap vertical and horizontal position are presented for two angles of attack: 8 and 14 degrees. An on-line first-order optimizer is demonstrated which automatically seeks the optimum lift as a function of flap position. Future work with off-line optimization techniques is introduced, and aerodynamic hysteresis effects due to flap movement with flow on are discussed.
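A first-order on-line optimizer of this kind can be sketched as finite-difference gradient ascent on the measured lift coefficient; the step sizes and the measurement function below are placeholders, not the experiment's settings.

```python
import numpy as np

def online_first_order_optimize(measure_cl, x0, step=0.5, h=0.25, iters=30):
    """Estimate the lift gradient from central differences of measured
    CL at perturbed flap positions and step uphill toward maximum lift."""
    x = np.asarray(x0, float)            # (horizontal, vertical) flap position
    for _ in range(iters):
        g = np.zeros_like(x)
        for k in range(len(x)):
            e = np.zeros_like(x)
            e[k] = h
            g[k] = (measure_cl(x + e) - measure_cl(x - e)) / (2 * h)
        x += step * g                    # move flap toward higher lift
    return x
```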
Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway
NASA Astrophysics Data System (ADS)
Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.
2018-05-01
Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie, and the rail position can be determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rail. As a result of the evaluation, the error in the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and those related to a contact line are successfully detected, as confirmed visually.
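The rail-matching step rests on standard point-to-point ICP. A 2D sketch with an SVD pose update follows; the real pipeline matches a 3D rail-head template and is more robust than this minimal version.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=20):
    """Align template points `src` (N x 2) to measured points `dst`
    with iterative nearest-neighbour matching and a Kabsch/SVD update.
    Returns the accumulated rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)           # nearest-neighbour matches
        p, q = moved, dst[idx]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        dR = (U @ Vt).T
        if np.linalg.det(dR) < 0:            # guard against reflections
            Vt[-1] *= -1
            dR = (U @ Vt).T
        dt = q.mean(0) - p.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt
    return R, t
```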
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
Applying a path searching method based on distribution network topology in setting software works well, and a path searching method that includes DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies the path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and forming, according to the line loads traversed by the search path and the important loads integrated along the optimized path, an optimized division scheme of planned islands in which each DG serves as a power source and is balanced against the local important loads. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration scheme.
30 CFR 57.13021 - High-pressure hose connections.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false High-pressure hose connections. 57.13021... Air and Boilers § 57.13021 High-pressure hose connections. Except where automatic shutoff valves are...-pressure hose lines of 3/4-inch inside diameter or larger, and between high-pressure hose lines of 3/4-inch...
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
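The geometric core, computing the line where two fitted planes meet, is compact enough to sketch directly; plane fitting and the LSHP structure are omitted.

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line where planes n1.x = d1 and n2.x = d2 intersect: the
    direction is n1 x n2, and a point on the line satisfies both
    plane equations (min-norm least-squares solution). Returns
    (point, unit direction), or None for (nearly) parallel planes."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        return None
    A = np.vstack([n1, n2])
    p, *_ = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)
    return p, direction / np.linalg.norm(direction)
```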
A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.
Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang
2018-06-01
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
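For completeness, the headline metric reported above is straightforward to compute from binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks
    (undefined when both masks are empty)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```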
Automated Fabrication Technologies for High Performance Polymer Composites
NASA Technical Reports Server (NTRS)
Shuart, M. J.; Johnston, N. J.; Dexter, H. B.; Marchello, J. M.; Grenoble, R. W.
1998-01-01
New fabrication technologies are being exploited for building high-performance graphite-fiber-reinforced composite structures. Stitched fiber preforms and resin film infusion have been successfully demonstrated for large composite wing structures. Other automated processes being developed include automated placement of tacky, drapable epoxy towpreg, automated heated-head placement of consolidated ribbon/tape, and vacuum-assisted resin transfer molding. These methods have the potential to yield low-cost, high-performance structures by fabricating composite structures to net shape out of the autoclave.
ACE Design Study and Experiments
1976-06-01
... orthophoto on off-line printer
o Automatically compute contours on UNIVAC 1108 and plot on CALCOMP
o Manually trace planimetry and drainage from ... orthophoto *
o Manually edit and trace plotted contours to obtain completed contour manuscript *
  - Edit errors
  - Add missing contour detail
  - Combine ... stereomodels
  - Contours adjusted to drainage chart and spot elevations
  - Referring to orthophoto, rectified photos, original photos
o Normal ...
Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Meier, Thomas
2016-04-01
Induced events can pose a risk to local infrastructure that needs to be understood and evaluated. They also represent a chance to learn more about reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and a phase onset time determination algorithm are applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data from the local seismic network with master events. It distinguishes events between different reservoirs and within the individual reservoirs. Furthermore, it provides location and magnitude estimates. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial and temporal clustering, and the number of events is compared to existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate influences on location precision.
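The detection principle is a sliding normalized cross-correlation against master-event waveforms. A single-channel sketch follows; the threshold and window handling are assumptions, not the project's configuration.

```python
import numpy as np

def detect_events(trace, master, threshold=0.7):
    """Slide a master-event waveform along continuous data and flag
    windows whose normalised cross-correlation exceeds a threshold."""
    m = (master - master.mean()) / master.std()
    n = len(m)
    picks = []
    for i in range(len(trace) - n):
        w = trace[i:i + n]
        s = w.std()
        if s == 0:
            continue                      # skip flat (dead) windows
        cc = np.dot((w - w.mean()) / s, m) / n
        if cc >= threshold:
            picks.append((i, cc))         # (sample offset, correlation)
    return picks
```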
Automatically Producing Accessible Learning Objects
ERIC Educational Resources Information Center
Di Iorio, Angelo; Feliziani, Antonio Angelo; Mirri, Silvia; Salomoni, Paola; Vitali, Fabio
2006-01-01
The "Anywhere, Anytime, Anyway" slogan is frequently associated to e-learning with the aim to emphasize the wide access offered by on-line education. Otherwise, learning materials are currently created to be used with a specific technology or configuration, leaving out from the virtual classroom students who have limited access capabilities and,…
A New KE-Free Online ICALL System Featuring Error Contingent Feedback
ERIC Educational Resources Information Center
Tokuda, Naoyuki; Chen, Liang
2004-01-01
As a first step towards implementing a human language teacher, we have developed a new template-based on-line ICALL (intelligent computer assisted language learning) system capable of automatically diagnosing learners' free-format translated inputs and returning error contingent feedback. The system architecture we have adopted allows language…
Enhanced Cardiac Perception Is Associated with Increased Susceptibility to Framing Effects
ERIC Educational Resources Information Center
Sutterlin, Stefan; Schulz, Stefan M.; Stumpf, Theresa; Pauli, Paul; Vogele, Claus
2013-01-01
Previous studies suggest, in line with dual-process models, that interoceptive skills affect controlled decisions via automatic or implicit processing. The "framing effect" is considered to capture implicit effects of task-irrelevant emotional stimuli on decision-making. We hypothesized that cardiac awareness, as a measure of interoceptive…
Using Affordable Data Capturing Devices for Automatic 3D City Modelling
NASA Astrophysics Data System (ADS)
Alizadehashrafi, B.; Abdul-Rahman, A.
2017-11-01
In this research project, many movies of UTM Kolej 9, Skudai, Johor Bahru (see Figure 1) were taken with an AR.Drone 2. Since the AR.Drone 2.0 has a liquid lens, there were significant distortions and deformations in the frames converted from the movies taken while flying. Passive remote sensing (RS) applications based on image matching and epipolar lines, such as Agisoft PhotoScan, were tested to create point clouds and meshes along with 3D models and textures. As the result was not acceptable (see Figure 2), the previous Dynamic Pulse Function based on the Ruby programming language was enhanced and utilized to create the 3D models automatically in LoD3. The accuracy of the final 3D model is approximately 10 to 20 cm. After rectification and parallel projection of the photos based on some tie points and targets, all the parameters were measured and utilized as input to the system to create the 3D model automatically in LoD3 with very high accuracy.
Automatic welding systems gain world-wide acceptance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ives, G. Jr.
1979-04-01
Five automatic welding systems are currently available for commercial use, marketed by three US companies - CRC Automatic Welding Co., H.C. Price Co., and Diametrics Inc. - as well as by Belgium's S.A. Arcos Co. (the Orbimatic welding device) and France's Societe Serimer. The pioneer and leader of the field, CRC has served on 52 projects since 1969, including the 56-in. Orenburg line in the USSR. In comparison, the other systems have seen only limited activity. The Orbimatic welder has been used in the Netherlands and other Western European countries on projects with up to 42-in.-diameter pipe. The H.C. Price welder proved successful in North Sea construction and last year in Mexico's Troncal Sistema Nacional de Gas. The Diametrics welder relies on the electric flash-butt system used on large-diameter projects in the USSR. The most recent entry into the commercial market, France's Serimer completed field testing last year. Four other welders have recently been announced but are not yet commercially available.
Mathematical modeling of control system for the experimental steam generator
NASA Astrophysics Data System (ADS)
Podlasek, Szymon; Lalik, Krzysztof; Filipowicz, Mariusz; Sornek, Krzysztof; Kupski, Robert; Raś, Anita
2016-03-01
A steam generator is an essential unit of each cogeneration system using steam machines. Currently, one of the cheapest ways to generate steam is to use old steam generators from army surplus stores. They have a relatively simple construction and, in the case of less exploited units, quite good general condition and functionality of the mechanical components. By contrast, electrical components and control systems (mostly based on relay automatics) are definitely obsolete. It is not possible to use such units in cooperation with a steam bus or with steam engines. In particular, there is no possibility of automatically adjusting the pressure and temperature of the generated steam supplying the steam engines. Such adjustment is necessary when the generator load varies. This paper describes the improvement of an exemplary unit together with the construction of a measurement and control system based on a PLC. The aim was to enable communication between the steam generator and the controllers of the steam bus and steam engines in order to construct a complete, fully autonomous and maintenance-free microcogeneration system.
Automatic classification of radiological reports for clinical care.
Gerevini, Alfonso Emilio; Lavelli, Alberto; Maffi, Alessandro; Maroldi, Roberto; Minard, Anne-Lyse; Serina, Ivan; Squassina, Guido
2018-06-07
Radiological reporting generates a large amount of free-text clinical narratives, a potentially valuable source of information for improving clinical care and supporting research. The use of automatic techniques to analyze such reports is necessary to make their content effectively available to radiologists in an aggregated form. In this paper we focus on the classification of chest computed tomography reports according to a classification schema proposed for this task by radiologists of the Italian hospital ASST Spedali Civili di Brescia. The proposed system is built by exploiting a training data set containing reports annotated by radiologists. Each report is classified according to the schema developed by radiologists, and textual evidence is marked in the report. The annotations are then used to train different machine-learning-based classifiers. We present a method based on a cascade of classifiers which makes use of a set of syntactic and semantic features. The resulting system is a novel hierarchical classification system for the given task, which we have evaluated experimentally. Copyright © 2018 Elsevier B.V. All rights reserved.
Unified framework for automated iris segmentation using distantly acquired face images.
Tan, Chun-Wei; Kumar, Ajay
2012-09-01
Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of a robust segmentation algorithm to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired under near infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust postprocessing algorithm to effectively mitigate noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvements in average segmentation error over previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.
Coordinating Resource Usage through Adaptive Service Provisioning in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Fok, Chien-Liang; Roman, Gruia-Catalin; Lu, Chenyang
Wireless sensor networks (WSNs) exhibit high levels of network dynamics and consist of devices with limited energy. This results in the need to coordinate applications not only at the functional level, as is traditionally done, but also in terms of resource utilization. In this paper, we present a middleware that does this using adaptive service provisioning. Novel service binding strategies automatically adapt application behavior when opportunities for energy savings surface, and switch providers when the network topology changes. The former is accomplished by providing limited information about the energy consumption associated with using various services, systematically exploiting opportunities for sharing service invocations, and exploiting the broadcast nature of wireless communication in WSNs. The middleware has been implemented and evaluated on two disparate WSN platforms, the TelosB and Imote2. Empirical results show that adaptive service provisioning can enable energy-aware service binding decisions that result in increased energy efficiency and significantly increase service availability, while imposing minimal additional burden on the application, service, and device developers. Two applications, medical patient monitoring and structural health monitoring, demonstrate the middleware's efficacy.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools, which make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
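A toy Monte Carlo rendering of the two-stage model may help fix ideas: the attacker's power is the number of weaponized exploits in hand, and a compromise requires both a perimeter breach and a local escalation. All counts below are invented for illustration; they are not the paper's data or its exact estimator.

    # Toy Monte Carlo sketch of a two-stage untargeted attack. An attacker
    # holds K weaponized exploits out of a pool of known vulnerabilities;
    # a trial succeeds only if both stages find an exploitable match.
    import random

    POOL = 1000          # known vulnerabilities in circulation (assumed)
    K = 50               # attacker's weaponized subset ("power" of the attacker)
    PERIMETER_VULNS = 12 # vulnerabilities on the Internet-facing system
    LOCAL_VULNS = 8      # local vulnerabilities on the internal target
    TRIALS = 100_000

    def breached(n_vulns, arsenal):
        # The system's vulnerabilities are drawn at random per trial (toy model).
        return any(v in arsenal for v in random.sample(range(POOL), n_vulns))

    hits = 0
    for _ in range(TRIALS):
        arsenal = set(random.sample(range(POOL), K))
        if breached(PERIMETER_VULNS, arsenal) and breached(LOCAL_VULNS, arsenal):
            hits += 1

    print(f"estimated P(two-stage compromise) ~ {hits / TRIALS:.4f}")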
Intelligent form removal with character stroke preservation
NASA Astrophysics Data System (ADS)
Garris, Michael D.
1996-03-01
A new technique for intelligent form removal has been developed along with a new method for evaluating its impact on optical character recognition (OCR). All the dominant lines in the image are automatically detected using the Hough line transform and intelligently erased while simultaneously preserving overlapping character strokes by computing line width statistics and keying off of certain visual cues. This new method of form removal operates on loosely defined zones with no image deskewing. Any field in which the writer is provided a horizontal line to enter a response can be processed by this method. Several examples of processed fields are provided, including a comparison of results between the new method and a commercially available forms removal package. Even if this new form removal method had not improved character recognition accuracy, it would still be a significant improvement to the technology because the requirement of a priori knowledge of the form's geometric details has been greatly reduced. This relaxes the recognition system's dependence on rigid form design, printing, and reproduction by automatically detecting and removing some of the physical structures (lines) on the form. Using the National Institute of Standards and Technology (NIST) public domain form-based handprint recognition system, the technique was tested on a large number of fields containing randomly ordered handprinted lowercase alphabets, as these letters (especially those with descenders) frequently touch and extend through the line along which they are written. Preserving character strokes improves overall lowercase recognition performance by 3%, which is a net improvement, but a single performance number like this does not communicate how the recognition process was really influenced. Trade-offs are expected with the introduction of any new technique into a complex recognition system. To understand both the improvements and the trade-offs, a new analysis was designed to compare the statistical distributions of individual confusion pairs between two systems. As OCR technology continues to improve, sophisticated analyses like this are necessary to reduce the errors remaining in complex recognition problems.
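The core of the line removal step can be sketched with OpenCV. The vertical run-length test below is a crude stand-in for the paper's line-width statistics and visual cues, and it assumes roughly horizontal ruled lines; the file name and thresholds are illustrative.

    # Sketch: detect dominant lines with the probabilistic Hough transform,
    # then erase only thin ink runs along each line, keeping thick runs
    # (character strokes crossing the line) intact.
    import cv2
    import numpy as np

    img = cv2.imread("field.png", cv2.IMREAD_GRAYSCALE)
    binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, 200,
                            minLineLength=binary.shape[1] // 2, maxLineGap=5)

    MAX_LINE_WIDTH = 4  # vertical runs thicker than this are character strokes
    out = binary.copy()
    for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
        if x2 < x1:
            x1, y1, x2, y2 = x2, y2, x1, y1
        for x in range(x1, x2 + 1):
            y = y1 + (y2 - y1) * (x - x1) // max(x2 - x1, 1)
            top, bot = y, y  # measure the vertical ink run through (x, y)
            while top > 0 and out[top - 1, x]:
                top -= 1
            while bot < out.shape[0] - 1 and out[bot + 1, x]:
                bot += 1
            if bot - top + 1 <= MAX_LINE_WIDTH:   # thin run: ruled line only
                out[top:bot + 1, x] = 0           # erase; thick runs survive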
Real-time road detection in infrared imagery
NASA Astrophysics Data System (ADS)
Andre, Haritini E.; McCoy, Keith
1990-09-01
Automatic road detection is an important part of many scene recognition applications. The extraction of roads provides a means of navigation and position update for remotely piloted or autonomous vehicles. Roads supply strong contextual information which can be used to improve the performance of automatic target recognition (ATR) systems by directing the search for targets and adjusting target classification confidences. This paper describes algorithmic techniques for labeling roads in high-resolution infrared imagery. In addition, real-time implementation of this structural approach using a processor array based on the Martin Marietta Geometric Arithmetic Parallel Processor (GAPP) chip is addressed. The algorithm described is based on the hypothesis that a road consists of pairs of line segments separated by a distance "d" with opposite gradient directions (antiparallel). The general nature of the algorithm and its parallel implementation on a single instruction, multiple data (SIMD) machine are improvements over existing work. The algorithm seeks to identify line segments meeting the road hypothesis in a manner that performs well even when the side of the road is fragmented due to occlusion or intersections. The use of geometrical relationships between line segments is a powerful yet flexible method of road classification which is independent of orientation. In addition, this approach can be used to nominate other types of objects with minor parametric changes.
A hypertext system that learns from user feedback
NASA Technical Reports Server (NTRS)
Mathe, Nathalie
1994-01-01
Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant to the current problem-solving context were automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing a hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.
Electro-Optical Inspection For Tolerance Control As An Integral Part Of A Flexible Machining Cell
NASA Astrophysics Data System (ADS)
Renaud, Blaise
1986-11-01
Institut CERAC has been involved in optical metrology and 3-dimensional surface control for the last couple of years. Among the industrial applications considered is the on-line shape evaluation of machined parts within the manufacturing cell. The specific objective is to measure the machining errors and to compare them with the tolerances set by designers. An electro-optical sensing technique has been developed which relies on a projection Moire contouring optical method. A prototype inspection system has been designed, making use of video detection and computer image processing. Moire interferograms are interpreted, and the metrological information is automatically retrieved. A structured database can be generated for subsequent data analysis and for real-time closed-loop corrective actions. A real-time kernel embedded in a synchronisation network (Petri net) for the control of concurrent processes in the Electro-Optical Inspection (EOI) station was realised and implemented in a MODULA-2 program, DIN01. The prototype system for on-line automatic tolerance control within a flexible machining cell is described in this paper, together with the fast-prototype synchronisation program.
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted because the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Therefore, road inventory mapping work has to be accurate and must eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed require an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea of this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that the method processes every traffic marking. In this paper, we discuss a highly accurate method, independent of individual human operators, that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher intensity points; (2) Generating a Triangulated Irregular Network (TIN) from higher intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points from the buffers using the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines using the extracted points; (7) Extracting lines from intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic marking mapping data.
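Steps (1), (6), and (7) can be compressed into a short sketch using scikit-image. The point-array layout, the assumption that scan lines are ordered along x, the grid resolution, and the intensity threshold are all illustrative choices, not the paper's parameters.

    # Sketch of steps (1), (6), (7): intensity thresholding, detection of
    # intensity-change points along scan lines, and Hough line extraction.
    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    # points: N x 4 array of (x, y, intensity, scan_line_id) -- assumed layout
    points = np.load("mls_points.npy")

    changes = []
    for sid in np.unique(points[:, 3]):
        line = points[points[:, 3] == sid]
        line = line[np.argsort(line[:, 0])]                   # order along scan
        bright = line[:, 2] > np.percentile(line[:, 2], 90)   # step (1)
        edges = np.flatnonzero(np.diff(bright.astype(int)))   # step (6)
        changes.append(line[edges, :2])
    changes = np.vstack(changes)

    # Rasterize the change points onto a 10 cm grid and run the Hough transform.
    res = 0.1
    ij = np.floor((changes - changes.min(axis=0)) / res).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True

    h, theta, rho = hough_line(grid)                          # step (7)
    for _, angle, dist in zip(*hough_line_peaks(h, theta, rho)):
        print(f"marking edge: angle={np.degrees(angle):.1f} deg, "
              f"offset={dist * res:.2f} m")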
Visual Systems for Teleconferencing: Telewriting, Televideo, and Facsimile.
ERIC Educational Resources Information Center
Olgren, Christine
Telewriting, televideo, and facsimile systems are new forms of teleconferencing which can transmit a variety of graphic and pictorial information on voice-grade telephone lines. All of the equipment employs solid-state circuitry to enhance performance and to exploit the limitations of a narrowband channel. Telewriters range from simple…
Exploiting Discrete Structure for Learning On-Line in Distributed Robot Systems
2009-10-21
accelerating rate over the next 20 years. Service robotics currently shares some important characteristics with the automobile industry in the early...Authorization Act for Fiscal Year 2001, S. 2549, Sec. 217). The same impact is expected for pilotless air and water vehicles, where drone aircraft for
Use of TCSR with Split Windings for Shortening the Spar Cycle Time in 500 kV Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matinyan, A. M., E-mail: al-drm@mail.ru; Peshkov, M. V.; Karpov, V. N.
The arc-fault recharge phenomenon in single-phase automatic reclosure (SPAR) of a line is examined. A brief description is given of the design of a 500 kV thyristor controlled shunt reactor (TCSR) with split valve-side windings. This type of TCSR is shown to effectively quench a single-phase arc fault in a power transmission line and to shorten the SPAR cycle time.
Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines
Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim
2008-01-01
This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for acquiring the vectorization process. In this model, various raster images can be vectorized such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was developed by implementing an appropriate computer programming and tested on a basic application. Results, verified by using two well known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately. PMID:27879843
Semi-automatic, octave-spanning optical frequency counter.
Liu, Tze-An; Shu, Ren-Huei; Peng, Jin-Long
2008-07-07
This work presents and demonstrates a semi-automatic optical frequency counter with octave-spanning counting capability using two fiber laser combs operated at different repetition rates. Monochromators are utilized to provide an approximate frequency of the laser under measurement to determine the mode number difference between the two laser combs. The exact mode number of the beating comb line is obtained from the mode number difference and the measured beat frequencies. The entire measurement process, except the frequency stabilization of the laser combs and the optimization of the beat signal-to-noise ratio, is controlled by a computer running a semi-automatic optical frequency counter.
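The mode-number arithmetic can be illustrated for a single comb (the paper's two-comb scheme uses the repetition-rate difference to remove the residual ambiguity); all numbers below are assumed values for illustration, not measurements from the work.

    # Back-of-the-envelope mode-number determination for one comb:
    # the coarse (monochromator) estimate pins down the integer mode
    # number, and the beat note then gives the exact frequency.
    f_rep = 250e6        # comb repetition rate, Hz (assumed)
    f_ceo = 20e6         # carrier-envelope offset frequency, Hz (assumed)
    f_beat = 35e6        # measured beat against the nearest comb line, Hz
    f_approx = 193.4e12  # monochromator estimate of the laser frequency, Hz

    # Candidate mode number from the coarse estimate...
    n = round((f_approx - f_ceo - f_beat) / f_rep)
    # ...and the exact optical frequency it implies.
    f_exact = f_ceo + n * f_rep + f_beat
    print(n, f_exact)    # the coarse estimate must be good to within f_rep/2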
Optoelectronic imaging of speckle using image processing method
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Wang, Pengfei
2018-01-01
A detailed image-processing workflow for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are combined to handle the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on a heat-equation PDE; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is then unwrapped. Finally, the image-processing method was used to automatically measure bubbles in rubber under negative pressure, which could be applied in tire inspection.
Automatic airline baggage counting using 3D image segmentation
NASA Astrophysics Data System (ADS)
Yin, Deyu; Gao, Qingji; Luo, Qijun
2017-06-01
The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation based on a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so the edge can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. Multi-bag experiments performed under different placement modes prove the validity of the method.
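A minimal sketch of this pipeline, assuming a precomputed height map and illustrative thresholds, is the following (SciPy's ndimage module provides the edge, closing, and labeling operators):

    # Sketch: edge detection on the height map, morphological closing of
    # the edge chains, then counting the enclosed connected regions.
    import numpy as np
    from scipy import ndimage

    height = np.load("height_map.npy")      # 2D grid projected from the cloud

    # Height drops at bag edges show up as large gradient magnitude.
    gx = ndimage.sobel(height, axis=0)
    gy = ndimage.sobel(height, axis=1)
    edges = np.hypot(gx, gy) > 0.05         # drop per cell; threshold assumed

    # Link broken edge fragments into closed chains (the morphological
    # processing described above).
    closed = ndimage.binary_closing(edges, structure=np.ones((5, 5)))

    # Regions enclosed by the chains correspond to bags: fill the holes,
    # strip the chains themselves, and label the interiors.
    interiors = ndimage.binary_fill_holes(closed) & ~closed
    labels, n_bags = ndimage.label(interiors)
    print("baggage count:", n_bags)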
On-line multiple component analysis for efficient quantitative bioprocess development.
Dietzsch, Christian; Spadiut, Oliver; Herwig, Christoph
2013-02-20
On-line monitoring devices for the precise determination of a multitude of components are a prerequisite for fast bioprocess quantification. On-line measured values have to be checked for quality and consistency in order to extract quantitative information from these data. In the present study we characterized a novel on-line sampling and analysis device comprising an automatic photometric robot. We connected this on-line device to a bioreactor and concomitantly measured six components (i.e. glucose, glycerol, ethanol, acetate, phosphate and ammonium) during different batch cultivations of Pichia pastoris. The on-line measured data did not show significant deviations from off-line samples and were consequently used for incremental rate and yield calculations. In this respect we highlighted the importance of data quality and discussed the phenomenon of error propagation. On-line calculated rates and yields depicted the physiological responses of the P. pastoris cells in unlimited and limited cultures. A more detailed analysis of the physiological state was possible by considering the off-line determined biomass dry weight and calculating specific rates. Here we present a novel device for on-line monitoring of bioprocesses, which ensures high data quality in real time and therefore represents a valuable tool for Process Analytical Technology (PAT). Copyright © 2012 Elsevier B.V. All rights reserved.
Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae
2009-01-01
In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle used to carry the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination change, the vertical threshold, thinning, Hough transform, and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
Multichannel analysis of surface wave method with the autojuggie
Tian, G.; Steeples, D.W.; Xia, J.; Miller, R.D.; Spikes, K.T.; Ralston, M.D.
2003-01-01
The shear (S)-wave velocity of near-surface materials and its effect on seismic-wave propagation are of fundamental interest in many engineering, environmental, and groundwater studies. The multichannel analysis of surface wave (MASW) method provides a robust, efficient, and accurate tool to observe near-surface S-wave velocity. A recently developed device used to place large numbers of closely spaced geophones simultaneously and automatically (the 'autojuggie') is shown here to be applicable to the collection of MASW data. In order to demonstrate the use of the autojuggie in the MASW method, we compared high-frequency surface-wave data acquired from conventionally planted geophones (control line) to data collected in parallel with the automatically planted geophones attached to steel bars (test line). The results demonstrate that the autojuggie can be applied in the MASW method. Implementation of the autojuggie in very shallow MASW surveys could drastically reduce the time required and costs incurred in such surveys. © 2003 Elsevier Science Ltd. All rights reserved.
Automatic NMR field-frequency lock-pulsed phase locked loop approach.
Kan, S; Gonord, P; Fan, M; Sauzade, M; Courtieu, J
1978-06-01
A self-contained deuterium frequency-field lock scheme for a high-resolution NMR spectrometer is described. It is based on phase locked loop techniques in which the free induction decay signal behaves as a voltage-controlled oscillator. By pulsing the spins at an offset frequency of a few hundred hertz and using a digital phase-frequency discriminator, this method not only eliminates the usual phase, rf power, and offset adjustments needed in conventional lock systems but also possesses automatic pull-in characteristics that dispense with the use of field sweeps to locate the NMR line prior to closure of the lock loop.
[Micron]ADS-B Detect and Avoid Flight Tests on Phantom 4 Unmanned Aircraft System
NASA Technical Reports Server (NTRS)
Arteaga, Ricardo; Dandachy, Mike; Truong, Hong; Aruljothi, Arun; Vedantam, Mihir; Epperson, Kraettli; McCartney, Reed
2018-01-01
Researchers at the National Aeronautics and Space Administration Armstrong Flight Research Center in Edwards, California and Vigilant Aerospace Systems collaborated for the flight-test demonstration of an Automatic Dependent Surveillance-Broadcast based collision avoidance technology on a small unmanned aircraft system equipped with the uAvionix Automatic Dependent Surveillance-Broadcast transponder. The purpose of the testing was to demonstrate that National Aeronautics and Space Administration / Vigilant software and algorithms, commercialized as the FlightHorizon UAS™, are compatible with uAvionix hardware systems and the DJI Phantom 4 small unmanned aircraft system. The testing and demonstrations were necessary for both parties to further develop and certify the technology in three key areas: flights beyond visual line of sight, collision avoidance, and autonomous operations. The National Aeronautics and Space Administration and Vigilant Aerospace Systems have developed and successfully flight-tested an Automatic Dependent Surveillance-Broadcast Detect and Avoid system on the Phantom 4 small unmanned aircraft system. The Automatic Dependent Surveillance-Broadcast Detect and Avoid system architecture is especially suited for small unmanned aircraft systems because it integrates: 1) miniaturized Automatic Dependent Surveillance-Broadcast hardware; 2) radio data-link communications; 3) software algorithms for real-time Automatic Dependent Surveillance-Broadcast data integration, conflict detection, and alerting; and 4) a synthetic vision display using a fully-integrated National Aeronautics and Space Administration geobrowser for three dimensional graphical representations for ownship and air traffic situational awareness. The flight-test objectives were to evaluate the performance of Automatic Dependent Surveillance-Broadcast Detect and Avoid collision avoidance technology as installed on two small unmanned aircraft systems. In December 2016, four flight tests were conducted at Edwards Air Force Base. Researchers in the ground control station looking at displays were able to verify the Automatic Dependent Surveillance-Broadcast target detection and collision avoidance resolutions.
Sentinel-2 for rapid operational landslide inventory mapping
NASA Astrophysics Data System (ADS)
Stumpf, André; Marc, Odin; Malet, Jean-Philippe; Michea, David
2017-04-01
Landslide inventory mapping after major triggering events such as heavy rainfalls or earthquakes is crucial for disaster response, the assessment of hazards, and the quantification of sediment budgets and empirical scaling laws. Numerous studies have already demonstrated the utility of very-high resolution satellite and aerial images for the elaboration of inventories based on semi-automatic methods or visual image interpretation. Nevertheless, such semi-automatic methods are rarely used in an operational context after major triggering events; this is partly due to access limitations on the required input datasets (i.e. VHR satellite images) and to the absence of dedicated services (i.e. processing chains) available for the landslide community. Several on-going initiatives allow these limitations to be overcome. First, from a data perspective, the launch of the Sentinel-2 mission offers opportunities for the design of an operational service that can be deployed for landslide inventory mapping at any time and anywhere on the globe. Second, from an implementation perspective, the Geohazards Exploitation Platform (GEP) of the European Space Agency (ESA) allows the integration and diffusion of on-line processing algorithms in a high-performance computing environment. Third, from a community perspective, the recently launched Landslide Pilot of the Committee on Earth Observation Satellites (CEOS) has targeted the take-off of such a service as a main objective for the landslide community. Within this context, this study targets the development of a largely automatic, supervised image processing chain for landslide inventory mapping from bi-temporal (before and after a given event) Sentinel-2 optical images. The processing chain combines change detection methods, image segmentation, higher-level image features (e.g. texture, shape), and topographic variables. Based on a few representative examples provided by a human operator, a machine learning model is trained and subsequently used to distinguish newly triggered landslides from other landscape elements. The final map product is provided along with an uncertainty map that allows identifying areas which might require further consideration. The processing chain is tested for two recent and contrasting triggering events in New Zealand and Taiwan. A Mw 7.8 earthquake in New Zealand in November 2016 triggered tens of thousands of landslides in a complex environment, with important textural variations with elevation due to vegetation change and snow cover. In contrast, a large but unexceptional typhoon in July 2016 in Taiwan triggered a moderate number of relatively small landslides in lushly vegetated, more homogeneous terrain. Based on the obtained results, we discuss the potential and limitations of Sentinel-2 bi-temporal images and time series for operational landslide inventory mapping. This work is part of the General Studies Program (GSP) ALCANTARA of ESA.
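The supervised core of such a chain might look like the following scikit-learn sketch, where the per-segment features (bi-temporal band differences, texture, slope) and the labels are placeholder assumptions rather than the study's actual feature set:

    # Sketch: train a classifier on a few operator-labeled segments, then
    # score all segments and flag uncertain ones for the uncertainty map.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # One row per image segment: e.g. [dNDVI, dRed, texture contrast, slope]
    X_labeled = np.array([[-0.35, 0.12, 0.8, 28.0],   # landslide
                          [-0.02, 0.01, 0.3, 12.0],   # stable vegetation
                          [-0.30, 0.10, 0.7, 31.0],   # landslide
                          [ 0.01, 0.00, 0.2,  6.0]])  # stable
    y_labeled = np.array([1, 0, 1, 0])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_labeled, y_labeled)

    X_all = np.load("segment_features.npy")       # all segments in the scene
    proba = clf.predict_proba(X_all)[:, 1]        # per-segment landslide score
    uncertain = np.abs(proba - 0.5) < 0.1         # input to the uncertainty map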
NASA Astrophysics Data System (ADS)
Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.
2009-07-01
Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be automatically obtained at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application on a case study using six rainfall events. This methodology is applied to three different alternatives to incorporate the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of radar fields are analyzed. Results seem to indicate that the methodology of kriging with external drift [KED], in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields with good agreement with rain gauges and with the most accurate approach to the spatial tendencies observed in the radar rainfall fields, when compared with other alternatives analyzed.
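The FFT route to a correlogram is compact enough to sketch directly with NumPy: by the Wiener-Khinchin theorem, the autocovariance obtained from the power spectrum is positive-definite by construction, and, being fully two-dimensional, it captures anisotropy without imposing a directional model. The input array is a placeholder.

    # Sketch: non-parametric 2D correlogram of a radar rainfall field via FFT.
    import numpy as np

    radar = np.load("radar_field.npy")              # 2D rainfall field
    field = radar - radar.mean()

    spec = np.fft.fft2(field)
    acov = np.fft.ifft2(spec * np.conj(spec)).real  # circular autocovariance
    acov /= field.size
    corr = np.fft.fftshift(acov) / acov.flat[0]     # correlogram, 1.0 at zero lag

Because the map is recomputed from the data at each time step, the evolving spatial structure of the rainfall is tracked automatically, which is what makes the approach attractive for real-time merging schemes such as kriging with external drift.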
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
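For readers unfamiliar with the optimizer, a minimal PSO over a generic objective looks roughly as follows; the inertia and acceleration constants are common textbook choices, not those of the cited work.

    # Minimal particle swarm optimizer (generic objective, illustrative constants).
    import numpy as np

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
        rng = np.random.default_rng(0)
        x = rng.uniform(*bounds, (n, dim))          # particle positions
        v = np.zeros((n, dim))                      # velocities
        pbest = x.copy()                            # personal bests
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()]                    # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g, pval.min()

    best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)  # toy objective

In the PSO-Snake setting, each particle would encode candidate snake control points and the objective would combine image energy and contour smoothness terms.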
NASA Astrophysics Data System (ADS)
Hussnain, Zille; Oude Elberink, Sander; Vosselman, George
2016-06-01
In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
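The detection-description-matching stages map naturally onto OpenCV primitives. The sketch below substitutes plain cornerHarris for the paper's adaptive Harris variant and a simple cross-check match for its geometric outlier filter; LATCH requires the opencv-contrib package, and the file names are placeholders.

    # Sketch: Harris corners + LATCH binary descriptors + Hamming matching
    # between an aerial patch and the corresponding MLSPC ortho patch.
    import cv2
    import numpy as np

    def keypoints(gray):
        resp = cv2.cornerHarris(gray, 2, 3, 0.04)
        ys, xs = np.where(resp > 0.01 * resp.max())
        return [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]

    aerial = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)
    ortho = cv2.imread("mlspc_ortho_patch.png", cv2.IMREAD_GRAYSCALE)

    latch = cv2.xfeatures2d.LATCH_create()          # opencv-contrib required
    kp1, des1 = latch.compute(aerial, keypoints(aerial))
    kp2, des2 = latch.compute(ortho, keypoints(ortho))

    # Hamming distance suits the binary LATCH descriptor; cross-checking is a
    # crude stand-in for the distance/angle-arrangement outlier filter above.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)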
Feature extraction and classification of clouds in high resolution panchromatic satellite imagery
NASA Astrophysics Data System (ADS)
Sharghi, Elan
The development of sophisticated remote sensing sensors is accelerating, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. Texture analysis, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
NASA Astrophysics Data System (ADS)
Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele
2013-02-01
Prostate specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular by exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast enhanced MR images and diffusion weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC=0.87) should be validated on a larger dataset, but they suggest that discrimination on a voxel basis between benign and malignant tissues is feasible with good performance. This method can help improve the diagnostic accuracy of the radiologist, reduce reader variability, and speed up the reading time by automatically highlighting regions suspicious for cancer.
Automatic non-destructive system for quality assurance of welded elements in the aircraft industry
NASA Astrophysics Data System (ADS)
Chady, Tomasz; Waszczuk, Paweł; Szydłowski, Michał; Szwagiel, Mariusz
2018-04-01
Flaws that might result from the welding process have to be detected in order to assure the high quality, and thus reliability, of elements used in the aircraft industry. Currently, the inspection stage is conducted manually by a qualified workforce. There are no commercially available systems that could support or replace humans in the flaw detection process. In this paper the authors present a novel non-destructive system developed for quality assurance of welded elements utilized in the aircraft industry.
Automated meter reading. (Latest citations from the INSPEC database). Published Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-12-01
The bibliography contains citations concerning the automatic collection of data from utility meters. Citations focus on line carrier communications, radio communications, and telecommunication methods of data transmission. Applications for water, gas, and electric power meters are discussed. (Contains a minimum of 56 citations and includes a subject term index and title list.)
76 FR 66220 - Automatic Underfrequency Load Shedding and Load Shedding Plans Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
..., EPRI Power Systems Dynamics Tutorial, Chapter 4 at page 4-78 (2009), available at http://www.epri.com.... Power systems consist of static components (e.g., transformers and transmission lines) and dynamic... decisions on simulations, both static and dynamic, using area power system models to meet requirements in...
Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks
ERIC Educational Resources Information Center
Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo
2017-01-01
Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…
NASA Astrophysics Data System (ADS)
Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.
1990-08-01
In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
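For orientation, the conventional fixed-grid MOL baseline that these moving-grid methods improve on can be written in a few lines for the 1D heat equation u_t = u_xx: space is discretized once, and a stiff ODE solver adapts only the time steps. The sketch below uses SciPy's BDF integrator; grid size and tolerances are illustrative.

    # Fixed-grid method of lines for u_t = u_xx on [0, 1], u(0)=u(1)=0.
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 100
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    u0 = np.sin(np.pi * x)                  # initial profile

    def rhs(t, u):
        du = np.zeros_like(u)
        # second-order central differences; boundary values held at zero
        du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        return du

    # BDF plays the role of the "automatic stiff ODE solver" in the text:
    # time steps adapt, but the spatial grid stays fixed for the whole run.
    sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF", rtol=1e-6, atol=1e-8)

A moving-grid method would additionally evolve the node positions x(t) so that they concentrate in steep solution regions, which is precisely the capability the three methods compared in the paper provide.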
Development and Application of On-line Monitor for the ZLW-1 Axis Cracks
NASA Astrophysics Data System (ADS)
Shi-jun, Yang; Qian-hui, Yang; Jian-guo, Jin
2018-03-01
This article introduces a method that uses acoustic emission techniques to achieve on-line monitoring of shaft cracks and crack growth. Based on this method, a shaft crack monitor was built using acoustic emission techniques. The instrument can be applied to all pressure vessels, pipelines, and rotor machines that bear buckling loads. It provides online real-time monitoring, automatic recording, printing, audible and visual alarms, and crack information collection. A series of tests in both laboratory and field show that the instrument is very versatile and has broad prospects for development and application.
Stopa, Marcin; Marciniak, Elżbieta; Rakowicz, Piotr; Stankiewicz, Agnieszka; Marciniak, Tomasz; Dąbrowski, Adam
2017-10-01
To evaluate a new method for volumetric imaging of the preretinal space (also known as the subhyaloid, subcortical, or retrocortical space) and investigate differences in preretinal space volume in vitreomacular adhesion (VMA) and vitreomacular traction (VMT). Nine patients with VMA and 13 with VMT were prospectively evaluated. Automatic inner limiting membrane line segmentation, which exploits a graph-search implementation, and posterior cortical vitreous line segmentation were performed on 141 horizontal spectral domain optical coherence tomography B-scans per patient. Vertical distances (depths) between the posterior cortical vitreous and inner limiting membrane lines were calculated for each optical coherence tomography B-scan acquired. The derived distances were merged and visualized as a color depth map that represented the preretinal space between the posterior surface of the hyaloid and the anterior surface of the retina. The Early Treatment Diabetic Retinopathy Study (ETDRS) macular map was overlaid onto the final virtual maps, and preretinal space volumes were calculated for each ETDRS map sector. Volumetric maps representing preretinal space volumes were created for each patient in the VMA and VMT groups. Preretinal space volumes were larger in all ETDRS macular regions in the VMT group compared with those in the VMA group. The differences reached statistical significance in all ETDRS sectors, except for the superior outer macula and temporal outer macula, where significance values were P = 0.05 and P = 0.08, respectively. Overall, the relative differences in preretinal space volumes between the VMT and VMA groups varied from 2.7 to 4.3 in inner regions and 1.8 to 2.9 in outer regions. Our study provides evidence of significant differences in preretinal space volume between eyes with VMA and those with VMT. This may be useful not only in the investigation of preretinal space properties in VMA and VMT, but also in other conditions, such as age-related macular degeneration, diabetic retinopathy, and central retinal vein occlusion.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ([1], [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
ABI Base Recall: Automatic Correction and Ends Trimming of DNA Sequences.
Elyazghi, Zakaria; Yazouli, Loubna El; Sadki, Khalid; Radouani, Fouzia
2017-12-01
Automated DNA sequencers produce chromatogram files in ABI format. When viewing chromatograms, ambiguities appear at various sites along the DNA sequences, because the base-calling program implemented in the sequencing machine cannot always precisely determine the right nucleotide, especially when it is represented by either a broad peak or a set of overlapping peaks. In such cases, a letter other than A, C, G, or T is recorded, most commonly N. Thus, DNA sequencing chromatograms need manual examination: checking for mis-calls and truncating the sequence when errors become too frequent. The purpose of this paper is to develop a program allowing the automatic correction of these ambiguities. The application is a Web-based program powered by Shiny that runs on the R platform for easy use. As part of the interface, we added an automatic ends-trimming option, alignment against reference sequences, and BLAST. To develop and test our tool, we collected several bacterial DNA sequences from different laboratories within Institut Pasteur du Maroc, performed both manual and automatic correction, and compared the two methods. As a result, we note that our program, ABI Base Recall, accomplishes good correction with high accuracy. Indeed, it increases identity and coverage rates and minimizes the number of mismatches and gaps; hence, it provides a solution to sequencing ambiguities and saves biologists' time and labor.
Antila, Kari; Nieminen, Heikki J; Sequeiros, Roberto Blanco; Ehnholm, Gösta
2014-07-01
Up to 25% of women suffer from uterine fibroids (UF) that cause infertility, pain, and discomfort. MR-guided high intensity focused ultrasound (MR-HIFU) is an emerging technique for noninvasive, computer-guided thermal ablation of UFs. The volume of induced necrosis is a predictor of the success of the treatment. However, accurate volume assessment by hand can be time-consuming, and quick tools produce biased results. Therefore, fast and reliable tools are required in order to estimate the technical treatment outcome during the therapy event so as to predict symptom relief. A novel technique has been developed for the segmentation and volume assessment of the treated region. Conventional algorithms typically require user interaction or a priori knowledge of the target. The developed algorithm exploits the treatment plan, the coordinates of the intended ablation, for fully automatic segmentation with no user input. A good similarity to an expert-segmented manual reference was achieved (Dice similarity coefficient = 0.880 ± 0.074). The average automatic segmentation time was 1.6 ± 0.7 min per patient against an order of tens of minutes when done manually. The results suggest that the segmentation algorithm developed, requiring no user input, provides a feasible and practical approach for the automatic evaluation of the boundary and volume of the HIFU-treated region.
1977-05-01
C3I) programs; (4) simulator/trainer programs; and (5) automatic test equipment software. Each of these five types of software represents a problem...coded in the same source language, say JOVIAL, then source-language statements would be a better measure, since that would automatically compensate...whether done at no (visible) cost or by renegotiation of the contract. Fig. 2.3 illustrates these with solid lines. It is conjectured that the change
Automatic differentiation for Fourier series and the radii polynomial approach
NASA Astrophysics Data System (ADS)
Lessard, Jean-Philippe; Mireles James, J. D.; Ransford, Julian
2016-11-01
In this work we develop a computer-assisted technique for proving existence of periodic solutions of nonlinear differential equations with non-polynomial nonlinearities. We exploit ideas from the theory of automatic differentiation in order to formulate an augmented polynomial system. We compute a numerical Fourier expansion of the periodic orbit for the augmented system, and prove the existence of a true solution nearby using an a-posteriori validation scheme (the radii polynomial approach). The problems considered here are given in terms of locally analytic vector fields (i.e. the field is analytic in a neighborhood of the periodic orbit) hence the computer-assisted proofs are formulated in a Banach space of sequences satisfying a geometric decay condition. In order to illustrate the use and utility of these ideas we implement a number of computer-assisted existence proofs for periodic orbits of the Planar Circular Restricted Three-Body Problem (PCRTBP).
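A standard instance of this augmentation device (shown here for illustration; it is the textbook automatic-differentiation trick rather than necessarily the exact system treated in the paper) is the scalar field $u' = \sin(u)$. Introducing the auxiliary variables $s = \sin(u)$ and $c = \cos(u)$ yields the polynomial system

\begin{aligned}
u' &= s, \\
s' &= c\,s, \\
c' &= -s^{2},
\end{aligned}

subject to the initial constraints $s(0) = \sin(u(0))$ and $c(0) = \cos(u(0))$. Every nonlinearity in the augmented system is quadratic, so the Fourier coefficients of a candidate periodic orbit can be manipulated with discrete convolutions, and the radii polynomial approach can then certify a true orbit near the numerical one.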
3D exploitation of large urban photo archives
NASA Astrophysics Data System (ADS)
Cho, Peter; Snavely, Noah; Anderson, Ross
2010-04-01
Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
Heat exchanger for solar water heaters
NASA Technical Reports Server (NTRS)
Cash, M.; Krupnick, A. C.
1977-01-01
Proposed efficient double-walled heat exchanger prevents contamination of domestic water supply lines and indicates leakage automatically in solar as well as nonsolar heat sources using water as heat transfer medium.
Chochois, Vincent; Vogel, John P; Rebetzke, Gregory J; Watt, Michelle
2015-07-01
Seedling roots enable plant establishment. Their small phenotypes are measured routinely. Adult root systems are relevant to yield and efficiency, but phenotyping is challenging. Root length exceeds the volume of most pots. Field studies measure partial adult root systems through coring or use seedling roots as adult surrogates. Here, we phenotyped 79 diverse lines of the small grass model Brachypodium distachyon to adults in 50-cm-long tubes of soil with irrigation; a subset of 16 lines was droughted. Variation was large (total biomass, ×8; total root length [TRL], ×10; and root mass ratio, ×6), repeatable, and attributable to genetic factors (heritabilities ranged from approximately 50% for root growth to 82% for partitioning phenotypes). Lines were dissected into seed-borne tissues (stem and primary seminal axile roots) and stem-borne tissues (tillers and coleoptile and leaf node axile roots) plus branch roots. All lines developed one seminal root that varied, with branch roots, from 31% to 90% of TRL in the well-watered condition. With drought, 100% of TRL was seminal, regardless of line because nodal roots were almost always inhibited in drying topsoil. Irrigation stimulated nodal roots depending on genotype. Shoot size and tillers correlated positively with roots with irrigation, but partitioning depended on genotype and was plastic with drought. Adult root systems of B. distachyon have genetic variation to exploit to increase cereal yields through genes associated with partitioning among roots and their responsiveness to irrigation. Whole-plant phenotypes could enhance gain for droughted environments because root and shoot traits are coselected. © 2015 American Society of Plant Biologists. All Rights Reserved.
The MATCHIT Automaton: Exploiting Compartmentalization for the Synthesis of Branched Polymers
Weyland, Mathias S.; Fellermann, Harold; Hadorn, Maik; Sorek, Daniel; Lancet, Doron; Rasmussen, Steen; Füchslin, Rudolf M.
2013-01-01
We propose an automaton, a theoretical framework that demonstrates how to improve the yield of the synthesis of branched chemical polymer reactions. This is achieved by separating substeps of the path of synthesis into compartments. We use chemical containers (chemtainers) to carry the substances through a sequence of fixed successive compartments. We describe the automaton in mathematical terms and show how it can be configured automatically in order to synthesize a given branched polymer target. The algorithm we present finds an optimal path of synthesis in linear time. We discuss how the automaton models compartmentalized structures found in cells, such as the endoplasmic reticulum and the Golgi apparatus, and we show how this compartmentalization can be exploited for the synthesis of branched polymers such as oligosaccharides. Lastly, we show examples of artificial branched polymers and discuss how the automaton can be configured to synthesize them with maximal yield. PMID:24489601
Computer assisted operations in Petroleum Development Oman (PDO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Hinai, S.H.; Mutimer, K.
1995-10-01
Petroleum Development Oman (PDO) currently produces some 750,000 bopd and 900,000 bwpd from some 74 fields in a large geographical area and diverse operating conditions. A key corporate objective is to reduce operating costs by exploiting productivity gains from proven technology. Automation is seen as a means of managing the rapid growth of well population and production facilities. The overall objective is to improve field management through continuous monitoring of wells and facilities and dissemination of data throughout the whole organization. A major upgrade of PDO's field Supervisory Control and Data Acquisition (SCADA) system is complete, providing a platform to exploit new initiatives, particularly production optimization of artificial lift systems and automatic well testing using multi-selector valves, Coriolis flow meter measurements and multi-component (oil, gas, water) flowmeters. The paper describes PDO's experience, including the benefits and challenges which have to be managed when developing Computer Assisted Operations (CAO).
NASA Astrophysics Data System (ADS)
He, Di; Lim, Boon Pang; Yang, Xuesong; Hasegawa-Johnson, Mark; Chen, Deming
2018-06-01
Most mainstream Automatic Speech Recognition (ASR) systems consider all feature frames equally important. However, acoustic landmark theory is based on a contradictory idea: that some frames are more important than others. Acoustic landmark theory exploits quantal non-linearities in the articulatory-acoustic and acoustic-perceptual relations to define landmark times at which the speech spectrum abruptly changes or reaches an extremum; frames overlapping landmarks have been demonstrated to be sufficient for speech perception. In this work, we conduct experiments on the TIMIT corpus with both GMM- and DNN-based ASR systems and find that frames containing landmarks are more informative for ASR than others. We find that altering the level of emphasis on landmarks by re-weighting acoustic likelihood tends to reduce the phone error rate (PER). Furthermore, by leveraging the landmarks as a heuristic, one of our hybrid DNN frame-dropping strategies maintained a PER within 0.44% of optimal while scoring less than half (45.8%, to be precise) of the frames. This hybrid strategy outperforms other non-heuristic-based methods and demonstrates the potential of landmarks for reducing computation.
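A minimal sketch of the frame-dropping idea, assuming landmark times have already been detected: only frames within a small radius of a landmark are scored by the acoustic model. The radius and the mask representation are illustrative choices, not the paper's exact strategy.

    import numpy as np

    def landmark_frame_mask(num_frames, landmark_frames, keep_radius=2):
        """Landmark-guided frame dropping sketch: keep (score) only frames
        within `keep_radius` of a detected landmark; drop the rest. The
        radius and the landmark detector are illustrative assumptions."""
        mask = np.zeros(num_frames, dtype=bool)
        for lm in landmark_frames:
            lo, hi = max(0, lm - keep_radius), min(num_frames, lm + keep_radius + 1)
            mask[lo:hi] = True
        return mask

    mask = landmark_frame_mask(100, landmark_frames=[10, 42, 77])
    print(mask.mean())  # fraction of frames actually scored by the DNN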
a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped into one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
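A hedged sketch of the coarse-to-fine scheme using scikit-learn, assuming bag-of-features histograms have already been computed from point feature histograms; the class names and random stand-in data are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    # X: bag-of-features histograms (assumed precomputed); y_coarse groups
    # pole-like objects together; y_fine separates lamp post / street light /
    # traffic sign within that group.
    def two_step_predict(coarse_clf, fine_clf, X):
        coarse = coarse_clf.predict(X)
        labels = coarse.astype(object)
        pole_like = coarse == "pole-like"
        if pole_like.any():                  # refine only the similar objects
            labels[pole_like] = fine_clf.predict(X[pole_like])
        return labels

    rng = np.random.default_rng(0)
    X = rng.random((60, 16))
    y_coarse = rng.choice(["pole-like", "tree", "vehicle"], 60)
    y_fine = rng.choice(["lamp post", "street light", "traffic sign"], 60)
    coarse_clf = SVC().fit(X, y_coarse)
    fine_clf = SVC().fit(X[y_coarse == "pole-like"], y_fine[y_coarse == "pole-like"])
    print(two_step_predict(coarse_clf, fine_clf, X)[:5])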
Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan
2012-01-01
Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted with different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, the 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. The computational time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
Asbestos Workshop: Sampling, Analysis, and Risk Assessment
2012-03-01
Asbestos-containing products include coatings, vinyl/asbestos floor tile, automatic transmission components, clutch facings, disc brake pads, drum brake linings, brake blocks, commercial and... Asbestos: a naturally-occurring pliant and fibrous mineral with heat-resistant properties. Serpentine class: joint compound, 'popcorn' ceilings, brake pads... fabrics; used in fire-resistant and insulating materials such as brake linings. The asbestos minerals include chrysotile (white asbestos) and...
The VIRUS Emission Line Detection Recipe
NASA Astrophysics Data System (ADS)
Gössl, C. A.; Hopp, U.; Köhler, R.; Grupp, F.; Relke, H.; Drory, N.; Gebhardt, K.; Hill, G.; MacQueen, P.
2007-10-01
HETDEX, the Hobby-Eberly Telescope Dark Energy Experiment, will measure the imprint of the baryonic acoustic oscillations on the galaxy population at redshifts of 1.8 < z < 3.7 to constrain the nature of dark energy. The survey will be performed over at least 200 deg^2. The tracer population for this blind search will be Ly-α emitting galaxies, through their most prominent emission line. The data reduction pipeline will extract these emission line objects from ~35,000 spectra per exposure (5 million per night, i.e. 500 million in total) while performing astrometric, photometric, and wavelength calibration fully automatically. Here we present our ideas on how to find and classify objects even at low signal-to-noise ratios.
Go, Taesik; Byeon, Hyeokjun; Lee, Sang Joon
2018-04-30
Cell types of erythrocytes should be identified because they are closely related to their functionality and viability. Conventional methods for classifying erythrocytes are time consuming and labor intensive. Therefore, an automatic and accurate erythrocyte classification system is indispensable in the healthcare and biomedical fields. In this study, we proposed a new label-free sensor for automatic identification of erythrocyte cell types using digital in-line holographic microscopy (DIHM) combined with machine learning algorithms. A total of 12 features, including information on intensity distributions, morphological descriptors, and optical focusing characteristics, are quantitatively obtained from numerically reconstructed holographic images. All individual features for discocytes, echinocytes, and spherocytes are statistically different. To improve the performance of cell type identification, we adopted several machine learning algorithms, such as the decision tree model, support vector machine, linear discriminant classification, and k-nearest neighbor classification. With the aid of these machine learning algorithms, the extracted features are effectively utilized to distinguish erythrocytes. Among the four tested algorithms, the decision tree model exhibits the best identification performance for the training sets (n = 440, 98.18%) and test sets (n = 190, 97.37%). This proposed methodology, which smartly combines DIHM and machine learning, would be helpful for sensing abnormal erythrocytes and for computer-aided diagnosis of hematological diseases in the clinic. Copyright © 2017 Elsevier B.V. All rights reserved.
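A minimal sketch of the classification stage, assuming the 12 features per cell have already been extracted from the reconstructed holograms; the random stand-in data are illustrative (only the train/test split sizes mirror the abstract).

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in data: 12 features per cell (intensity, morphology, focus
    # metrics, as in the study); real features would come from the DIHM
    # reconstruction pipeline.
    rng = np.random.default_rng(1)
    X = rng.random((630, 12))
    y = rng.choice(["discocyte", "echinocyte", "spherocyte"], 630)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=440, random_state=1)
    clf = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")  # ~chance on random data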
A data reduction package for multiple object spectroscopy
NASA Technical Reports Server (NTRS)
Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.
1986-01-01
Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects, improvements that must in turn be matched by increased data reduction capability. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.
NASA Astrophysics Data System (ADS)
Di Tullio, M.; Nocchi, F.; Camplani, A.; Emanuelli, N.; Nascetti, A.; Crespi, M.
2018-04-01
Glaciers are a natural global resource and one of the principal climate change indicators at global and local scales, being influenced by changes in temperature and snow precipitation. Among the parameters used for glacier monitoring, surface velocity is a key element, since it is connected to glacier changes (mass balance, hydro balance, glacier stability, landscape erosion). The leading idea of this work is to continuously retrieve glacier surface velocity using free ESA Sentinel-1 SAR imagery and exploiting the potentialities of the Google Earth Engine (GEE) platform. GEE has recently been released by Google as a platform for petabyte-scale scientific analysis and visualization of geospatial datasets. The SAR offset-tracking algorithm developed at the Geodesy and Geomatics Division of the University of Rome La Sapienza has been integrated into a cloud-based platform that automatically processes large stacks of Sentinel-1 data to retrieve glacier surface velocity field time series. We processed about 600 Sentinel-1 image pairs to obtain a continuous time series of velocity field measurements over 3 years, from January 2015 to January 2018, for two wide glaciers located in the Northern Patagonian Ice Field (NPIF), the San Rafael and San Quintin glaciers. The results for these glaciers, validated against established software (i.e., ESA SNAP, CIAS) and against optical sensor measurements (i.e., LANDSAT 8), highlight the potential of Big Data analysis to automatically monitor glacier surface velocity fields at global scale, exploiting the synergy between GEE and Sentinel-1 imagery.
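The offset-tracking principle can be sketched as patch matching between two acquisitions; the toy normalized cross-correlation below is a stand-in under simplifying assumptions, not the Sapienza algorithm or its GEE implementation (which would include subpixel refinement and SAR-specific handling).

    import numpy as np

    def patch_offset(master, slave, search=5):
        """Estimate the (dy, dx) shift of a patch between two acquisitions by
        maximizing zero-normalized cross-correlation over a small search
        window. A toy stand-in for SAR offset tracking; subpixel refinement
        is omitted."""
        m = (master - master.mean()) / master.std()
        best, best_off = -np.inf, (0, 0)
        h, w = master.shape
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                s = slave[search + dy:search + dy + h, search + dx:search + dx + w]
                sn = (s - s.mean()) / s.std()
                score = (m * sn).mean()
                if score > best:
                    best, best_off = score, (dy, dx)
        return best_off

    rng = np.random.default_rng(2)
    scene = rng.random((40, 40))
    master = scene[12:22, 12:22]        # patch in the first image
    slave = scene[5:25, 5:25]           # search window in the second image
    print(patch_offset(master, slave))  # (2, 2): the displacement between acquisitions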
Image/text automatic indexing and retrieval system using context vector approach
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick
1995-11-01
Thousands of documents and images are generated daily, both on-line and off-line, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique is based on the concept of "context vectors", which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free-text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as a query, without performing segmentation or object recognition.
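A hedged sketch of vector-based indexing and retrieval: documents and queries are mapped to normalized vectors and matched by cosine similarity. The random word vectors stand in for HNC's learned context vectors, whose construction is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    vocab = {"satellite": 0, "image": 1, "retrieval": 2, "finance": 3}
    word_vecs = rng.normal(size=(len(vocab), 8))   # stand-in for learned context vectors

    def doc_vector(tokens):
        """A document's context vector: normalized sum of its word vectors."""
        v = sum(word_vecs[vocab[t]] for t in tokens if t in vocab)
        return v / np.linalg.norm(v)

    docs = [["satellite", "image"], ["finance", "retrieval"]]
    index = np.stack([doc_vector(d) for d in docs])    # automatically assigned indices

    query = doc_vector(["image", "retrieval"])
    print(index @ query)   # cosine similarities; the best match is retrieved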
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
Through-barrier electromagnetic imaging with an atomic magnetometer.
Deans, Cameron; Marmugi, Luca; Renzoni, Ferruccio
2017-07-24
We demonstrate the penetration of thick metallic and ferromagnetic barriers for imaging of conductive targets underneath. Our system is based on an ⁸⁵Rb radio-frequency atomic magnetometer operating in electromagnetic induction imaging modality in an unshielded environment. Detrimental effects, including unpredictable magnetic signatures from ferromagnetic screens and variations in the magnetic background, are automatically compensated by active compensation coils controlled by servo loops. We exploit the tunability and low-frequency sensitivity of the atomic magnetometer to directly image multiple conductive targets concealed by a 2.5 mm ferromagnetic steel shield and/or a 2.0 mm aluminium shield, in a single scan. The performance of the atomic magnetometer allows imaging without any prior knowledge of the barriers or the targets, and without the need of background subtraction. A dedicated edge detection algorithm allows automatic estimation of the targets' size within 3.3 mm and of their position within 2.4 mm. Our results prove the feasibility of a compact, sensitive and automated sensing platform for imaging of concealed objects in a range of applications, from security screening to search and rescue.
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
Using stochastic activity networks to study the energy feasibility of automatic weather stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cassano, Luca; Cesarini, Daniel; Avvenuti, Marco
Automatic Weather Stations (AWSs) are systems equipped with a number of environmental sensors and communication interfaces used to monitor harsh environments, such as glaciers and deserts. Designing such systems is challenging, since designers have to maximize the amount of sampled and transmitted data while considering the energy needs of the system that, in most cases, is powered by rechargeable batteries and exploits energy harvesting, e.g., solar cells and wind turbines. To support designers of AWSs in the definition of the software tasks and of the hardware configuration of the AWS, we designed and implemented an energy-aware simulator of such systems. The simulator relies on the Stochastic Activity Networks (SANs) formalism and has been developed using the Möbius tool. In this paper we first show how we used the SAN formalism to model the various components of an AWS, we then report results from an experiment carried out to validate the simulator against a real-world AWS, and we finally show some examples of usage of the proposed simulator.
A new generation scanning system for the high-speed analysis of nuclear emulsions
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Galati, G.; Lauria, A.; Montesi, M. C.; Tioukov, V.; Vladymyrov, M.
2016-06-01
The development of automatic scanning systems was a fundamental issue for large-scale neutrino detectors exploiting nuclear emulsions as particle trackers. Such systems speed up the event analysis in emulsion significantly, making experiments with unprecedented statistics feasible. In the early 1990s, R&D programs were carried out by Japanese and European laboratories, leading to increasingly efficient automatic scanning systems. The recent progress in the technology of digital signal processing and of image acquisition allows the fulfillment of new systems with higher performances. In this paper we report the description and the performance of a new generation scanning system able to operate at the record speed of 84 cm2/hour, based on the Large Angle Scanning System for OPERA (LASSO) software infrastructure developed by the Naples scanning group. This improvement reduces the scanning time by a factor of 4 with respect to the available systems, allowing the readout of huge amounts of nuclear emulsion in a reasonable time. This opens new perspectives for the employment of such detectors in a wider variety of applications.
NASA Astrophysics Data System (ADS)
Menzel, R.; Paynter, D.; Jones, A. L.
2017-12-01
Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach those of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model currently under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP 6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worm, Esben S., E-mail: esbeworm@rm.dk; Department of Medical Physics, Aarhus University Hospital, Aarhus; Hoyer, Morten
2012-05-01
Purpose: To develop and evaluate accurate and objective on-line patient setup based on a novel semiautomatic technique in which three-dimensional marker trajectories were estimated from two-dimensional cone-beam computed tomography (CBCT) projections. Methods and Materials: Seven treatment courses of stereotactic body radiotherapy for liver tumors were delivered in 21 fractions in total to 6 patients by a linear accelerator. Each patient had two to three gold markers implanted close to the tumors. Before treatment, a CBCT scan with approximately 675 two-dimensional projections was acquired during a full gantry rotation. The marker positions were segmented in each projection. From this, the three-dimensional marker trajectories were estimated using a probability based method. The required couch shifts for patient setup were calculated from the mean marker positions along the trajectories. A motion phantom moving with known tumor trajectories was used to examine the accuracy of the method. Trajectory-based setup was retrospectively used off-line for the first five treatment courses (15 fractions) and on-line for the last two treatment courses (6 fractions). Automatic marker segmentation was compared with manual segmentation. The trajectory-based setup was compared with setup based on conventional CBCT guidance on the markers (first 15 fractions). Results: Phantom measurements showed that trajectory-based estimation of the mean marker position was accurate within 0.3 mm. The on-line trajectory-based patient setup was performed within approximately 5 minutes. The automatic marker segmentation agreed with manual segmentation within 0.36 ± 0.50 pixels (mean ± SD; pixel size, 0.26 mm in isocenter). The accuracy of conventional volumetric CBCT guidance was compromised by motion smearing (≤21 mm) that induced an absolute three-dimensional setup error of 1.6 ± 0.9 mm (maximum, 3.2) relative to trajectory-based setup. Conclusions: The first on-line clinical use of trajectory estimation from CBCT projections for precise setup in stereotactic body radiotherapy was demonstrated. Uncertainty in the conventional CBCT-based setup procedure was eliminated with the new method.
Phase coherence adaptive processor for automatic signal detection and identification
NASA Astrophysics Data System (ADS)
Wagstaff, Ronald A.
2006-05-01
A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single-sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and "automatic detection/decision aid" is more relevant.
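The APA transform itself is not spelled out in the abstract; the sketch below illustrates the general idea of phase temporal coherence with a generic stand-in measure, the mean resultant length of STFT phase increments, which is high for a steady tone and low for noise. All parameter values are illustrative.

    import numpy as np

    def phase_coherence(x, fs, f0, win=256, hop=128):
        """Generic phase temporal coherence at frequency f0: the mean
        resultant length of STFT phase increments across windows
        (1 = fully coherent, 0 = incoherent). A stand-in illustration,
        not the APA transform itself."""
        k = int(round(f0 * win / fs))
        phases = []
        for start in range(0, len(x) - win, hop):
            seg = x[start:start + win] * np.hanning(win)
            phases.append(np.angle(np.fft.rfft(seg)[k]))
        dphi = np.diff(phases)               # phase increments between windows
        return np.abs(np.mean(np.exp(1j * dphi)))

    fs, t = 1000, np.arange(20000) / 1000
    tone = np.sin(2 * np.pi * 60 * t)
    noise = np.random.default_rng(4).normal(size=t.size)
    print(phase_coherence(tone + 0.5 * noise, fs, 60))  # near 1: coherent signal
    print(phase_coherence(noise, fs, 60))               # near 0: incoherent noise

Thresholding such a coherence value is one way to separate a persistent narrowband source from incoherent background, which is the spirit of the coherence thresholds described above.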
Nuclear reactor shutdown control rod assembly
Bilibin, Konstantin
1988-01-01
A temperature responsive, self-actuated nuclear reactor shutdown control rod assembly 10. The upper end 18 of a lower drive line 17 fits within the lower end of an upper drive line 12. The lower end (not shown) of the lower drive line 17 is connected to a neutron absorber. During normal temperature conditions the lower drive line 17 is supported by detent means 22,26. When an overtemperature condition occurs, thermal actuation means 34 urges ring 26 upwardly sufficiently to allow balls 22 to move radially outwardly, thereby allowing lower drive line 17 to move downwardly toward the core of the nuclear reactor, resulting in automatic reduction of the reactor power.
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a machine-learning-based model for weld recognition from signals obtained by an in-line inspection tool called a "smart pig" in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets, and its performance was measured with cross-validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
Turning the LHC ring into a new physics search machine
NASA Astrophysics Data System (ADS)
Orava, Risto
2017-03-01
The LHC Collider Ring is proposed to be turned into an ultimate automatic search engine for new physics in four consecutive phases: (1) Searches for heavy particles produced in Central Exclusive Process (CEP): pp → p + X + p based on the existing Beam Loss Monitoring (BLM) system of the LHC; (2) Feasibility study of using the LHC Ring as a gravitation wave antenna; (3) Extensions to the current BLM system to facilitate precise registration of the selected CEP proton exit points from the LHC beam vacuum chamber; (4) Integration of the BLM based event tagging system together with the trigger/data acquisition systems of the LHC experiments to facilitate an on-line automatic search machine for the physics of tomorrow.
NASA Astrophysics Data System (ADS)
Zhang, Zaixuan; Wang, Kequan; Kim, Insoo S.; Wang, Jianfeng; Feng, Haiqi; Guo, Ning; Yu, Xiangdong; Zhou, Bangquan; Wu, Xiaobiao; Kim, Yohee
2000-05-01
A DOFTS (distributed optical fiber temperature sensing) system applied to the automatic temperature alarm systems of coal mines and tunnels has been investigated. It is a real-time, on-line, multi-point measurement system. The LD wavelength is 1550 nm; along a 6 km optical fiber, the temperature signal is sampled at 3000 points with known spatial positions. Temperature measurement range: -50 °C to 100 °C; measurement uncertainty: ±3 °C; temperature resolution: 0.1 °C; spatial resolution: <5 cm (optical fiber sensor probe), <8 m (distributed optical fiber); measurement time: <70 s. The paper discusses the operating principles, the underground tests, the test content and the practical test results.
Bonaccorsi, Manuele; Betti, Stefano; Rateni, Giovanni; Esposito, Dario; Brischetto, Alessia; Marseglia, Marco; Dario, Paolo; Cavallo, Filippo
2017-01-01
This paper introduces HighChest, an innovative smart freezer designed to promote energy-efficient behavior and the responsible use of food. Introducing a novel human–machine interface (HMI) design developed through assessment phases and a user involvement stage, HighChest is state of the art, featuring smart services that exploit embedded sensors and Internet of Things functionalities, which enhance the local capabilities of the appliance. The industrial design thinking approach followed for the advanced HMI is intended to maximize the social impact of the food management service, enhancing both the user experience of the product and the user's willingness to adopt eco- and energy-friendly behaviors. The sensor equipment realizes automatic recognition of food by learning from the users, as well as automatic localization inside the deposit space. Moreover, it provides monitoring of the appliance's usage, avoiding temperature and humidity issues related to improper use. Experimental tests were conducted to evaluate the localization system, and the results showed 100% accuracy for weights greater than or equal to 0.5 kg. Drifts due to lid opening and prolonged usage time were also measured, to implement automatic reset corrections. PMID:28604609
Optimized hardware framework of MLP with random hidden layers for classification applications
NASA Astrophysics Data System (ADS)
Zyarah, Abdullah M.; Ramesh, Abhishek; Merkel, Cory; Kudithipudi, Dhireesha
2016-05-01
Multilayer perceptron networks with random hidden layers are very efficient at automatic feature extraction and offer significant performance improvements in the training process. They essentially employ a large collection of fixed, random features, and are expedient for form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, with a customized building block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point operations for area efficiency. The design is validated for classification on two different datasets. An accuracy of ~90% was observed on the MNIST dataset and 75% for gender classification on the LFW dataset. The hardware achieves a 299× speed-up over the corresponding software realization.
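A software sketch of the underlying idea: an MLP whose hidden layer is fixed and random, so only the linear readout is trained (extreme-learning-machine style). This is not the CORDIC-based hardware design, and the toy task is illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    # Fixed, random hidden layer: weights are drawn once and never trained.
    def hidden(X, W, b):
        return np.tanh(X @ W + b)

    X = rng.random((200, 16))
    y = (X.sum(axis=1) > 8).astype(float)          # toy binary task
    W = rng.normal(size=(16, 64))
    b = rng.normal(size=64)

    H = hidden(X, W, b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only the readout is trained
    pred = hidden(X, W, b) @ beta > 0.5
    print(f"train accuracy: {(pred == y.astype(bool)).mean():.2f}")

Because the hidden weights never change, a hardware realization only needs fixed multipliers plus a trained output layer, which is what makes such networks attractive for constrained embedded platforms.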
Study on feed forward neural network convex optimization for LiFePO4 battery parameters
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Parameter identification is analyzed for the LiFePO4 batteries that power modern automatic walking equipment in facility agriculture. An improved process model of the lithium battery is proposed, and an on-line estimation algorithm is presented. The battery parameters are identified using a feed-forward neural network convex optimization algorithm.
On-Orbit MTF Measurement and Product Quality Monitoring for Commercial Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Person, Steven
2007-01-01
Initialization and opportunistic targets are chosen that represent the MTF in the spatial domain. Ideal targets have simple mathematical relationships. Determine the MTF of an on-orbit satellite using in-scene targets: slant edge, line source, point source, and radial target. Attempt to facilitate the MTF calculation by automatically locating targets of opportunity. Incorporate MTF results into a product quality monitoring architecture.
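A simplified 1D sketch of the slant-edge idea: differentiate the edge spread function (ESF) to get the line spread function (LSF), then take the normalized FFT magnitude as the MTF. Real slant-edge processing (edge-angle estimation, oversampled binning per ISO 12233) is omitted here.

    import numpy as np

    def mtf_from_edge(esf):
        """Simplified slant-edge MTF: differentiate the edge spread function
        to get the line spread function, then take the normalized |FFT|.
        Angle fitting and oversampled binning are omitted."""
        lsf = np.diff(esf)
        mtf = np.abs(np.fft.rfft(lsf))
        return mtf / mtf[0]

    # Synthetic blurred edge: a step convolved with a smooth response
    x = np.linspace(-8, 8, 257)
    esf = 0.5 * (1 + np.tanh(x / 1.5))
    print(mtf_from_edge(esf)[:5])   # MTF falls off from 1 at zero frequency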
NASA Astrophysics Data System (ADS)
Škoda, Petr; Palička, Andrej; Koza, Jakub; Shakurova, Ksenia
2017-06-01
The current archives of the LAMOST multi-object spectrograph contain millions of fully reduced spectra, from which automatic pipelines have produced catalogues of many parameters of individual objects, including their approximate spectral classification. This is, however, mostly based on the global shape of the whole spectrum and on integral properties of the spectra in given bandpasses, namely the presence and equivalent width of prominent spectral lines, while for the identification of some interesting object types (e.g. Be stars or quasars) the detailed shape of only a few lines is crucial. Here machine learning brings a new methodology capable of improving the reliability of the classification of such objects even in boundary cases. We present results of Spark-based semi-supervised machine learning on LAMOST spectra attempting to automatically identify the single- and double-peak emission of the Hα line typical of Be and B[e] stars. The labelled sample was obtained from the archive of the 2-m Perek telescope at Ondřejov Observatory. A simple physical model of the spectrograph resolution was used for domain adaptation to the LAMOST training domain. The resulting list of candidates contains dozens of Be stars (some likely yet unknown), but also a bunch of interesting objects resembling spectra of quasars and even blazars, as well as many instrumental artefacts. The verification of the nature of interesting candidates benefited considerably from cross-matching and visualisation in the Virtual Observatory environment.
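A hedged sketch of how single- versus double-peaked Hα profiles might be flagged, using scipy peak finding on a continuum-normalized spectrum; the window and prominence threshold are illustrative assumptions, not the paper's learned classifier.

    import numpy as np
    from scipy.signal import find_peaks

    def halpha_profile_type(wave, flux, center=6563.0, half_width=15.0):
        """Count emission peaks in a window around Halpha to flag candidate
        single-peak vs double-peak (Be-like) profiles. Continuum
        normalization is assumed done; the prominence threshold is an
        illustrative choice."""
        sel = (wave > center - half_width) & (wave < center + half_width)
        peaks, _ = find_peaks(flux[sel], prominence=0.1)
        return {0: "absorption/none", 1: "single-peak", 2: "double-peak"}.get(
            len(peaks), "complex")

    wave = np.linspace(6500, 6620, 600)
    double = 1 + np.exp(-(wave - 6558)**2 / 8) + np.exp(-(wave - 6568)**2 / 8)
    print(halpha_profile_type(wave, double))   # double-peak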
Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Nakamura, K.
2016-06-01
Recently, the development of autonomous cars has been accelerating on the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings are required to be precisely digitized, since automatic driving utilizes them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which produce large amounts of data efficiently, with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, and thus suffers from human error, subjectivity, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which previously succeeded in drawing several kinds of traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as plus or minus. Given that each label corresponds to a boundary between traffic marking and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated, highly accurate and operator-independent method using the bidirectional gradient-based algorithm can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost while obtaining highly accurate results.
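A minimal sketch of the bidirectional labelling on one scan line of reflectance intensity: rising changes are labelled plus, falling changes minus, and plus/minus pairs delimit a marking. The threshold is an illustrative assumption.

    import numpy as np

    def marking_segments(intensity, thresh=20.0):
        """Label local intensity changes along one scan line: rising edges
        (background -> paint) as plus, falling edges as minus, then pair
        them to get marking extents. The threshold and the simple pairing
        rule are illustrative assumptions."""
        g = np.diff(intensity.astype(float))
        plus = np.where(g > thresh)[0]          # entering a bright marking
        minus = np.where(g < -thresh)[0]        # leaving it
        return [(p, m) for p, m in zip(plus, minus) if m > p]

    line = np.full(50, 10.0)
    line[12:18] = 80.0                          # one painted stripe
    line[30:34] = 80.0                          # another
    print(marking_segments(line))               # [(11, 17), (29, 33)]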
Single transmission line data acquisition system
Fasching, George E.
1984-01-01
A single transmission line interrogated multiple channel data acquisition system is provided in which a plurality of remote station/sensors monitor specific process variables and transmit measurement values over the single transmission line to a master station when addressed by the master station. Power for all remote stations (up to 980) is provided by driving the line with constant voltage supplied from the master station and automatically maintained independent of the number of remote stations directly connected to the line. The transmission line can be an RG-62 coaxial cable with lengths up to about 10,000 feet with branches up to 500 feet. The remote stations can be attached randomly along the line. The remote stations can be scanned at rates up to 980 channels/second.
Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H
2014-01-01
A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
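The diffeomorphic framework is beyond a short sketch, but the core of any correspondence model, a learned map from the multidimensional surface signal to internal motion parameters, can be illustrated with a linear least-squares surrogate on toy data; the dimensions and the linearity are simplifying assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(6)

    # Training: paired observations of the surface signal s (e.g., range-image
    # samples) and internal motion parameters m (here a toy 3-vector per phase).
    S = rng.random((40, 12))                     # 40 breathing phases, 12-dim signal
    A_true = rng.normal(size=(12, 3))
    M = S @ A_true + 0.01 * rng.normal(size=(40, 3))

    A, *_ = np.linalg.lstsq(S, M, rcond=None)    # fit the correspondence model

    s_new = rng.random(12)                       # new signal measurement
    print(s_new @ A)                             # estimated internal motion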
Zhang, Ling; Chen, Siping; Chin, Chien Ting; Wang, Tianfu; Li, Shengli
2012-08-01
To assist radiologists and decrease interobserver variability when using 2D ultrasonography (US) to locate the standardized plane of early gestational sac (SPGS) and to perform gestational sac (GS) biometric measurements. In this paper, the authors report the design of the first automatic solution, called "intelligent scanning" (IS), for selecting SPGS and performing biometric measurements using real-time 2D US. First, the GS is efficiently and precisely located in each ultrasound frame by exploiting a coarse to fine detection scheme based on the training of two cascade AdaBoost classifiers. Next, the SPGS are automatically selected by eliminating false positives. This is accomplished using local context information based on the relative position of anatomies in the image sequence. Finally, a database-guided multiscale normalized cuts algorithm is proposed to generate the initial contour of the GS, based on which the GS is automatically segmented for measurement by a modified snake model. This system was validated on 31 ultrasound videos involving 31 pregnant volunteers. The differences between system performance and radiologist performance with respect to SPGS selection and length and depth (diameter) measurements are 7.5% ± 5.0%, 5.5% ± 5.2%, and 6.5% ± 4.6%, respectively. Additional validations prove that the IS precision is in the range of interobserver variability. Our system can display the SPGS along with biometric measurements in approximately three seconds after the video ends, when using a 1.9 GHz dual-core computer. IS of the GS from 2D real-time US is a practical, reproducible, and reliable approach.
A Standard Nomenclature for Referencing and Authentication of Pluripotent Stem Cells.
Kurtz, Andreas; Seltmann, Stefanie; Bairoch, Amos; Bittner, Marie-Sophie; Bruce, Kevin; Capes-Davis, Amanda; Clarke, Laura; Crook, Jeremy M; Daheron, Laurence; Dewender, Johannes; Faulconbridge, Adam; Fujibuchi, Wataru; Gutteridge, Alexander; Hei, Derek J; Kim, Yong-Ou; Kim, Jung-Hyun; Kokocinski, Anja Kolb-; Lekschas, Fritz; Lomax, Geoffrey P; Loring, Jeanne F; Ludwig, Tenneille; Mah, Nancy; Matsui, Tohru; Müller, Robert; Parkinson, Helen; Sheldon, Michael; Smith, Kelly; Stachelscheid, Harald; Stacey, Glyn; Streeter, Ian; Veiga, Anna; Xu, Ren-He
2018-01-09
Unambiguous cell line authentication is essential to avoid loss of association between data and cells. The risk for loss of references increases with the rapidity that new human pluripotent stem cell (hPSC) lines are generated, exchanged, and implemented. Ideally, a single name should be used as a generally applied reference for each cell line to access and unify cell-related information across publications, cell banks, cell registries, and databases and to ensure scientific reproducibility. We discuss the needs and requirements for such a unique identifier and implement a standard nomenclature for hPSCs, which can be automatically generated and registered by the human pluripotent stem cell registry (hPSCreg). To avoid ambiguities in PSC-line referencing, we strongly urge publishers to demand registration and use of the standard name when publishing research based on hPSC lines. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Intelligent Automatic Classification of True and Counterfeit Notes Based on Spectrum Analysis
NASA Astrophysics Data System (ADS)
Matsunaga, Shohei; Omatu, Sigeru; Kosaka, Toshohisa
The purpose of this paper is to classify bank notes into "true" or "counterfeit" faster and more precisely than a conventional method. We note that thin lines appear as continuous lines in images of true notes, while in counterfeit notes they appear as dotted lines, owing to the properties of dot printers or scanner levels. To use these properties, we propose two methods to classify a note as true or counterfeit by checking whether the note's thin lines are continuous or dotted. First, we use the Fourier transform of the note image to extract features for classification, and classify a note as true or counterfeit using those features. Then we propose a classification method using the wavelet transform in place of the Fourier transform. Finally, some classification results are presented to show the effectiveness of the proposed methods.
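A hedged sketch of the Fourier-based idea: along a scanned thin line, a dotted (dot-printed) line produces a strong periodic spectral component, while a continuous line does not. The score and data below are illustrative, not the paper's actual features.

    import numpy as np

    def dotted_line_score(profile):
        """Ratio of the strongest non-DC Fourier component to total spectral
        energy along a scanned thin line. Dotted lines show a strong
        periodic peak; continuous lines do not. The decision threshold one
        would apply to this score is an illustrative choice."""
        spec = np.abs(np.fft.rfft(profile - profile.mean()))
        return spec[1:].max() / (spec[1:].sum() + 1e-12)

    solid = np.full(128, 200.0)
    dotted = np.where(np.arange(128) % 8 < 4, 200.0, 20.0)  # 8-pixel dot period
    print(dotted_line_score(solid), dotted_line_score(dotted))  # low vs high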
Control voltage and power fluctuations when connecting wind farms
NASA Astrophysics Data System (ADS)
Berinde, Ioan; Bǎlan, Horia; Oros Pop, Teodora Susana
2015-12-01
Voltage, frequency, active power and reactive power are very important parameters in terms of power quality. These parameters are monitored when connecting any power plant, all the more so when connecting wind farms. Connecting wind farms to the electricity system must not cause disturbances outside the limits set by regulations. Modern solutions achieve fast, automatic control of voltage and power fluctuations using electronic control of reactive power flows. FACTS (Flexible Alternating Current Transmission System) devices, built on power electronic circuits, ensure control of electrical state quantities to achieve the necessary transfer of power to the power grid. FACTS devices can quickly control the state parameters of power lines, such as impedance, line voltages, and the phase angles of the voltages at the two ends of the line. Their use can improve power system operation by increasing the transmission capacity of power lines, controlling power flow on the lines, and improving static and transient stability reserves.
Beam/seam alignment control for electron beam welding
Burkhardt, Jr., James H.; Henry, J. James; Davenport, Clyde M.
1980-01-01
This invention relates to a dynamic beam/seam alignment control system for electron beam welds utilizing video apparatus. The system includes automatic control of workpiece illumination, near infrared illumination of the workpiece to limit the range of illumination and camera sensitivity adjustment, curve fitting of seam position data to obtain an accurate measure of beam/seam alignment, and automatic beam detection and calculation of the threshold beam level from the peak beam level of the preceding video line to locate the beam or seam edges.
USDA-ARS?s Scientific Manuscript database
A computer algorithm was created to inspect scanned images from DNA microarray slides developed to rapidly detect and genotype E. coli O157 virulent strains. The algorithm computes centroid locations for signal and background pixels in RGB space and defines a plane perpendicular to the line connect...
Genetic Algorithm Based Multi-Agent System Applied to Test Generation
ERIC Educational Resources Information Center
Meng, Anbo; Ye, Luqing; Roy, Daniel; Padilla, Pierre
2007-01-01
An automatic test generation system in a distributed computing context is one of the most important links in an on-line evaluation system. Although the issue has long been discussed, no perfect solution has been found so far. This paper proposes an innovative approach to addressing this issue through the seamless integration of genetic…
The Game Embedded CALL System to Facilitate English Vocabulary Acquisition and Pronunciation
ERIC Educational Resources Information Center
Young, Shelley Shwu-Ching; Wang, Yi-Hsuan
2014-01-01
The aim of this study is to make a new attempt to explore the potential of integrating game strategies with automatic speech recognition technologies to provide learners with individual opportunities for English pronunciation learning. The study developed the Game Embedded CALL (GeCALL) system with two activities for on-line speaking practice. For…
The ECLSS Advanced Automation Project Evolution and Technology Assessment
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, James R.; Lukefahr, Brenda D.; Rogers, John S.; Rochowiak, Daniel M.; Mckee, James W.; Benson, Brian L.
1990-01-01
Viewgraphs on Environmental Control and Life Support System (ECLSS) advanced automation project evolution and technology assessment are presented. Topics covered include: the ECLSS advanced automation project; automatic fault diagnosis of ECLSS subsystems descriptions; in-line, real-time chemical and microbial fluid analysis; and object-oriented, distributed chemical and microbial modeling of regenerative environmental control systems description.
Gifted and Maladjusted? Implicit Attitudes and Automatic Associations Related to Gifted Children
ERIC Educational Resources Information Center
Preckel, Franzis; Baudson, Tanja Gabriele; Krolak-Schwerdt, Sabine; Glock, Sabine
2015-01-01
The disharmony hypothesis (DH) states that high intelligence comes at a cost to the gifted, resulting in adjustment problems. We investigated whether there is a gifted stereotype that falls in line with the DH and affects attitudes toward gifted students. Preservice teachers (N = 182) worked on single-target association tests and affective priming…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... section 1404(b) of the Act ("Drain Cover Standard"). In addition to the anti-entrapment devices or... system; gravity drainage system; automatic pump shut-off system or drain disablement. The Pool and Spa... the drain covers, anti-entrapment device/systems, sump or equalizer lines at the site; and report on...
NASA Astrophysics Data System (ADS)
Markl, Daniel; Ziegler, Jakob; Hannesschläger, Günther; Sacher, Stephan; Buchsbaum, Andreas; Leitner, Michael; Khinast, Johannes G.
2014-05-01
Coating of tablets is a widely applied unit operation in the pharmaceutical industry. The thickness and uniformity of the coating layer are crucial for efficacy as well as for compliance. It is thus essential, not least due to various regulatory initiatives, to monitor and control the coating process in-line. Optical coherence tomography (OCT) was already shown in previous work to be a suitable candidate for in-line monitoring of coating processes. However, to utilize the full potential of the OCT technology, an automatic evaluation of the OCT measurements is essential. The automatic evaluation is currently implemented in MATLAB and includes several steps: (1) extraction of features from each A-scan, (2) classification of A-scan measurements based on their features, (3) detection of interfaces (air/coating and coating/tablet core), (4) correction of distortions due to the curvature of the bi-convex tablets and the oblique orientation of the tablets, and (5) determination of the coating thickness. The algorithm is tested on OCT data acquired by moving the sensor head of the OCT system across a static tablet bed. The coating thickness variations of single tablets (i.e., intra-tablet coating variability) can additionally be analyzed, as OCT allows the measurement of the coating thickness at multiple displaced positions on a single tablet. Specifically, this information emphasizes the high capability of the OCT technology to improve process understanding and to assure a high product quality.
Development of an Automatic Detection Program of Halo CMEs
NASA Astrophysics Data System (ADS)
Choi, K.; Park, M. Y.; Kim, J.
2017-12-01
The front-side halo CMEs are the major cause for large geomagnetic storms. Halo CMEs can result in damage to satellites, communication, electrical transmission lines and power systems. Thus automated techniques for detecting and analysing Halo CMEs from coronagraph data are of ever increasing importance for space weather monitoring and forecasting. In this study, we developed the algorithm that can automatically detect and do image processing the Halo CMEs in the images from the LASCO C3 coronagraph on board the SOHO spacecraft. With the detection algorithm, we derived the geometric and kinematical parameters of halo CMEs, such as source location, width, actual CME speed and arrival time at 21.5 solar radii.
Improving the Quality of Welding Seam of Automatic Welding of Buckets Based on TCP
NASA Astrophysics Data System (ADS)
Hu, Min
2018-02-01
Since February 2014, welding defects had frequently appeared on the automatic bucket welding line, with an average repair time of 26 min per bucket, seriously affecting production efficiency and welding quality. We conducted troubleshooting and found that the main causes of the defects were deviations of the robot tool center points (TCP) and poor quality of the locating welds. We corrected the gripper, the welding torch, and the repeat-positioning accuracy of the robots to control the quality of the locating welds. The welding defect rate of the buckets was greatly reduced, ensuring production efficiency and welding quality.
State reference design and saturated control of doubly-fed induction generators under voltage dips
NASA Astrophysics Data System (ADS)
Tilli, Andrea; Conficoni, Christian; Hashemi, Ahmad
2017-04-01
In this paper, the stator/rotor current control problem of doubly-fed induction generators under faulty line voltage is addressed. Common grid faults cause a steep decline in the line voltage profile, commonly denoted as a voltage dip. This point is critical for this kind of machine, whose stator windings are directly connected to the grid. In this respect, solid methodological nonlinear control theory arguments are exploited and applied to design a novel controller, whose main goal is to improve the system behaviour during voltage dips, endowing it with the low-voltage ride-through capability that is a fundamental feature required by modern grid codes. The proposed solution exploits both feedforward and feedback actions. The feedforward part relies on suitable reference trajectories for the system internal dynamics, which are designed to prevent large oscillations in the rotor currents and command voltages excited by line perturbations. The feedback part uses state measurements and is designed according to Linear Matrix Inequality (LMI) based saturated control techniques to further reduce oscillations, while explicitly accounting for the system constraints. Numerical simulations verify the benefits of the internal dynamics trajectory planning and of the saturated state feedback action in crucially improving the doubly-fed induction machine response under severe grid faults.
Left ventricular endocardial surface detection based on real-time 3D echocardiographic data
NASA Technical Reports Server (NTRS)
Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.
2001-01-01
OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D and reconstructed off-line, and to RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate into the reconstructed volume and to display any section of the volume.
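A toy numpy sketch of the level-set principle used above: the contour is the zero level set of a function phi evolved by phi_t = -F|∇phi|. The paper's conservation-law numerics and RT3DE data handling are not reproduced; constant speed and plain central differences are simplifying assumptions.

    import numpy as np

    def evolve_level_set(phi, speed, steps=50, dt=0.2):
        """Toy level-set evolution phi_t = -speed * |grad phi| on a 2D grid;
        the zero level set of phi is the moving contour. Simple central
        differences stand in for the paper's conservation-law solver."""
        for _ in range(steps):
            gy, gx = np.gradient(phi)
            phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
        return phi

    # Initial contour: a circle (signed distance), expanding with unit speed
    y, x = np.mgrid[0:64, 0:64]
    phi0 = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
    phi = evolve_level_set(phi0, speed=1.0)
    print(f"inside pixels: {(phi0 < 0).sum()} -> {(phi < 0).sum()}")

In an actual segmentation, the speed term would be derived from the image (e.g., slowing to zero at the endocardial boundary) rather than constant.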
Development of a Global Agricultural Hotspot Detection and Early Warning System
NASA Astrophysics Data System (ADS)
Lemoine, G.; Rembold, F.; Urbano, F.; Csak, G.
2015-12-01
The number of web based platforms for crop monitoring has grown rapidly over the last years and anomaly maps and time profiles of remote sensing derived indicators can be accessed online thanks to a number of web based portals. However, while these systems make available a large amount of crop monitoring data to the agriculture and food security analysts, there is no global platform which provides agricultural production hotspot warning in a highly automatic and timely manner. Therefore a web based system providing timely warning evidence as maps and short narratives is currently under development by the Joint Research Centre. The system (called "HotSpot Detection System of Agriculture Production Anomalies", HSDS) will focus on water limited agricultural systems worldwide. The automatic analysis of relevant meteorological and vegetation indicators at selected administrative units (Gaul 1 level) will trigger warning messages for the areas where anomalous conditions are observed. The level of warning (ranging from "watch" to "alert") will depend on the nature and number of indicators for which an anomaly is detected. Information regarding the extent of the agricultural areas concerned by the anomaly and the progress of the agricultural season will complement the warning label. In addition, we are testing supplementary detailed information from other sources for the areas triggering a warning. These regard the automatic web-based and food security-tailored analysis of media (using the JRC Media Monitor semantic search engine) and the automatic detection of active crop area using Sentinel 1, upcoming Sentinel-2 and Landsat 8 imagery processed in Google Earth Engine. The basic processing will be fully automated and updated every 10 days exploiting low resolution rainfall estimates and satellite vegetation indices. Maps, trend graphs and statistics accompanied by short narratives edited by a team of crop monitoring experts, will be made available on the website on a monthly basis.
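Warning logic of this general kind can be sketched as follows: standardize the current dekadal indicator against its multi-year history and map the z-score to a warning level. The thresholds and indicator values are illustrative assumptions, not the operational HSDS rules.

    import numpy as np

    def warning_level(history, current, watch=-1.0, alert=-2.0):
        """Standardized anomaly of the current dekadal indicator (e.g., an
        NDVI or rainfall estimate) against its multi-year history for the
        same dekad. The watch/alert z-score thresholds are illustrative."""
        z = (current - np.mean(history)) / np.std(history)
        if z <= alert:
            return z, "alert"
        if z <= watch:
            return z, "watch"
        return z, "no warning"

    history = np.array([0.52, 0.55, 0.49, 0.53, 0.51, 0.54])  # same dekad, past years
    print(warning_level(history, current=0.43))               # strongly negative z: "alert"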
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library). Catalogue identifier: AEDG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 73 030. No. of bytes in distributed program, including test data, etc.: 630 876. Distribution format: tar.gz. Programming language: ANSI FORTRAN-77, ANSI C, MPI, OpenMP. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Solaris. Has the code been vectorised or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. Classification: 4.9, 4.14, 6.5. Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
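A sketch of the central-difference case with the step-size compromise the summary mentions (balancing truncation against round-off); the h rule of thumb is a standard double-precision choice, not necessarily the library's exact rule, and the bound-constraint handling (switching to one-sided formulas at a boundary) is omitted.

```python
import numpy as np

def central_gradient(f, x):
    """O(h^2) central-difference gradient with a per-coordinate step.

    h ~ eps**(1/3) * max(1, |x_i|) roughly minimises the sum of
    truncation and round-off errors in double precision.
    """
    x = np.asarray(x, dtype=float)
    eps = np.finfo(float).eps
    g = np.empty_like(x)
    for i in range(x.size):
        h = eps ** (1 / 3) * max(1.0, abs(x[i]))
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(central_gradient(rosen, [1.2, 1.2]))
```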
Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János
2018-01-01
Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production while performing a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was provoked by asking the patients to recall the content of 2 short black and white films (one direct, one delayed), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process (that is, using the ASR-based features in combination with machine learning) was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI in the community. PMID:29165085
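A toy version of the final classification step: pause-based acoustic features feeding a classifier scored with F1, mirroring the pipeline above. The feature values below are synthetic stand-ins for Praat/ASR output, and the group means are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
# Columns: hesitation ratio, speech tempo, silent-pause count (all synthetic).
ctrl = rng.normal([0.15, 4.5, 8], [0.05, 0.6, 3], (38, 3))
mci = rng.normal([0.30, 3.5, 14], [0.08, 0.6, 4], (48, 3))
X = np.vstack([ctrl, mci])
y = np.array([0] * 38 + [1] * 48)  # 0 = control, 1 = MCI

# Cross-validated predictions, then a single F1 over all folds.
pred = cross_val_predict(SVC(kernel="rbf", gamma="scale"), X, y, cv=5)
print("F1:", round(f1_score(y, pred), 3))
```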
A Line-Based 3D Roof Model Reconstruction Algorithm: TIN-Merging and Reshaping (TMR)
NASA Astrophysics Data System (ADS)
Rau, J.-Y.
2012-07-01
The three-dimensional building model is one of the major components of a cyber-city and is vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data have been widely used for 3D building model reconstruction and object extraction. This paper instead presents a novel algorithm for automatic roof model reconstruction based on 3D roof structural lines: a line-based algorithm called TIN-Merging and Reshaping (TMR). Roof structural lines, such as edges, eaves and ridges, can be measured manually from an aerial stereo-pair, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is to perform geometric analysis and topology reconstruction among those unstructured lines and then reshape the roof type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, where the input structural lines act as constraints and their vertices act as input points; thus, the constructed TINs will not cross the structural lines. Later, at the Merging stage, the shared edge between two TINs is checked for the existence of an original structural line. If none exists, the two TINs are merged into a polygon. Iterative checking and merging of any two neighbouring TINs/polygons results in roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to fit a planar function for the whole roof polygon. Where ALS data exist, the Reshaping stage can be simplified by fitting to the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4, located in downtown Toronto, Canada and Vaihingen, Germany, are used for the experiments. The test sites cover high-rise buildings and residential areas with diverse roof types. For performance evaluation, the adopted roof structural lines are manually measured from the provided stereo-pairs. Experimental results indicate that a nearly 100% success rate for topology reconstruction was achieved, provided that the 3D structural lines can be enclosed as polygons. On the other hand, the success rate at the Reshaping stage depends on the complexity of the rooftop structure. Thus, a visual inspection and semi-automatic adjustment of roof type is suggested and implemented to complete the roof modelling. The results demonstrate that the proposed scheme is robust and reliable with a high degree of completeness, correctness, and quality, even when a group of connected buildings with multiple layers and mixed roof types is processed.
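The Merging stage can be captured with a union-find over triangles: neighbours whose shared edge is not an input structural line are merged into one polygon. A self-contained sketch (triangles and structural edges are hard-coded toy data, not TMR's data structures):

```python
from itertools import combinations

def merge_tins(triangles, structural_edges):
    """Union triangles whose shared edge is not a structural line."""
    structural = {frozenset(e) for e in structural_edges}
    parent = list(range(len(triangles)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edge_owner = {}
    for t, tri in enumerate(triangles):
        for edge in combinations(sorted(tri), 2):
            key = frozenset(edge)
            if key in edge_owner and key not in structural:
                parent[find(edge_owner[key])] = find(t)  # merge across a free edge
            edge_owner[key] = t

    groups = {}
    for t in range(len(triangles)):
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())

# Two triangles share edge (1, 2); marking it structural keeps them apart.
tris = [(0, 1, 2), (1, 2, 3)]
print(merge_tins(tris, structural_edges=[]))        # -> [[0, 1]]
print(merge_tins(tris, structural_edges=[(1, 2)]))  # -> [[0], [1]]
```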
The optical frequency comb fibre spectrometer
Coluccelli, Nicola; Cassinerio, Marco; Redding, Brandon; Cao, Hui; Laporta, Paolo; Galzerano, Gianluca
2016-01-01
Optical frequency comb sources provide thousands of precise and accurate optical lines in a single device, enabling the broadband and high-speed detection required in many applications. A main challenge is to parallelize the detection over the widest possible band while bringing the resolution to the single comb-line level. Here we propose a solution based on the combination of a frequency comb source and a fibre spectrometer, exploiting all-fibre technology. Our system allows for simultaneous measurement of 500 isolated comb lines over a span of 0.12 THz in a single acquisition; arbitrarily larger spans are demonstrated (3,500 comb lines over 0.85 THz) by performing sequential acquisitions. The potential for precision measurements is proved by spectroscopy of acetylene at 1.53 μm. Being based on all-fibre technology, our system is inherently low-cost and lightweight, and may lead to the development of a new class of broadband high-resolution spectrometers. PMID:27694981
Women and men with intellectual disabilities who sell or trade sex: voices from the professionals.
Kuosmanen, Jari; Starke, Mikaela
2011-01-01
The phenomenon of women and men with intellectual disabilities (ID) selling or exchanging sexual services is poorly understood. In this study, the authors explored the knowledge and perceptions of this phenomenon shared by professionals working in the field. Focus group discussions demonstrated broad familiarity with the phenomenon. Different motives and contributing factors were identified for the behavior, blurring the boundary line between free choice and exploitation. Two distinct discourses emerged from the interviews based on the assumed "rationality" of the sex transaction and its rewards: Those with ID who traded sexual favors were presented as either conscious and autonomous agents or unaware and exploited victims.
Development of a novel constellation based landmark detection algorithm
NASA Astrophysics Data System (ADS)
Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.
2013-03-01
Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information about the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust, as it was successfully run on a large dataset that included different kinds of images with various orientations, spacings, and origins.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system against published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves over systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
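The region-attention step can be sketched as a soft weighting of region features conditioned on the decoder state; the dimensions and the bilinear scoring function below are illustrative, not the paper's exact parameterisation.

```python
import numpy as np

def attend(regions, state, W):
    """Soft attention: score each region against the decoder state,
    softmax the scores, and return the weighted context vector."""
    scores = regions @ (W @ state)              # one score per region
    weights = np.exp(scores - scores.max())     # stable softmax
    weights /= weights.sum()
    return weights @ regions, weights           # context vector, attention map

rng = np.random.default_rng(0)
regions = rng.normal(size=(49, 64))   # e.g. a 7x7 grid of CNN features
state = rng.normal(size=32)           # current language-model state
W = rng.normal(size=(64, 32)) * 0.1   # bilinear scoring parameters
context, weights = attend(regions, state, W)
print(context.shape, int(weights.argmax()))
```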
Towards natural language question generation for the validation of ontologies and mappings.
Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos
2016-08-08
The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and the validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.
Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?
Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel
2018-01-01
According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.
Demonstration of Land and Hold Short Technology at the Dallas-Fort Worth International Airport
NASA Technical Reports Server (NTRS)
Hyer, Paul V.; Jones, Denise R. (Technical Monitor)
2002-01-01
A guidance system for assisting in Land and Hold Short operations was developed and then tested at the Dallas-Fort Worth International Airport. This system displays deceleration advisory information on a head-up display (HUD) in front of the airline pilot during landing. The display includes runway edges, a trend vector, a deceleration advisory, the locations of the hold line and of the selected exit, and alphanumeric information about the progress of the aircraft. Deceleration guidance is provided to the hold short line or to a pilot-selected exit prior to this line. Logic is provided to switch the display automatically to the next available exit. The report includes descriptions of the algorithms used in the displays, the techniques of HUD alignment, and the test results.
NASA Astrophysics Data System (ADS)
Melendez, Jaime; Sánchez, Clara I.; Philipsen, Rick H. H. M.; Maduskar, Pragnya; Dawson, Rodney; Theron, Grant; Dheda, Keertan; van Ginneken, Bram
2016-04-01
Lack of human resources and radiological interpretation expertise impair tuberculosis (TB) screening programmes in TB-endemic countries. Computer-aided detection (CAD) constitutes a viable alternative for chest radiograph (CXR) reading. However, no automated techniques that exploit the additional clinical information typically available during screening exist. To address this issue and optimally exploit this information, a machine learning-based combination framework is introduced. We have evaluated this framework on a database containing 392 patient records from suspected TB subjects prospectively recruited in Cape Town, South Africa. Each record comprised a CAD score, automatically computed from a CXR, and 12 clinical features. Comparisons with strategies relying on either CAD scores or clinical information alone were performed. Our results indicate that the combination framework outperforms the individual strategies in terms of the area under the receiver operating characteristic curve (0.84 versus 0.78 and 0.72), specificity at 95% sensitivity (49% versus 24% and 31%) and negative predictive value (98% versus 95% and 96%). Thus, it is believed that combining CAD and clinical information to estimate the risk of active disease is a promising tool for TB screening.
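A minimal sketch of the combination idea: a CAD score concatenated with clinical features, fed to a probabilistic classifier, and thresholded at 95% sensitivity. All data below are synthetic stand-ins; the actual framework and features are as described in the record.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
n = 392
y = rng.random(n) < 0.4                       # active-TB labels (synthetic)
cad = rng.normal(0.4 + 0.3 * y, 0.2)          # CXR CAD score, shifted for cases
clinical = rng.normal(y[:, None] * 0.5, 1.0, (n, 12))  # 12 clinical features
X = np.column_stack([cad, clinical])          # combined feature vector

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, risk), 3))

fpr, tpr, thr = roc_curve(y, risk)
i = np.argmax(tpr >= 0.95)                    # first operating point at >=95% sensitivity
print("specificity at 95% sensitivity:", round(1 - fpr[i], 3))
```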
VMSoar: a cognitive agent for network security
NASA Astrophysics Data System (ADS)
Benjamin, David P.; Shankar-Iyer, Ranjita; Perumal, Archana
2005-03-01
VMSoar is a cognitive network security agent designed for both network configuration and long-term security management. It performs automatic vulnerability assessments by exploring a configuration's weaknesses and also performs network intrusion detection. VMSoar is built on the Soar cognitive architecture, and benefits from the general cognitive abilities of Soar, including learning from experience, the ability to solve a wide range of complex problems, and use of natural language to interact with humans. The approach used by VMSoar is very different from that taken by other vulnerability assessment or intrusion detection systems. VMSoar performs vulnerability assessments by using VMWare to create a virtual copy of the target machine, then attacking the simulated machine with a wide assortment of exploits. VMSoar uses this same ability to perform intrusion detection. When trying to understand a sequence of network packets, VMSoar uses VMWare to make a virtual copy of the local portion of the network and then attempts to generate the observed packets on the simulated network by performing various exploits. This approach is initially slow, but VMSoar's learning ability significantly speeds up both vulnerability assessment and intrusion detection. This paper describes the design and implementation of VMSoar, and initial experiments with Windows NT and XP.
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
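One of the evaluated families (clustering of gray-level information, i.e. Otsu's method) together with the misclassification-error criterion, sketched on a synthetic two-phase image; the phase fractions and noise level are invented for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(3)
truth = rng.random((256, 256)) < 0.3             # "polymer" phase ground truth
img = np.where(truth, 0.7, 0.3) + rng.normal(0, 0.08, truth.shape)

t = threshold_otsu(img)                          # clustering-based threshold
seg = img > t

# Misclassification error: fraction of pixels assigned to the wrong phase.
me = np.mean(seg != truth)
print(f"Otsu threshold = {t:.3f}, misclassification error = {me:.4f}")
```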
Semi-supervised word polarity identification in resource-lean languages.
Dehdarbehbahani, Iman; Shakery, Azadeh; Faili, Heshaam
2014-10-01
Sentiment words, as fundamental constitutive parts of subjective sentences, have a substantial effect on analysis of opinions, emotions and beliefs. Most of the proposed methods for identifying the semantic orientations of words exploit rich linguistic resources such as WordNet, subjectivity corpora, or polarity tagged words. Shortage of such linguistic resources in resource-lean languages affects the performance of word polarity identification in these languages. In this paper, we present a method which exploits a language with rich subjectivity analysis resources (English) to identify the polarity of words in a resource-lean foreign language. The English WordNet and a sparse foreign WordNet infrastructure are used to create a heterogeneous, multilingual and weighted semantic network. To identify the semantic orientation of foreign words, a random walk based method is applied to the semantic network along with a set of automatically weighted English positive and negative seeds. In a post-processing phase, synonym and antonym relations in the foreign WordNet are used to filter the random walk results. Our experiments on English and Persian languages show that the proposed method can outperform state-of-the-art word polarity identification methods in both languages. Copyright © 2014 Elsevier Ltd. All rights reserved.
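A toy version of the seed-propagation step: a random walk with restart over a small synonym/antonym graph, with positive and negative seeds injecting opposite mass. For brevity this sketch folds the antonym filtering into signed edges; the graph, edge weights, and restart rate are illustrative, not the paper's construction.

```python
import numpy as np

# Toy semantic network: 0-1 synonyms, 2-3 synonyms, 1-2 antonym link.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
sign = np.array([[1, 1, 1, 1],
                 [1, 1, -1, 1],
                 [1, -1, 1, 1],
                 [1, 1, 1, 1]], float)     # antonym edges flip polarity

P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
seeds = np.array([1.0, 0, 0, -1.0])        # node 0 positive seed, node 3 negative
alpha = 0.85                                # restart rate
score = seeds.copy()
for _ in range(100):                        # power iteration with restart
    score = alpha * (P * sign).T @ score + (1 - alpha) * seeds
print(np.round(score, 3))                   # sign of each entry = predicted polarity
```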
Quantitative Evaluation of Performance during Robot-assisted Treatment.
Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G
2016-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) performed a training with Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase of the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is reliable with respect to a well-known clinical scale.
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
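A compact sketch of the two-module pipeline: gray-world illuminant estimation with a diagonal (von Kries) correction, followed by a 3x3 color matrix. The matrix values are placeholders; in practice the matrix is optimised per sensor and, as proposed above, adapted to the behaviour of the illuminant-correction module.

```python
import numpy as np

def gray_world_correct(raw):
    """Estimate the illuminant as the channel means and normalise (von Kries)."""
    illum = raw.reshape(-1, 3).mean(axis=0)
    gains = illum.mean() / illum            # diagonal channel gains
    return raw * gains

rng = np.random.default_rng(4)
raw = rng.random((8, 8, 3)) * np.array([1.2, 1.0, 0.7])  # reddish colour cast

M = np.array([[ 1.6, -0.4, -0.2],          # placeholder sensor-to-sRGB matrix
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])
balanced = gray_world_correct(raw)          # module 1: illuminant correction
srgb_linear = np.clip(balanced @ M.T, 0, 1) # module 2: color matrix transform
print(srgb_linear.shape)
```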
Commercial imagery archive, management, exploitation, and distribution project development
NASA Astrophysics Data System (ADS)
Hollinger, Bruce; Sakkas, Alysa
1999-10-01
The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze and disseminate in 'push' or 'pull' modes imagery, data and data products using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System (ILS)TM, satisfies requirements for (1) a potentially unbounded data archive (5000 TB range); (2) automated workflow management for increased user productivity; (3) automatic tracking and management of files stored on shelves; (4) ability to ingest, process and disseminate data volumes with bandwidths ranging up to multi-gigabit per second; (5) access through a thin client-to-server network environment; (6) multiple interactive users needing retrieval of files in seconds from both archived images or in real time; and (7) scalability that maintains information throughput performance as the size of the digital library grows.
Improving Automated Lexical and Discourse Analysis of Online Chat Dialog
2007-09-01
include spelling- and grammar-checking on our word processing software; voice-recognition in our automobiles; and telephone-based conversational agents ... conversational agents can help customers make purchases on-line [3]. In addition, discourse analyzers can automatically separate multiple, interleaved ... telephone-based conversational agent needs to know if it was asked a question or tasked to do something. Indeed, Stolcke et al demonstrated that
Distributed operating system for NASA ground stations
NASA Technical Reports Server (NTRS)
Doyle, John F.
1987-01-01
NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
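The abnormality score described above, sketched with generic feature vectors: each test subregion is scored by its distance to the nearest entry of an anomaly-free dictionary. The random embeddings below stand in for CNN features; the distance measure is an assumption for illustration.

```python
import numpy as np

def abnormality(patch_features, dictionary):
    """Distance of each patch embedding to its nearest normal exemplar."""
    d = np.linalg.norm(
        patch_features[:, None, :] - dictionary[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(5)
dictionary = rng.normal(0, 1, (200, 128))   # anomaly-free training subregions
normal = rng.normal(0, 1, (10, 128))        # test patches from the same regime
defect = rng.normal(3, 1, (3, 128))         # shifted cluster = anomalous patches
scores = abnormality(np.vstack([normal, defect]), dictionary)
print(np.round(scores, 1))                  # the last three scores stand out
```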
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
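A linear-PCA rendition of the GHA rule referenced above (the kernelised, dictionary-based variants follow the same template), with a cheap re-orthonormalisation after each update, mirroring the brief's idea; learning rate, epochs, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 5)) @ np.diag([3, 2, 1, 0.5, 0.1])

k, lr = 2, 1e-3
W = rng.normal(size=(k, 5)) * 0.1
for _ in range(3):                 # a few passes over the stream
    for x in X:
        y = W @ x
        # GHA update: Hebbian term minus lower-triangular deflation.
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        q, _ = np.linalg.qr(W.T)   # low-cost re-orthonormalisation step
        W = q.T

# Compare with exact principal directions (should be near-identity up to sign).
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
print(np.abs(W @ Vt[:2].T).round(2))
```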
COMP Superscalar, an interoperable programming framework
NASA Astrophysics Data System (ADS)
Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul
2015-12-01
COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) marking them with annotations (in Java) or standard decorators (in Python). A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
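The programming model amounts to marking functions as tasks and letting the runtime chain them through data dependencies. The sketch below mimics that style with a plain stand-in decorator so it runs anywhere; in PyCOMPSs itself the decorator and synchronisation call are reported as @task and compss_wait_on, but treat those names as assumptions here.

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def task(fn):
    """Stand-in decorator: run the function asynchronously, return a future."""
    def wrapper(*args):
        # Resolve upstream futures first, i.e. enforce data dependencies.
        resolved = [a.result() if hasattr(a, "result") else a for a in args]
        return pool.submit(fn, *resolved)
    return wrapper

@task
def square(x):
    return x * x

@task
def add(a, b):
    return a + b

# The futures form a small dependency graph: add waits on both squares.
total = add(square(3), square(4))
print(total.result())   # 25
```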
NASA Astrophysics Data System (ADS)
Dan, Wang; Jin-Ze, Wu; Jun-Xiang, Zhang
2016-06-01
A kind of photonic crystal structure with modulation of the refractive index is investigated both experimentally and theoretically for exploiting electromagnetically induced transparency (EIT). The combination of EIT with a periodically modulated refractive-index medium gives rise to high-efficiency reflection as well as forbidden transmission in a three-level atomic system coupled by a standing wave. We show an accurate theoretical simulation via transfer-matrix theory, automatically accounting for multilayer reflections, thus fully demonstrating the existence of a photonic crystal structure in atomic vapor. Project supported by the National Natural Science Foundation of China (Grant No. 11574188) and the Project for Excellent Research Team of the National Natural Science Foundation of China (Grant No. 61121064).
NASA Technical Reports Server (NTRS)
Manganaris, Stefanos; Fisher, Doug; Kulkarni, Deepak
1993-01-01
In this paper we address the problem of detecting and diagnosing faults in physical systems, for which neither prior expertise for the task nor suitable system models are available. We propose an architecture that integrates the on-line acquisition and exploitation of monitoring and diagnostic knowledge. The focus of the paper is on the component of the architecture that discovers classes of behaviors with similar characteristics by observing a system in operation. We investigate a characterization of behaviors based on best fitting approximation models. An experimental prototype has been implemented to test it. We present preliminary results in diagnosing faults of the Reaction Control System of the Space Shuttle. The merits and limitations of the approach are identified and directions for future work are set.
NASA Astrophysics Data System (ADS)
Manconi, A.; Giordan, D.
2015-07-01
We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time over different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of a landslide emergency.
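The inverse-velocity core of such forecasts fits a line to 1/v and extrapolates to its zero crossing; a minimal sketch on synthetic accelerating displacement (the operational system adds multiple temporal windows and confidence bands, which are omitted here).

```python
import numpy as np

t = np.linspace(0, 30, 200)                  # observation times (days)
t_f = 35.0                                   # "true" failure time (synthetic)
v = 1.0 / (t_f - t) + np.random.default_rng(7).normal(0, 0.002, t.size)

inv_v = 1.0 / v                              # inverse velocity series
slope, intercept = np.polyfit(t, inv_v, 1)   # linear inverse-velocity model
t_fail = -intercept / slope                  # zero crossing of 1/v
print(f"forecast failure time: day {t_fail:.1f}")
```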
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, sustained time series record. The time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection; in this work, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is therefore applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomaly signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we employ SDAR on a synthetic model and on Tomakomai Ocean Bottom Cable (OBC) baseline data to prove the feasibility and advantages of our method.
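A stripped-down sketch of the discounting idea: an AR(1) model whose statistics are updated with exponential forgetting, scoring each new sample by its negative log-likelihood. The full SDAR recursion is more elaborate; this simplification and its parameter values are assumptions for illustration.

```python
import numpy as np

def sdar_scores(x, r=0.02):
    """Discounted AR(1) anomaly scores (a simplified SDAR variant).

    r is the discounting rate: larger r forgets the past faster.
    """
    mu = c0 = c1 = 0.0
    var = 1.0
    scores = np.zeros(x.size)
    for t in range(1, x.size):
        mu = (1 - r) * mu + r * x[t]                       # discounted mean
        c0 = (1 - r) * c0 + r * (x[t] - mu) ** 2           # lag-0 covariance
        c1 = (1 - r) * c1 + r * (x[t] - mu) * (x[t - 1] - mu)
        a = c1 / c0 if c0 > 1e-12 else 0.0                 # AR(1) coefficient
        pred = mu + a * (x[t - 1] - mu)
        var = (1 - r) * var + r * (x[t] - pred) ** 2       # discounted residual var
        scores[t] = 0.5 * np.log(2 * np.pi * var) + (x[t] - pred) ** 2 / (2 * var)
    return scores

rng = np.random.default_rng(8)
sig = rng.normal(0, 1, 1000)
sig[600:610] += 6                           # buried "event"
print(int(np.argmax(sdar_scores(sig))))     # change point flagged near sample 600
```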
NASA Astrophysics Data System (ADS)
Zhang, Shaojun; Xu, Xiping
2015-10-01
The 360-degree all-round-looking camera, being well suited to automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of manually inputting offsets both suffer from reliance on human eyes, inefficiency, and a wide error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: the center of the image is a smaller circle and the outside is a bigger circle. The presented technique exploits exactly these characteristics. By recognizing the two circles through a Hough transform and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the central locations of the optic axis and the image sensor. The program then sets up the image sensor chip through the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it promotes productivity and guarantees consistent product quality.
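The circle-detection step maps directly onto OpenCV's Hough-circle routine; a sketch on a synthetic ring image, with parameter values that are illustrative and would be tuned per camera.

```python
import numpy as np
import cv2

# Synthetic ring image: two concentric circles around (200, 190).
img = np.zeros((400, 400), np.uint8)
cv2.circle(img, (200, 190), 60, 255, 3)
cv2.circle(img, (200, 190), 150, 255, 3)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=30, maxRadius=200)
if circles is not None:
    centers = circles[0][:, :2]
    cx, cy = centers.mean(axis=0)    # image-plane centre estimate
    print(f"estimated centre: ({cx:.1f}, {cy:.1f})")
    # The offset from the sensor midpoint would then be written over I2C.
else:
    print("no circles found; tune param2 / radius range")
```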
NASA Astrophysics Data System (ADS)
Royer, P.; De Ridder, J.; Vandenbussche, B.; Regibo, S.; Huygen, R.; De Meester, W.; Evans, D. J.; Martinez, J.; Korte-Stapff, M.
2016-07-01
We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for automatic health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behavior. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of all the others, we focus on a transition analysis, exploiting properties of Poincare plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its Kernel Density Estimate. We demonstrate that this allows for a generic and dynamic approach of the soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than the traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data. Not only spacecraft systems but also present-day astronomical observatories can benefit from them.
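The KDE-based soft-limit idea can be sketched in a few lines: estimate the density of historical samples and flag new samples whose density falls below a low percentile of the training densities. The bandwidth (scipy's default) and the 1% cut-off are illustrative choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
history = rng.normal(20.0, 0.5, 5000)         # archive of a housekeeping value
kde = gaussian_kde(history)                   # kernel density estimate of the signal

soft_limit = np.percentile(kde(history), 1)   # density below 1st percentile = outlier
new = np.array([20.1, 19.4, 23.7])
flags = kde(new) < soft_limit
print(dict(zip(new, flags)))                  # 23.7 gets flagged
```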
Miró, Manuel; Jimoh, Modupe; Frenzel, Wolfgang
2005-05-01
In this paper, a novel concept is presented for automatic microsampling and continuous monitoring of metal ions in soils with minimum disturbance of the sampling site. It involves a hollow-fiber microdialyser that is implanted in the soil body as a miniaturized sensing device. The idea behind microdialysis in this application is to mimic the function of a passive sampler to predict the actual, rather than potential, mobility and bioavailability of metal traces. Although almost quantitative dialysis recoveries were obtained for lead (≥98%) from aqueous model solutions with sufficiently long capillaries (l ≥ 30 mm, 200 μm i.d.) at perfusion rates of 2.0 μL min⁻¹, the resistance of an inert soil matrix was found to reduce metal uptake by 30%. Preliminary investigations of the potential of the microdialysis analyser for risk assessment of soil pollution, and for metal partitioning studies, were performed by implanting the dedicated probe in a laboratory-made soil column and hyphenating it with electrothermal atomic absorption spectrometry (ETAAS), so that minute, well-defined volumes of clean microdialysates were injected on-line into the graphite furnace. A noteworthy feature of the implanted microdialysis-based device is the capability to follow the kinetics of metal release under simulated natural scenarios or anthropogenic actions. An ancillary flow set-up was arranged in such a way that a continuous flow of leaching solution (mild extractant, 10⁻² mol L⁻¹ CaCl₂; acidic solution, 10⁻³ mol L⁻¹ HNO₃; or chelating agent, 10⁻⁴ or 10⁻² mol L⁻¹ EDTA) was maintained through the soil body, while the concentration trends of inorganic (un-bound) metal species at the soil-liquid interface could be monitored in near real time. Hence, relevant qualitative and quantitative information about the various mobile fractions is obtained, and metal-soil phase associations can also be elucidated. Finally, stimulus-response schemes adapted from neurochemical applications and pharmacokinetic studies are to be extended to soil research as an alternative means of local monitoring of extraction processes after induction of a chemical change in the outer boundary of the permselective dialysis membrane.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained, in which the lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.
Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors
NASA Astrophysics Data System (ADS)
Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.
2014-07-01
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties, with a median velocity uncertainty of 33 km s⁻¹.
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings exploit emotions comprehensively for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful applications of recognition analysis involve faces. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. It can automatically identify an observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
Automatic Dependent Surveillance Broadcast: [micro]ADS-B Detect-and-Avoid Flight Tests
NASA Technical Reports Server (NTRS)
Arteaga, Ricardo; Dandachy, Mike
2018-01-01
The testing and demonstrations are necessary for both parties to further the development and certification of the technology in three key areas: flights beyond line of sight, collision avoidance, and autonomous operations.
NASA Technical Reports Server (NTRS)
Pace, N.
1973-01-01
Physiological baseline data are established, and physiological procedures and instrumentation necessary for the automatic measurement of hemodynamic and metabolic parameters during prolonged periods of weightlessness are developed.
Frequency tracking and variable bandwidth for line noise filtering without a reference.
Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei
2011-01-01
This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.
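A minimal rendition of the reference-free canceller: quadrature sinusoids generated at the nominal mains frequency serve as the reference pair, and an LMS loop adapts their amplitude and phase. The published method additionally tracks frequency drift and adapts its learning rate; those parts are omitted, and the sampling rate and step size below are illustrative.

```python
import numpy as np

fs, f0, mu = 1000.0, 60.0, 0.01
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(10)
signal = rng.normal(0, 1, t.size)                      # broadband "neural" signal
x = signal + 2.0 * np.sin(2 * np.pi * f0 * t + 0.7)    # add mains contamination

ref = np.column_stack([np.sin(2 * np.pi * f0 * t),     # digitally generated
                       np.cos(2 * np.pi * f0 * t)])    # quadrature reference
w = np.zeros(2)
clean = np.zeros_like(x)
for n in range(t.size):                                # LMS adaptation loop
    e = x[n] - ref[n] @ w                              # error = cleaned sample
    w += 2 * mu * e * ref[n]
    clean[n] = e

print("residual power ratio:",
      round(np.var(clean[-1000:]) / np.var(x[-1000:]), 3))
```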
Think the thought, walk the walk - social priming reduces the Stroop effect.
Goldfarb, Liat; Aisenberg, Daniela; Henik, Avishai
2011-02-01
In the Stroop task, participants name the color of the ink that a color word is written in and ignore the meaning of the word. Naming the color of an incongruent color word (e.g., RED printed in blue) is slower than naming the color of a congruent color word (e.g., RED printed in red). This robust effect is known as the Stroop effect and it suggests that the intentional instruction - "do not read the word" - has limited influence on one's behavior, as word reading is being executed via an automatic path. Herein is examined the influence of a non-intentional instruction - "do not read the word" - on the Stroop effect. Social concept priming tends to trigger automatic behavior that is in line with the primed concept. Here participants were primed with the social concept "dyslexia" before performing the Stroop task. Because dyslectic people are perceived as having reading difficulties, the Stroop effect was reduced and even failed to reach significance after the dyslectic person priming. A similar effect was replicated in a further experiment, and overall it suggests that the human cognitive system has more success in decreasing the influence of another automatic process via an automatic path rather than via an intentional path. Copyright © 2010 Elsevier B.V. All rights reserved.
Automatic classification of seismic events within a regional seismograph network
NASA Astrophysics Data System (ADS)
Tiira, Timo; Kortström, Jari; Uski, Marja
2015-04-01
A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
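The classification core reduces to band-energy features plus a binary SVM; a compact sketch with synthetic vectors standing in for the 80 STA values (4 windows x 20 bands), where the class shift and class sizes are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
# Earthquakes: relatively more energy in the upper bands (synthetic stand-in).
eq = rng.normal(1.0, 0.3, (300, 80))
eq[:, 40:] += 0.8
ex = rng.normal(1.0, 0.3, (1000, 80))   # explosions / spurious events

X = np.vstack([eq, ex])
y = np.array([1] * 300 + [0] * 1000)    # 1 = earthquake, 0 = non-earthquake
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# class_weight="balanced" compensates for the event-type imbalance.
clf = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)
print("accuracy:", round(clf.score(Xte, yte), 3))
```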
Simulating the Mg II NUV Spectra & C II Resonance Lines During Solar Flares
NASA Astrophysics Data System (ADS)
Kerr, Graham Stewart; Allred, Joel C.; Leenaarts, Jorrit; Butler, Elizabeth; Kowalski, Adam
2017-08-01
The solar chromosphere is the origin of the bulk of the enhanced radiative output during solar flares, so a comprehensive understanding of this region is important if we wish to understand energy transport in solar flares. It is only relatively recently, with the launch of IRIS, that we have routine spectroscopic flare observations of the chromosphere and transition region. Since several of the spectral lines observed by IRIS are optically thick, it is necessary to use forward modelling to extract the useful information that these lines carry about the flaring chromosphere and transition region. We present the results of modelling the formation properties of the Mg II resonance and subordinate lines, and the C II resonance lines, during solar flares. We focus on understanding their relation to the physical structure of the flaring atmosphere, exploiting formation height differences to determine whether we can extract information about gradients in the atmosphere. We show the effect of degrading the profiles to the resolution of IRIS, and that the usual observational techniques used to identify the line centroid do a poor job in the early stages of the flare (partly due to multiple optically thick line components). Finally, we tentatively comment on the effects that 3D radiation transfer may have on these lines.
Automated Coronal Loop Identification using Digital Image Processing Techniques
NASA Astrophysics Data System (ADS)
Lee, J. K.; Gary, G. A.; Newman, T. S.
2003-05-01
The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically-thin, 3-dimensional (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered to be splines, are proxies of magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and because photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described in the presentation. The three techniques used are (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parametric space inferences via the Hough transform, and (iii) topological adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and the performance of the three techniques on both synthesized and solar images will be presented and numerically evaluated in the presentation. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field where the derived magnetic field lines provide a boundary condition for magnetic models (cf. Gary (2001, Solar Phys., 203, 71) and Wiegelmann & Neukirch (2002, Solar Phys., 208, 233)). This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.
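As a sketch of technique (ii), the snippet below implements a plain (rho, theta) Hough accumulator in NumPy and recovers a synthetic 45-degree segment; the bin counts and the edge map are illustrative assumptions, not the thesis code.

```python
# Parametric-space line detection: rho = x*cos(theta) + y*sin(theta).
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    ys, xs = np.nonzero(edges)
    diag = np.hypot(*edges.shape)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in zip(xs, ys):                       # each edge pixel votes
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.digitize(r, rhos) - 1
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

edges = np.zeros((64, 64), dtype=bool)
edges[np.arange(64), np.arange(64)] = True         # a 45-degree "loop" segment
acc, rhos, thetas = hough_lines(edges)
i, j = np.unravel_index(acc.argmax(), acc.shape)
# Strongest line: theta = 135 deg, rho ~ 0 for this diagonal segment.
print(f"rho={rhos[i]:.1f}, theta={np.degrees(thetas[j]):.1f} deg")
```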
Xue, Mianqiang; Yang, Yichen; Ruan, Jujun; Xu, Zhenming
2012-01-03
The crush-pneumatic separation-corona electrostatic separation production line provides a feasible method for industrialization of waste printed circuit board (PCB) recycling. To determine the potential environmental contamination in the automatic line workshop, noise and heavy metals (Cr, Cu, Cd, Pb) in the ambience of the production line have been evaluated in this paper. The mean noise level in the workshop has been reduced from 96.4 to 79.3 dB since the engineering noise control measures were employed. Noise in the frequency range from 500 to 1000 Hz is controlled effectively. The mass concentrations of TSP and PM₁₀ in the workshop are 282.6 and 202.0 μg/m³, respectively. Pb (1.40 μg/m³) and Cu (1.22 μg/m³) are the most enriched metals in the TSP samples, followed by Cr (0.17 μg/m³) and Cd (0.028 μg/m³). The concentrations of Cu, Pb, Cr, and Cd in PM₁₀ are 0.88, 0.56, 0.12, and 0.88 μg/m³, respectively. Among the four metals, Cr and Pb are released into the ambience of the automatic line most easily during the crush and separation processes. Health risk assessment shows that noncancerous effects might be possible for Pb (HI = 1.45), while noncancerous effects are unlikely for Cr, Cu, and Cd. The carcinogenic risks for Cr and Cd are 3.29 × 10⁻⁸ and 1.61 × 10⁻⁹, respectively, indicating that the carcinogenic risks to workers in the workshop are relatively low. These findings suggest that this technology is advanced from the perspective of environmental protection in the waste PCB recycling industry.
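For orientation, the screening-level arithmetic behind a hazard quotient and an inhalation cancer risk can be sketched as below. The reference concentration and unit risk are placeholders (the paper's exposure factors are not reproduced here), so the numbers only illustrate the form of the calculation, not the paper's derivation.

```python
# Back-of-envelope inhalation risk arithmetic with placeholder parameters.
def hazard_quotient(conc_ug_m3, rfc_ug_m3):
    """Non-cancer hazard: HQ = exposure concentration / reference concentration."""
    return conc_ug_m3 / rfc_ug_m3

def cancer_risk(conc_ug_m3, unit_risk_per_ug_m3):
    """Incremental lifetime cancer risk = concentration x inhalation unit risk."""
    return conc_ug_m3 * unit_risk_per_ug_m3

# With an assumed Pb reference value of ~0.965 ug/m3, the measured
# 1.40 ug/m3 would give HQ ~ 1.45, above the level of concern of 1.0.
print(round(hazard_quotient(1.40, 0.965), 2))
print(f"{cancer_risk(0.17, 2e-7):.1e}")   # illustrative unit risk, not EPA's value
```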
Automatic corpus callosum segmentation for standardized MR brain scanning
NASA Astrophysics Data System (ADS)
Xu, Qing; Chen, Hong; Zhang, Li; Novak, Carol L.
2007-03-01
Magnetic Resonance (MR) brain scanning is often planned manually with the goal of aligning the imaging plane with key anatomic landmarks. The planning is time-consuming and subject to inter- and intra-operator variability. An automatic and standardized planning of brain scans is highly useful for clinical applications and, for maximum utility, should work on patients of all ages. In this study, we propose a method for fully automatic planning that utilizes the landmarks from two orthogonal images to define the geometry of the third scanning plane. The corpus callosum (CC) is segmented in sagittal images by an active shape model (ASM), and the result is further improved by weighting the boundary movement with confidence scores and incorporating region-based refinement. Based on the extracted contour of the CC, several important landmarks are located and then combined with landmarks from the coronal or transverse plane to define the geometry of the third plane. Our automatic method was tested on 54 MR images from 24 patients and 3 healthy volunteers, with ages ranging from 4 months to 70 years. The average accuracies with respect to two manually labeled points on the CC are 3.54 mm and 4.19 mm, and the planned orientation differs by an average of 2.48 degrees from that of the line connecting them, demonstrating that our method is sufficiently accurate for clinical use.
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform as well as, if not better than, state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
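A condensed sketch of the second (BoF) route, assuming precomputed spatio-temporal descriptors per clip; the dictionary size, SVM kernel, and data are illustrative choices, not the paper's settings.

```python
# Bag-of-features gesture classification: quantise descriptors against a
# learned dictionary, then classify normalised word histograms with an SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bof_histogram(descriptors, kmeans):
    """Quantise one clip's descriptors and return its normalised histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
train_desc = [rng.normal(size=(200, 64)) for _ in range(30)]  # 30 clips
train_labels = rng.integers(0, 3, size=30)                    # 3 gesture classes

kmeans = KMeans(n_clusters=50, n_init=5, random_state=0)
kmeans.fit(np.vstack(train_desc))          # dictionary of spatio-temporal words

X = np.array([bof_histogram(d, kmeans) for d in train_desc])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict(X[:5]))
```

In the paper's third method, the kernel of such a BoF classifier would be combined with an LDS-based kernel via multiple kernel learning.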
Certified In-lined Reference Monitoring on .NET
2006-06-01
Introduction: Language-based approaches to computer security have employed two major strategies for enforcing security policies over untrusted programs. ... In-lined reference monitors (IRMs) can be automatically verified using a static type-checker. Mobile (MOnitorable BIL with Effects) is an extension of BIL (Baby Intermediate Language) [15]. ... Approved for public release; distribution unlimited. (Proceedings of the 2006 Programming Languages and Analysis for Security workshop.)
ERIC Educational Resources Information Center
Graf, Edith Aurora
2014-01-01
In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…
NASA Technical Reports Server (NTRS)
Wolf, S. W. D.; Goodyer, M. J.
1982-01-01
Operation of the Transonic Self-Streamlining Wind Tunnel (TSWT) involved on-line data acquisition with automatic wall adjustment. A tunnel run consisted of streamlining the walls from known starting contours in iterative steps and acquiring model data. Each run thus performed what is described as a streamlining cycle. The associated software is presented.
Novel 3-D free-form surface profilometry for reverse engineering
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Huang, Zhi-Xue
2005-01-01
This article proposes an innovative 3-D surface contouring approach for automatic and accurate free-form surface reconstruction using a sensor integration concept. The study addresses a critical problem in the accurate measurement of free-form surfaces by developing an automatic reconstruction approach. Unacceptable measurement accuracy is mainly due to errors arising from inadequate measuring strategies, which end in inaccurate digitised data and costly post-processing in Reverse Engineering (RE). This article therefore aims to develop automatic digitising strategies that ensure surface reconstruction efficiency as well as accuracy. The developed approach consists of two main stages, namely rapid shape identification (RSI) and automated laser scanning (ALS), for completing 3-D surface profilometry. The approach effectively utilises on-line geometric information to evaluate the degree of satisfaction of user-defined digitising accuracy within a triangular topological patch. An industrial case study was used to demonstrate the feasibility of the approach.
Rules of engagement: incomplete and complete pronoun resolution.
Love, Jessica; McKoon, Gail
2011-07-01
Research on shallow processing suggests that readers sometimes encode only a superficial representation of a text and fail to make use of all available information. Greene, McKoon, and Ratcliff (1992) extended this work to pronouns, finding evidence that readers sometimes fail to automatically identify referents even when these are unambiguous. In this paper we revisit those findings. In 11 recognition probe, priming, and self-report experiments, we manipulated Greene et al.'s stories to discover under what circumstances a pronoun's referent is automatically understood. We lengthened the stories from 4 to 8 lines. This simple manipulation led to automatic and correct resolution, which we attribute to readers' increased engagement with the stories. We found evidence of resolution even when the additional text did not mention the pronoun's referent. In addition, our results suggest that the pronoun temporarily boosts the referent's accessibility, an advantage that disappears by the end of the next sentence. Finally, we present evidence from memory experiments that supports complete pronoun resolution for the longer but not the shorter stories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D.S.; Seong, P.H.
1995-08-01
In this paper, an improved algorithm for automatic test pattern generation (ATG) for nuclear power plant digital electronic circuits, i.e., the combinational type of logic circuits, is presented. To accelerate and improve the ATG process for combinational circuits, the presented algorithm introduces a new concept, the degree of freedom (DF). The DF, computed directly from the system description (the types of gates and their interconnections), is the criterion for deciding which among several alternate lines' logic values required along each path promises to be the most effective in accelerating and improving the ATG process. Based on the DF, the proposed ATG algorithm is implemented in the automatic fault diagnosis system (AFDS), which incorporates an advanced artificial intelligence fault diagnosis method; it is shown that the AFDS using the ATG algorithm makes Universal Card (UV Card) testing much faster than the present testing practice or the use of exhaustive testing sets.
Use of artificial intelligence in the production of high quality minced meat
NASA Astrophysics Data System (ADS)
Kapovsky, B. R.; Pchelkina, V. A.; Plyasheshnik, P. I.; Dydykin, A. S.; Lazarev, A. A.
2017-09-01
A design for an automatic line for minced meat production according to a new production technology, based on an innovative meat milling method, is proposed. This method achieves the necessary degree of raw material comminution at the raw-material preparation stage, which intensifies production by making traditional meat comminution equipment unnecessary. To ensure consistent quality of the product obtained, on-line automatic control of the technological process for minced meat production is envisaged. This system has been developed using artificial intelligence methods and technologies. The system is trainable during operation, adapts to changes in the characteristics of the processed raw material and to external impacts that affect system operation, and manufactures meat shavings with minimal dispersion of the typical particle size. The control system includes equipment for express analysis of the chemical composition of the minced meat and its temperature after comminution. In this case, the minced meat production process can be controlled strictly as a function of time, which excludes subjective factors in assessing the degree of finished product readiness. This will allow finished meat products of consistent, targeted high quality to be produced.
Water quality monitor. [spacecraft potable water
NASA Technical Reports Server (NTRS)
West, S.; Crisos, J.; Baxter, W.
1979-01-01
The preprototype water quality monitor (WQM) subsystem was designed based on a breadboard monitor for pH, specific conductance, and total organic carbon (TOC). The breadboard equipment demonstrated the feasibility of continuous on-line analysis of potable water for a spacecraft. The WQM subsystem incorporated these breadboard features and, in addition, measures ammonia and includes a failure detection system. The sample, reagent, and standard solutions are delivered to the WQM sensing manifold where chemical operations and measurements are performed using flow-through sensors for conductance, pH, TOC, and NH3. Fault monitoring flow detection is also accomplished in this manifold assembly. The WQM is designed to operate automatically using a hardwired electronic controller. In addition, automatic shutdown is incorporated which is keyed to four flow sensors strategically located within the fluid system.
Long-term quality assurance of [¹⁸F]-fluorodeoxyglucose (FDG) manufacturing.
Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter
2016-01-01
Nine years of experience with 2286 commercial syntheses allowed us to deliver comprehensive information on the quality of ¹⁸F-FDG production. The semi-automated FDG production line, using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Healthcare, USA) with alkaline hydrolysis, a grade "A" isolator with a dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia), was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and of very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of the disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occurred at mechanical nodes of the dispensing unit. The long-term monitoring of 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart charts and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were found unfinished, and in 4 cases the product was out of specification of the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade "A" isolator. Its cleanliness and sterility were fully controlled during the investigated period by applying hydrogen peroxide vapours (VHP). Our experience with quality assurance in the production of [¹⁸F]-fluorodeoxyglucose (FDG) at the BIONT production facility, based on the TRACERlab MXFDG production module, can be used for benchmarking of emerging manufacturing and automated manufacturing systems.
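The Shewhart-chart monitoring mentioned above can be sketched as an individuals chart over synthesis yields; the data below are synthetic, and the 3-sigma limits estimated from the moving range (d2 = 1.128 for n = 2) are one standard convention, not necessarily the authors' exact procedure.

```python
# Individuals control chart: flag syntheses outside 3-sigma limits.
import numpy as np

yields = np.random.default_rng(2).normal(60.0, 5.0, size=200)  # % yield per run

mr = np.abs(np.diff(yields))            # moving ranges between consecutive runs
sigma_hat = mr.mean() / 1.128           # d2 constant for subgroup size n = 2
center = yields.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

out = np.where((yields > ucl) | (yields < lcl))[0]
print(f"mean={center:.1f}%, limits=({lcl:.1f}, {ucl:.1f}), flagged runs: {out}")
```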
Latest developments in on- and off-line inspection of bank notes during production
NASA Astrophysics Data System (ADS)
Brown, Stephen C.
2004-06-01
The inspection of bank notes is a highly labour-intensive process where traditionally every note on every sheet is inspected manually. However, with the advent of more and more sophisticated security features, both visible and invisible, and the requirement of cost reduction in the printing process, it is clear that automation is required. Machines for the automatic inspection of bank notes have been on the market for the past 10 to 12 years, but recent developments in technology have enabled a new generation of detectors and machines to be developed. This paper focuses on the latest developments in both the off-line and on-line inspection of bank notes, covering not only the visible spectrum but also a new range of detectors for inspecting some of the more common invisible features used as covert features in today's bank notes.
Data transmission system with distributed microprocessors
Nambu, Shigeo
1985-01-01
A data transmission system having a common request line and a special request line in addition to a transmission line. The special request line has priority over the common request line. A plurality of node stations are multi-drop connected to the transmission line. Among the node stations, a supervising station is connected to the special request line and takes precedence over the other slave stations in becoming the master station. The master station collects data from the slave stations. A station connected to the common request line can be assigned the master control function within a short period of time upon requesting it. Each station has an auto-response control circuit. The master station automatically collects data through the auto-response control circuit, independently of the microprocessors of the slave stations.
System for line drawings interpretation
NASA Astrophysics Data System (ADS)
Boatto, L.; Consorti, Vincenzo; Del Buono, Monica; Eramo, Vincenzo; Esposito, Alessandra; Melcarne, F.; Meucci, Mario; Mosciatti, M.; Tucci, M.; Morelli, Arturo
1992-08-01
This paper describes an automatic system that extracts information from line drawings in order to feed CAD or GIS systems. The line drawings that we analyze contain interconnected thin lines, dashed lines, text, and symbols. Characters and symbols may overlap with lines. Our approach is based on the properties of the run representation of a binary image, which allow the image to be given a graph structure. Using this graph structure, several algorithms have been designed to identify, directly in the raster image, straight segments, dashed lines, text, symbols, hatching lines, etc. Straight segments and dashed lines are converted into vectors with high accuracy and good noise immunity. Characters and symbols are recognized by means of a recognizer specifically developed for this application, designed to be insensitive to rotation and scaling. Subsequent processing steps include an 'intelligent' search through the graph in order to detect closed polygons, dashed lines, text strings, and other higher-level logical entities, followed by the identification of relationships (adjacency, inclusion, etc.) between them. The relationships are further translated into a formal description of the drawing. The output of the system can be used as input to a Geographic Information System package. The system is currently used by the Italian Land Register Authority to process cadastral maps.
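A small sketch of the run representation underlying such a system, under the common convention that each maximal horizontal run of foreground pixels becomes a node and vertically overlapping runs on adjacent scan lines are linked; this is an illustration of the data structure, not the system's code.

```python
# Run-representation graph of a binary image.
import numpy as np
from collections import defaultdict

def runs_to_graph(img):
    """img: 2-D boolean array. Returns runs (row, c0, c1) and their adjacency."""
    runs = []
    for r, line in enumerate(img):
        padded = np.concatenate(([0], line.astype(np.int8), [0]))
        d = np.diff(padded)
        for c0, c1 in zip(np.where(d == 1)[0], np.where(d == -1)[0]):
            runs.append((r, c0, c1 - 1))       # inclusive column interval
    by_row = defaultdict(list)
    for i, (r, _, _) in enumerate(runs):
        by_row[r].append(i)
    adj = defaultdict(set)
    for i, (r, c0, c1) in enumerate(runs):
        for j in by_row[r + 1]:                # runs on the next scan line
            _, d0, d1 = runs[j]
            if d0 <= c1 and c0 <= d1:          # column intervals overlap
                adj[i].add(j); adj[j].add(i)
    return runs, adj

img = np.array([[0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]], dtype=bool)
runs, adj = runs_to_graph(img)
print(runs)         # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(dict(adj))    # a chain: run 0 - run 1 - run 2
```

Searches for segments, polygons, and higher-level entities can then operate on this graph rather than on raw pixels.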
Automatic 2D-to-3D image conversion using 3D examples from the internet
NASA Astrophysics Data System (ADS)
Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.
2012-03-01
The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
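The fusion step lends itself to a compact sketch: given disparity fields from the k photometrically closest on-line stereopairs, take the per-pixel median and warp the query to synthesise the right view. The snippet below uses synthetic data and omits the occlusion and newly-exposed-area handling described above.

```python
# Median-of-disparities 2D-to-3D conversion (fusion step only).
import numpy as np

rng = np.random.default_rng(3)
H, W, K = 48, 64, 7
query = rng.random((H, W))                        # 2D query (left image), grey
disparities = rng.integers(0, 4, size=(K, H, W))  # from K matched stereopairs

d_med = np.median(disparities, axis=0).astype(int)  # robust per-pixel combination

right = np.zeros_like(query)
cols = np.arange(W)
for r in range(H):
    target = np.clip(cols - d_med[r], 0, W - 1)   # shift pixels by their disparity
    right[r, target] = query[r, cols]             # later writes win; no hole filling
print(right.shape)
```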
Real-time gas sensing based on optical feedback in a terahertz quantum-cascade laser.
Hagelschuer, Till; Wienold, Martin; Richter, Heiko; Schrottke, Lutz; Grahn, Holger T; Hübers, Heinz-Wilhelm
2017-11-27
We report on real-time gas sensing with a terahertz quantum-cascade laser (QCL). The method is solely based on the modulation of the external cavity length, exploiting the intermediate optical feedback regime. While the QCL is operated in continuous-wave mode, optical feedback results in a change of the QCL frequency as well as its terminal voltage. The first effect is exploited to tune the lasing frequency across a molecular absorption line. The second effect is used for the detection of the self-mixing signal. This allows for fast measurement times on the order of 10 ms per spectrum and for real-time measurements of gas concentrations with a rate of 100 Hz. This technique is demonstrated with a mixture of D₂O and CH₃OD in an absorption cell.
On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1996-01-01
A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.
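One standard way to realise such a weighted treatment of base and arm degrees of freedom is a weighted least-squares (pseudoinverse) velocity solution for the composite robot. The sketch below uses a toy Jacobian and weights and illustrates the idea, not the patented scheme itself.

```python
# Weighted redundancy resolution: minimise qdot^T W qdot s.t. J qdot = xdot,
# whose solution is qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot.
import numpy as np

def weighted_pinv_solution(J, xdot, w):
    Winv = np.diag(1.0 / np.asarray(w))
    JW = J @ Winv
    return Winv @ J.T @ np.linalg.solve(JW @ J.T, xdot)

J = np.array([[1.0, 0.5, 0.2, 0.1],     # 2-D task; 2 base + 2 arm DOF (toy values)
              [0.0, 1.0, 0.3, 0.4]])
xdot = np.array([0.1, -0.05])           # desired end-effector velocity

# Penalising base motion (large weights) pushes the motion into the arm joints.
print(weighted_pinv_solution(J, xdot, w=[10.0, 10.0, 1.0, 1.0]))
print(weighted_pinv_solution(J, xdot, w=[1.0, 1.0, 10.0, 10.0]))
```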
Vaz, Belén; Salgueiriño, Verónica; Pérez-Lorenzo, Moisés; Correa-Duarte, Miguel A
2015-08-18
Hollow inorganic nanostructures have attracted much interest in the last few years due to their many applications in different areas of science and technology. In this Feature Article, we overview part of our current work concerning the collective use of plasmonic and magnetic nanoparticles located in voided nanostructures and explore the more specific operational issues that should be taken into account in the design of inorganic nanocapsules. Along these lines, we focus our attention on the applications of silica-based submicrometer capsules aiming to stress the importance of creating nanocavities in order to further exploit the great potential of these functional nanomaterials. Additionally, we will examine some of the recent research on this topic and try to establish a perspective for future developments in this area.
An automatic gore panel mapping system
NASA Technical Reports Server (NTRS)
Shiver, John D.; Phelps, Norman N.
1990-01-01
The Automatic Gore Mapping System is being developed to reduce the time and labor costs associated with manufacturing the External Tank. The present chem-milling processes and procedures are discussed. Downloading of the system simulation has to be performed to verify that the simulation package will translate the simulation code into robot code. A simulation of this system also has to be programmed for a gantry robot instead of the articulating robot that is presently in the system. It was discovered using the simulation package that the articulating robot cannot reach all the points on some of the panels; therefore, when the system is ready for production, a gantry robot will be used. A hydrosensor system is also being developed to replace the point-to-point contact probe. The hydrosensor will allow the robot to perform a non-contact continuous scan of the panel. It will also provide a faster scan of the panel because it will eliminate the in-and-out movement required by the present end effector. The system software is currently being modified so that the hydrosensor will work with the system. The hydrosensor consists of a Krautkramer-Branson transducer encased in a plexiglass nozzle. The water stream pumped through the nozzle is the couplant for the probe. Software is also being written so that the robot will have the ability to draw the contour lines on the panel displaying the out-of-tolerance regions. Presently the contour lines can only be displayed on the computer screens. Research is also being performed on improving and automating the method of scribing the panels. Presently the panels are manually scribed with a sharp knife. The use of a low-power laser or water jet is being studied as a method of scribing the panels. The contour drawing pen will be replaced with a scribing tool and the robot will then move along the contour lines. With these developments the Automatic Gore Mapping System will provide a reduction in time and labor costs associated with manufacturing the External Tank. The system also has the potential of inspecting other manufactured parts.
Niioka, Hirohiko; Asatani, Satoshi; Yoshimura, Aina; Ohigashi, Hironori; Tagawa, Seiichi; Miyake, Jun
2018-01-01
In the field of regenerative medicine, tremendous numbers of cells are necessary for tissue/organ regeneration. Today, automatic cell-culturing systems have been developed. The next step is constructing a non-invasive method to monitor the condition of cells automatically. As an image analysis method, the convolutional neural network (CNN), a deep learning method, is approaching human recognition levels. We constructed and applied a CNN algorithm for automatic recognition of cellular differentiation in the myogenic C2C12 cell line. Phase-contrast images of cultured C2C12 cells were prepared as the input dataset. In the differentiation process from myoblasts to myotubes, cellular morphology changes from a round shape to an elongated tubular shape due to fusion of the cells. The CNN abstracts the features of the cell shapes and classifies the cells according to the number of culturing days after differentiation is induced. Changes in cellular shape depending on the number of days of culture (Day 0, Day 3, Day 6) are classified with 91.3% accuracy. Image analysis with CNNs has the potential to help realize a regenerative medicine industry.
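A minimal PyTorch sketch of a CNN classifying phase-contrast crops into the three culture-day classes (Day 0 / Day 3 / Day 6); the architecture, input size, and hyper-parameters are illustrative assumptions, not the authors' network.

```python
# Toy 3-class CNN for 64x64 single-channel phase-contrast crops.
import torch
import torch.nn as nn

class DayClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, 64, 64)
        return self.head(self.features(x))

model = DayClassifier()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)                  # torch.Size([8, 3]): Day 0 / Day 3 / Day 6
```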
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo-anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, comparable to an expert.
Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia
NASA Astrophysics Data System (ADS)
Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin
2013-10-01
This paper presents an automatic segmentation method for delineation of the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach has better performance in extracting the targeted contours of the equine larynx than the results of using only the gPb-OWT-UCM method.
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
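The surface-modelling step reduces to triangulating the classified ground points into a TIN; a short sketch with SciPy's Delaunay on synthetic points (the point classification itself is assumed already done, and is not reproduced here).

```python
# Road surface as a triangulated irregular network (TIN) from classified points.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(4)
xy = rng.uniform(0, 20, size=(500, 2))               # planimetric coords (m)
z = 0.02 * xy[:, 0] + 0.01 * rng.normal(size=500)    # gentle cross-slope + noise

tin = Delaunay(xy)                                   # the TIN itself
print(f"{len(tin.simplices)} triangles over {len(xy)} road-surface points")

# The TIN directly supports piecewise-linear height queries.
surface = LinearNDInterpolator(tin, z)
print(float(surface(10.0, 10.0)))
```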
Automatic anterior chamber angle assessment for HD-OCT images.
Tian, Jing; Marziliano, Pina; Baskaran, Mani; Wong, Hong-Tym; Aung, Tin
2011-11-01
Angle-closure glaucoma is a major blinding eye disease and could be detected by measuring the anterior chamber angle in the human eyes. High-definition OCT (Cirrus HD-OCT) is an emerging noninvasive, high-speed, and high-resolution imaging modality for the anterior segment of the eye. Here, we propose a novel algorithm which automatically detects a new landmark, Schwalbe's line, and measures the anterior chamber angle in the HD-OCT images. The distortion caused by refraction is corrected by dewarping the HD-OCT images, and three biometric measurements are defined to quantitatively assess the anterior chamber angle. The proposed algorithm was tested on 40 HD-OCT images of the eye and provided accurate measurements in about 1 second.
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; Chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng
2018-03-01
In this paper, semi-active RFID watt-hour meters are applied to automatic test lines and intelligent warehouse management. Through the transmission system, the test and auxiliary systems, and the monitoring system, the approach realizes the scheduling, binding, control, and data exchange of watt-hour meters, among other functions, making positioning more accurate, management more efficient, and data updates faster, with all information available at a glance. It effectively improves the quality, efficiency and automation of verification, and realizes more efficient data and warehouse management.
Jiménez-Ortega, Laura; García-Milla, Marcos; Fondevila, Sabela; Casado, Pilar; Hernández-Gutiérrez, David; Martín-Loeches, Manuel
2014-12-01
Models of language comprehension assume that syntactic processing is automatic, at least at early stages. However, the degree of automaticity of syntactic processing is still controversial. Evidence of automaticity is either indirect or has been observed for pairs of words, which might provide a poor syntactic context in comparison to sentences. The present study investigates the automaticity of syntactic processing using event-related brain potentials (ERPs) during sentence processing. To this end, masked adjectives that could either be syntactically correct or incorrect relative to a sentence being processed appeared just prior to the presentation of supraliminal adjectives. The latter could also be correct or incorrect. According to our data, subliminal gender agreement violations embedded in a sentence trigger an early anterior negativity-like modulation, whereas supraliminal gender agreement violations elicited a later anterior negativity. First-pass syntactic parsing thus appears to be unconsciously and automatically elicited. Interestingly, a P600-like modulation of short duration and early latency could also be observed for masked violations. In addition, masked violations also modulated the P600 component elicited by unmasked targets, probably reflecting that the mechanisms of revising a structural mismatch appear affected by subliminal information. According to our findings, both conscious and unconscious processes apparently contribute to syntactic processing. These results are discussed in line with the most recent theories of automaticity and syntactic processing. Copyright © 2014 Elsevier B.V. All rights reserved.
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Currently, analog microscopes are widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture and others. An automatic method is preferred because it greatly reduces the work involved. Stepper motors are used to move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, with images of microbiological objects transferred from the eyepiece of the microscope to the computer screen. Scene analysis allows regions with pronounced abnormalities to be located, focusing the specialist's attention. This paper considers a method for stitching microbial images obtained from a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. The object search is based on the analysis of the data located in the camera's field of view. We propose to use a neural network for the boundary search. The stitching boundary is derived from the analysis of the object borders. For autofocus, we use the criterion of minimum thickness of the object boundary lines, with the analysis performed on the object located on the focal axis of the camera. We use a border recovery method and a projective transform for the boundaries of objects that are shifted relative to the focal axis. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
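The stated autofocus rule, choosing the frame whose object boundary lines are thinnest, can be illustrated as below; the half-maximum gradient band is used as a crude thickness proxy and a plain gradient stands in for the paper's neural-network boundary analysis, so this is an assumption-laden sketch rather than the authors' method.

```python
# Pick the sharpest frame in a focus stack by minimum boundary-line thickness.
import numpy as np
from scipy.ndimage import gaussian_filter

def boundary_thickness(img):
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    return (g > 0.5 * g.max()).sum()   # pixels in the half-maximum gradient band

def best_focus(stack):
    """Index of the frame with the thinnest (sharpest) object boundaries."""
    return int(np.argmin([boundary_thickness(f) for f in stack]))

# Demo: a sharp disc blurred by increasing amounts; frame 0 should win.
yy, xx = np.mgrid[:128, :128]
disc = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
stack = [gaussian_filter(disc, s) for s in (0.5, 2.0, 4.0)]
print(best_focus(stack))   # -> 0
```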
Noel, Yves; D'arco, Philippe; Demichelis, Raffaella; Zicovich-Wilson, Claudio M; Dovesi, Roberto
2010-03-01
Nanotubes can be characterized by a very high point symmetry, comparable to or even larger than that of the most symmetric crystalline systems (cubic, 48 point symmetry operators). For example, N = 2n rototranslation symmetry operators connect the atoms of the (n,0) nanotubes. This symmetry is fully exploited in the CRYSTAL code. As a result, ab initio quantum mechanical large-basis-set calculations of carbon nanotubes containing more than 150 atoms in the unit cell become very cheap, because the irreducible part of the unit cell reduces to only two atoms. The nanotube symmetry is exploited at three levels in the present implementation. First, for the automatic generation of the nanotube structure (and then of the input file for the SCF calculation) starting from a two-dimensional structure (in the specific case, graphene). Second, the nanotube symmetry is used for the calculation of the mono- and bi-electronic integrals that enter into the Fock (Kohn-Sham) matrix definition. Only the irreducible wedge of the Fock matrix is computed, with a saving factor close to N. Finally, the symmetry is exploited for the diagonalization, where each irreducible representation is treated separately. When M atomic orbitals per carbon atom are used, the diagonalization computing time is close to Nt, where t is the time required for the diagonalization of each 2M x 2M matrix. The efficiency and accuracy of the computational scheme are documented. (c) 2009 Wiley Periodicals, Inc.
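The diagonalization saving can be illustrated numerically: for a cyclic (rototranslational) symmetry group of order N, a block-circulant matrix factorises via a discrete Fourier transform into N independent 2M x 2M blocks, so one large eigenproblem becomes N small ones. The toy check below uses random blocks and is a generic demonstration of the principle, not the CRYSTAL implementation.

```python
# Block-circulant matrix H[i, j] = F_{(j - i) mod N} factorises into the
# blocks H(k) = sum_d F_d exp(-2*pi*i*k*d/N) under the DFT, k = 0..N-1.
import numpy as np
from scipy.linalg import block_diag

N, twoM = 8, 6                               # symmetry order, block size (2M)
rng = np.random.default_rng(5)
blocks = rng.normal(size=(N, twoM, twoM))    # F_d couples cells d apart

H = np.block([[blocks[(j - i) % N] for j in range(N)] for i in range(N)])

F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
U = np.kron(F, np.eye(twoM))                 # symmetry-adapted transformation

Hk = [np.tensordot(np.exp(-2j * np.pi * k * np.arange(N) / N), blocks, axes=1)
      for k in range(N)]

# U^H H U is block diagonal with the N small blocks H(k).
print(np.allclose(U.conj().T @ H @ U, block_diag(*Hk)))  # True
```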
ATR performance modeling concepts
NASA Astrophysics Data System (ADS)
Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.
2016-05-01
Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
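One of the building blocks named above, the look-up table, admits a minimal sketch: a table from quantised operating conditions (OCs) to a distribution over ATR reports, estimated from scored trials. The OC names, report labels, and counts below are invented for illustration.

```python
# Look-up-table APM: P(report | quantised OCs) from counted trial outcomes.
from collections import defaultdict

class LookupAPM:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, oc, report):
        """Record one scored trial: OC tuple -> observed ATR report."""
        self.counts[oc][report] += 1

    def predict(self, oc):
        """Return the empirical report distribution for this OC cell."""
        row = self.counts[oc]
        total = sum(row.values())
        return {r: c / total for r, c in row.items()} if total else {}

apm = LookupAPM()
# Quantised OCs: (true target class, depression-angle bin, clutter level).
for rep in ["tank"] * 8 + ["truck"] * 2:
    apm.update(("tank", "15-30deg", "low"), rep)
print(apm.predict(("tank", "15-30deg", "low")))   # {'tank': 0.8, 'truck': 0.2}
```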